PySpark Interview Questions and Answers

PySpark is a framework that runs on a cluster of commodity hardware and performs data unification, i.e., reading and writing a wide variety of data from multiple sources. In Spark, a task is a unit of work, such as a map task. Execution is handled by the Spark Context, which also provides APIs in several languages, i.e., Scala, Java, and Python, to develop applications that run faster than MapReduce.

In this article, you can go through the set of PySpark interview questions most frequently asked in interviews. This will help you crack the interview, as the topmost industry experts at HKR Trainings curate these questions.

Let us have a quick review of the PySpark interview questions and PySpark coding interview questions.

Most Frequently Asked PySpark Interview Questions and Answers  

1. Explain how an object is implemented in python?

Ans: An object is an instantiation of a class. A class can be instantiated by calling the class using the class name.

Syntax:

object_name = ClassName()

Example:

class Student:
    id = 25
    name = "HKR Trainings"
    estb = 10

    def display(self):
        print("ID: %d \n Name: %s \n Estb: %d " % (self.id, self.name, self.estb))

stud = Student()
stud.display()

Output:

ID: 25 

Name: HKR Trainings

Estb: 10
2. Explain Methods in Python

Ans: In Python, a method is a function that is associated with an object. Any object type can have methods.

Example:

class Student:
    roll = 17
    name = "gopal"
    age = 25

    def display(self):
        print(self.roll, self.name, self.age)

In the above example, a class named Student is created. It contains three fields, the student's roll, name, and age, and a function display() which is used to display the information of the student.

3. What is encapsulation in Python?

Ans: Encapsulation is used to restrict access to methods and variables. Here, the methods and data/variables are wrapped together within a single unit so as to prevent data from direct modification.

Below is an example of encapsulation, where the maximum price of the product cannot be modified from outside the class: the assignment p.__maxprice = 100 creates a new attribute instead of changing the private one (because of Python's name mangling), so the selling price stays at 75.

Example:

class Product:
    def __init__(self):
        self.__maxprice = 75

    def sell(self):
        print("Selling Price: {}".format(self.__maxprice))

    def setMaxPrice(self, price):
        self.__maxprice = price

p = Product()
p.sell()

# attempt to change the price from outside the class
p.__maxprice = 100
p.sell()

Output:

Selling Price: 75

Selling Price: 75


4. Explain the concept of Python Inheritance

Ans: Inheritance refers to a concept where one class inherits the properties of another. It helps to reuse the code and establish a relationship between different classes.

Inheritance is performed between the following two types of classes:

  • Parent class (Super or Base class): A class whose properties are inherited.
  • Child class (Subclass or Derived class): A class which inherits the properties.

In Python, a derived class can inherit a base class by simply mentioning the base class in brackets after the derived class name.

The syntax to inherit a base class into the derived class is shown below:

Syntax:

class DerivedClass(BaseClass):
    # class body

The syntax to inherit multiple classes is shown below; all of the base classes are specified inside the brackets.

Syntax:

class DerivedClass(BaseClass1, BaseClass2, ...):
    # class body
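As a quick illustration, here is a minimal sketch of single inheritance; the Person and Student class names and fields below are illustrative:

Example:

class Person:
    def __init__(self, name):
        self.name = name

    def greet(self):
        print("Hello,", self.name)

class Student(Person):              # Student inherits from Person
    def __init__(self, name, roll):
        super().__init__(name)      # reuse the base-class initializer
        self.roll = roll

s = Student("gopal", 17)
s.greet()                           # greet() is inherited from Person

Output:

Hello, gopal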
5. What is a python for loop?

Ans: A for loop in Python iterates over a sequence. It involves two things: an iterable object, such as a list, tuple, or string, and a loop variable that stores the successive values from the sequence on each iteration.

Syntax:

for iter in sequence:

    statements(iter)
  • The “iter” variable is the iteration variable; it is assigned the successive values from the input sequence.
  • The “sequence” may be any iterable Python object, such as a list, a tuple, or a string. A worked example follows this list.
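A minimal sketch of a for loop over a list; the values are illustrative:

Example:

fruits = ["apple", "banana", "cherry"]
for fruit in fruits:
    print(fruit)

Output:

apple
banana
cherry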
6. What is a for-else in python?

Ans: Python allows us to handle loops in an interesting manner by letting us attach an else block to a loop. The else block executes when the loop finishes without encountering a break statement; in the example below, the list is empty, so the loop body never runs and control goes straight to the else block.

Example :

x = []
for i in x:
    print("in for loop")
else:
    print("in else block")

Output: 

in else block
7. What are errors and exceptions in python programming?

Ans: In Python, there are two types of errors - syntax error and exceptions.

Syntax errors: Also known as parsing errors, these are issues in a program that may cause it to exit abnormally. When a syntax error is detected, the parser repeats the offending line and displays an arrow pointing at the earliest point in the line where the error was detected.

Exceptions: Exceptions occur when the normal flow of a program is interrupted by an event during execution. Even if the syntax of the program is correct, an error can still be raised while it runs; such an error is an exception. Some examples of exceptions are ZeroDivisionError, TypeError, and NameError. A short handling sketch follows.
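A minimal sketch of catching one of these exceptions; the values are illustrative:

Example:

try:
    result = 10 / 0            # raises ZeroDivisionError
except ZeroDivisionError as e:
    print("Caught an exception:", e)

Output:

Caught an exception: division by zero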

8. What is the key difference between list and tuple?

Ans: 

  • The key difference between lists and tuples is the fact that lists have mutable nature and tuples have immutable nature.
  • A data type is said to be mutable when a Python object of that type can be modified in place. On the other hand, immutable data types cannot be modified. Let us see an example of modifying an item in a list versus a tuple.

Example:

list_num = [1, 2, 5, 8]   # the last element is assumed; the output below shows it replaced by 7
tup_num = (1, 2, 5, 8)

list_num[3] = 7
print(list_num)

tup_num[3] = 7

Output:

[1,2,5,7]

Traceback (most recent call last):

File "python", line 6, in

TypeError: 'tuple' object does not support item assignment

In this code, we assigned 7 to list_num at index 3, and in the output we can see 7 at index 3. However, when we assigned 7 to tup_num at index 3, we got a TypeError in the output. This is because tuples cannot be modified due to their immutable nature.

9. How to convert a string to a number in python?

Ans: The int() method provided by Python is a standard built-in function that converts a string into an integer value.

It can be called with a string containing a number as the argument, and it will return the number converted to an actual integer.

Example:

print(int("1") + 2)

The above prints 3.
10. What is data cleaning?

Ans: Data cleaning is the process of preparing data for analysis by removing or modifying data that is incorrect, incomplete, irrelevant, duplicated, or improperly formatted. 
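In PySpark, a minimal data-cleaning sketch could drop duplicates, remove rows with missing values, and filter out invalid records; the file name and column names below are illustrative assumptions:

Example:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cleaning-demo").getOrCreate()
df = spark.read.csv("customers.csv", header=True, inferSchema=True)  # illustrative file

cleaned = (df.dropDuplicates()            # remove duplicated rows
             .dropna(subset=["email"])    # drop rows missing the (assumed) email column
             .filter(df.age > 0))         # drop obviously incorrect ages (assumed column)
cleaned.show()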

11. What is data visualization and why is it important?

Ans: Data visualization is the representation of data or information in a graph, chart, or other visual format. It communicates relationships of the data with images. The data visualizations are important because it allows trends and patterns to be more easily seen. 


12. What is Pyspark and explain its characteristics?

Ans: To support Python with Spark, the Spark community released a tool called PySpark. It is primarily used to process structured and semi-structured datasets and also provides an optimized API to read data from multiple data sources in different file formats. Using PySpark, you can also work with RDDs in the Python programming language, with the help of the Py4j library. A minimal session sketch follows the list of characteristics.

The main characteristics of PySpark are listed below:

  • Nodes are Abstracted.
  • Based on MapReduce.
  • API for Spark.
  • The network is abstracted.
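As mentioned above, here is a minimal sketch of starting a PySpark session and reading a file; the application name and file name are illustrative:

Example:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hkr-demo").getOrCreate()  # illustrative app name
df = spark.read.json("people.json")                             # illustrative file name
df.printSchema()
df.show()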
13. Explain RDD and also state how you can create RDDs in Apache Spark.

Ans: RDD stands for Resilient Distributed Datasets, a fault-tolerant set of operational elements that are capable of running in parallel. These RDDs, in general, are portions of data that are stored in memory and distributed over many nodes.

All partitioned data in an RDD is distributed and immutable.

There are primarily two types of RDDs available:

  • Hadoop datasets: Those that perform a function on each file record in the Hadoop Distributed File System (HDFS) or any other storage system.
  • Parallelized collections: Existing collections that are distributed so that their partitions run in parallel with one another.
14. How do we create RDDs in Spark?

Ans: Spark provides two methods to create an RDD (a PySpark equivalent is sketched after this list):

  • By parallelizing a collection in your driver program. This makes use of SparkContext's parallelize() method:

val DataArray = Array(2,4,6,8,10)
val DataRDD = sc.parallelize(DataArray)

  • By loading an external dataset from external storage like HDFS, HBase, or a shared file system.
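The example above is in Scala; here is a hedged PySpark equivalent of both approaches (the variable names and the HDFS path are illustrative):

Example:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-demo").getOrCreate()
sc = spark.sparkContext

data_array = [2, 4, 6, 8, 10]
data_rdd = sc.parallelize(data_array)             # RDD from an in-memory collection
print(data_rdd.collect())                         # [2, 4, 6, 8, 10]

file_rdd = sc.textFile("hdfs:///data/input.txt")  # RDD from external storage (illustrative path)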
15. Name the components of Apache Spark?

Ans: The following are the components of Apache Spark.

  • Spark Core: Base engine for large-scale parallel and distributed data processing.
  • Spark Streaming: Used for processing real-time streaming data.
  • Spark SQL: Integrates relational processing with Spark’s functional programming API.
  • GraphX: Graphs and graph-parallel computation.
  • MLlib: Performs machine learning in Apache Spark.
16. How DAG functions in Spark?

Ans: When an action is called on a Spark RDD, at a high level Spark submits the lineage graph to the DAG Scheduler.

The DAG Scheduler divides the operators into stages of tasks, where a stage contains tasks based on the partitions of the input data, and pipelines operators together within a stage. The stages are then handed to the Task Scheduler, which launches the tasks through the cluster manager; the Task Scheduler does not know about the dependencies between stages. Finally, the workers execute the tasks on the worker (slave) nodes.

17. What do you mean by Spark Streaming?

Ans: Spark Streaming is an extension of the core Spark API that enables processing of live data streams. Data from multiple sources such as Flume, Kafka, and Kinesis is processed and then pushed to live dashboards, file systems, and databases. In terms of input data, it is similar to batch processing: the incoming data is segregated into micro-batches, which are processed like batches. A minimal sketch follows.
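A minimal word-count sketch with the classic DStream API; the host, port, and application name are illustrative, and a test socket source (e.g., nc -lk 9999) is assumed:

Example:

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "streaming-demo")   # illustrative app name
ssc = StreamingContext(sc, 1)                     # 1-second micro-batches

lines = ssc.socketTextStream("localhost", 9999)   # assumed test socket source
counts = (lines.flatMap(lambda line: line.split(" "))
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()
ssc.awaitTermination()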

18. What does MLlib do?

Ans: MLlib is a scalable machine learning library offered by Spark. It aims to make machine learning scalable and easy, with standard learning algorithms and use cases such as classification, regression, collaborative filtering, clustering, and dimensionality reduction.

19. What are the different MLlib tools available in Spark?

Ans: 

  • ML Algorithms: Classification, Regression, Clustering, and Collaborative filtering.
  • Featurization: Feature extraction, Transformation, Dimensionality reduction, and Selection.
  • Pipelines: Tools for constructing, evaluating, and tuning ML pipelines
  • Persistence: Saving and loading algorithms, models and pipelines.
  • Utilities: Linear algebra, statistics, data handling.
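As a small illustration of the ML algorithms tool above, here is a minimal clustering sketch using the DataFrame-based pyspark.ml API; the data points are illustrative:

Example:

from pyspark.sql import SparkSession
from pyspark.ml.clustering import KMeans
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("mllib-demo").getOrCreate()
data = [(Vectors.dense([0.0, 0.0]),), (Vectors.dense([1.0, 1.0]),),
        (Vectors.dense([9.0, 8.0]),), (Vectors.dense([8.0, 9.0]),)]
df = spark.createDataFrame(data, ["features"])

kmeans = KMeans(k=2, seed=1)        # group the points into two clusters
model = kmeans.fit(df)
print(model.clusterCenters())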
20. Explain the functions of SparkCore.

Ans: SparkCore implements several key functions, such as:

  • Memory management. 
  • Fault-tolerance. 
  • Monitoring jobs. 
  • Job scheduling.
  • Interaction with storage systems. 

Moreover, additional libraries built on top of the core allow diverse workloads for streaming, SQL, and machine learning. In short, SparkCore is responsible for:

  • Memory management.
  • Fault recovery.
  • Interacting with storage systems.
  • Scheduling and monitoring jobs on a cluster.
21. What is the module used to implement SQL in Spark? How does it work?

Ans: The module used is Spark SQL, which integrates relational processing with Spark’s functional programming API. It lets you query data either through the Hive Query Language (HQL) or SQL. These are the four libraries of Spark SQL:

  • Data Source API.
  • Interpreter & Optimizer.
  • DataFrame API.
  • SQL Service.
22. List the functions of Spark SQL.

Ans: Spark SQL is capable of:

  • Loading data from a variety of structured sources.
  • Querying data using SQL statements, both inside a Spark program and from external tools that connect to Spark SQL through standard database connectors (JDBC/ODBC). For instance, using business intelligence tools like Tableau. 
  • Providing rich integration between SQL and regular Python/Java/Scala code, including the ability to join RDDs and SQL tables, expose custom functions in SQL, and more.
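Building on these capabilities, here is a minimal sketch of registering a DataFrame as a SQL table and querying it; the table, column names, and rows are illustrative:

Example:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-demo").getOrCreate()
df = spark.createDataFrame([("gopal", 25), ("anita", 31)], ["name", "age"])  # illustrative rows

df.createOrReplaceTempView("people")     # expose the DataFrame as a SQL table
adults = spark.sql("SELECT name FROM people WHERE age > 30")
adults.show()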

23. What are the various algorithms supported in PySpark?

Ans: The different algorithms supported by PySpark are:

  • Spark.mllib.
  • mllib.clustering.
  • mllib.classification.
  • mllib.regression.
  • mllib.recommendation.
  • mllib.linalg.
  • mllib.fpm.
24. Explain the purpose of serialization in PySpark?

Ans: To improve performance, PySpark supports custom serializers for transferring data. The two serializers are listed below; a usage sketch follows the list.

  • MarshalSerializer: It supports only fewer data types, but compared to PickleSerializer, it is faster.
  • PickleSerializer: It is by default used for serializing objects. Supports any Python object but at a slow speed.
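A minimal sketch of selecting the MarshalSerializer when creating the SparkContext; the app name is illustrative:

Example:

from pyspark import SparkContext
from pyspark.serializers import MarshalSerializer

# Use the faster MarshalSerializer instead of the default PickleSerializer
sc = SparkContext("local", "serializer-demo", serializer=MarshalSerializer())
print(sc.parallelize(range(5)).map(lambda x: x * 2).collect())
sc.stop()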
25. What is PySpark StorageLevel?

Ans: PySpark StorageLevel controls how an RDD is stored. It decides whether to store the RDD in memory, on disk, or both, and it also controls whether to replicate the RDD partitions and whether to keep them serialized. The class signature for StorageLevel is as follows; a short usage sketch appears after it.

class pyspark.StorageLevel(useDisk, useMemory, useOffHeap, deserialized, replication=1)
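A minimal sketch of persisting an RDD with one of the predefined storage levels; the data is illustrative:

Example:

from pyspark import SparkContext, StorageLevel

sc = SparkContext("local", "storage-demo")
rdd = sc.parallelize(range(100))
rdd.persist(StorageLevel.MEMORY_AND_DISK)   # keep partitions in memory, spill to disk if needed
print(rdd.count())
sc.stop()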
26. What is PySpark SparkContext?

Ans: PySpark SparkContext is treated as the initial point for entering and using any Spark functionality. The SparkContext uses the Py4j library to launch a JVM and then creates a JavaSparkContext. By default, the SparkContext is available as ‘sc’ in the PySpark shell.

27. What is PySpark SparkFiles?

Ans: PySpark SparkFiles is used to load files onto an Apache Spark application. Files are added through SparkContext's addFile() method, and SparkFiles.get() is then used to resolve the path to a file that was added. The class methods available in SparkFiles are get(filename) and getRootDirectory(); a short sketch follows.
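A minimal sketch, assuming a local file named config.txt exists in the working directory:

Example:

from pyspark import SparkContext, SparkFiles

sc = SparkContext("local", "sparkfiles-demo")
sc.addFile("config.txt")                 # ship the (assumed) file to every node
print(SparkFiles.get("config.txt"))      # resolved path to the shipped copy
print(SparkFiles.getRootDirectory())     # directory holding all files added via addFile
sc.stop()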

28. Explain Spark Execution Engine?

Ans: Apache Spark uses a DAG (directed acyclic graph) execution engine that enables users to analyze massive data sets with high performance. To achieve this, data can be held in memory, which improves performance drastically when the data needs to be manipulated through multiple stages of processing.

29. What do you mean by SparkConf in PySpark?

Ans: SparkConf helps in setting a few configurations and parameters to run a Spark application on the local/cluster. In simple terms, it provides configurations to run a Spark application.

30. Name a few attributes of SparkConf.

Ans: A few of the main attributes of SparkConf are listed below; a usage sketch follows the list:

  • set(key, value): This attribute helps in setting the configuration property.
  • setSparkHome(value): This attribute enables in setting Spark installation path on worker nodes.
  • setAppName(value): This attribute helps in setting the application name.
  • setMaster(value): This attribute helps in setting the master URL.
  • get(key, defaultValue=None): This attribute supports in getting a configuration value of a key.
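A minimal sketch of building a SparkConf and passing it to a SparkContext; the application name and master URL are illustrative:

Example:

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("conf-demo")    # illustrative application name
        .setMaster("local[2]"))     # illustrative master URL
sc = SparkContext(conf=conf)
print(sc.getConf().get("spark.app.name"))
sc.stop()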

You can use the above PySpark interview questions for experienced data engineer roles as well.
