Apache Spark MCQ 1
1. Why Spark, even though Hadoop already exists?
Ans: Below are a few reasons.
· Iterative algorithms: MapReduce is generally not well suited to iterative algorithms such as machine learning and graph processing. These algorithms are iterative by nature and need their data kept in memory so the algorithm steps can run again and again; fewer writes to disk and fewer transfers over the network mean better performance.
· In-memory processing: MapReduce stores intermediate data on disk and reads it back from disk, which is not good for fast processing. Spark keeps data in memory (configurably), which saves a lot of time by avoiding the repeated disk reads and writes that happen in Hadoop.
· Near real-time data processing: Spark also supports near real-time streaming workloads via the Spark Streaming framework.
2. Why are both Spark and Hadoop needed?
Ans: Spark is often called a cluster computing engine or simply an execution engine, and it uses many concepts from Hadoop MapReduce. The two work well together: Spark with HDFS and YARN gives better performance and also simplifies work distribution on the cluster. HDFS is the storage engine for huge volumes of data, while Spark is the processing engine (in-memory and generally more efficient).
HDFS: used as the storage engine for both Spark and Hadoop.
YARN: a framework for managing the cluster with a pluggable scheduler.
More than MapReduce: with Spark you can run MapReduce-style algorithms as well as higher-level operators, for instance map(), filter(), reduceByKey(), groupByKey(), etc. (a short sketch follows below).
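To illustrate those operators, here is a minimal word-count sketch. It assumes sc is an existing SparkContext and that a README.md file is available; both are assumptions for illustration only.

val counts = sc.textFile("README.md")
  .flatMap(line => line.split("\\s+"))   // split each line into words
  .filter(word => word.nonEmpty)
  .map(word => (word, 1))
  .reduceByKey(_ + _)                    // sum the counts per word
counts.take(10).foreach(println)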
3. How can you use a machine learning library written in Python, such as the scikit-learn library, with the Spark engine?
Ans: A machine learning tool written in Python, e.g. the scikit-learn library, can be combined with Spark either through the Pipeline API in Spark MLlib or by streaming data through an external Python process with pipe().
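As a minimal sketch of the pipe() approach (not a complete recipe): the script name score.py is hypothetical and is assumed to read lines from stdin, apply a scikit-learn model, and write one prediction per line to stdout. Assumes sc is an existing SparkContext.

val features = sc.parallelize(Seq("1.0,2.0,3.0", "4.0,5.0,6.0"))
val predictions = features.pipe("python3 score.py")   // stream records through the external process
predictions.collect().foreach(println)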
4. Why is Spark good at low-latency iterative workloads, e.g. graphs and machine learning?
Ans: Machine learning algorithms, for instance logistic regression, require many iterations before producing an optimal model, and graph algorithms similarly traverse all the nodes and edges repeatedly. Any algorithm that needs many iterations before producing a result gains performance when the intermediate partial results are stored in memory or on very fast solid-state drives.
Spark can cache/store intermediate data in memory for faster model building and training.
Also, when graph algorithms are processed, the graph is traversed one connection per iteration with the partial result kept in memory. Less disk access and network traffic can make a huge difference when you need to process lots of data.
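A minimal sketch of caching intermediate data for an iterative computation, assuming sc is an existing SparkContext; the input file points.txt and the toy update rule are assumptions for illustration only.

val points = sc.textFile("points.txt")
  .map(line => line.split(",").map(_.toDouble))
  .cache()                                   // keep the parsed points in memory across iterations

var weight = 0.0
for (_ <- 1 to 10) {
  // each iteration reuses the cached RDD instead of re-reading and re-parsing the file
  weight += points.map(p => p.sum).reduce(_ + _) * 0.001
}
println(weight)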
5. Which kinds of data processing are supported by Spark?
Ans: Spark offers three kinds of data processing: batch, interactive (the Spark shell), and stream processing, all with a unified API and data structures.
6. How do you define SparkContext?
Ans: It is the entry point for a Spark job. Each Spark application starts by instantiating a Spark context; you can say that a Spark context constitutes a Spark application.
SparkContext represents the connection to a Spark execution environment (deployment mode).
A Spark context can be used to create RDDs, accumulators and broadcast variables, access Spark services and run jobs.
A Spark context is essentially a client of Spark’s execution environment and acts as the master of your Spark application.
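A minimal sketch of those services, using the Spark 1.x-era API referenced elsewhere in this article; the application name, values, and local[2] master are illustrative assumptions.

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("ContextDemo"))

val rdd    = sc.parallelize(1 to 100)            // create an RDD
val errors = sc.accumulator(0)                   // create an accumulator (Spark 1.x-style API)
val lookup = sc.broadcast(Map(1 -> "one"))       // create a broadcast variable

rdd.foreach { n => if (n % 10 == 0) errors += 1 }   // running a job updates the accumulator
println(s"accumulated: ${errors.value}, broadcast sample: ${lookup.value(1)}")

sc.stop()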
7. How can you define SparkConf?
Ans: Spark properties control most application settings and are configured separately for each application. These properties can be set directly on a SparkConf passed to your SparkContext. SparkConf allows you to configure some of the common properties (e.g. master URL and application name), as well as arbitrary key-value pairs through the set() method. For example, we could initialize an application with two threads as follows:
Note that we run with local[2], meaning two threads, which represents “minimal” parallelism and can help detect bugs that only exist when we run in a distributed context.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("local[2]")          // two local threads
  .setAppName("CountingSheep")
val sc = new SparkContext(conf)
8. What are the ways to configure Spark properties? Order them from the least important to the most important.
Ans: Properties for Spark and user programs can be set in the following ways (ordered from least to most important; a short sketch follows the list):
· conf/spark-defaults.conf - the default
· --conf - the command line option used by spark-shell and spark-submit
· SparkConf
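As a minimal sketch of this precedence (the property name is real, the values are illustrative): a value set programmatically on SparkConf wins over the same key passed via --conf on the command line, which in turn wins over conf/spark-defaults.conf.

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("PrecedenceDemo")
  .set("spark.executor.memory", "2g")   // overrides --conf spark.executor.memory=1g
                                        // and any value in conf/spark-defaults.conf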
9. What is the default level of parallelism in Spark?
Ans: The default level of parallelism is the number of partitions used when a user does not specify it explicitly; it is controlled by the spark.default.parallelism property.
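A minimal sketch, assuming sc is an existing SparkContext: when parallelize() is called without a partition count, the default level of parallelism is used.

println(sc.defaultParallelism)        // governed by spark.default.parallelism when set
val rdd = sc.parallelize(1 to 1000)   // no numSlices argument supplied
println(rdd.partitions.length)        // equals the default level of parallelism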
10. Is it possible to have multiple SparkContexts in a single JVM?
Ans: No, only one SparkContext should be active per JVM. When an RDD is created, it belongs to and is completely owned by the Spark context it originated from; RDDs cannot be shared between SparkContexts.
12. In the Spark shell, which contexts are available by default?
Ans: SparkContext (as sc) and SQLContext (as sqlContext).
13. Give a few examples of how an RDD can be created using SparkContext.
Ans: SparkContext allows you to create many different RDDs from input sources such as (a combined sketch follows the list):
· Scala collections: e.g. sc.parallelize(0 to 100)
· Local or remote filesystems: e.g. sc.textFile("README.md")
· Any Hadoop input source: e.g. using sc.newAPIHadoopFile
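A minimal combined sketch of the three sources above, assuming sc is an existing SparkContext; the HDFS path and the key/value/InputFormat types are illustrative assumptions.

import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat

val fromCollection = sc.parallelize(0 to 100)
val fromTextFile   = sc.textFile("README.md")
val fromHadoop     = sc.newAPIHadoopFile[LongWritable, Text, TextInputFormat]("hdfs:///data/input")
println(s"${fromCollection.count()} ${fromTextFile.count()} ${fromHadoop.count()}")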
14. How would you broadcast a collection of values to the Spark executors?
Ans: sc.broadcast("hello")
15. What is the advantage of broadcasting values across the Spark cluster?
Ans: Spark transfers the value to the Spark executors only once, and tasks can then share it without incurring repeated network transfers when it is requested multiple times.
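A minimal sketch, assuming sc is an existing SparkContext: a lookup table is broadcast once and then read via .value inside every task; the table contents are illustrative only.

val countryNames = sc.broadcast(Map("IN" -> "India", "US" -> "United States"))

val codes = sc.parallelize(Seq("IN", "US", "IN"))
val names = codes.map(code => countryNames.value.getOrElse(code, "unknown"))
names.collect().foreach(println)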
16. Can we broadcast an RDD?
Ans: You can, but you should not broadcast an RDD for use in tasks, and Spark will warn you if you do. It will not stop you, though.
17. How can we distribute JARs to workers?
Ans: The jar you specify with SparkContext.addJar will be copied to all the worker nodes.
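A one-line sketch; the JAR path is an illustrative assumption.

sc.addJar("/opt/libs/my-udfs.jar")   // copied to every worker node for tasks run on this context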
18. How can you stop SparkContext and what is the impact if stopped?
Ans: You can stop a Spark context using the SparkContext.stop() method. Stopping a Spark context stops the Spark runtime environment and effectively shuts down the entire Spark application.
19. Which scheduler is used by SparkContext by default?
20. How would you set the amount of memory to allocate to each executor?
Ans: Via the spark.executor.memory property (for example, with the --executor-memory option of spark-submit).