Hadoop interview questions you may encounter
阿新 · Published 2018-12-27
How many of these Hadoop interview questions can you answer?
1. How does Hadoop work under the hood?
2. How does MapReduce work?
3. What is HDFS's storage mechanism?
4. Give a simple example of how a MapReduce job executes (see the WordCount sketch after this list).
5. The interviewer gives you a problem to solve with MapReduce, for example: there are 10 folders, each containing 1,000,000 URLs; find the top 1,000,000 URLs.
6. What is the role of the Combiner in Hadoop?
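For question 4, the canonical example is WordCount: mappers emit (word, 1) for every token, and reducers sum the counts per word. Below is a minimal sketch using the org.apache.hadoop.mapreduce API; class names are illustrative, not from the original post. It also shows where the Combiner from question 6 plugs in: the reducer class is reused to pre-aggregate map output locally before the shuffle.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emit (word, 1) for every token in the input line.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sum the counts for each word. Also usable as a Combiner,
  // because summing is associative and commutative.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // question 6: local pre-aggregation
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```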
Q1. Name the most common InputFormats defined in Hadoop. Which one is the default?
The two most common InputFormats are TextInputFormat and KeyValueInputFormat; TextInputFormat is the default.
Q2. What is the difference between TextInputFormat and KeyValueInputFormat?
TextInputFormat reads lines of text files and provides the byte offset of each line as the key to the Mapper and the line itself as the value.
KeyValueInputFormat reads text files and parses each line into a key/value pair: everything up to the first tab character is sent to the Mapper as the key, and the remainder of the line as the value.
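A minimal sketch of what this means for mapper signatures, assuming the default TextInputFormat (the class name is illustrative): the input key type is LongWritable (the line's byte offset) and the input value type is Text (the line itself).

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// With TextInputFormat, each call to map() receives one line of input:
// the key is the line's byte offset, the value is the line's content.
public class LineOffsetMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
  @Override
  protected void map(LongWritable offset, Text line, Context context)
      throws IOException, InterruptedException {
    // Illustrative only: re-emit each line keyed by its content.
    context.write(line, offset);
  }
}
```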
Q3. What is an InputSplit in Hadoop?
When a Hadoop job is run, the framework splits the input files into chunks and assigns each split to a mapper to process. Each such chunk is called an InputSplit.
Q4. How is the splitting of a file invoked in the Hadoop framework?
It is invoked by the Hadoop framework at job submission, by calling the getSplits() method of the InputFormat class used by the job (for example, FileInputFormat).
Q5. Consider a case scenario: in an M/R system, …
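As a sketch of how splitting can be influenced in practice (the job name and sizes here are illustrative): with FileInputFormat, the split size computed by getSplits() is bounded by the min/max split-size settings.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class SplitSizeDemo {
  public static Job configure() throws Exception {
    Job job = Job.getInstance(new Configuration(), "split-size-demo");
    // The framework calls the InputFormat's getSplits() at submission time;
    // for FileInputFormat the computed split size is bounded by these settings.
    FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);  // at least 64 MB
    FileInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024); // at most 128 MB
    return job;
  }
}
```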
Q21. What characteristic of the Streaming API makes it flexible enough to run MapReduce jobs in languages like Perl, Ruby, Awk, etc.?
Hadoop Streaming allows arbitrary programs to be used for the Mapper and Reducer phases of a MapReduce job: both mappers and reducers receive their input on stdin and emit output (key, value) pairs on stdout.
Q22. What is the Distributed Cache in Hadoop?
The Distributed Cache is a facility provided by the MapReduce framework to cache files (text, archives, jars and so on) needed by applications during execution of a job. The framework copies the necessary files to a slave node before any tasks for the job are executed on that node.
Q23. What is the benefit of the Distributed Cache? Why not just keep the file in HDFS and have the application read it?
Because the distributed cache is much faster for this access pattern. The file is copied to each task tracker once, at the start of the job; if that task tracker then runs 10 or 100 mappers or reducers, they all share the same local copy. If instead the MR job read the file from HDFS, a task tracker running 100 map tasks would read it from HDFS 100 times, and HDFS is not efficient when used like this.
Q24. What mechanism does the Hadoop framework provide to synchronize changes made to the Distributed Cache during the runtime of an application?
This is a trick question: there is no such mechanism. The Distributed Cache is, by design, read-only during job execution.
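A minimal sketch of how a file gets into the cache (the HDFS path and alias are illustrative; on the older Hadoop 1.x API the equivalent call is DistributedCache.addCacheFile(uri, conf)):

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CacheDemo {
  public static Job configure() throws Exception {
    Job job = Job.getInstance(new Configuration(), "cache-demo");
    // The file is copied to each node once, before any task of the job runs
    // there, and stays read-only for the duration of the job. The fragment
    // after '#' is the local symlink name tasks can open directly.
    job.addCacheFile(new URI("/apps/lookup.txt#lookup"));
    return job;
  }
}
```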
Q25. Have you ever used Counters in Hadoop? Give us an example scenario.
Anybody who claims to have worked on a real Hadoop project is expected to have used counters.
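A typical scenario is tallying malformed records across the whole cluster. A minimal sketch (the enum and class names are illustrative):

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CountingMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
  // Counters are commonly modeled as enums; the framework aggregates
  // increments from all tasks and reports the totals with the job status.
  enum Records { VALID, MALFORMED }

  @Override
  protected void map(LongWritable offset, Text line, Context context)
      throws IOException, InterruptedException {
    if (line.toString().trim().isEmpty()) {
      context.getCounter(Records.MALFORMED).increment(1);
    } else {
      context.getCounter(Records.VALID).increment(1);
      context.write(line, offset);
    }
  }
}
```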
Q26. Is it possible to provide multiple inputs to a Hadoop job? If yes, how can you give multiple directories as input?
Yes. The InputFormat class provides methods to add multiple directories as input to a Hadoop job.
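A minimal sketch with FileInputFormat (the directory names are illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class MultiInputDemo {
  public static Job configure() throws Exception {
    Job job = Job.getInstance(new Configuration(), "multi-input-demo");
    // Each call appends another directory to the job's list of input paths.
    FileInputFormat.addInputPath(job, new Path("/data/2018/01"));
    FileInputFormat.addInputPath(job, new Path("/data/2018/02"));
    // Or several at once, comma-separated.
    FileInputFormat.addInputPaths(job, "/data/2018/03,/data/2018/04");
    return job;
  }
}
```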
Q27. Is it possible to have a Hadoop job write output to multiple directories? If yes, how?
Yes, by using the MultipleOutputs class.
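A minimal reducer sketch using MultipleOutputs (the class name and the "accepted"/"rejected" base paths are illustrative): values are routed to separate subdirectories under the job's output directory.

```java
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

public class RoutingReducer extends Reducer<Text, Text, Text, Text> {
  private MultipleOutputs<Text, Text> mos;

  @Override
  protected void setup(Context context) {
    mos = new MultipleOutputs<>(context);
  }

  @Override
  protected void reduce(Text key, Iterable<Text> values, Context context)
      throws IOException, InterruptedException {
    for (Text value : values) {
      // The baseOutputPath argument becomes a subdirectory/file prefix
      // under the job's output directory.
      String base = value.toString().startsWith("OK") ? "accepted/part" : "rejected/part";
      mos.write(key, value, base);
    }
  }

  @Override
  protected void cleanup(Context context) throws IOException, InterruptedException {
    mos.close(); // flush and close all named outputs
  }
}
```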
Q28. What will a Hadoop job do if you try to run it with an output directory that is already present? Will it
- overwrite it,
- warn you and continue, or
- throw an exception and exit?
The Hadoop job will throw an exception and exit.
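The check happens at submission time, in the output format's checkOutputSpecs(). A common workaround, shown here as a sketch (the helper name is illustrative), is to delete a stale output directory before submitting:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OutputGuard {
  // FileOutputFormat refuses to run over an existing output directory,
  // so remove it first if rerunning the job is intended.
  public static void clearOutput(Configuration conf, String dir) throws Exception {
    Path out = new Path(dir);
    FileSystem fs = out.getFileSystem(conf);
    if (fs.exists(out)) {
      fs.delete(out, true); // recursive delete; use with care
    }
  }
}
```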
Q29. How can you set an arbitrary number of mappers to be created for a job in Hadoop?
This is a trick question: you cannot set it directly. The number of map tasks is determined by the number of input splits.
Q30. How can you set an arbitrary number of reducers to be created for a job in Hadoop?
You can either do it programmatically, using the setNumReduceTasks method of the JobConf class, or set it as a configuration setting.
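A minimal sketch of both approaches (the new-API Job class exposes the same setNumReduceTasks method as the older JobConf; mapreduce.job.reduces is the Hadoop 2 property name, mapred.reduce.tasks on Hadoop 1):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ReducerCountDemo {
  public static Job configure() throws Exception {
    Configuration conf = new Configuration();
    // As a configuration setting:
    conf.setInt("mapreduce.job.reduces", 8);

    Job job = Job.getInstance(conf, "reducer-count-demo");
    // Or programmatically:
    job.setNumReduceTasks(8);
    return job;
  }
}
```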