
About the exam


Dear Participant,
Greetings!
You have completed the "Final Exam" exam.
At this juncture, it is important for you to understand your strengths and focus on them to achieve the best results.
We present here a snapshot of your performance in the "Final Exam" exam in terms of the marks you scored in each section, your question-wise response pattern, and a difficulty-wise analysis of your performance.
This report consists of the following sections, which can be accessed using the left navigation panel:

Overall Performance: This part of the report shows the summary of marks scored by you across all sections of the exam and a comparison of your performance across all sections.

Section-wise Performance: You can click on a section name in the left navigation panel to check your performance in that section. Section-wise performance includes the details of your response to each question and a difficulty-wise analysis of your performance for that section.

NOTE : For Short Answer, Subjective, Typing and Programming type questions, participants will not be able to view the Bar Chart Report in the Performance Analysis.

Subject : Final
Questions Attempted : 40/99
Correct : 31
Score : 31

[Pie chart: Marks Obtained Subject Wise (Final, 100%)]


NOTE : Subjects having negative marks are not considered in the pie chart. The pie chart will not be shown if all subjects contain 0 marks.

Final
The Final section comprises a total of 99 questions with the following difficulty-level distribution:

Difficulty Level : No. of questions
Easy : 0
Moderate : 99
Hard : 0

Question wise details



Question Details

Q1.Key/Value is considered as the Hadoop format.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 1

Option 1 : True
Option 2 : False

Q2.What kind of servers are used for creating a Hadoop cluster?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 2

Option 1 : Server grade machines.
Option 2 : Commodity hardware.
Option 3 : Only supercomputers
Option 4 : None of the above.

Q3.Hadoop was developed by:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 1

Option 1 : Doug Cutting
Option 2 : Lars George
Option 3 : Tom White
Option 4 : Eric Sammer

Q4.One of the features of Hadoop is that you can achieve parallelism.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : False
Option 2 : True

Q5.Hadoop can only work with structured data.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 1

Option 1 : False
Option 2 : True

Q6.Hadoop cluster can scale out:

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 2

Option 1 : By upgrading existing servers
Option 2 : By increasing the area of the cluster.
Option 3 : By downgrading existing servers
Option 4 : By adding more hardware

Q7.Hadoop can solve only use cases involving data from Social media.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : True
Option 2 : False

Q8.Hadoop can be utilized for demographic analysis.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 1

Option 1 : True
Option 2 : False


Q9.Hadoop is inspired by which file system?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : AFS
Option 2 : GFS
Option 3 : MPP
Option 4 : None of the above.

Q10.For Apache Hadoop one needs licensing before leveraging it.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 2

Option 1 : True
Option 2 : False

Q11.HDFS runs in the same namespace as that of local filesystem.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : False
Option 2 : True

Q12.HDFS follows a master-slave architecture.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 2

Option 1 : False
Option 2 : True

Q13.Namenode only responds to:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : FTP calls
Option 2 : SFTP calls.
Option 3 : RPC calls
Option 4 : MPP calls

Q14.Perfect balancing can be achieved in a Hadoop cluster.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : False
Option 2 : True

Q15.What does the Namenode periodically expect from Datanodes?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : EditLogs
Option 2 : Block report and Status
Option 3 : FSImages
Option 4 : None of the above

Q16.After a client requests the JobTracker to run an application, whom does the JobTracker contact?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 3

Option 1 : DataNodes
Option 2 : Tasktracker
Option 3 : Namenode
Option 4 : None of the above.

Q17.Interaction with HDFS is done through which script?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Fsadmin
Option 2 : Hive
Option 3 : Mapreduce
Option 4 : Hadoop

Q18.What is the usage of put command in HDFS?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : It deletes files from one file system to another.
Option 2 : It copies files from one file system to another
Option 3 : It puts configuration parameters in configuration files
Option 4 : None of the above.
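For reference, the put command copies files from the local file system into HDFS. A minimal Java sketch of the same operation through the Hadoop FileSystem API (the source and destination paths are hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PutExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();  // loads core-site.xml / hdfs-site.xml from the classpath
        FileSystem fs = FileSystem.get(conf);      // handle to the configured default file system (HDFS)
        // Programmatic equivalent of `hdfs dfs -put /tmp/sample.txt /user/data/`
        fs.copyFromLocalFile(new Path("/tmp/sample.txt"),   // hypothetical local source
                new Path("/user/data/sample.txt"));         // hypothetical HDFS destination
        fs.close();
    }
}
```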

Q19.Each directory or file has three kinds of permissions:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : read, write, execute
Option 2 : read, write, run
Option 3 : read, write, append
Option 4 : read, write, update

Q20.Mapper output is written to HDFS.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : False
Option 2 : True

Q21.A Reducer writes its output in what format.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Key/Value
Option 2 : Text files
Option 3 : Sequence files
Option 4 : None of the above

Q22.Which of the following is a pre-requisite for Hadoop cluster installation?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 3

Option 1 : Gather Hardware requirement
Option 2 : Gather network requirement
Option 3 : Both
Option 4 : None of the above

Q23.Nagios and Ganglia are tools provided by:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : None of the above

Q24.Which of the following are Cloudera management services?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 :
Option 2 :
Option 3 :
Option 4 :

Activity Monitor
Host Monitor
Both
None of the above

Q25.Which of the following is used to collect information about activities running in a Hadoop cluster?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Report Manager
Option 2 : Cloudera Navigator
Option 3 : Activity Monitor
Option 4 : All of the above

Q26.Which of the following aggregates events and makes them available for
alerting and searching?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Event Server
Option 2 : Host Monitor
Option 3 : Activity Monitor
Option 4 : None of the above

Q27.Which tab in the Cloudera Manager is used to add a service?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 :
Option 2 :
Option 3 :
Option 4 :

Hosts
Activities
Services
None of the above

Q28.Which of the following provides http access to HDFS?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 3

Option 1 : HttpsFS
Option 2 : Name Node
Option 3 : Data Node
Option 4 : All of the above

Q29.Which of the following is used to balance the load in case of addition of a new node and in case of a failure?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Gateway
Option 2 : Balancer
Option 3 : Secondary Name Node
Option 4 : None of the above

Q30.Which of the following is used to designate a host for a particular service?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 :
Option 2 :
Option 3 :
Option 4 :

Gateway
Balancer
Secondary Name Node
All of the above

Q31.Which of the following are the configuration files?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Core-site.xml
Option 2 : Hdfs-site.xml
Option 3 : Both
Option 4 : None of the above
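For reference, core-site.xml and hdfs-site.xml are the standard client configuration files. A minimal Java sketch showing how values from them are read (the property names are real; the printed fallback value is an assumption):

```java
import org.apache.hadoop.conf.Configuration;

public class ConfigPeek {
    public static void main(String[] args) {
        // core-site.xml and hdfs-site.xml are picked up automatically from the classpath.
        Configuration conf = new Configuration();
        // fs.defaultFS is set in core-site.xml; dfs.replication is set in hdfs-site.xml.
        System.out.println("fs.defaultFS    = " + conf.get("fs.defaultFS"));
        System.out.println("dfs.replication = " + conf.get("dfs.replication", "3")); // fallback if unset
    }
}
```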

Q32.Which are the commercial leading Hadoop distributors in the market?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Cloudera, Intel, MapR
Option 2 : MapR, Cloudera, Teradata
Option 3 : Hortonworks, IBM, Cloudera
Option 4 : MapR, Hortonworks, Cloudera

Q33.What are the core Apache components enclosed in its bundle?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 :
Option 2 :
Option 3 :
Option 4 :

HDFS, Map-reduce,YARN,Hadoop Commons


HDFS, NFS, Combiners, Utility Package
HDFS, Map-reduce, Hadoop core
MapR-FS, Map-reduce,YARN,Hadoop Commons

Q34.Apart from its basic components Apache Hadoop also provides:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Apache Hive
Option 2 : Apache Pig
Option 3 : Apache Zookeeper
Option 4 : All the above

Q35.Rolling upgrades are not possible in which of the following?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 :
Option 2 :
Option 3 :
Option 4 :

Cloudera
Hortonworks
MapR
Possible in all of the above

Q36.In which of the following is HBase latency low with respect to the others:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : IBM BigInsights

Q37.MetaData Replication is possible in:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 :
Option 2 :
Option 3 :
Option 4 :

Cloudera
Hortonworks
MapR
Teradata

Q38.Disaster recovery management is not handled by:

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 2

Option 1 : Hortonworks
Option 2 : MapR
Option 3 : Cloudera
Option 4 : Amazon Web Services EMR

Q39.Mirroring concept is possible in Cloudera.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : True
Option 2 : False

Q40.Does MapR support only Streaming Data Ingestion?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 1

Option 1 : True
Option 2 : False


Q41.HCatalog is an open-source metadata framework developed by:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 :
Option 2 :
Option 3 :
Option 4 :

Cloudera
MapR
Hortonworks
Amazon EMR

Q42.BDA can be applied to gain knowledge of user behaviour and prevent customer churn in the Media and Telecommunications industry.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : True
Option 2 : False

Q43.What is the correct sequence of Big Data Analytics stages?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 :
Option 2 :
Option 3 :
Option 4 :

Big Data Production > Big Data Consumption > Big Data Management
Big Data Management > Big Data Production > Big Data Consumption
Big Data Production > Big Data Management > Big Data Consumption
None of these

Q44.Big Data Consumption involves:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Mining
Option 2 : Analytic
Option 3 : Search and Enrichment
Option 4 : All of the above

Q45.Big Data Integration and Data Mining are the phases of Big Data
Management.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 1

Option 1 : True
Option 2 : False

Q46.RDBMS, Social Media data, Sensor data are the possible input sources to a
big data environment.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 1

Option 1 : True
Option 2 : False

Q47.Which of the following types of data is it not possible to store in a big data environment and then process/parse?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 4

Option 1 : XML/JSON type of data
Option 2 : RDBMS
Option 3 : Semi-structured data
Option 4 : None of the above

Q48.A software framework for writing applications that process vast amounts of data in parallel is known as:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Map-reduce
Option 2 : Hive
Option 3 : Impala
Option 4 : None of the above

Q49.In the proper flow of map-reduce, the reducer will always be executed after the mapper.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 1

Option 1 : True
Option 2 : False

Q50.Which of the following are the features of Map-reduce?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 4

Option 1 : Automatic parallelization and distribution
Option 2 : Fault-Tolerance
Option 3 : Platform independent
Option 4 : All of the above

Q51.Where does the intermediate output of the mapper get written?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 4

Option 1 : Local disk of node where it is executed.
Option 2 : HDFS of node where it is executed.
Option 3 : On a remote server outside the cluster.
Option 4 : Mapper output gets written to the local disk of Name node machine.

Q52.Reducer is required in map-reduce job for:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 :
Option 2 :
Option 3 :
Option 4 :

It combines all the intermediate data collected from mappers.


It reduces the amount of data by half of what is supplied to it.
Both a and b
None of the above

Q53.The output of every map is passed to which component?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2

Option 1 : Partitioner
Option 2 : Combiner
Option 3 : Mapper
Option 4 : None of the above

Q54.Data Locality concept is used for:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 :
Option 2 :
Option 3 :
Option 4 :

Localizing data
Avoiding network traffic in hadoop system
Both A and B
None of the above

Q55.The number of files in the output of a map-reduce job depends on:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 1

Option 1 : No. of reducers used for the process
Option 2 : Size of the data
Option 3 : Both A and B
Option 4 : None of the above

Q56.Input format of the map-reduce job is specified in which class?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 3

Option 1 : Combiner class
Option 2 : Reducer class
Option 3 : Mapper class
Option 4 : Any of the above

Q57.The intermediate keys, and their value lists, are passed to the Reducer in
sorted key order.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : True
Option 2 : False
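For reference, this sorted-key contract is visible in the Reducer API: reduce() is invoked once per key, in sorted key order, with the full list of values for that key. A minimal sketch (a sum reducer, chosen here purely as an assumed example):

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {   // all values collected from the mappers for this key
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));  // one output record per key
    }
}
```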

Q58.In which stage of the map-reduce job data is transferred between mapper
and reducer?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Transfer
Option 2 : Combiner
Option 3 : Distributed Cache
Option 4 : Shuffle and Sort

Q59.Maximum three reducers can run at any time in a MapReduce Job.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : True
Option 2 : False

Q60.Functionality of the Jobtracker is to:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 1

Option 1 : Coordinate the job run
Option 2 : Sorting the output
Option 3 : Both A and B
Option 4 : None of the above

Q61.The submit() method on Job creates an internal JobSubmitter instance and calls _____ on it.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : jobSubmitInternal()
Option 2 : internalJobSubmit()
Option 3 : submitJobInternal()
Option 4 : None of these
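For reference, a minimal, self-contained driver sketch showing where the submission path sits. It uses the identity Mapper and Reducer so it compiles on its own; waitForCompletion() internally calls submit(), which hands the job to a JobSubmitter via submitJobInternal():

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SubmitDemo {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "submit demo");
        job.setJarByClass(SubmitDemo.class);
        job.setMapperClass(Mapper.class);    // identity mapper; a real job plugs in its own
        job.setReducerClass(Reducer.class);  // identity reducer
        job.setOutputKeyClass(LongWritable.class); // matches TextInputFormat's key type
        job.setOutputValueClass(Text.class);       // matches TextInputFormat's value type
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // waitForCompletion() calls submit() and then polls progress until the job finishes.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```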

Q62.Which method polls the job's progress and after how many seconds?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 :
Option 2 :
Option 3 :
Option 4 :

WaitForCompletion() and after each second


WaitForCompletion() after every 15 seconds
Not possible to poll
None of the above

Q63.Job Submitter tells the task tracker that the job is ready for execution.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : True
Option 2 : False

Q64.Hadoop 1.0 runs 3 instances of job tracker for parallel execution on hadoop
cluster.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : True
Option 2 : False

Q65.Map and Reduce tasks are created in job initialization phase.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : True
Option 2 : False

Q66.Heartbeats received after how many seconds help the JobTracker decide regarding the health of a TaskTracker?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : After every 3 seconds
Option 2 : After every 1 second
Option 3 : After every 60 seconds
Option 4 : None of the above

Q67.The TaskTracker has a fixed number of slots assigned for map and reduce tasks.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : True
Option 2 : False

Q68.To improve the performance of the map-reduce task, the JAR that contains the map-reduce code is pushed to each slave node over HTTP.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 :
Option 2 :

True
False

Q69.Map-reduce can take which type of format as input?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Text
Option 2 : CSV
Option 3 : Arbitrary
Option 4 : None of these

Q70.Input files can be located in HDFS or the local system for map-reduce.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 2

Option 1 : True
Option 2 : False

Q71.Is there any default InputFormat for input files in map-reduce process?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : KeyValueInputFormat
Option 2 : TextInputFormat.
Option 3 : A and B
Option 4 : None of these

Q72.An InputFormat is a class that provides the following functionality:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Selects the files or other objects that should be used for input
Option 2 : Defines the InputSplits that break a file into tasks
Option 3 : Provides a factory for RecordReader objects that read the file
Option 4 : All of the above
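For reference, the InputFormat is selected on the Job. A minimal sketch switching from the default TextInputFormat (byte offset as key, line as value) to KeyValueTextInputFormat (line split at the first tab into key and value):

```java
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;

public class InputFormatChoice {
    public static void configure(Job job) {
        // The InputFormat selects the input files, defines the InputSplits,
        // and supplies the RecordReader objects that read each split.
        job.setInputFormatClass(KeyValueTextInputFormat.class);
    }
}
```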

Q73.An InputSplit describes a unit of work that comprises a ____ map task in a
MapReduce program.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : One
Option 2 : Two
Option 3 : Three
Option 4 : None of these

Q74.The FileInputFormat and its descendants break a file up into ____ MB chunks.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 2

Option 1 : 128
Option 2 : 64
Option 3 : 32
Option 4 : 256
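For reference, the chunking follows the HDFS block size (64 MB in older Hadoop releases, 128 MB in later ones), and FileInputFormat lets a job override the bounds. A minimal sketch with assumed sizes:

```java
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class SplitSizing {
    public static void configure(Job job) {
        FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);   // assumed 64 MB lower bound
        FileInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024);  // assumed 128 MB upper bound
    }
}
```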

Q75.What allows several map tasks to operate on a single file in parallel?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 :
Option 2 :
Option 3 :
Option 4 :

Processing of a file in chunks


Configuration file properties
Both A and B
None of the above

Q76.The Record Reader is invoked ________ on the input until the entire
InputSplit has been consumed.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 3

Option 1 : Once
Option 2 : Twice
Option 3 : Repeatedly
Option 4 : None of these

Q77.Which of the following is KeyValueTextInputFormat?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 1

Option 1 : Key is separated from the value by Tab
Option 2 : Data is specified in binary sequence
Option 3 : Both A and B
Option 4 : None of the above

Q78.In the map-reduce programming model, mappers can communicate with each other:

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 1
Option 1 :
Option 2 :

True
False

Q79.A user can define their own partitioner class.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : True
Option 2 : False
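For reference, a user-defined partitioner is written by extending Partitioner and overriding getPartition(). A minimal sketch that routes keys by their first character (the routing rule itself is an arbitrary example):

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class FirstCharPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        if (key.getLength() == 0) {
            return 0;  // route empty keys to the first reducer
        }
        // Mask the sign bit so the modulo result is always a valid partition index.
        return (key.charAt(0) & Integer.MAX_VALUE) % numPartitions;
    }
}
```

It would be registered on the job with job.setPartitionerClass(FirstCharPartitioner.class).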

Q80.The OutputFormat class is a factory for RecordWriter objects, which are used to write the individual records to the files as directed by the OutputFormat:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : True
Option 2 : False

Q81.Which of the following are part of the Hadoop ecosystem?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Talend, MapR, NFS
Option 2 : Mysql, Shell
Option 3 : Pig, Hive, Hbase
Option 4 : None of the above

Q82.The default Metastore location for Hive is:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 :
Option 2 :
Option 3 :
Option 4 :

Mysql
Derby
PostgreSQL
None of the above

Q83.Extend the following class to write a User Defined Function in Hive.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : HiveMapper
Option 2 : Eval
Option 3 : UDF
Option 4 : None of the above
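For reference, a Hive user-defined function extends the UDF class and supplies an evaluate() method. A minimal lower-casing sketch (the class name is an arbitrary example):

```java
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

public final class LowerUdf extends UDF {
    public Text evaluate(final Text s) {
        if (s == null) {
            return null;  // pass nulls through unchanged
        }
        return new Text(s.toString().toLowerCase());
    }
}
```

After packaging it into a JAR, it would be registered in Hive with a CREATE TEMPORARY FUNCTION statement.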

Q84.Which component of the Hadoop ecosystem supports updates?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Zookeeper
Option 2 : Hive
Option 3 : Pig
Option 4 : Hbase

Q85.Which Hadoop component should be used if a join of datasets is required?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

Option 1 : Hbase
Option 2 : Hive
Option 3 : Zookeeper
Option 4 : None of the above

Q86.Which Hadoop component can be used for ETL?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Pig
Option 2 : Zookeeper
Option 3 : Hbase
Option 4 : None of the above

Q87.Which Hadoop component is best suited for pulling data from the web?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Hive
Option 2 : Zookeeper
Option 3 : Hbase
Option 4 : Flume

Q88.Which Hadoop component can be used to transfer data from a relational DB to HDFS?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Zookeeper
Option 2 : Pig
Option 3 : Sqoop
Option 4 : None of the above

Q89.In an application, more than one Hadoop component cannot be used on top of HDFS.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 :
Option 2 :

True
False

Q90.Hbase supports join.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : True
Option 2 : False

Q91.Pig can work only with data present in HDFS.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 :
Option 2 :

True
False

Q92.Which tool out of the following can be used for an OLTP application?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Pentaho
Option 2 : Hive
Option 3 : Hbase
Option 4 : None of the above

Q93.Which tool is best suited for real time writes?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Pig
Option 2 : Hive
Option 3 : Hbase
Option 4 : Cassandra

Q94.Which of the following Hadoop components is called the ETL of Hadoop?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Pig
Option 2 : Hbase
Option 3 : Talend
Option 4 : None of the above

Q95.Hadoop can completely replace traditional DBs.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : True
Option 2 : False

Q96.Zookeeper can also be used for data transfer.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : False
Option 2 : True

Q97.Map-reduce cannot be tested on data/files present in local file system.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 1
Option 1 : True
Option 2 : False

Q98.Hive was developed by:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 4

Option 1 : Tom White
Option 2 : Cloudera
Option 3 : Doug Cutting
Option 4 : Facebook


Q99.MRv1 programs cannot be run on top of clusters configured for MRv2.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False




Subject : Final
Questions Attempted : 40/99
Correct : 17
Score : 17

[Pie chart: Marks Obtained Subject Wise (Final, 100%)]

NOTE : Subjects having negative marks are not considered in the pie chart. The pie chart will not be shown if all subjects contain 0 marks.

Final
The Final section comprises a total of 99 questions with the following difficulty-level distribution:

Difficulty Level : No. of questions
Easy : 0
Moderate : 99
Hard : 0

Question wise details



Question Details

Q1.Key/Value is considered as the Hadoop format.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 1

Option 1 : True
Option 2 : False

Q2.What kind of servers are used for creating a Hadoop cluster?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Server grade machines.
Option 2 : Commodity hardware.
Option 3 : Only supercomputers
Option 4 : None of the above.

Q3.Hadoop was developed by:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 1

Option 1 : Doug Cutting
Option 2 : Lars George
Option 3 : Tom White
Option 4 : Eric Sammer

Q4.One of the features of Hadoop is that you can achieve parallelism.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : False
Option 2 : True

Q5.Hadoop can only work with structured data.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 1

Option 1 : False
Option 2 : True

Q6.Hadoop cluster can scale out:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 :

By upgrading existing servers

Option 2 :
Option 3 :
Option 4 :

By increasing the area of the cluster.


By downgrading existing servers
By adding more hardware

Q7.Hadoop can solve only use cases involving data from Social media.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 2

Option 1 : True
Option 2 : False

Q8.Hadoop can be utilized for demographic analysis.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : True
Option 2 : False

Q9.Hadoop is inspired by which file system?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 3

Option 1 : AFS
Option 2 : GFS
Option 3 : MPP
Option 4 : None of the above.

Q10.For Apache Hadoop one needs licensing before leveraging it.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : True
Option 2 : False

Q11.HDFS runs in the same namespace as that of local filesystem.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : False
Option 2 : True

Q12.HDFS follows a master-slave architecture.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 :
Option 2 :

False
True

Q13.Namenode only responds to:

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 4

Option 1 : FTP calls
Option 2 : SFTP calls.
Option 3 : RPC calls
Option 4 : MPP calls

Q14.Perfect balancing can be achieved in a Hadoop cluster.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 2

Option 1 : False
Option 2 : True

Q15.What does the Namenode periodically expect from Datanodes?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : EditLogs
Option 2 : Block report and Status
Option 3 : FSImages
Option 4 : None of the above

Q16.After a client requests the JobTracker to run an application, whom does the JobTracker contact?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : DataNodes
Option 2 : Tasktracker
Option 3 : Namenode
Option 4 : None of the above.


Q17.Interaction with HDFS is done through which script?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Fsadmin
Option 2 : Hive
Option 3 : Mapreduce
Option 4 : Hadoop

Q18.What is the usage of put command in HDFS?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : It deletes files from one file system to another.
Option 2 : It copies files from one file system to another
Option 3 : It puts configuration parameters in configuration files
Option 4 : None of the above.

Q19.Each directory or file has three kinds of permissions:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 1

Option 1 : read, write, execute
Option 2 : read, write, run
Option 3 : read, write, append
Option 4 : read, write, update

Q20.Mapper output is written to HDFS.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 2

Option 1 : False
Option 2 : True

Q21.A Reducer writes its output in what format.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Key/Value
Option 2 : Text files
Option 3 : Sequence files
Option 4 : None of the above

Q22.Which of the following is a pre-requisite for Hadoop cluster installation?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 4
Option 1 : Gather Hardware requirement
Option 2 : Gather network requirement
Option 3 : Both
Option 4 : None of the above

Q23.Nagios and Ganglia are tools provided by:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : None of the above

Q24.Which of the following are Cloudera management services?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Activity Monitor
Option 2 : Host Monitor
Option 3 : Both
Option 4 : None of the above

Q25.Which of the following is used to collect information about activities running in a Hadoop cluster?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 1
Option 1 : Report Manager
Option 2 : Cloudera Navigator
Option 3 : Activity Monitor
Option 4 : All of the above

Q26.Which of the following aggregates events and makes them available for
alerting and searching?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Event Server
Option 2 : Host Monitor
Option 3 : Activity Monitor
Option 4 : None of the above

Q27.Which tab in the Cloudera Manager is used to add a service?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 3

Option 1 : Hosts
Option 2 : Activities
Option 3 : Services
Option 4 : None of the above

Q28.Which of the following provides http access to HDFS?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : HttpsFS
Option 2 : Name Node
Option 3 : Data Node
Option 4 : All of the above

Q29.Which of the following is used to balance the load in case of addition of a new node and in case of a failure?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Gateway
Option 2 : Balancer
Option 3 : Secondary Name Node
Option 4 : None of the above

Q30.Which of the following is used to designate a host for a particular service?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Gateway
Option 2 : Balancer
Option 3 : Secondary Name Node
Option 4 : All of the above

Q31.Which of the following are the configuration files?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Core-site.xml
Option 2 : Hdfs-site.xml
Option 3 : Both
Option 4 : None of the above

Q32.Which are the commercial leading Hadoop distributors in the market?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 3

Option 1 : Cloudera, Intel, MapR
Option 2 : MapR, Cloudera, Teradata
Option 3 : Hortonworks, IBM, Cloudera
Option 4 : MapR, Hortonworks, Cloudera

Q33.What are the core Apache components enclosed in its bundle?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 3

Option 1 : HDFS, Map-reduce, YARN, Hadoop Commons
Option 2 : HDFS, NFS, Combiners, Utility Package
Option 3 : HDFS, Map-reduce, Hadoop core
Option 4 : MapR-FS, Map-reduce, YARN, Hadoop Commons

Q34.Apart from its basic components Apache Hadoop also provides:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 4

Option 1 : Apache Hive
Option 2 : Apache Pig
Option 3 : Apache Zookeeper
Option 4 : All the above

Q35.Rolling upgrades are not possible in which of the following?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 2

Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : Possible in all of the above

Q36.In which of the following is HBase latency low with respect to the others:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : IBM BigInsights

Q37.MetaData Replication is possible in:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : Teradata

Q38.Disaster recovery management is not handled by:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Hortonworks
Option 2 : MapR
Option 3 : Cloudera
Option 4 : Amazon Web Services EMR


Q39.Mirroring concept is possible in Cloudera.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q40.Does MapR support only Streaming Data Ingestion?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : True
Option 2 : False

Q41.HCatalog is an open-source metadata framework developed by:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Cloudera
Option 2 : MapR
Option 3 : Hortonworks
Option 4 : Amazon EMR

Q42.BDA can be applied to gain knowledge of user behaviour and prevent customer churn in the Media and Telecommunications industry.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : True
Option 2 : False

Q43.What is the correct sequence of Big Data Analytics stages?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Big Data Production > Big Data Consumption > Big Data Management
Option 2 : Big Data Management > Big Data Production > Big Data Consumption
Option 3 : Big Data Production > Big Data Management > Big Data Consumption
Option 4 : None of these

Q44.Big Data Consumption involves:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Mining
Option 2 : Analytic
Option 3 : Search and Enrichment
Option 4 : All of the above

Q45.Big Data Integration and Data Mining are the phases of Big Data
Management.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q46.RDBMS, Social Media data, Sensor data are the possible input sources to a
big data environment.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : True
Option 2 : False

Q47.Which of the following types of data is it not possible to store in a big data environment and then process/parse?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : XML/JSON type of data
Option 2 : RDBMS
Option 3 : Semi-structured data
Option 4 : None of the above

Q48.A software framework for writing applications that process vast amounts of data in parallel is known as:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

Option 1 : Map-reduce
Option 2 : Hive
Option 3 : Impala
Option 4 : None of the above

Q49.In the proper flow of map-reduce, the reducer will always be executed after the mapper.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q50.Which of the following are the features of Map-reduce?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Automatic parallelization and distribution
Option 2 : Fault-Tolerance
Option 3 : Platform independent
Option 4 : All of the above

Q51.Where does the intermediate output of the mapper get written?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Local disk of node where it is executed.
Option 2 : HDFS of node where it is executed.
Option 3 : On a remote server outside the cluster.
Option 4 : Mapper output gets written to the local disk of Name node machine.


Q52.Reducer is required in map-reduce job for:

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 3
Option 1 : It combines all the intermediate data collected from mappers.
Option 2 : It reduces the amount of data by half of what is supplied to it.
Option 3 : Both a and b
Option 4 : None of the above

Q53.The output of every map is passed to which component?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 3

Option 1 : Partitioner
Option 2 : Combiner
Option 3 : Mapper
Option 4 : None of the above

Q54.Data Locality concept is used for:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Localizing data
Option 2 : Avoiding network traffic in Hadoop system
Option 3 : Both A and B
Option 4 : None of the above

Q55.The number of files in the output of a map-reduce job depends on:

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 3

Option 1 : No. of reducers used for the process
Option 2 : Size of the data
Option 3 : Both A and B
Option 4 : None of the above

Q56.Input format of the map-reduce job is specified in which class?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Combiner class
Option 2 : Reducer class
Option 3 : Mapper class
Option 4 : Any of the above

Q57.The intermediate keys, and their value lists, are passed to the Reducer in
sorted key order.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q58.In which stage of the map-reduce job data is transferred between mapper
and reducer?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Transfer
Option 2 : Combiner
Option 3 : Distributed Cache
Option 4 : Shuffle and Sort

Q59.Maximum three reducers can run at any time in a MapReduce Job.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 1
Option 1 : True
Option 2 : False

Q60.Functionality of the Jobtracker is to:

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 3

Option 1 : Coordinate the job run
Option 2 : Sorting the output
Option 3 : Both A and B
Option 4 : None of the above

Q61.The submit() method on Job creates an internal JobSubmitter instance and calls _____ on it.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : jobSubmitInternal()
Option 2 : internalJobSubmit()
Option 3 : submitJobInternal()
Option 4 : None of these

Q62.Which method polls the job's progress and after how many seconds?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : WaitForCompletion() and after each second
Option 2 : WaitForCompletion() after every 15 seconds
Option 3 : Not possible to poll
Option 4 : None of the above

Q63.Job Submitter tells the task tracker that the job is ready for execution.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 1

Option 1 : True
Option 2 : False

Q64.Hadoop 1.0 runs 3 instances of job tracker for parallel execution on hadoop
cluster.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 1

Option 1 : True
Option 2 : False

Q65.Map and Reduce tasks are created in job initialization phase.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 2

Option 1 : True
Option 2 : False

Q66.Heartbeats received after how many seconds help the JobTracker decide regarding the health of a TaskTracker?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 :
Option 2 :
Option 3 :
Option 4 :

After every 3 seconds


After every 1 second
After every 60 seconds
None of the above

Q67.The TaskTracker has a fixed number of slots assigned for map and reduce tasks.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 1

Option 1 : True
Option 2 : False

Q68.To improve the performance of the map-reduce task, the JAR that contains the map-reduce code is pushed to each slave node over HTTP.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : True
Option 2 : False

Q69.Map-reduce can take which type of format as input?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Text
Option 2 : CSV
Option 3 : Arbitrary
Option 4 : None of these

Q70.Input files can be located in HDFS or the local system for map-reduce.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q71.Is there any default InputFormat for input files in map-reduce process?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : KeyValueInputFormat
Option 2 : TextInputFormat.
Option 3 : A and B
Option 4 : None of these

Q72.An InputFormat is a class that provides the following functionality:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Selects the files or other objects that should be used for input
Option 2 : Defines the InputSplits that break a file into tasks
Option 3 : Provides a factory for RecordReader objects that read the file
Option 4 : All of the above

Q73.An InputSplit describes a unit of work that comprises a ____ map task in a
MapReduce program.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : One
Option 2 : Two
Option 3 : Three
Option 4 : None of these

Q74.The FileInputFormat and its descendants break a file up into ____ MB chunks.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : 128
Option 2 : 64
Option 3 : 32
Option 4 : 256

Q75.What allows several map tasks to operate on a single file in parallel?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Processing of a file in chunks
Option 2 : Configuration file properties
Option 3 : Both A and B
Option 4 : None of the above

Q76.The Record Reader is invoked ________ on the input until the entire
InputSplit has been consumed.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 3
Option 1 :
Option 2 :
Option 3 :
Option 4 :

Once
Twice
Repeatedly
None of these

Q77.Which of the following is KeyValueTextInputFormat?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 3
Option 1 : Key is separated from the value by Tab
Option 2 : Data is specified in binary sequence
Option 3 : Both A and B
Option 4 : None of the above

Q78.In the map-reduce programming model, mappers can communicate with each other:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 2

Option 1 : True
Option 2 : False

Q79.A user can define their own partitioner class.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 2
Option 1 : True
Option 2 : False

Q80.The OutputFormat class is a factory for RecordWriter objects, which are used to write the individual records to the files as directed by the OutputFormat:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

Option 1 : True
Option 2 : False

Q81.Which of the following are part of the Hadoop ecosystem?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 3
Option 1 : Talend, MapR, NFS
Option 2 : Mysql, Shell
Option 3 : Pig, Hive, Hbase
Option 4 : None of the above

Q82.The default Metastore location for Hive is:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Mysql
Option 2 : Derby
Option 3 : PostgreSQL
Option 4 : None of the above

Q83.Extend the following class to write a User Defined Function in Hive.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 1

Option 1 : HiveMapper
Option 2 : Eval
Option 3 : UDF
Option 4 : None of the above


Q84.Which component of the Hadoop ecosystem supports updates?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 :
Option 2 :
Option 3 :
Option 4 :

Zookeeper
Hive
Pig
Hbase

Q85.Which Hadoop component should be used if a join of datasets is required?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 3

Option 1 : Hbase
Option 2 : Hive
Option 3 : Zookeeper
Option 4 : None of the above

Q86.Which Hadoop component can be used for ETL?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Pig
Option 2 : Zookeeper
Option 3 : Hbase
Option 4 : None of the above

Q87.Which Hadoop component is best suited for pulling data from the web?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 4

Option 1 : Hive
Option 2 : Zookeeper
Option 3 : Hbase
Option 4 : Flume

Q88.Which Hadoop component can be used to transfer data from a relational DB to HDFS?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Zookeeper
Option 2 : Pig
Option 3 : Sqoop
Option 4 : None of the above

Q89.In an application, more than one Hadoop component cannot be used on top of HDFS.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 :
Option 2 :

True
False

Q90.Hbase supports join.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 1

Option 1 : True
Option 2 : False

Q91.Pig can work only with data present in HDFS.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : True
Option 2 : False

Q92.Which tool out of the following can be used for an OLTP application?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 2

Option 1 : Pentaho
Option 2 : Hive
Option 3 : Hbase
Option 4 : None of the above

Q93.Which tool is best suited for real time writes?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 1
Option 1 :

Pig

Option 2 :
Option 3 :
Option 4 :

Hive
Hbase
Cassandra

Q94.Which of the following Hadoop components is called the ETL of Hadoop?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0
Response : 3

Option 1 : Pig
Option 2 : Hbase
Option 3 : Talend
Option 4 : None of the above

Q95.Hadoop can completely replace traditional DBs.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : True
Option 2 : False

Q96.Zookeeper can also be used for data transfer.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : False
Option 2 : True

Q97.Map-reduce cannot be tested on data/files present in local file system.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : True
Option 2 : False

Q98.Hive was developed by:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0
Response :

Option 1 : Tom White
Option 2 : Cloudera
Option 3 : Doug Cutting
Option 4 : Facebook

Q99.MRv1 programs cannot be run on top of clusters configured for MRv2.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

