
KUMAR SHANU
Blue Ridge Society, Hinjewadi, Pune
9146030312 | Kmrshanu3@gmail.com

OBJECTIVE
Seeking a position of responsibility in Big Data and Analytics where I can apply my professional skills and efficiency, communicate my ideas, and commit myself to achieving organizational objectives through team effort, a positive attitude, and strong performance.

EDUCATION
B.E. | LNCT Group of Colleges
2011 – 2015
Electronics and Communication Engineering from RGPV University, Bhopal, with a 7.5 CGPA.
AISSCE (XII) | Public School
2009 – 2010
Completed Class XII under the CBSE board with an aggregate of 72%.
AISSE (X) | SAV
2007 – 2008
Completed Class X under the CBSE board with an aggregate of 87%.

EXPERIENCE
Environment: Hadoop, HDFS, Spark, Pig, Hive, Sqoop, Flume, MongoDB, HBase, Oozie, Scala, Python, Linux shell scripting, Tableau, vanilla Hadoop distribution, Cloudera, Hortonworks.
Hadoop Developer | Infosys
• Experience working with various Hadoop ecosystem components.
• Involved in requirement analysis, design, and development.
• Used Sqoop (version 1.4.3) to import/export batch data between RDBMS and HDFS.
• Created and ran Sqoop jobs with incremental load to populate Hive external tables.
• Experience with Flume (version 1.6) collecting data from various sources and delivering logs and real-time data to sinks.
• Experience writing Pig (version 0.11) scripts to transform raw data from several data sources into baseline data.
• Involved in creating Hive (version 0.10) tables, loading them with data, and writing Hive queries.
• Very good understanding of partitioning and bucketing concepts in Hive; designed both managed and external tables in Hive to optimize performance (see the sketch after this list).
• Experience using various file formats, viz. text files, SequenceFile, RCFile, Avro, and Parquet.
• Good working knowledge of MongoDB: performed CRUD operations, took MongoDB backups, and imported data from MongoDB into Hive and Pig using Mongo connectors.
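A minimal sketch of the partitioned external-table pattern described above, issued as HiveQL through Spark's HiveContext for a self-contained program (the table name, columns, paths, and storage format are all hypothetical; the original work used Hive directly, so treat this as illustrative only):

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql.hive.HiveContext

  object HivePartitionSketch {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext(new SparkConf().setAppName("hive-partition-sketch"))
      val hive = new HiveContext(sc)

      // External table over a Sqoop-imported HDFS directory: dropping the
      // table removes only the metadata and leaves the files in place.
      hive.sql("""
        CREATE EXTERNAL TABLE IF NOT EXISTS txn (
          txn_id BIGINT,
          amount DOUBLE
        )
        PARTITIONED BY (txn_date STRING)
        CLUSTERED BY (txn_id) INTO 8 BUCKETS
        STORED AS PARQUET
        LOCATION '/data/txn'
      """)

      // Register a new partition after each incremental Sqoop load.
      hive.sql("ALTER TABLE txn ADD IF NOT EXISTS PARTITION (txn_date = '2015-01-01')")

      sc.stop()
    }
  }

Partition pruning on txn_date then lets queries scan only the relevant directories, which is the performance benefit the managed/external table design above was aiming for.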

Roles and Responsibilities:

Transient Phase:
We implemented a POC for the Energy Card data model, which had previously been designed in MySQL and which, as per the requirement, we migrated entirely to Hadoop with proper reporting. In this POC we extracted data from MySQL and loaded it into HDFS using the Sqoop data-migration tool.
Pre-processing, transformation, and analysis of the data were done in Hive. We integrated Hive with a reporting tool (Tableau) and generated multiple reports as per the requirement (a sketch of the flow follows).
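A hedged sketch of that MySQL-to-Hive reporting flow (connection details, table, and column names are hypothetical), again expressed through HiveContext so the program is self-contained, with the Sqoop step shown as a comment since it runs from the shell:

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql.hive.HiveContext

  object EnergyCardPocSketch {
    def main(args: Array[String]): Unit = {
      // Step 1 (shell): pull the MySQL table into HDFS with Sqoop, e.g.
      //   sqoop import --connect jdbc:mysql://dbhost/energy \
      //     --username etl --password-file /user/etl/.pw \
      //     --table card_usage --target-dir /data/energy/card_usage
      val sc = new SparkContext(new SparkConf().setAppName("energy-card-poc"))
      val hive = new HiveContext(sc)

      // Step 2: expose the Sqoop-imported files to Hive as an external table
      // (Sqoop writes comma-delimited text by default).
      hive.sql("""
        CREATE EXTERNAL TABLE IF NOT EXISTS card_usage (
          card_id BIGINT, region STRING, units DOUBLE
        )
        ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
        LOCATION '/data/energy/card_usage'
      """)

      // Step 3: the kind of aggregate a Tableau report would sit on.
      hive.sql(
        "SELECT region, SUM(units) AS total_units FROM card_usage GROUP BY region"
      ).show()

      sc.stop()
    }
  }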

Hadoop and Spark Developer | Cognizant

• Training on SparkContext, RDDs, Spark SQL, DataFrames, and pair RDDs.
• Loaded data into Spark RDDs and performed in-memory computation to generate the output response.
• Worked on actions and transformations.
• Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs in Scala (see the sketch after this list).
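To illustrate that last point, a minimal sketch (the file path and field layout are assumptions) rewriting a Hive-style GROUP BY as RDD transformations, with the final collect as the action that triggers the computation:

  import org.apache.spark.{SparkConf, SparkContext}

  object HiveToRddSketch {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext(new SparkConf().setAppName("hive-to-rdd-sketch"))

      // Equivalent of: SELECT region, SUM(units) FROM card_usage GROUP BY region
      val totals = sc.textFile("hdfs:///data/energy/card_usage") // rows: card_id,region,units
        .map(_.split(","))               // transformation: parse each CSV row
        .map(f => (f(1), f(2).toDouble)) // transformation: (region, units) pairs
        .reduceByKey(_ + _)              // transformation: sum units per region

      totals.collect().foreach(println)  // action: materialize and print the result

      sc.stop()
    }
  }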

SKILLS
• Worked on Big Data ecosystems: Hadoop, MapReduce, HDFS, Hive, Pig, Sqoop, Flume, Oozie, and Hue.
• Training on: Apache Spark, ZooKeeper, Apache Ambari.
• Data platforms: Cloudera (CDH 4) and Hortonworks (HDP 2.6).
• Platforms: Windows and Linux (Ubuntu).
• Databases: HBase, MySQL, SQL Server.
• NoSQL database with hands-on experience: MongoDB.
• BI tool: Tableau.
• Knowledge of shell scripting.

ACTIVITIES
• Food lover and travel enthusiast.
• Passionate about movies, politics, and cricket.
• Excellent communication, interpersonal, and analytical skills.
• Hardworking and enthusiastic, with an exceptional ability to learn new concepts.

DECLARATION
I hereby declare that all the information furnished by me is true to the best of my
knowledge.

(Kumar Shanu)
