Big Data Hadoop Sample Resumes

Need a Big Data Hadoop resume? Check out our Hadoop Admin resume: FREE downloadable, easily editable resume templates. Download now.

Just 3 steps to unlock your resume templates.

  1. Click the download button for the template of your choice.
  2. Enter your email ID to receive the resume document.
  3. Download the file, edit it to fit your requirements, and get ready for the interview.

For more information on the Big Data Hadoop Training, fill in the form and our experts will contact you shortly with complete details on the Big Data Hadoop course.

Register here for the Big Data Hadoop Training Course from leading industry experts, where you can get Technical Job Support* and Live Mock Interview Sessions*.

Moreover, you can also benefit from our free training material, which includes:

  1. Comprehensive guide to Big Data Hadoop Training Tutorials
  2. Advanced Big Data Hadoop Training Interview Questions and Answers
  3. Big Data Hadoop Training Community Questions and Answers

Whatever technology you adopt, our experts will assist and guide you to achieve your dream job.

Sample Resumes

Designation – Hadoop Developer

Location – Mumbai

About employer – xxxxxxx

Job description:

Responsibilities

Qualification and Skills Required

·         1-2 years of experience defining, designing, and developing Big Data applications using Hadoop and its related technologies, such as MapReduce, Pig, Hive, HBase, Sqoop, and Flume.

·         Knowledge of the various Hadoop ecosystem components and practical experience with at least one of them.

·         Must have implemented at least one project processing large data sets with MapReduce jobs in a distributed Hadoop cluster environment.

·         Good analytical skills and the ability to understand problems and design and develop solutions.

·         Sound knowledge of SQL.

·         Bachelor's degree in computer science, computer engineering, or an equivalent degree required.

Desired Skills and Qualifications:

·         Bachelor’s degree in Computer Science or related discipline

·         2+ years of experience as a Hadoop Developer

·         Should have strong analytical and problem-solving skills.

·         Self-starter with the ability to take ownership and to anticipate and handle critical situations.

·         Ability to thrive in a fast-paced, flexible, and collaborative work environment.

Interested candidates can apply by mailing their CV to [email protected] with the subject line "Hadoop Developer – Gracenote – Mumbai".

 

 

Overview

3 years of experience across the software development life cycle: design, development, and support of systems application architecture.

·         More than two years of experience in Hadoop development/administration built on six years of experience in Java application development.

·         Good knowledge of the Hadoop ecosystem, HDFS, Big Data, and RDBMS.

·         Experienced in working with Big Data and the Hadoop Distributed File System (HDFS).

·         Hands-on experience working with ecosystem components such as Hive, Pig, Sqoop, MapReduce, Flume, and Oozie.

·         Strong knowledge of Hadoop, Hive, and Hive's analytical functions.

·         Captured data with Sqoop from existing databases that provide SQL interfaces.

·         Efficient in building Hive, Pig, and MapReduce scripts (a minimal MapReduce sketch follows this overview).

·         Implemented proofs of concept on the Hadoop stack and various big data analytics tools, including migration from databases (e.g., Teradata, Oracle, MySQL) to Hadoop.

·         Successfully loaded files into Hive and HDFS from MongoDB, Cassandra, and HBase.

·         Loaded datasets into Hive for ETL operations.

·         Good knowledge of Hadoop cluster architecture and cluster monitoring.

·         Experience using DbVisualizer, ZooKeeper, and Cloudera Manager.

·         Hands-on experience with IDEs such as Eclipse and Visual Studio.

·         Experience in database design using stored procedures, functions, and triggers, and strong experience writing complex queries for DB2 and SQL Server.

·         Experience with BusinessObjects and SSRS; created universes and developed many Crystal Reports and Web Intelligence (Webi) reports.

·         Excellent problem-solving, analytical, communication, and interpersonal skills.
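As a rough illustration of the MapReduce scripting mentioned in this overview, the following is a minimal word-count-style job written against the standard Hadoop Java API. It is a sketch only: the class name and input/output paths are placeholders, not code from any project on this resume.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Mapper: emits (token, 1) for every whitespace-separated token in a line.
        public static class TokenMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        context.write(word, ONE);
                    }
                }
            }
        }

        // Reducer: sums the counts for each token; also reused as a combiner.
        public static class SumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));    // input dir
            FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output dir
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Packaged into a JAR, a job like this is typically launched with hadoop jar wordcount.jar WordCount <input> <output>.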

Work Experience

Hadoop Developer

XYZ Company

February 2013 to Present

Installed stock Apache Hadoop and NoSQL applications and developed programs for sorting and analyzing data.

Responsibilities: 

·         Replaced Hive's default Derby metastore with MySQL. Executed queries using Hive and developed MapReduce jobs to analyze data.

·         Developed Pig Latin scripts to extract data from web server output files and load it into HDFS.

·         Developed Pig UDFs to preprocess the data for analysis (see the sketch after this section).

·         Developed Hive queries for the analysts.

·         Utilized the Apache Hadoop environment from Hortonworks.

·         Involved in loading data from Linux and UNIX file systems into HDFS.

·         Supported setting up the QA environment and updating configurations for implementing Pig scripts.

·         Environment: Core Java, Apache Hadoop (Hortonworks), HDFS, Pig, Hive, Cassandra, shell scripting, MySQL, Linux, UNIX.
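For readers unfamiliar with Pig UDFs like the ones mentioned above, a minimal preprocessing UDF in Java might look like the following. The class name and the cleaning logic are hypothetical, shown only to illustrate the standard EvalFunc extension point.

    import java.io.IOException;
    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    // Hypothetical preprocessing UDF: trims and lower-cases a string field
    // before analysis. Pig calls exec() once per input tuple.
    public class CleanField extends EvalFunc<String> {
        @Override
        public String exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0 || input.get(0) == null) {
                return null;  // propagate nulls rather than failing the job
            }
            return ((String) input.get(0)).trim().toLowerCase();
        }
    }

In a Pig Latin script, the jar would be registered with REGISTER udfs.jar; and the function invoked as CleanField(column) inside a FOREACH ... GENERATE statement.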

Hadoop Developer

ABC Company, Bangalore

March 2012 to January 2013

Imported and exported data in HDFS, analyzed Big Data using the Hadoop environment, and developed UDFs using Hive, Pig Latin, and Java.

Responsibilities: 

·         Worked on analyzing the Hadoop cluster and different big data analytics tools, including Pig, the HBase NoSQL database, and Sqoop.

·      Imported and exported data in HDFS and Hive using Sqoop.

·      Extracted files from MongoDB through Sqoop, placed them in HDFS, and processed them.

·      Experience with NoSQL databases. 

·      Wrote Hive UDFs to extract data from staging tables (see the sketch after this section).

·      Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.

·      Familiar with job scheduling using the Fair Scheduler so that CPU time is distributed evenly among all jobs.

·      Involved in regular Hadoop cluster maintenance, such as patching security holes and updating system packages.

·      Managed Hadoop log files. 

·      Analyzed web log data using HiveQL.

·      Environment: Java 6, Eclipse, Hadoop, Hive, HBase, MongoDB, Linux, MapReduce, HDFS, shell scripting, MySQL.
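A minimal Hive UDF of the kind described above (the old-style UDF API, which matches the Java 6 environment listed) could look like this. The host-extraction logic and the class name are illustrative assumptions, not code from the resume's projects.

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Hypothetical UDF: pulls the host name out of a URL column in a staging table.
    public final class ExtractHost extends UDF {
        public Text evaluate(Text url) {
            if (url == null) {
                return null;
            }
            String s = url.toString();
            int start = s.indexOf("://");
            if (start < 0) {
                return null;  // not a URL form we recognize
            }
            start += 3;
            int end = s.indexOf('/', start);
            return new Text(end < 0 ? s.substring(start) : s.substring(start, end));
        }
    }

After ADD JAR udfs.jar; and CREATE TEMPORARY FUNCTION extract_host AS 'ExtractHost'; the function can be used in HiveQL like any built-in.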

Education:
Master of Engineering
JNTU, 2011

Technical Skills

Programming Languages: Java, C++, C, SQL, Python

Java Technologies: JDBC, JSP, Servlets

RDBMS/NoSQL: SQL Server, DB2, HBase, Cassandra, MongoDB

Scripting: Shell Scripting 

IDE: Eclipse, Netbeans 

Operating Systems: Linux, UNIX, Windows 98/2000/XP

Hadoop Ecosystem: MapReduce, Sqoop, Hive, Pig, HBase, Cassandra, HDFS, ZooKeeper

 

xxxxxxx

[email protected]

xxxxxxxx

 

EXPERIENCE

Houston, TX

SENIOR HADOOP DEVELOPER

09/2016 – present

·         Manage maintenance of applications and perform technical change requests scheduled according to Release Management processes

·         Provide L0/L1/L2/L3 development effort

·         Work to tight deadlines and provide regular progress updates against agreed milestones

·         Proactively initiate, develop, and maintain effective working relationships with team members, and coordinate with all team members, including third-party suppliers

·         Collaborate with other teams, including Software Developers, Data Scientists, and Data Warehouse Developers

·         Work in Agile Scrum process

·         Create high-level and detailed design documents, with support from architects, for the assigned scope items

Houston, TX

JAVA / HADOOP DEVELOPER

02/2011 – 04/2016

·         Directly impact the business by ensuring the quality of work provided by self and others; impact own team and closely related work teams

·         Design and develop framework components

·         Work on design, development of complex applications using various programming languages and technologies

·         Full Software Development Lifecycle

·         Providing expertise in technical analysis and solving technical issues during project delivery

·         Monitor implementations to help ensure adherence to established standards

·         Perform analysis of large data sets using components from the Hadoop ecosystem

Phoenix, AZ

HADOOP DEVELOPER

01/2005 – 12/2010

·         Develop best practices for developing and deploying Hadoop applications and assist the team to manage compliance to the standards

·         Execute change management activities supporting production deployment to Developers, Quality Control Analysts, and Environment Management personnel

·         Design, plan, and develop programs to perform automated extract, transform, and load (ETL) operations between data sources when working with large data sets (TB+ range)

·         Work with the application team to design and develop an effective Hadoop solution. Be actively engaged and responsible in the development process

·         Create and execute unit tests and perform basic application testing

·         Provide work guidance or technical assistance to less senior engineers

·         Develop and test workflow scheduler job scripts in Apache Oozie (see the submission sketch below)
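As a sketch of how such Oozie workflows are typically driven from Java, the snippet below submits a workflow with the standard OozieClient API. The server URL, HDFS application path, and property values are assumptions for illustration only.

    import java.util.Properties;
    import org.apache.oozie.client.OozieClient;
    import org.apache.oozie.client.OozieClientException;
    import org.apache.oozie.client.WorkflowJob;

    public class SubmitWorkflow {
        public static void main(String[] args)
                throws OozieClientException, InterruptedException {
            // Assumed Oozie server URL; adjust for the target cluster.
            OozieClient client = new OozieClient("http://oozie-host:11000/oozie");

            // Workflow properties; the HDFS paths and hosts here are placeholders.
            Properties conf = client.createConfiguration();
            conf.setProperty(OozieClient.APP_PATH,
                    "hdfs://namenode:8020/user/etl/workflows/daily");
            conf.setProperty("nameNode", "hdfs://namenode:8020");
            conf.setProperty("jobTracker", "jobtracker:8021");

            // Submit and start the workflow, then poll until it finishes.
            String jobId = client.run(conf);
            System.out.println("Started workflow " + jobId);
            while (client.getJobInfo(jobId).getStatus() == WorkflowJob.Status.RUNNING) {
                Thread.sleep(10000);
            }
            System.out.println("Final status: " + client.getJobInfo(jobId).getStatus());
        }
    }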

EDUCATION

Bachelor’s Degree in Computer Science

TOWSON UNIVERSITY

SKILLS

·         Able to design scalable, configurable, and maintainable solutions for complex business problems

·         Proficient in Business Analysis, Business Knowledge, Software Engineering Leadership, Architecture Knowledge, and Technical Solution Design

·         Writing high-performance, reliable and maintainable code

·         Strong data warehouse, RDBMS, and MPP database skills, including query optimization and performance tuning

·         Strong knowledge of batch and streaming data sources with structured and unstructured data

·         Good understanding and working knowledge of any Application Server

·         Good understanding and working knowledge of SOA architecture and its related technologies

·         Deep knowledge and strong deployment experience in the Hadoop and Big Data ecosystem: Hadoop, Flume, Hive, HBase, Pig, HDFS, MapReduce, Linux, etc.

·         Good knowledge of database structures, theories, principles, and practices

·         Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala (see the query sketch below)
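To make the querying-tools bullet concrete, here is a small, hypothetical example of running a HiveQL aggregation over HiveServer2 via JDBC. The host, credentials, and the web_logs table are placeholders, not details from this resume.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQuery {
        public static void main(String[] args) throws Exception {
            // Load the HiveServer2 JDBC driver.
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://hive-host:10000/default", "user", "");
                 Statement stmt = conn.createStatement();
                 // Aggregate page hits per host from a hypothetical web-log table.
                 ResultSet rs = stmt.executeQuery(
                     "SELECT host, COUNT(*) AS hits FROM web_logs GROUP BY host")) {
                while (rs.next()) {
                    System.out.println(rs.getString("host") + "\t" + rs.getLong("hits"));
                }
            }
        }
    }

Hive compiles a query like this into MapReduce (or Tez/Spark) jobs behind the scenes, which is why the same aggregation could equally be expressed in Pig or run interactively in Impala.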

 

 

Request more information