Hadoop / Big Data Training

Both Online and Offline

100% Placement Guarantee

Course Details

Course Features

Instructor-led Sessions
A traditional, structured way to learn, with greater visibility, monitoring, and guidance, plus the flexibility to study at any time from any internet-connected device.
Real-life Case Studies
Case studies based on leading industry frameworks help you relate your learning to real-world industry solutions.
Assignments
Assignments extend the scope for improvement and foster analytical abilities and skills through well-designed practical work.
Certification
Each certification associated with the program is affiliated with top universities, giving you an edge on completing the course.
Round-the-Clock Support
Learn without limits, with always-available support to resolve all your course-related queries.

Hadoop / Big Data Training

Oranium Tech introduces some amazing content on Hadoop. The Hadoop Training in Chennai at Oranium Tech is an extensive training course that makes you familiar with the Hadoop Distributed File System, Hadoop Clusters, Hadoop MapReduce, and the Hadoop Ecosystem for Big Data processing, with hands-on training provided by our expert Big Data professionals. This Big Data Training in Chennai also helps you gain demonstrable knowledge of key tools such as HDFS, Pig, Apache Hive, Java, Apache Spark, Flume, and Sqoop that are highly sought after in the Big Data domain.

Course Syllabus

• Complete knowledge of Big Data and Hadoop, including HDFS (Hadoop Distributed File System), YARN (Yet Another Resource Negotiator), and MapReduce
• Comprehensive knowledge of the various tools that are part of the Hadoop Ecosystem, such as Pig, Hive, Sqoop, Flume, Oozie, and HBase
• Capability to ingest data into HDFS using Sqoop and Flume, and to analyze large datasets stored in HDFS
• Exposure to many real-world, industry-based projects, executed in CloudLab
• Diverse projects covering datasets from multiple domains such as banking, telecommunications, social media, insurance, and e-commerce

• Introduction to Data and System
• Types of Data
• Traditional ways of dealing with large data and their problems
• Types of Systems & Scaling
• What is Big Data?
• Challenges in Big Data
• Challenges in Traditional Application
• New Requirements
• What is Hadoop? Why Hadoop?
• Brief history of Hadoop
• Features of Hadoop
• Hadoop and RDBMS
• Overview of the Hadoop Ecosystem

• Installation in detail
• Creating Ubuntu image in VMware
• Downloading Hadoop
• Installing SSH
• Configuring Hadoop, HDFS & MapReduce
• Download, Installation & Configuration Hive
• Download, Installation & Configuration Pig
• Download, Installation & Configuration Sqoop
• Configuring Hadoop in Different Modes

• File System – Concepts
• Blocks
• Replication Factor
• Version File
• Safe mode
• Namespace IDs
• Purpose of Name Node
• Purpose of Data Node
• Purpose of Secondary Name Node
• Purpose of Job Tracker
• Purpose of Task Tracker
• HDFS Shell Commands – copy, delete, create directories, etc.
• Reading and Writing in HDFS
• Differences between Unix commands and HDFS commands
• Hadoop Admin Commands
• Hands-on exercises with Unix and HDFS commands
• Read / Write in HDFS – Internal Process between Client, NameNode & DataNodes
• Accessing HDFS using Java API
• Various Ways of Accessing HDFS
• Understanding HDFS Java classes and methods.
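The block and replication-factor concepts above can be sketched with a little arithmetic. The following is a minimal illustration, assuming the HDFS defaults of a 128 MB block size and 3-way replication (both configurable via `dfs.blocksize` and `dfs.replication`); it is a teaching model, not the HDFS API.

```python
# Minimal sketch of how HDFS splits a file into blocks (assumes the
# default 128 MB block size and replication factor of 3).

BLOCK_SIZE = 128 * 1024 * 1024   # 128 MB, the HDFS default
REPLICATION = 3                  # default replication factor

def hdfs_blocks(file_size_bytes):
    """Return (block_count, raw_storage_bytes) for a file of the given size."""
    # Ceiling division: the last block may be smaller than BLOCK_SIZE.
    blocks = -(-file_size_bytes // BLOCK_SIZE)
    # Each block is stored REPLICATION times across DataNodes.
    raw_storage = file_size_bytes * REPLICATION
    return blocks, raw_storage

# A 300 MB file occupies 3 blocks (128 + 128 + 44 MB) and, with
# 3-way replication, 900 MB of raw cluster storage.
blocks, raw = hdfs_blocks(300 * 1024 * 1024)
print(blocks, raw // (1024 * 1024))  # → 3 900
```

Note that a block smaller than 128 MB does not waste a full block of disk space; only the actual bytes (times the replication factor) are stored.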

• About MapReduce
• Understanding blocks and input splits
• MapReduce Data types
• Understanding Writable
• Data Flow in MapReduce Application
• Understanding MapReduce problem on datasets
• MapReduce and Functional Programming
• Writing MapReduce Application
• Understanding Mapper function
• Understanding Reducer Function
• Understanding Driver
• Usage of Combiner
• Usage of Distributed Cache
• Passing the parameters to mapper and reducer
• Analysing the Results
• Log files
• Input Formats and Output Formats
• Counters; skipping bad and unwanted records
• Writing joins in MapReduce with two input files; join types
• Execute MapReduce Job – Insights
• Exercises on MapReduce
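The mapper → shuffle → reducer data flow covered in this module can be sketched with the classic word count. This is an illustrative Python model, not the Hadoop Java API: the function names here are made up, and on a real cluster the framework itself performs the shuffle/sort between the two phases.

```python
# Word-count sketch of the MapReduce data flow: map emits (key, value)
# pairs, the shuffle groups values by key, and reduce aggregates them.
from collections import defaultdict

def mapper(line):
    # Emit (word, 1) for every word in a line of the input split.
    for word in line.lower().split():
        yield word, 1

def shuffle(pairs):
    # The framework groups all values by key before the reduce phase.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped.items()

def reducer(key, values):
    # Sum the counts emitted for one word.
    yield key, sum(values)

lines = ["big data big ideas", "hadoop handles big data"]
mapped = [pair for line in lines for pair in mapper(line)]
counts = dict(kv for key, values in shuffle(mapped) for kv in reducer(key, values))
print(counts["big"], counts["data"])  # → 3 2
```

A combiner (also covered above) would run the same reduce logic on each mapper's local output before the shuffle, cutting the data moved across the network.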

• Hive concepts
• Hive architecture
• Install and configure Hive on a cluster
• Different types of tables in Hive
• Hive library functions
• Buckets
• Partitions
• Joins in hive
• Inner joins
• Outer Joins
• Hive UDF
• Hive Query Language
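The partitions and buckets listed above can be pictured through Hive's on-disk layout: a partition becomes a directory named after the partition-column value, and a bucket within it is chosen by hashing the bucket column modulo the bucket count. The sketch below is illustrative only; the table name, columns, and path format are hypothetical, and `crc32` stands in for Hive's own hash function.

```python
# Toy model of a partitioned, bucketed Hive table layout:
# warehouse/<table>/<partition_col>=<value>/bucket_<n>
import zlib

NUM_BUCKETS = 4  # as in CLUSTERED BY (...) INTO 4 BUCKETS

def storage_path(table, partition_col, partition_val, bucket_key):
    # Deterministic hash of the bucket column picks one of the buckets.
    bucket = zlib.crc32(bucket_key.encode()) % NUM_BUCKETS
    return f"warehouse/{table}/{partition_col}={partition_val}/bucket_{bucket:05d}"

# All rows with country='IN' land in one directory; within it, rows
# for the same customer always land in the same bucket file.
print(storage_path("sales", "country", "IN", "customer-42"))
```

Partition pruning works because a query filtering on `country` only needs to scan the matching directory; bucketing additionally speeds up joins and sampling on the bucketed column.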

• Pig basics
• Install and configure Pig on a cluster
• Pig library functions
• Pig vs. Hive
• Write sample Pig Latin scripts
• Modes of running Pig
• Running in the Grunt shell
• Running as a Java program
• Pig UDFs
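A typical first Pig Latin script groups records and counts them; the equivalent logic, expressed in plain Python for illustration (the relation and field names here are hypothetical), looks like this:

```python
# What a Pig pipeline like
#   grouped = GROUP logs BY user;
#   counts  = FOREACH grouped GENERATE group, COUNT(logs);
# computes, modeled in plain Python.
from collections import Counter

logs = [("alice", "login"), ("bob", "login"), ("alice", "query")]
counts = Counter(user for user, _ in logs)
print(sorted(counts.items()))  # → [('alice', 2), ('bob', 1)]
```

Pig compiles such a script into one or more MapReduce jobs: the GROUP becomes the shuffle key, and the COUNT runs in the reducers.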

• HBase concepts
• HBase architecture
• Region server architecture
• File storage architecture
• HBase basics
• Column access
• Scans
• HBase use cases
• Install and configure HBase on a multi node cluster
• Create database, Develop and run sample applications
• Access data stored in HBase using Java API
• MapReduce client to access HBase data
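The column-access and scan concepts above rest on HBase's storage model: each cell is addressed by a row key and a `family:qualifier` column, and keeps multiple timestamped versions, with reads returning the newest by default. The class below is a toy teaching model, not the HBase client API.

```python
# Toy model of HBase's versioned, column-oriented cell storage.

class ToyHBaseTable:
    def __init__(self):
        # (row key, "family:qualifier") -> list of (timestamp, value)
        self.cells = {}

    def put(self, row, column, value, timestamp):
        self.cells.setdefault((row, column), []).append((timestamp, value))

    def get(self, row, column):
        # Reads return the value with the highest timestamp (newest version).
        versions = self.cells.get((row, column), [])
        return max(versions)[1] if versions else None

t = ToyHBaseTable()
t.put("user1", "info:city", "Chennai", timestamp=1)
t.put("user1", "info:city", "Bengaluru", timestamp=2)
print(t.get("user1", "info:city"))  # → Bengaluru
```

In real HBase, rows are kept sorted by row key and sharded into regions served by region servers, which is what makes range scans efficient.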

• Install and configure Sqoop on a cluster
• Connecting to an RDBMS
• Installing MySQL
• Importing data from MySQL to Hive
• Exporting data to MySQL
• Internal mechanism of import/export
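The internal import mechanism mentioned above works roughly like this: Sqoop queries the minimum and maximum of the split-by column, divides that range among the mappers (`--num-mappers`), and each mapper imports one range in parallel. The boundary arithmetic below is a simplified illustration of that idea, not Sqoop's exact algorithm.

```python
# Sketch of how a Sqoop import is parallelized: the [min, max] range of
# the --split-by column is divided into one contiguous range per mapper.

def split_ranges(min_id, max_id, num_mappers):
    """Return per-mapper (lo, hi) id ranges covering [min_id, max_id]."""
    span = max_id - min_id + 1
    step = -(-span // num_mappers)  # ceiling division
    ranges = []
    lo = min_id
    while lo <= max_id:
        hi = min(lo + step - 1, max_id)
        ranges.append((lo, hi))
        lo = hi + 1
    return ranges

# Ids 1..100 split across 4 mappers → four ranges of 25 ids each; each
# mapper then runs a query like: SELECT ... WHERE id BETWEEN lo AND hi.
print(split_ranges(1, 100, 4))  # → [(1, 25), (26, 50), (51, 75), (76, 100)]
```

This is why Sqoop imports are fastest on a split-by column with evenly distributed values: a skewed column leaves some mappers with far more rows than others.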

• Resource Manager (RM)
• Node Manager (NM)
• Application Master (AM)

Related Courses

Microsoft Azure Data

Salesforce Training

Azure Training