What is Hadoop?
Hadoop is an open-source software framework for storing and processing big data in a distributed, parallel fashion on large clusters of commodity hardware. Essentially, it accomplishes two tasks: massive data storage and fast processing. The core of Hadoop consists of HDFS and Hadoop's implementation of MapReduce.
What is HDFS?
HDFS stands for Hadoop Distributed File System. HDFS is a highly fault-tolerant file system and is designed to be deployed on low-cost hardware. HDFS provides high throughput access to application data and is suitable for applications that have large data sets.
What is Map-Reduce?
MapReduce is a programming model and an associated implementation for processing and generating large data sets with a parallel, distributed algorithm on a cluster.
Let's go to the slide deck for more information: https://docs.google.com/a/nyu.edu/presentation/d/1z961_ynRuh271oH9WuElUKfH7v-pKb9oX5Q6UtquHxg/edit?usp=sharing
Phases in MapReduce
A MapReduce job splits a large data set into independent chunks and organizes them into key-value pairs for parallel processing. A key-value pair (KVP) is a set of two linked data items: a key, which is a unique identifier for some item of data, and the value, which is either the data that is identified or a pointer to the location of that data. The mapping and reducing functions receive not just values, but (key, value) pairs. This parallel processing improves the speed and reliability of the cluster, returning solutions more quickly.
Every MapReduce job consists of at least three parts:
- The driver
- The Mapper
- The Reducer
The first phase of a MapReduce program is called mapping. A list of data elements is provided, one at a time, to a function called the Mapper, which transforms each element individually into an output data element.
The map phase divides the input into ranges (input splits), as determined by the InputFormat, and creates a map task for each range. The JobTracker distributes those tasks to the worker nodes. The output of each map task is partitioned into a group of key-value pairs for each reducer.
Mapping creates a new output list by applying a function to individual elements of an input list.
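To make this concrete, here is a minimal sketch of a word-count Mapper using the Hadoop Java API. It is illustrative only and not necessarily identical to the WordCountMapper.java used later in this tutorial; for each input line it emits a (word, 1) pair per word.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // The input key is the byte offset of the line; the value is the line of text itself.
        for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);   // emit (word, 1)
            }
        }
    }
}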
Reducing lets you aggregate values. A reducer function receives an iterator of input values from an input list. It then combines these values, returning a single output value.
The Reduce function then collects the various results and combines them to answer the larger problem that the master node needs to solve. Each reducer pulls the relevant partition from the machines where the maps executed, then writes its output back into HDFS. Thus, the reducer is able to collect the data from all of the maps for its keys and combine them to solve the problem.
Reducing a list iterates over the input values to produce an aggregate value as output.
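The matching word-count Reducer receives each word together with an iterator over the 1s emitted for it and sums them. Again, this is a minimal sketch, not necessarily identical to the tutorial's WordCountReducer.java:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum all the counts emitted by the mappers for this word.
        int sum = 0;
        for (IntWritable count : values) {
            sum += count.get();
        }
        context.write(key, new IntWritable(sum));   // emit (word, total)
    }
}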
MapReduce Data Flow
What is dumbo?
Dumbo is the standalone Hadoop cluster running Cloudera Enterprise (CDH 5.9.0). Cloudera Enterprise (CDH) combines Apache Hadoop with a number of other open-source projects to create a single, massively scalable system in which you can unite storage with an array of powerful processing and analytic frameworks.
To access dumbo, the Hadoop cluster
Please follow the instructions on this link: https://wikis.nyu.edu/display/NYUHPC/Clusters+-+Dumbo#Clusters-Dumbo-Dumbo-Hadoopcluster
Note: Make sure to follow the instructions for Web UI access using the above link before following the steps below
- Now, on the desktop provided to you (Mac OS X), follow the instructions below:
In the Terminal, enter the commands below:
- cd /Users/<NetID> (your home directory on the Mac)
- mkdir .ssh
- cd .ssh
- touch config
- vi config
- Press i to enter insert mode (press ESC to leave insert mode when you are done)
Copy and paste the following into the config file:
Steps to connect to the Hadoop cluster (i.e., dumbo) when you are on the NYU campus.
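The exact contents of the config file are given on the wiki page linked above. As a rough illustration only, an entry in ~/.ssh/config generally has the shape below; the hostname here is a placeholder, so use the real values from the wiki:

Host dumbo
    HostName <dumbo_hostname_from_the_wiki>
    User <NetID>
    ForwardX11 yes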
What are the components of the dumbo Cluster @NYU and what can they be used for?
Let's look at the UIs for a better understanding:
Cloudera Manager: http://babar.es.its.nyu.edu:7180/
Resource Manager : http://babar.es.its.nyu.edu:8088/
Commands for HDFS & MapReduce:
TO UPLOAD DATA TO HDFS
hadoop fs -put <filename_in_lfs> <hdfs_name>
hadoop fs -copyFromLocal <filename_in_lfs> <hdfs_name>
hdfs dfs -put <filename_in_lfs> <hdfs_name>
TO GET DATA FROM HDFS
hadoop fs -get <hdfs_name> <filename_in_lfs>
hadoop fs -copyToLocal <hdfs_name> <filename_in_lfs>
TO CHECK HDFS FOR YOUR FILE
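The listing command is not shown explicitly above; a simple way to check for your file is:
hadoop fs -ls <hdfs_name>
For reference, the general usage of the hadoop command is: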
usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
  fs                                  run a generic filesystem user client
  version                             print the version
  jar <jar>                           run a jar file
  distcp <srcurl> <desturl>           copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest>   create a hadoop archive
  classpath                           prints the class path needed to get the Hadoop jar and the required libraries
  daemonlog                           get/set the log level for each daemon
 or
  CLASSNAME                           run the class named CLASSNAME
To compile Java files with Maven:
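A typical command, assuming a pom.xml is present in the current directory, is:
mvn clean package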
TO TRIGGER THE JOB
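The exact command depends on the jar and its driver class; the general form is:
hadoop jar <jarfile> <main_class> <hdfs_input_path> <hdfs_output_path>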
TO CHECK RUNNING JOB
hadoop job -list
yarn application -list
TO KILL THE JOB
hadoop job -kill <job_id>
yarn application -kill <job_id>
Example Map-Reduce job:
Word Count: The objective here is to count the number of occurrences of each word by using key-value pairs.
ssh into dumbo
Copy example1 folder to /home/<net_id>/
cp -r /share/apps/Tutorials/Tutorial1/example1 /home/<net_id>/
It includes 5 files
book.txt ------ Input file
WordCountReducer.java ------ This is the reducer
WordCountMapper.java ------ This is the mapper
WordCount.java ------- This is the driver (a minimal sketch of a typical driver follows after this list)
WordCount.jar ------ Compiled jar file produced by the Java compiler, which can then be used to run the MapReduce job
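A driver such as WordCount.java typically just configures the job (mapper, reducer, key/value types, input and output paths) and submits it. The sketch below is illustrative; the class names and output types are assumptions and may differ from the tutorial code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    public static void main(String[] args) throws Exception {
        // args[0] = HDFS input path, args[1] = HDFS output path (must not already exist)
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}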
Place the book.txt file onto HDFS.
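For example, assuming book.txt is under /home/<net_id>/example1/ after the copy above:
hadoop fs -put /home/<net_id>/example1/book.txt /user/<net_id>/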
Compile the code with the Java compiler and create a jar file from the generated class files.
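A sketch of the compile commands (directory and file names are illustrative):
mkdir classes
javac -classpath `hadoop classpath` -d classes WordCountMapper.java WordCountReducer.java WordCount.java
jar -cvf WordCount.jar -C classes/ .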
Run the MapReduce job using WordCount.jar.
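For example (the driver class name here is an assumption and may include a package prefix in the tutorial code):
hadoop jar WordCount.jar WordCount /user/<net_id>/book.txt /user/<net_id>/wordcountoutput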
Check output by accessing HDFS directories
hadoop fs -ls /user/<net_id>/wordcountoutput
hadoop fs -cat /user/<net_id>/wordcountoutput/part-r-00000
hadoop fs -getmerge /user/<net_id>/wordcountoutput $HOME/output.txt
Standard Deviation: The objective is to find the standard deviation of the lengths of the words.
Copy the example2 folder to /home/<net_id>/
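Presumably (the path is assumed to mirror example1):
cp -r /share/apps/Tutorials/Tutorial1/example2 /home/<net_id>/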
example2.txt - Input file
StandardDeviation.jar - compiled jar file
Place the example2.txt file onto HDFS.
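For example, assuming example2.txt is under /home/<net_id>/example2/:
hadoop fs -put /home/<net_id>/example2/example2.txt /user/<net_id>/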
Run the MapReduce job using StandardDeviation.jar.
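For example (the main class name here is an assumption):
hadoop jar StandardDeviation.jar StandardDeviation /user/<net_id>/example2.txt /user/<net_id>/standarddeviationoutput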
Check output by accessing HDFS directories
hadoop fs -ls /user/<net_id>/standarddeviationoutput
hadoop fs -cat /user/<net_id>/standarddeviationoutput/part-r-00000
More examples:
Summarization Patterns
1. Numerical Summarization
2. Inverted Index Summarization
3. Counting with Counters
Filtering Patterns
2. Bloom Filtering
3. Top Ten
Data Organization Patterns
1. Structured to Hierarchical
Join Patterns
1. Reduce Side Join
2. Replicated Join
3. Composite Join
4. Cartesian Product
Create a directory to work with example3
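For example:
mkdir -p /home/<net_id>/example3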
Copy the input to local directory, then to HDFS.
cp /share/apps/Tutorials/Tutorial1/example3/MapReduce-master/examples/inputComments.xml /home/<net_id>/example3/
hadoop fs -put /home/<net_id>/example3/inputComments.xml /user/<net_id>/
Clone a git repository to create a local copy of the code
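The repository URL is not shown here; the general form is:
git clone <repository_url>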
Build/compile using Maven. Make sure pom.xml is present in the same directory. This command will generate a "target" directory.
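The usual command, run in the directory containing pom.xml, is:
mvn clean package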
Extract the jar file. This command creates the directory "com".
jar -xvf MapReduce-0.0.1-SNAPSHOT.jar
Execute the job. (The extracted "com" directory shows the package structure of the class files; $JAVA_CLASS in the command below refers to that package path.)
hadoop jar MapReduce-0.0.1-SNAPSHOT.jar $JAVA_CLASS/Average /user/<net_id>/inputComments.xml /user/<net_id>/AverageOutput
Check output by accessing HDFS directories.
hadoop fs -ls /user/<net_id>/AverageOutput
hadoop fs -cat /user/<net_id>/AverageOutput/part-r-00000
(Note: Twitter sentiment analysis can be done using this cluster. It requires Java for the MapReduce steps and a Pig script for sorting the Twitter users based on the number of tweets. The next steps would be setting up an Oozie workflow and observing the analysis on Hue. To learn more about sentiment analysis, please contact firstname.lastname@example.org)
Even though the Hadoop framework is written in Java, programs for Hadoop need not be coded in Java; they can also be developed in other languages such as Python, shell scripts, or C++. Hadoop streaming is a utility that comes with the Hadoop distribution. This utility allows you to create and run Map/Reduce jobs with any executable or script as the mapper and/or the reducer.
Streaming runs a MapReduce job from the command line. You specify a map script, a reduce script, an input, and an output. Streaming takes care of the MapReduce details, such as splitting your job into separate tasks and executing the map tasks where the data is stored. Hadoop streaming works a little differently from the Java API: your program is not presented with one record at a time; you have to iterate over the input records yourself.
-input: the data in HDFS that you want to process
-output: the directory in HDFS where you want to store the output
-mapper: the program, script, command line, or process that you want to use for your mapper
-reducer: the program, script, command line, or process that you want to use for your reducer
-file: makes the mapper, reducer, or combiner executable available locally on the compute nodes
There is an example of Hadoop streaming at /share/apps/examples/hadoop-streaming on Dumbo. The README file explains how to run the example and where to find hadoop-streaming.jar.
Steps to copy example:
cp -r /share/apps/examples/ $HOME/example/
An example of the command used to run a MapReduce job with Hadoop streaming:
hadoop jar $HADOOP_LIPATH/hadoop-mapreduce/hadoop-streaming.jar -numReduceTasks 2 -file $HOME/example/hadoop-streaming/src -mapper src/mapper.sh -reducer src/reducer.sh -input /user/<net_id>/book.txt -output /user/<net_id>/example.out
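For reference, streaming mapper and reducer scripts read records from standard input and write tab-separated key/value pairs to standard output. The word-count style scripts below are an illustration only, not necessarily the contents of the src/mapper.sh and src/reducer.sh shipped with the example:

mapper.sh (illustrative):
#!/bin/bash
# Read text from standard input and emit "word<TAB>1" for every word.
tr -s '[:space:]' '\n' | awk 'NF {print $1 "\t" 1}'

reducer.sh (illustrative):
#!/bin/bash
# The framework groups and sorts the mapper output by key; sum the counts for each word.
awk -F'\t' '{count[$1] += $2} END {for (w in count) print w "\t" count[w]}'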
(Please contact email@example.com to learn more)