

GUIDE FOR INSTALLING HADOOP 2.2.0 ON UBUNTU 12.04

References

Michael G. Noll, "Running Hadoop on Ubuntu Linux (Single-Node Cluster)",
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

Codes Fusion, "Setup newest Hadoop 2.x (2.2.0) on Ubuntu",
http://codesfusion.blogspot.com/2013/10/setup-hadoop-2x-220-on-ubuntu.html?m=1


1. Prerequisites

1.1 Installing Java 7 (OpenJDK)

Hadoop requires a working Java 1.5+ (aka Java 5) installation. However,

using Java 1.6 (aka Java 6) or newer is recommended for running Hadoop. For the sake of

this tutorial, I will therefore describe the installation of Java 1.7 (OpenJDK 7).

# Install add-apt-repository, which is provided by python-software-properties

$ sudo apt-get install python-software-properties

# Add the WebUpd8Team PPA to your apt repositories
# (see https://launchpad.net/~webupd8team; this PPA is only required if you
# prefer Oracle Java, since we install OpenJDK from the standard repositories below)

$ sudo add-apt-repository ppa:webupd8team/java

# Update the source list

$ sudo apt-get update

# Install the OpenJDK 7 JDK

$ sudo apt-get install openjdk-7-jdk


# Create a convenience symlink so the JDK is reachable at /usr/lib/jvm/jdk

$ cd /usr/lib/jvm

$ sudo ln -s java-7-openjdk-amd64 jdk

The full JDK is now reachable at /usr/lib/jvm/jdk (on Ubuntu this path is

actually a symlink to the real installation directory).

After installation, make a quick check whether the JDK is correctly set up:

$ java -version
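
If the installation went well, you should see output similar to the following (the exact version and build strings on your machine will differ):

java version "1.7.0_25"
OpenJDK Runtime Environment (IcedTea 2.3.10) (7u25-2.3.10-1ubuntu0.12.04.2)
OpenJDK 64-Bit Server VM (build 23.7-b01, mixed mode)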


1.2 Adding a dedicated Hadoop system user

We will use a dedicated Hadoop user account for running Hadoop. While

that’s not required, it is recommended because it helps to separate the Hadoop

installation from other software applications and user accounts running on the

same machine (think: security, permissions, backups, etc).

$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hduser
$ sudo adduser hduser sudo

This will add the user hduser and the group hadoop to your local machine.
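
You can quickly verify the new account and its group memberships:

$ id hduser //Should list the hadoop and sudo groups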


1.3 Configuring SSH

Hadoop requires SSH access to manage its nodes, i.e. remote machines plus

your local machine if you want to use Hadoop on it (which is what we want to

do in this short tutorial). For our single-node setup of Hadoop, we therefore

need to configure SSH access to localhost for the hduser user we created in

the previous section.

Installing the OpenSSH server

$ sudo apt-get install openssh-server


Secondly, we have to generate an SSH key for the hduser user.

$ su - hduser //REMEMBER TO LOG IN AS hduser EVERY TIME

$ ssh-keygen -t rsa -P ""

The second command creates an RSA key pair with an empty password.

Generally, using an empty password is not recommended, but in this case it is

needed to unlock the key without your interaction (you don’t want to enter the

passphrase every time Hadoop interacts with its nodes).

Next, you have to enable SSH access to your local machine with this newly

created key.

$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
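
If key-based login fails later on, it is often because sshd insists on strict permissions for the key files, so setting them explicitly is a safe precaution:

$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys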


The final step is to test the SSH setup by connecting to your local machine

with the hduser user. This step is also needed to save your local machine’s host

key fingerprint to the hduser user’s known_hosts file. If you have any special SSH

configuration for your local machine like a non-standard SSH port, you can

define host-specific SSH options in $HOME/.ssh/config (see man ssh_config for

more information).

$ ssh localhost //Enter ‘yes’ when prompted.
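
On the very first connection you will be asked to confirm the host key; the prompt looks roughly like this (the key type and fingerprint vary):

The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is ...
Are you sure you want to continue connecting (yes/no)?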


1.4 Disabling IPv6

One problem with IPv6 on Ubuntu is that using 0.0.0.0 for the various

networking-related Hadoop configuration options will result in Hadoop

binding to the IPv6 addresses of your Ubuntu box. To disable IPv6 on Ubuntu

12.04 LTS, open /etc/sysctl.conf in the editor of your choice and add the

following lines to the end of the file:

# Press Alt+F2 and type:

$ gksudo gedit /etc/sysctl.conf

# Disable IPv6: add these lines at the end of the file and save.

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

net.ipv6.conf.lo.disable_ipv6 = 1


You have to reboot your machine in order to make the changes take effect.

You can check whether IPv6 is still enabled on your machine with the following

command (a return value of 0 means IPv6 is enabled; 1 means it has been disabled successfully):

$ cat /proc/sys/net/ipv6/conf/all/disable_ipv6


2. Hadoop

2.1 Installation

Download Hadoop from the Apache Download Mirrors and extract the contents of

the Hadoop package to a location of your choice. I picked /usr/local/hadoop.

Make sure to change the owner of all the files to the hduser user

and hadoop group, for example:

$ cd ~

$ sudo wget http://www.interior-dsgn.com/apache/hadoop/core/stable/hadoop-2.2.0.tar.gz

$ sudo tar vxzf hadoop-2.2.0.tar.gz -C /usr/local

$ cd /usr/local

$ sudo mv hadoop-2.2.0 hadoop

$ sudo chown -R hduser:hadoop hadoop


2.2 Setup Hadoop Environment Variables

Add the following lines to the end of the $HOME/.bashrc file of user hduser. If

you use a shell other than bash, you should of course update its appropriate

configuration files instead of .bashrc.

# Open ~/.bashrc as hduser in an editor of your choice, e.g. gedit ~/.bashrc

export JAVA_HOME=/usr/lib/jvm/jdk/

export HADOOP_INSTALL=/usr/local/hadoop

export PATH=$PATH:$HADOOP_INSTALL/bin

export PATH=$PATH:$HADOOP_INSTALL/sbin

export HADOOP_MAPRED_HOME=$HADOOP_INSTALL

export HADOOP_COMMON_HOME=$HADOOP_INSTALL

export HADOOP_HDFS_HOME=$HADOOP_INSTALL

export YARN_HOME=$HADOOP_INSTALL
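
To make the new variables available in your current shell without logging out and back in, reload the file:

$ source ~/.bashrc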


Modify JAVA_HOME Variable

$ cd /usr/local/hadoop/etc/hadoop

$ vi hadoop-env.sh

# Modify JAVA_HOME

export JAVA_HOME=/usr/lib/jvm/jdk/


Log in to Ubuntu again as hduser and check the Hadoop version:

$ hadoop version
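
The first line of the output should simply name the installed version; the remaining lines show build details and will vary:

Hadoop 2.2.0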

At this point, Hadoop is installed.


2.3 Excursus: Hadoop Distributed File System (HDFS)

Before we continue let us briefly learn a bit more about Hadoop’s distributed

file system.

The Hadoop Distributed File System (HDFS) is a distributed file system designed to

run on commodity hardware. It has many similarities with existing distributed file

systems. However, the differences from other distributed file systems are significant.

HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware.

HDFS provides high throughput access to application data and is suitable for

applications that have large data sets. HDFS relaxes a few POSIX requirements to

enable streaming access to file system data. HDFS was originally built as

infrastructure for the Apache Nutch web search engine project. HDFS is part of the

Apache Hadoop project, which itself began as a subproject of Apache Lucene.

The Hadoop Distributed File System: Architecture and Design, hadoop.apache.org/hdfs/docs/…

[Figure omitted in this copy: an overview diagram of the most important HDFS components.]

2.4 Configuration

Our goal in this tutorial is a single-node setup of Hadoop. More information on

what we do in this section is available on the Hadoop Wiki.

In this section, we will configure the directory where Hadoop will store its data

files, the network ports it listens to, etc. Our setup will use Hadoop’s

Distributed File System, HDFS, even though our little “cluster” only contains

our single local machine.

You can leave the settings below “as is” with the exception of

the hadoop.tmp.dir parameter – this parameter you must change to a

directory of your choice. We will use the directory /app/hadoop/tmp in this

tutorial. Hadoop’s default configurations use hadoop.tmp.dir as the base

temporary directory both for the local file system and HDFS, so don’t be

surprised if you see Hadoop creating the specified directory automatically on

HDFS at some later point.

Now we create the directory and set the required ownerships and permissions:

$ sudo mkdir -p /app/hadoop/tmp

$ sudo chown hduser:hadoop /app/hadoop/tmp

# ...and if you want to tighten up security, chmod from 755 to 750...

$ sudo chmod 750 /app/hadoop/tmp


If you forget to set the required ownerships and permissions, you will see

a java.io.IOException when you try to format the NameNode in the next

section.

Add the following snippets between the <configuration> ...

</configuration> tags in the respective configuration XML file.

Use gedit (or any editor of your choice) as hduser:

In file etc/hadoop/core-site.xml:

<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>


In file etc/hadoop/yarn-site.xml:

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

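Hadoop 2.2.0 ships only a template for the MapReduce configuration file, so create mapred-site.xml from that template first:

$ cd /usr/local/hadoop/etc/hadoop
$ cp mapred-site.xml.template mapred-site.xml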

In file etc/hadoop/mapred-site.xml:

<property>

<name>mapreduce.framework.name</name>

<value>yarn</value>

</property>

$ cd ~

$ mkdir -p mydata/hdfs/namenode

$ mkdir -p mydata/hdfs/datanode


In file etc/hadoop/hdfs-site.xml:

<property>

<name>dfs.replication</name>

<value>1</value>

</property>

<property>

<name>dfs.namenode.name.dir</name>

<value>file:/home/hduser/mydata/hdfs/namenode</value>

</property>

<property>

<name>dfs.datanode.data.dir</name>

<value>file:/home/hduser/mydata/hdfs/datanode</value>

</property>

See Getting Started with Hadoop and the documentation in Hadoop’s API

Overview if you have any questions about Hadoop’s configuration options.


2.5 Formatting the HDFS filesystem via the NameNode

The first step to starting up your Hadoop installation is formatting the Hadoop

filesystem which is implemented on top of the local filesystem of your “cluster”

(which includes only your local machine if you followed this tutorial). You

need to do this the first time you set up a Hadoop cluster.

Do not format a running Hadoop filesystem as you will lose all the data

currently in the cluster (in HDFS)!

To format the filesystem (which simply initializes the directory specified by

the dfs.namenode.name.dir variable), run the command

$ hdfs namenode --format //Run as hduser
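
Towards the end of the fairly verbose output you should see a confirmation along the lines of:

INFO common.Storage: Storage directory /home/hduser/mydata/hdfs/namenode has been successfully formatted.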


2.6 Starting your single-node cluster

Run the command:

$ start-dfs.sh

$ start-yarn.sh

$ jps

If everything was successful, you should see the following services running (the process IDs in the left column will differ on your machine):

2583 DataNode

2970 ResourceManager

3461 Jps

3177 NodeManager

2361 NameNode

2840 SecondaryNameNode
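
You can also check the web interfaces that Hadoop brings up by default: the NameNode UI at http://localhost:50070 and the ResourceManager UI at http://localhost:8088.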


3. Running a Hadoop Example

$ cd /usr/local/hadoop

$ hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 5
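
The job takes a short while even on a single node and should end with a line of the form "Estimated value of Pi is ..." (the estimate is crude here because only 2 maps with 5 samples each are used).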