Install Hadoop on Linux - Ultimate tutorial

Warning
This article was last updated on 2022-05-21; the content may be out of date.

Recently, in our Big Data course at university, we were required to install Hadoop and write a report about the installation process. Having completed it, I thought I should share my experience.

If you are into Big data, you must have already heard about Hadoop.

Apache Hadoop is an open source framework that is used to efficiently store and process large datasets ranging in size from gigabytes to petabytes of data. Instead of using one large computer to store and process the data, Hadoop allows clustering multiple computers to analyze massive datasets in parallel more quickly.

As popular as it is, though, the installation process is a bit intimidating for new users. You might have guessed, “Oh, it’s popular, so it must have a straightforward installation process”, but no: at least it’s not easy with a manual installation.

Hadoop installation is scary

In this post, I will demonstrate two approaches to installing Hadoop: manual installation and Docker.

Info

The distribution installed in this post is Apache Hadoop downloaded from:

https://hadoop.apache.org/releases.html.

Version: 3.2.3 (released on March 28, 2022).

First, to install Hadoop, we need a Java environment on the operating system. Check whether Java is available on the machine by typing the following command:

shell

java -version
Java checking

According to the output in the terminal, Java is already installed on my desktop, so we do not need to install it again. The JDK used here is OpenJDK 17.0.3. If you don’t have Java installed, please head over to the ArchWiki for a tutorial.
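
If Java is missing, on Arch Linux you can usually install OpenJDK 17 straight from the official repositories (jdk17-openjdk is the package name at the time of writing; adjust it for the version you want):

shell

sudo pacman -S jdk17-openjdk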

We need OpenSSH for this installation, so let’s install it, shall we? To install OpenSSH on Arch Linux, type the following command into the terminal:

shell

sudo pacman -S openssh
Install OpenSSH

Press Y to confirm the installation. Next, start the sshd systemd service:

shell

sudo systemctl start sshd.service
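
Optionally, if you want sshd to start automatically at boot, you can also enable the unit (standard systemd usage, not strictly required for this tutorial):

shell

sudo systemctl enable sshd.service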

Finally, we need to configure passwordless SSH. Type these commands into your terminal:

shell

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys

If the key was generated successfully, you should see output similar to the following:

shell

Generating public/private rsa key pair.
Your identification has been saved in /home/ashpex/.ssh/id_rsa
Your public key has been saved in /home/ashpex/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:Ic5SYgbyl1S4gUbsEoi3pgLu/fA3FHkLmUNFXJLjaU ashpex@archlinux
The key's randomart image is:
+---[RSA 3072]----+
|..=o=+.     +.o=.|
| oS=S=     o *.o.|
| .. Xo+ o   = E  |
|   +.* = . =     |
|    . + S o .    |
|     . + . o     |
|      . = .      |
|       o.o       |
|      oo...      |
+----[SHA256]-----+
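
You can now verify the passwordless setup by connecting to localhost; the first connection may ask you to confirm the host fingerprint:

shell

ssh localhost
exit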
Caution
If you get the error ssh: connect to host localhost port 22: Connection refused, it means that OpenSSH is not installed or the sshd service has not been started. Please check the installation steps again.

We can go to the Apache Hadoop home page to select and download the installation file.

Or use wget to download the package directly:

shell

wget https://archive.apache.org/dist/hadoop/common/hadoop-3.2.3/hadoop-3.2.3.tar.gz
Output

Check the downloaded archive:

File has been downloaded

After downloading the installation package to your computer, you can make sure it is intact by verifying the file’s PGP signature or its SHA-512 checksum. To compute the checksum, type the following command (in the directory containing the downloaded file):

shell

shasum -a 512 hadoop-3.2.3.tar.gz
Check checksum

Compare the result with the Apache Hadoop checksum file here: https://downloads.apache.org/hadoop/common/hadoop-3.2.3/hadoop-3.2.3.tar.gz.sha512
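
To put the two values side by side instead of eyeballing the web page, you can download the checksum file and print both (the published file may format the hash differently, but the hex digits should match):

shell

wget https://downloads.apache.org/hadoop/common/hadoop-3.2.3/hadoop-3.2.3.tar.gz.sha512
cat hadoop-3.2.3.tar.gz.sha512
shasum -a 512 hadoop-3.2.3.tar.gz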

Type the following command to extract the installation file:

shell

tar xzf hadoop-3.2.3.tar.gz

After extracting, we will get the following files (in the directory hadoop-3.2.3):

Hadoop directory

To make later steps in this tutorial easier, we will rename the extracted folder hadoop-3.2.3 to hadoop. The Hadoop directory will then be located at ~/Downloads/hadoop.

shell

mv hadoop-3.2.3 hadoop
Caution
This is the most important step; failing to follow these instructions may lead to a broken Hadoop installation.

Next, we need to set the environment variables by editing the file .zshrc (depending on the shell in use, you may need to edit a different file; in most cases this is .bashrc, since Bash is the default shell on most Linux distributions):

Edit the file ~/.zshrc by typing the command:

shell

vim ~/.zshrc 

Add the following environment variables:

shell

export JAVA_HOME='/usr/lib/jvm/java-17-openjdk'
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=~/Downloads/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
Editing ~/.zshrc

After editing the .zshrc file, we need to run the following command to reload the shell configuration:

shell

source ~/.zshrc
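
To confirm the variables took effect, print one of them and ask Hadoop for its version (hadoop version is a standard Hadoop subcommand):

shell

echo $HADOOP_HOME
hadoop version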

Next, we need to add the Java environment variable to the file hadoop-env.sh at the path ~/Downloads/hadoop/etc/hadoop/hadoop-env.sh:

Use a text editor (vim) to edit the file:

shell

vim ~/Downloads/hadoop/etc/hadoop/hadoop-env.sh

Add the following Java environment variable:

shell

export JAVA_HOME='/usr/lib/jvm/java-17-openjdk'
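
If you are unsure where your JDK lives, on Arch Linux you can list the installed Java environments with archlinux-java, or resolve the real path of the java binary:

shell

archlinux-java status
readlink -f "$(which java)"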

Similarly, edit the file etc/hadoop/core-site.xml to add the following lines:

xml

<configuration>
     <property>
         <name>fs.defaultFS</name>
         <value>hdfs://localhost:9000</value>
     </property>
</configuration>
core-site.xml
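
Here, fs.defaultFS tells Hadoop clients to use HDFS as the default filesystem, served by the NameNode on localhost, port 9000.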

Next, edit etc/hadoop/hdfs-site.xml to add the following lines:

xml

<configuration>
     <property>
         <name>dfs.replication</name>
         <value>1</value>
     </property>
</configuration>
hdfs-site.xml
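
Setting dfs.replication to 1 disables block replication, which is appropriate for a single-node setup like this one.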

Edit etc/hadoop/mapred-site.xml to add the following lines:

xml

<configuration>
     <property>
         <name>mapreduce.framework.name</name>
         <value>yarn</value>
     </property>
     <property>
         <name>mapreduce.application.classpath</name>
         <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
     </property>
</configuration>
mapred-site.xml
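
These two properties tell MapReduce jobs to run on top of YARN and where to find the MapReduce libraries on the classpath.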

Finally, edit etc/hadoop/yarn-site.xml:

xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>127.0.0.1</value>
    </property>

    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
</configuration>
yarn-site.xml
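
The mapreduce_shuffle auxiliary service lets each NodeManager serve map outputs to reducers, and the env-whitelist entry controls which environment variables containers may inherit.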

Format the HDFS NameNode:

shell

hdfs namenode -format
Format HDFS namenode
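
Formatting initializes a fresh metadata directory for HDFS. You only need to run it once, before the first start; re-formatting later would wipe the HDFS metadata.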

Run the following commands to start the NameNode and DataNode daemons:

shell

cd ~/Downloads/hadoop/sbin
./start-dfs.sh
Start namenode and datanode

After the NameNode and DataNode have started successfully, we proceed to start the YARN ResourceManager and NodeManager:

shell

./start-yarn.sh
Start resource manager
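
Once both scripts have run, you can also sanity-check the daemons through their web interfaces; in Hadoop 3.x the NameNode UI listens on port 9870 and the ResourceManager UI on port 8088 by default (xdg-open simply opens a URL in your default browser):

shell

xdg-open http://localhost:9870   # NameNode web UI
xdg-open http://localhost:8088   # YARN ResourceManager web UI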

Check which Java processes are running by typing the jps command:

shell

jps

When the services have started successfully, we will see the Hadoop daemon processes listed as shown below:

jps output
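
For reference, a healthy single-node setup typically shows the following daemons (the PIDs here are illustrative):

shell

12001 NameNode
12150 DataNode
12388 SecondaryNameNode
12590 ResourceManager
12733 NodeManager
13012 Jps

As a quick smoke test, you can also create your user directory in HDFS and list the filesystem root:

shell

hdfs dfs -mkdir -p /user/$(whoami)
hdfs dfs -ls /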

To stop Hadoop services, type the following commands:

shell

cd ~/Downloads/hadoop/sbin
./stop-all.sh
Stop Hadoop services
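
Alternatively, you can stop HDFS and YARN separately with the matching scripts in the same directory:

shell

./stop-dfs.sh
./stop-yarn.sh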

Congratulations! You have successfully installed Hadoop on your Linux machine. If you have any questions, feel free to ask in the comment section below or contact me directly. Until next time!