Troubleshoot HDFS Connection Exception

1 minute read

Error Message

ubuntu@ip-172-31-43-69:~/apps/hadoop$ hadoop fs -ls /
ls: Call From to localhost:9000 failed on connection exception: Connection refused; For more details see:


Normally, hadoop fs -ls / lists all the folders and files under / in HDFS.


The error occurs every time the EC2 instance hosting Hadoop 3.0.0 is rebooted.


Quick Fix

Format the filesystem (note: this erases any data currently stored in HDFS): $ bin/hdfs namenode -format

Start the NameNode daemon and DataNode daemon: $ sbin/

The Hadoop daemon log output is written to the HADOOP_LOG_DIR directory (defaults to $HADOOP_HOME/logs).

Browse the web interface for the NameNode; in Hadoop 3.x it is available by default at NameNode - http://localhost:9870/ (earlier Hadoop versions used port 50070).

Make the HDFS directories required to execute MapReduce jobs:

$ hadoop fs -mkdir /user
$ hadoop fs -mkdir /user/[username]

Note: [username] should be replaced by an actual user name such as ‘ubuntu’ or ‘ec2-user’.

Root Cause

By default, hadoop.tmp.dir points under the system temp folder (/tmp/hadoop-${user.name}). When the EC2 instance reboots, files under /tmp may be cleaned up, so the NameNode loses its stored metadata and HDFS cannot start up correctly.
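To see where the temp directory currently points, the hdfs getconf utility that ships with Hadoop can be queried (run from $HADOOP_HOME; the resolved value depends on your configuration):

```shell
# print the effective value of hadoop.tmp.dir
$ bin/hdfs getconf -confKey hadoop.tmp.dir
```

If the printed path is under /tmp, it will not survive a reboot that cleans the temp folder.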


  1. Create a new temp folder (e.g. /home/ubuntu/apps/hadoop/tmp).
  2. Add the hadoop.tmp.dir property to the core-site.xml file (located under etc/hadoop in $HADOOP_ROOT), pointing it at the new folder.
  3. Reformat HDFS.
  4. Restart Hadoop.
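Assuming the example paths above and the stock start/stop scripts shipped in Hadoop's sbin/ directory, the whole fix might look like this sketch (run from $HADOOP_HOME):

```shell
# 1. create a temp folder that survives reboots (outside /tmp)
mkdir -p /home/ubuntu/apps/hadoop/tmp

# 2. point hadoop.tmp.dir at it by editing etc/hadoop/core-site.xml
#    (see the Note below for the resulting section)

# 3. reformat HDFS -- this erases any data currently in HDFS
bin/hdfs namenode -format

# 4. restart hadoop
sbin/stop-dfs.sh
sbin/start-dfs.sh
```

Because step 3 wipes HDFS, copy out anything you need before running it.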

Note: the core-site.xml section will look like the following after the fix.
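A minimal core-site.xml carrying the new property might look like this (assuming the /home/ubuntu/apps/hadoop/tmp folder from step 1 and the localhost:9000 NameNode address from the error message above):

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/ubuntu/apps/hadoop/tmp</value>
  </property>
</configuration>
```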

