Hadoop Optimization Tips

Over the last few years Hadoop has become one of the most popular platforms for big data processing. The MapReduce paradigm has shown itself to be incredibly powerful for many data analytics applications. As powerful as it may be, optimization of Hadoop and MapReduce is still more of an art than a science. The following are a number of tips for improving the performance of Hadoop jobs.

Use more Reducers

Many first-time Hadoop users make the mistake of running a job with only one reducer. While a single reducer is fine for some applications, it isn't for the majority. Be sure to use a much larger number of reducers, at a minimum one per node in your cluster.
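As a sketch, you can set a cluster-wide default in mapred-site.xml via mapred.reduce.tasks (mapreduce.job.reduces in Hadoop 2.0), or override it per job. The value below assumes a hypothetical 10-node cluster and is only an illustrative starting point:

    <!-- mapred-site.xml: default reducer count for a hypothetical 10-node cluster -->
    <property>
      <name>mapred.reduce.tasks</name>  <!-- mapreduce.job.reduces in Hadoop 2.0 -->
      <value>10</value>  <!-- one reducer per node -->
    </property>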

Increase HDFS Blocksize

The default block size of files in HDFS is 64M. This value is quite small for modern-day hardware. In addition, smaller block sizes mean more blocks to manage and more map tasks to execute. The overhead that comes with managing such a small block size inhibits performance in most jobs. The HDFS block size should be set to at least 128M, but you may find that 256M or even 512M performs better on many systems. To change the HDFS block size, set the dfs.block.size parameter in Hadoop 1.0 or the dfs.blocksize parameter in Hadoop 2.0. You'll find the setting in the hdfs-site.xml configuration file.
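For example, a minimal hdfs-site.xml sketch setting a 128M block size (the value is given in bytes; 128M is an assumed starting point, not a universal recommendation):

    <!-- hdfs-site.xml: raise the block size to 128M (134217728 bytes) -->
    <property>
      <name>dfs.blocksize</name>  <!-- dfs.block.size in Hadoop 1.0 -->
      <value>134217728</value>  <!-- 128 * 1024 * 1024 -->
    </property>

Note that this applies to files written after the change; existing files keep the block size they were written with.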

Increase namenode and datanode threads

By default, both the HDFS namenode and HDFS datanodes have 10 threads available to handle requests. As you scale up the size of your Hadoop cluster, these threads can get overwhelmed and become unable to keep up with the number of incoming requests. This is especially true for the namenode. Some Hadoop engineers recommend setting the thread count to 10% of the number of nodes in your cluster; however, I have found this to work poorly on hardware with high core counts. Personally, I recommend a thread count equal to 50% of the number of nodes for the namenode and 25% of the number of nodes for the datanodes. Needless to say, if your cluster is small enough that the thread count would fall below 10, keep the default of 10 threads. The thread count can be modified by setting the dfs.namenode.handler.count and dfs.datanode.handler.count parameters in hdfs-site.xml.
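Applying those percentages to a hypothetical 200-node cluster gives 100 namenode threads and 50 datanode threads. A sketch of the hdfs-site.xml entries:

    <!-- hdfs-site.xml: handler threads for a hypothetical 200-node cluster -->
    <property>
      <name>dfs.namenode.handler.count</name>
      <value>100</value>  <!-- 50% of 200 nodes -->
    </property>
    <property>
      <name>dfs.datanode.handler.count</name>
      <value>50</value>  <!-- 25% of 200 nodes -->
    </property>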

Increase parallel copies

The parallel copies parameter limits how many nodes a reducer can copy data from in parallel. This value defaults to 5, which is usually way too small for larger clusters. A general rule of thumb is to set it to the square root of the number of nodes in your cluster. For example, if you have a 100-node cluster, this should be increased from 5 to 10. If the square root of the number of nodes in your cluster is below 5, keep the default of 5. The parallel copies value can be set via the mapred.reduce.parallel.copies parameter in Hadoop 1.0 and the mapreduce.reduce.shuffle.parallelcopies parameter in Hadoop 2.0. You'll find the setting in the mapred-site.xml configuration file.
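A sketch of the mapred-site.xml entry for the 100-node example above, using the Hadoop 1.0 parameter name:

    <!-- mapred-site.xml: parallel shuffle copies for a hypothetical 100-node cluster -->
    <property>
      <name>mapred.reduce.parallel.copies</name>  <!-- mapreduce.reduce.shuffle.parallelcopies in Hadoop 2.0 -->
      <value>10</value>  <!-- sqrt(100) = 10 -->
    </property>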

Enable compression

If the amount of data you are sending across the network during the MapReduce shuffle phase is large, compressing it may decrease the time spent in the shuffle phase and ultimately improve job performance. Just set mapred.compress.map.output (Hadoop 1.0) or mapreduce.map.output.compress (Hadoop 2.0) to true to enable compression of map outputs.
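A minimal mapred-site.xml sketch, shown with the Hadoop 1.0 parameter name:

    <!-- mapred-site.xml: compress intermediate map output before the shuffle -->
    <property>
      <name>mapred.compress.map.output</name>  <!-- mapreduce.map.output.compress in Hadoop 2.0 -->
      <value>true</value>
    </property>

You can also pick the codec via mapred.map.output.compression.codec; Snappy (org.apache.hadoop.io.compress.SnappyCodec) is a common choice, assuming it is installed on your cluster.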

Similarly, if the final output of your job is large, it may be worthwhile to compress the data before writing it to disk, saving time on I/O. Just set mapred.output.compress (Hadoop 1.0) or mapreduce.output.fileoutputformat.compress (Hadoop 2.0) to true to enable final output compression. You'll find all of these configuration values in mapred-site.xml.
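And the corresponding sketch for final output compression, again with the Hadoop 1.0 name:

    <!-- mapred-site.xml: compress the final job output on disk -->
    <property>
      <name>mapred.output.compress</name>  <!-- mapreduce.output.fileoutputformat.compress in Hadoop 2.0 -->
      <value>true</value>
    </property>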

Increase slowstart

One of the more complex MapReduce configuration parameters is the slowstart parameter. You'll find this parameter set via mapred.reduce.slowstart.completed.maps in Hadoop 1.0 and mapreduce.job.reduce.slowstart.completedmaps in Hadoop 2.0, both found in mapred-site.xml. This configuration value controls when MapReduce reduce tasks begin. It is configured as a percentage of map tasks that must complete before reducers will begin and defaults to 5% (or 0.05). For example, if the default of 5% is set and your job will have 100 map tasks executed, the reducers will begin after the 5th map task is completed.

If your job does not have a lot of data to send during the shuffle phase, the default slowstart value is way too low. You'll find that your reducers mostly sit around doing nothing during the job. While they sit idle, they take valuable cluster resources away from map tasks that could be running instead.

Configuration of this value is highly dependent on your application, so some trial and error will be needed. As a general rule of thumb, 5% is far too low for most applications; setting the slowstart to 25% or 50% will usually improve job performance.
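For instance, to hold reducers back until half the map tasks have finished (0.50 here is an illustrative value to tune from, using the Hadoop 1.0 parameter name):

    <!-- mapred-site.xml: start reducers after 50% of map tasks complete -->
    <property>
      <name>mapred.reduce.slowstart.completed.maps</name>  <!-- mapreduce.job.reduce.slowstart.completedmaps in Hadoop 2.0 -->
      <value>0.50</value>
    </property>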

Adjust task heap size

The default task heap size in Hadoop is only 200M. On modern-day systems with large amounts of memory, this is far too small and does not utilize the system's memory effectively. It is a good idea to set this to a much larger value, which will depend on the amount of memory in your system. On a system with 8G of memory and 8 cores, a heap size of 512M or 768M is a good place to start. Be careful not to use too much memory, as the Hadoop daemons and other operating system processes need memory as well. The heap size can be adjusted via mapred.child.java.opts in mapred-site.xml in both Hadoop 1.0 and Hadoop 2.0. In Hadoop 2.0, you also have the option to set the heap size for map tasks and reduce tasks separately, allowing further tuning.
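A sketch of the relevant mapred-site.xml entries; the heap values are illustrative starting points for the 8G/8-core example above, not recommendations for every cluster:

    <!-- mapred-site.xml: task JVM heap, applies to both maps and reduces -->
    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx512m</value>
    </property>

    <!-- Hadoop 2.0 only: tune map and reduce heaps separately -->
    <property>
      <name>mapreduce.map.java.opts</name>
      <value>-Xmx512m</value>
    </property>
    <property>
      <name>mapreduce.reduce.java.opts</name>
      <value>-Xmx768m</value>
    </property>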

Monitor your system

Perhaps the most important thing you can do to increase job performance is to monitor your system carefully and adjust accordingly. Without proper monitoring, you won't know how much memory your job uses, how heavy your I/O load is, or whether you have failing hardware in your cluster that requires maintenance. There are many monitoring solutions; the most popular one used with Hadoop is Ganglia.
