
Hadoop Optimization Tips

Updated on November 10, 2014

Over the last few years Hadoop has become one of the most popular platforms for big data processing. The MapReduce paradigm has shown itself to be incredibly powerful for many data analytics applications. As powerful as it may be, optimization of Hadoop and MapReduce is still more of an art than a science. The following are a number of tips for improving the performance of Hadoop jobs.

Use more reducers

Many first-time Hadoop users make the mistake of running a job with only one reducer. While a single reducer is fine for some applications, it isn't for the majority: every map output record must funnel through that one task on one node, leaving the rest of the cluster idle during the reduce phase. Be sure to use a much larger number of reducers, minimally one per node in your cluster.
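
The reducer count can be set as a cluster-wide default in mapred-site.xml. A minimal sketch, assuming Hadoop 2.0 property names and a hypothetical 10-node cluster (the value of 10 is illustrative, one reducer per node):

    <!-- mapred-site.xml: default number of reduce tasks per job.     -->
    <!-- Hadoop 1.0 uses mapred.reduce.tasks instead.                 -->
    <!-- The value 10 assumes a hypothetical 10-node cluster with one -->
    <!-- reducer per node; tune it for your own hardware.             -->
    <property>
      <name>mapreduce.job.reduces</name>
      <value>10</value>
    </property>

Individual jobs can also override the default, for example with -D mapreduce.job.reduces=10 on the command line (if the job uses ToolRunner) or job.setNumReduceTasks(10) in the driver code.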

Increase HDFS Blocksize

The default block size for files in HDFS is 64M, which is quite small for modern hardware. In addition, smaller block sizes mean more block files for the namenode to track and more map tasks to execute, and the overhead of managing all of those small blocks inhibits performance in most jobs. The HDFS blocksize should be set to at least 128M, but you may find that 256M or even 512M performs better on many systems. To change the HDFS blocksize, set the dfs.block.size parameter for Hadoop 1.0 or the dfs.blocksize parameter in Hadoop 2.0. You'll find the setting in the hdfs-site.xml configuration file.
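
As a sketch, here's what the hdfs-site.xml entry might look like for a 128M blocksize. The value is given in bytes (134217728 bytes = 128M), and note that a new blocksize only applies to files written after the change; existing files keep their old block size:

    <!-- hdfs-site.xml: HDFS block size, in bytes (134217728 = 128M). -->
    <!-- Hadoop 1.0 uses dfs.block.size instead of dfs.blocksize.     -->
    <property>
      <name>dfs.blocksize</name>
      <value>134217728</value>
    </property>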

Increase namenode and datanode threads

By default, both the HDFS namenode and the HDFS datanodes have 10 threads available to handle requests. As you scale up the size of your Hadoop cluster, these threads can become overwhelmed and unable to keep up with the number of incoming requests. This is especially true for the namenode. Some Hadoop engineers recommend setting the thread count to 10% of the number of nodes in your cluster; however, I have found this to work poorly on hardware with high core counts. Personally, I recommend a thread count equal to 50% of the number of nodes for the namenode and 25% of the number of nodes for the datanodes. Needless to say, if your cluster is small enough that either count would fall below 10, keep the default of 10 threads. The thread counts can be modified by setting the dfs.namenode.handler.count and dfs.datanode.handler.count parameters in hdfs-site.xml.
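
For example, on a hypothetical 100-node cluster the 50%/25% rule above works out to 50 namenode handler threads and 25 datanode handler threads. A sketch of the hdfs-site.xml entries:

    <!-- hdfs-site.xml: handler thread counts, sized with the 50%/25% -->
    <!-- rule for an assumed 100-node cluster.                        -->
    <property>
      <name>dfs.namenode.handler.count</name>
      <value>50</value>  <!-- 50% of 100 nodes -->
    </property>
    <property>
      <name>dfs.datanode.handler.count</name>
      <value>25</value>  <!-- 25% of 100 nodes -->
    </property>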

Increase parallel copies

The parallel copies parameter limits how many nodes a reducer can copy data from in parallel. This value defaults to 5, which is usually way too small for larger clusters. A general rule of thumb is to set it to the square root of the number of nodes in your cluster. For example, if you have a 100-node cluster, this should be increased from 5 to 10. If the square root of the number of nodes in your cluster is below 5, keep the default of 5. The parallel copies parameter can be set via the mapred.reduce.parallel.copies parameter in Hadoop 1.0 and the mapreduce.reduce.shuffle.parallelcopies parameter in Hadoop 2.0. You'll find the setting in the mapred-site.xml configuration file.
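
Continuing the 100-node example (sqrt(100) = 10), a sketch of the mapred-site.xml entry using the Hadoop 2.0 property name:

    <!-- mapred-site.xml: parallel shuffle copies per reducer, set to -->
    <!-- sqrt(nodes) for an assumed 100-node cluster. Hadoop 1.0 uses -->
    <!-- mapred.reduce.parallel.copies instead.                       -->
    <property>
      <name>mapreduce.reduce.shuffle.parallelcopies</name>
      <value>10</value>
    </property>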

Enable compression

If the amount of data you are sending across the network during the MapReduce shuffle phase is large, compressing it may decrease the time spent in the shuffle phase and ultimately improve your job's performance. Just set mapred.compress.map.output (Hadoop 1.0) or mapreduce.map.output.compress (Hadoop 2.0) to true to enable compression of map outputs.

Similarly, if the final output of your job is large, it may be worthwhile to compress the data before writing it to disk, saving I/O time. Just set mapred.output.compress (Hadoop 1.0) or mapreduce.output.fileoutputformat.compress (Hadoop 2.0) to true to enable final output compression. You'll find all of these configuration values in mapred-site.xml.
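
A sketch of the Hadoop 2.0 mapred-site.xml entries for both settings. The Snappy codec shown for map output is a common choice because it favors speed over compression ratio, but which codecs are available depends on your installation:

    <!-- mapred-site.xml: compress intermediate map output for the    -->
    <!-- shuffle. Hadoop 1.0 uses mapred.compress.map.output instead. -->
    <property>
      <name>mapreduce.map.output.compress</name>
      <value>true</value>
    </property>
    <property>
      <name>mapreduce.map.output.compress.codec</name>
      <value>org.apache.hadoop.io.compress.SnappyCodec</value>
    </property>
    <!-- Compress the final job output written to HDFS. Hadoop 1.0    -->
    <!-- uses mapred.output.compress instead.                         -->
    <property>
      <name>mapreduce.output.fileoutputformat.compress</name>
      <value>true</value>
    </property>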

Increase slowstart

One of the more complex MapReduce configuration parameters is the slowstart parameter, set via mapred.reduce.slowstart.completed.maps in Hadoop 1.0 and mapreduce.job.reduce.slowstart.completedmaps in Hadoop 2.0, both found in mapred-site.xml. This value controls when reduce tasks begin. It is configured as the fraction of map tasks that must complete before reducers start, and it defaults to 5% (or 0.05). For example, with the default of 5% and a job that runs 100 map tasks, the reducers will begin after the 5th map task completes.

If your job does not have much data to send during the shuffle phase, the default slowstart value is far too low. Your reducers will mostly sit idle during the job, and while they sit idle they occupy valuable cluster resources that map tasks could otherwise use.

The right value is highly dependent on your application, so some trial and error will be needed. As a general rule of thumb, though, 5% is far too low for most applications; setting the slowstart to 25% or 50% will usually improve job performance.
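
A sketch of the mapred-site.xml entry for a 50% slowstart, using the Hadoop 2.0 property name. The 0.50 here is a starting point to tune from, not a universal recommendation:

    <!-- mapred-site.xml: don't launch reducers until 50% of the job's  -->
    <!-- map tasks have completed (expressed as a fraction, 0.50 = 50%).-->
    <!-- Hadoop 1.0 uses mapred.reduce.slowstart.completed.maps instead.-->
    <property>
      <name>mapreduce.job.reduce.slowstart.completedmaps</name>
      <value>0.50</value>
    </property>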

Adjust task heap size

The default heap size used for Hadoop tasks is only 200M. On modern systems with large amounts of memory, this is far too small and does not utilize the memory of the system effectively, so it is a good idea to set it to a much larger value. The right value depends on the amount of memory in your system. If you have a system with 8G of memory and 8 cores, a heap size of 512M or 768M would be a good place to start. Be careful not to use too much memory, as the remaining Hadoop daemons and other operating system processes need memory as well. The heap size can be adjusted via mapred.child.java.opts in mapred-site.xml in both Hadoop 1.0 and Hadoop 2.0. In Hadoop 2.0, users also have the option to set the heap sizes for map tasks and reduce tasks separately, so further tuning is possible there.
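
A sketch of the mapred-site.xml entries for the 8G/8-core example above. The first property works in both versions; the last two are the Hadoop 2.0 per-task overrides, and all of the -Xmx values are illustrative starting points:

    <!-- mapred-site.xml: JVM heap for child tasks (Hadoop 1.0 and 2.0). -->
    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx512m</value>
    </property>
    <!-- Hadoop 2.0 only: override the heap for map and reduce tasks     -->
    <!-- separately, since reducers often need more memory than mappers. -->
    <property>
      <name>mapreduce.map.java.opts</name>
      <value>-Xmx512m</value>
    </property>
    <property>
      <name>mapreduce.reduce.java.opts</name>
      <value>-Xmx768m</value>
    </property>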

Monitor your system

Perhaps the most important thing you can do to increase job performance is to monitor your system carefully and adjust accordingly. Without proper monitoring, you won't know how much memory your jobs use, how heavy your I/O load is, or whether you have failing hardware in your cluster that requires maintenance. There are many monitoring solutions, but the most popular one used with Hadoop is Ganglia.
