Hardware requirements

Hardware planning cannot be complete without a clear understanding of the workload. When planning your Hadoop cluster, it is important to accurately predict the volume, type, and number of tasks it will run. Collect metrics on the actual workloads during a pilot project; this makes it much easier to scale the pilot environment later without significant changes to the existing infrastructure.

The number of machines and their specifications depend on a few factors:

  • the total volume of data;

  • the data retention policy (the default HDFS replication factor is 3);

  • the type of workload you have (see Hardware requirements depending on workload patterns);

  • the data storage mechanism (data container, type of compression used, if any); a rough capacity calculation based on these factors is sketched below.
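As a rough illustration of how these factors combine, here is a minimal sizing sketch in Python. The formula and all input values (compression ratio, temporary-space overhead, disk headroom, disks per node) are assumptions chosen for the example, not official sizing rules:

```python
import math

def raw_hdfs_capacity_tb(logical_data_tb,
                         replication=3,          # default HDFS replication factor
                         compression_ratio=1.0,  # 0.5 means data compresses 2:1
                         temp_overhead=0.25,     # scratch space for intermediate data
                         headroom=0.20):         # keep disks below ~80% full
    """Estimate the raw disk capacity needed across all DataNodes."""
    stored = logical_data_tb * compression_ratio * replication
    with_temp = stored * (1 + temp_overhead)
    return with_temp / (1 - headroom)

# Example: 100 TB of logical data that compresses 2:1.
needed = raw_hdfs_capacity_tb(100, compression_ratio=0.5)
print(f"~{needed:.0f} TB raw capacity")            # ~234 TB
print(math.ceil(needed / (12 * 4)), "DataNodes")   # assuming 12 x 4 TB disks per node
```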

 
Every Hadoop cluster needs at least the following types of servers:

  • master nodes that run the management services (NameNode, ResourceManager, and so on);

  • worker nodes that run the DataNode and NodeManager daemons and store the actual data;

  • edge (gateway) nodes that host client tools and act as access points to the cluster.

Recommendations for a pilot cluster are different.

Power supply and consumption

Power is a major concern when designing Hadoop clusters. Before purchasing the biggest and fastest nodes, analyze the power consumption of your existing hardware. We have observed significant savings in price and power by avoiding the fastest CPUs, redundant power supplies, and similar premium hardware.

Vendors today are building machines for cloud data centers that are designed to reduce cost, power consumption, and weight. Supermicro, Dell, and HP all have such product lines for cloud providers. So, if you are buying equipment for large clusters, take a look at these stripped-down "cloud servers".

For DataNodes, a single power supply unit (PSU) is sufficient, but for a NameNode, use redundant PSUs. Server designs that share PSUs across adjacent servers can offer increased reliability without increased cost.

Some co-location sites bill based on the maximum possible power budget rather than on actual consumption. In such locations, the power-saving features of the latest CPUs are not fully realized. We therefore recommend checking the site's power billing options in advance.
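For example, here is a hypothetical comparison of the two billing models; the rate and wattage figures are made up for illustration:

```python
RATE_PER_KW_MONTH = 150.0   # assumed $/kW per month; check your site's actual rate

servers = 40
psu_rating_w = 750          # nameplate (maximum possible) draw per server
actual_draw_w = 320         # measured average draw per server

billed_by_nameplate = servers * psu_rating_w / 1000 * RATE_PER_KW_MONTH
billed_by_usage = servers * actual_draw_w / 1000 * RATE_PER_KW_MONTH

print(f"nameplate billing: ${billed_by_nameplate:,.0f}/month")   # $4,500/month
print(f"metered billing:   ${billed_by_usage:,.0f}/month")       # $1,920/month
# Under nameplate billing, CPU power-saving features lower the actual
# draw but not the bill, so their benefit is not realized.
```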

Network

Network capacity is the most challenging parameter to estimate because Hadoop workloads vary widely. The key is to buy enough network capacity, at a reasonable cost, so that all nodes in the cluster can communicate with each other at reasonable speeds. Large clusters typically use dual 1 Gbps links for all nodes in each 20-node rack and 2×10 Gbps interconnect links per rack going up to a pair of central switches.

A good network design accounts for the possibility of unacceptable congestion at critical points in the network under realistic loads. Generally accepted oversubscription ratios are around 4:1 at the server access layer and 2:1 between the access layer and the aggregation layer or core. Lower oversubscription ratios can be considered if higher performance is required. We also recommend allowing for 1 Gbps of oversubscription between racks.
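A quick back-of-the-envelope check, using the example topology above (20 nodes per rack with dual 1 Gbps links and 2×10 Gbps uplinks), shows how to verify a design against these ratios:

```python
nodes_per_rack = 20
gbps_per_node = 2 * 1      # dual 1 Gbps links per node
uplink_gbps = 2 * 10       # 2 x 10 Gbps interconnect links per rack

access_gbps = nodes_per_rack * gbps_per_node   # 40 Gbps server-facing capacity
ratio = access_gbps / uplink_gbps              # 40 / 20 = 2.0

print(f"rack oversubscription: {ratio:.0f}:1")  # 2:1, within the accepted range
```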

It is critical to have dedicated switches for the cluster instead of trying to allocate a virtual connect (VC) in existing switches; otherwise the load of the Hadoop cluster would impact the other users of the switch. It is equally important to work with the networking team to ensure that the switches suit both Hadoop and their monitoring tools.

Design the networking so as to retain the option of adding more racks of Hadoop servers: getting the networking wrong is expensive to fix. The quoted bandwidth of a switch is analogous to the miles-per-gallon rating of an automobile; you are unlikely to replicate it in practice. "Deep buffering" is preferable to low latency in switches. Enabling Jumbo Frames across the cluster improves bandwidth and, through better checksums, may also improve packet integrity.
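The bandwidth gain from Jumbo Frames comes from amortizing per-frame header overhead over a larger payload. The following sketch uses standard Ethernet/IPv4/TCP header sizes to compare payload efficiency at the default MTU of 1500 bytes against a 9000-byte jumbo MTU:

```python
ETH_OVERHEAD = 18    # Ethernet header + FCS, bytes per frame
IP_TCP_HDRS = 40     # IPv4 + TCP headers, bytes per frame

for mtu in (1500, 9000):
    payload = mtu - IP_TCP_HDRS                 # application bytes per frame
    efficiency = payload / (mtu + ETH_OVERHEAD)
    print(f"MTU {mtu}: ~{efficiency:.1%} payload efficiency")
# MTU 1500: ~96.2%, MTU 9000: ~99.4%; each frame's CRC also
# covers a larger payload.
```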
