Riding the Spotted Elephant

November 12, 2015 (updated January 8, 2024)


Introduction

One of the benefits of moving Hadoop workloads to the cloud is reduced cost and risk. No upfront capital expenditure on hardware is required, and ongoing expenditure scales only in response to actual usage, which greatly lowers risk. Services like Qubole eliminate administration overhead as well. Amazon EC2 offers multiple instance purchasing options; the Spot Instance purchasing option provides an effective way of reducing compute costs, with many documented case studies showing cost reductions of 50-90% compared to regular purchasing options. Few data analysts, however, are in a position to exploit these capabilities, as they require sophisticated setup and working knowledge of cloud APIs. In this post, we describe how we have modified Qubole’s auto-scaling clusters to take advantage of Spot Instances and thereby exposed their cost-saving capabilities to a wide audience of data analysts and engineers.

Why Spot Instances?

Spot Instances allow users to bid on unused Amazon EC2 capacity and run those instances for as long as their bid exceeds the Spot Price, which is set by available supply and demand. Users whose bids exceed the Spot Price gain access to the available Spot Instances. Workloads on a Hadoop cluster are not uniform, and spikes cannot always be predicted. Qubole’s auto-scaling clusters add more compute power when required and scale the cluster back down when the load recedes. This automatic addition of compute capacity is an opportunity to use Spot Instances for auto-scaling at significantly lower cost than On-Demand Instances. The following graph shows the Spot Price over the last three months for the m1.xlarge instance type in the us-east-1b availability zone. Over this time range, the Spot Price is significantly lower than the On-Demand price of 52¢/hour for the same instance type.

[Graph: Spot Price history for m1.xlarge in us-east-1b over the past three months]
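For readers who want to inspect this price history themselves, the following sketch (not part of Qubole’s product) queries recent Spot Prices with the boto3 SDK and compares them to an assumed On-Demand price of 52¢/hour; the instance type and availability zone match the graph above.

```python
# Sketch: compare recent Spot Prices to an assumed On-Demand price.
# Requires AWS credentials configured for boto3; boto3 here is simply a
# convenient modern SDK for illustration.
from datetime import datetime, timedelta
import boto3

ON_DEMAND_PRICE = 0.52          # assumed On-Demand price for m1.xlarge ($/hour)
INSTANCE_TYPE = "m1.xlarge"     # example instance type from the post
AVAILABILITY_ZONE = "us-east-1b"

ec2 = boto3.client("ec2", region_name="us-east-1")
history = ec2.describe_spot_price_history(
    InstanceTypes=[INSTANCE_TYPE],
    AvailabilityZone=AVAILABILITY_ZONE,
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.utcnow() - timedelta(days=90),
    EndTime=datetime.utcnow(),
)

prices = [float(p["SpotPrice"]) for p in history["SpotPriceHistory"]]
if prices:
    print(f"min={min(prices):.3f} max={max(prices):.3f} "
          f"on-demand={ON_DEMAND_PRICE:.2f} $/hour")
```

In practice the price history is paginated; a production tool would follow NextToken and aggregate results per availability zone.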

Spot Instance Challenges

A number of challenges became obvious as we thought about integrating Spot Instances with our auto-scaling clusters:

  1. Availability of Spot Instances is not guaranteed when the market is saturated. Relying only on Spot Instances therefore does not guarantee that a cluster can scale up when required – and this can lead to poor response times (for ad-hoc workloads) or missed deadlines (for periodic batch workloads).
  2. The bid amount for a Spot Instance cannot be easily determined. The Spot Price depends on the unused capacity in Amazon EC2, which is influenced by many factors such as availability zone and time of day. Using a very high bid amount in order to guarantee instance availability can have very unexpected financial implications (see, for instance, the Slideshare experience). In the graph below for m1.xlarge over the past three months, the Spot Price exceeds $20 per hour a number of times (compared to the On-Demand price of 52¢); a rough quantitative sketch of this risk follows the graph.
  3. Nodes dynamically added to a Hadoop cluster can be used as compute nodes (i.e. running a TaskTracker), as data nodes (running a DataNode daemon), or both. With a vanilla distribution of Hadoop, either option is problematic:
    1. If Spot nodes are used only as compute nodes, the core nodes (data nodes that are part of HDFS) can quickly become an IO bottleneck and slow down the entire cluster.
    2. If Spot nodes are used as data nodes, the cluster can become unstable. Spot Instances can be terminated at any time without notice when the Spot Price exceeds the bid price, and frequent spikes in the Spot Price aggravate the problem. Spot nodes working as data nodes lose the HDFS replicas residing on them, and HDFS can become corrupt as a result.

[Graph: m1.xlarge Spot Price over the past three months, with spikes above $20/hour]
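To make the risk of an uncapped bid concrete, here is a small self-contained sketch; the price series is invented purely for illustration and simply echoes the kind of $20/hour spikes shown in the graph above.

```python
# Sketch: given a series of observed Spot Prices, estimate how often an
# uncapped bid would have paid more than the On-Demand price.
def spike_fraction(spot_prices, on_demand_price, multiple=1.0):
    """Fraction of observations where the Spot Price exceeded
    `multiple` times the On-Demand price."""
    over = sum(1 for p in spot_prices if p > multiple * on_demand_price)
    return over / len(spot_prices) if spot_prices else 0.0

# Made-up illustrative prices (in $/hour), not real market data.
sample = [0.06, 0.07, 0.05, 21.0, 0.06, 0.08, 20.5, 0.07]
print(spike_fraction(sample, on_demand_price=0.52))   # 0.25
```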

Spotted Hadoop Clusters

In extending our auto-scaling technology to use Spot Instances, we set out to address the challenges above. We also wanted to let users control their own pricing strategies while making it very simple for them to do so. Our overall strategy is described next. When Qubole Hadoop clusters are first started (with a single Hadoop master node and the minimum number of slaves configured by the user), all of the nodes are On-Demand Instances. This yields a small, stable cluster that can be used to orchestrate more advanced provisioning and data placement strategies. Depending on the workload, the cluster can then automatically scale up by adding more nodes.
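As a rough illustration, a cluster configuration capturing this startup behaviour might look like the following; the field names are invented for this post and are not Qubole’s actual configuration schema.

```python
# Sketch: hypothetical cluster configuration illustrating the startup behaviour
# described above. Field names are illustrative only.
cluster_config = {
    "master_instance_type": "m1.xlarge",
    "slave_instance_type": "m1.xlarge",
    "min_nodes": 4,                      # started as On-Demand for a stable core
    "max_nodes": 20,                     # upper bound for auto-scaling
    "initial_purchasing": "on-demand",   # master + minimum slaves are On-Demand
}
```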

Cluster Provisioning Strategies

Users can choose from one of the following broad categories of auto-scaled clusters:

  1. On-Demand Instances only: The auto-scaled nodes added are only On-Demand Instances. This mode offers maximum stability at maximum cost.
  2. Spot Instances only: The auto-scaled nodes added are only Spot Instances. This mode suits workloads that are not time-critical and where users want to minimize cost.
  3. Hybrid: The auto-scaled nodes are a mix of Spot and On-Demand Instances. Users can configure the maximum percentage of Spot Instances in the cluster.

Note that these provisioning strategies are followed both while up-scaling and while down-scaling. In particular, when a cluster returns to its minimum size, it is again composed only of On-Demand Instances.
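The sketch below shows one way the up-scaling decision for these three modes could be expressed. The function and its parameters are hypothetical and only illustrate the logic; they are not Qubole’s implementation.

```python
# Sketch: decide which purchasing option to use for the next node added while
# up-scaling. Hypothetical logic; names are invented for illustration.
def next_node_purchasing(strategy, spot_nodes, total_nodes, max_spot_pct=50):
    """Return 'spot' or 'on-demand' for the next auto-scaled node.

    strategy      -- 'on-demand', 'spot', or 'hybrid'
    spot_nodes    -- number of Spot nodes currently in the cluster
    total_nodes   -- cluster size after adding the new node
    max_spot_pct  -- hybrid-only cap on the share of Spot nodes (percent)
    """
    if strategy == "on-demand":
        return "on-demand"
    if strategy == "spot":
        return "spot"
    # Hybrid: add a Spot node only while the configured cap is not exceeded.
    if 100.0 * (spot_nodes + 1) / total_nodes <= max_spot_pct:
        return "spot"
    return "on-demand"

# Example: a 10-node hybrid cluster with 4 Spot nodes adds an 11th node.
print(next_node_purchasing("hybrid", spot_nodes=4, total_nodes=11))  # 'spot'
```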

Spot Instance Bidding Strategy

We realized that it was very hard for users to come up with the right bid. So our user interface simply asks users to set the percentage of the regular On-Demand price they are willing to bid, along with a timeout on that bid. For example, a 90% bid level for m1.xlarge in the us-east region translates to a maximum bid of around 47¢/hour. If a Spot Instance is not allocated within the user-configurable timeout period, a Hybrid cluster auto-scales using regular On-Demand Instances. These simple controls let users easily choose one of the well-known bidding strategies:

  1. Bid very low and set a large timeout. This minimizes cost by acquiring instances only when they are available at a low price.
  2. Bid at about 100% (or just above) and achieve good overall cost reduction while occasionally falling back to On-Demand Instances.
  3. Bid very high (say 200%) and be virtually guaranteed to get an instance even when AWS is running low on excess capacity.
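The following sketch shows how a bid percentage and timeout might translate into a Spot request with an On-Demand fallback. The injected helper functions are hypothetical placeholders so the example stays independent of any particular cloud SDK; this illustrates the logic, not Qubole’s code.

```python
# Sketch: turn a user-facing bid percentage and timeout into a Spot request
# with an On-Demand fallback. request_spot, spot_filled, launch_on_demand are
# hypothetical callables, not Qubole or AWS APIs.
import time

def compute_bid(on_demand_price, bid_pct):
    """Maximum bid in $/hour, e.g. 0.52 * 90% ~= 0.47 for m1.xlarge."""
    return round(on_demand_price * bid_pct / 100.0, 4)

def scale_up_one_node(on_demand_price, bid_pct, timeout_s,
                      request_spot, spot_filled, launch_on_demand,
                      poll_s=15):
    """Try a Spot Instance first; fall back to On-Demand after timeout_s."""
    request_id = request_spot(max_price=compute_bid(on_demand_price, bid_pct))
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if spot_filled(request_id):
            return "spot"        # the Spot request was fulfilled in time
        time.sleep(poll_s)
    launch_on_demand()           # timed out: fall back to On-Demand
    return "on-demand"
```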

Hybrid Clusters and Data Placement

Hybrid auto-scaled clusters are best suited for applications that want the lower cost of Spot Instances but also need the stability to avoid data corruption. Hybrid clusters use a new HDFS block placement policy which ensures that at least one replica of any block stored on a Spot Instance resides on a stable On-Demand Instance. If the cluster loses Spot Instances (often in bulk), a replica of the data continues to exist on On-Demand Instances and is eventually re-replicated to other nodes to restore the cluster’s replication factor. Because of this strategy, all Spot Instances can participate as data nodes in the cluster without compromising the availability and integrity of HDFS. When Spot nodes are added, the storage capacity and bandwidth of the cluster increase, and the cluster is no longer bottlenecked by the core nodes.
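The placement rule itself can be illustrated independently of Hadoop’s APIs. The following is a simplified Python model of the idea (at least one replica of every block on a stable On-Demand node); the actual implementation is a custom HDFS block placement policy in Java, and the names below are invented for illustration.

```python
# Sketch: simplified model of the Hybrid-cluster placement rule -- every block
# keeps at least one replica on a stable (On-Demand) node. Illustrative only,
# not the actual HDFS BlockPlacementPolicy code.
import random

def choose_replica_targets(writer, stable_nodes, spot_nodes, replication=3):
    """Pick `replication` distinct nodes for a block's replicas.

    Starts with the writer's local node (as HDFS does for the first replica)
    and guarantees at least one target is a stable On-Demand node.
    """
    targets = [writer]
    others = [n for n in stable_nodes + spot_nodes if n != writer]
    random.shuffle(others)
    for node in others:
        if len(targets) == replication:
            break
        targets.append(node)
    # Enforce the invariant: at least one replica on a stable node.
    if not any(t in stable_nodes for t in targets):
        targets[-1] = random.choice([n for n in stable_nodes if n not in targets])
    return targets

stable = ["core-1", "core-2", "core-3"]
spot = ["spot-1", "spot-2", "spot-3", "spot-4"]
print(choose_replica_targets("spot-2", stable, spot))
```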

Example and User Interface

The following example illustrates one of the many possible strategies for a Hybrid auto-scaled cluster. In this case, the user has chosen the m1.xlarge instance type for the cluster. The bid price has been specified as a percentage of the On-Demand Instance price (90% in the example below). The timeout for Spot Instance requests has been set to 5 minutes, and up to 50% of the cluster’s nodes may run as Spot Instances.

[Screenshot: Hybrid cluster configuration in the Qubole user interface]
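Expressed as data, the example above might look like the following; the field names are illustrative and are not Qubole’s actual configuration or API fields.

```python
# Sketch: the example settings above as a hypothetical configuration.
hybrid_cluster_example = {
    "instance_type": "m1.xlarge",
    "provisioning": "hybrid",
    "spot_bid_pct_of_on_demand": 90,     # ~= $0.47/hour against a $0.52 On-Demand price
    "spot_request_timeout_minutes": 5,   # fall back to On-Demand after this
    "max_spot_pct_of_cluster": 50,       # at most half the nodes on Spot
}
```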
