How we built it: designing a globally consistent transaction engine


Alex Baranau was a software engineer at Cask, where he was responsible for designing and building software fueling the next generation of Big Data applications. Alex is a contributor to HBase and Flume and has created several open-source projects. He also writes frequently about Big Data technologies.


At Cask, we are committed to contributing back to the open source community. One of our latest open-sourced projects is Tephra, a system that adds complete transaction support to Apache HBase™.

As an XA-style transaction system, Tephra is designed to be agnostic to the underlying data stores, so its usage is not limited to HBase. However, the choice of HBase as the first supported distributed data store was natural since it is widely used at Cask as a core component of our data application platform (CDAP) and boasts growing adoption in a wide range of organizations. It also shares similar design decisions with many other NoSQL databases, and as such, serves as a great example of how other databases can be integrated with Tephra.


In order to understand Tephra’s design, let’s take a look at some of the major requirements that drove it. First, the user or client of the transaction system should have maximum control over what the transaction system does. The importance of this should be clear to anyone who has worked with the Apache Hadoop™ ecosystem’s projects. One of the biggest selling points that attracted many early Hadoop adopters was its ability to act as a white box, i.e., you can make it do exactly what an application needs it to do.

What happens underneath Hadoop is clear and well understood. There are many configuration/API options to optimize it for your use case’s best performance. Similarly, since there are so many completely different use cases (from batch to real-time, from random to sequential data access) that require transaction support, we want the client to have great insight and fine-grained control of what the transaction system should do in any given case. Having full control on the client side not only allows you to make the best decisions for optimizing for specific use cases, but it also makes integration with third-party systems simpler.

Second, the transaction system should support transactions spanning across different data stores and third-party systems. A useful application that operates on massive amounts of data usually involves multiple different technologies working together (e.g., you may ingest your data through Apache Flume™, store it on HDFS, process it with MapReduce jobs, and deliver results by uploading to your favorite BI tool). As your system grows, you may have additional ways of ingesting data like Apache Kafka™, HTTP, etc. You may want to do real-time processing in parallel to serve results more quickly (even though potentially less accurately) where possible, and so on.

When the different components of your application share data and update it in multiple data stores in many different ways, it is important for the transaction system to support you. You don’t want to end up with only “some of your data” being consistent 🙂

Third, the transaction system should scale well while adding minimal overhead to your system. A system designed to run applications that process massive amounts of data has to perform well at scale. Adding transaction support cannot be allowed to hurt the system’s scalability; otherwise its usefulness is very limited.

Transactions as a service

One major component of Tephra is a transaction service that supports clients performing transactional operations. The service does not perform transactional operations itself; rather, clients “use” the service throughout the transaction lifecycle, applying the logic of a specific transaction model.
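To make this division of labor concrete, here is a minimal Python sketch of the split: a central service that only tracks transaction state, and a client that performs the data operations itself. All names here (TransactionService, the status strings) are illustrative, not Tephra’s actual API.

```python
# Illustrative sketch (not Tephra's real API) of the split of duties:
# the transaction service only manages transaction state, while the
# client performs the data operations itself.

class TransactionService:
    """Issues transactions and records their outcome; never touches data."""
    def __init__(self):
        self.next_id = 1
        self.status = {}           # tx id -> "in_progress" | "committed" | "aborted"

    def start(self):
        tx = self.next_id
        self.next_id += 1
        self.status[tx] = "in_progress"
        return tx

    def commit(self, tx):
        self.status[tx] = "committed"

    def abort(self, tx):
        self.status[tx] = "aborted"

# Client side: the client owns the actual reads and writes.
svc = TransactionService()
store = {}
tx = svc.start()
try:
    store["row1"] = ("value", tx)   # client applies the write itself,
    svc.commit(tx)                  # then asks the service to commit
except Exception:
    svc.abort(tx)                   # on failure the client rolls back
```

Because the service never sees the data, the same lifecycle works unchanged whether the client is writing to HBase, HDFS, or a third-party store.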


This approach gives transactional clients full control over what transaction logic is applied and how: for example, where applicable, the client can decide which resources transaction logic should be applied to and which should be accessed outside the scope of a transaction. This leaves room for several other optimizations, as we explain below, and makes it possible to integrate transactions with other third-party data stores on the client side without changes to the central transaction service itself.

Currently Tephra supports a transaction model that implements optimistic concurrency control and multi-version concurrency control methods. Optimistic concurrency control avoids the cost of locks but is most efficient when conflicts are rare. When applications are given lots of control over transaction logic, it is important to apply this control for the best outcome. For instance, in data intensive applications like high-throughput real-time stream processing, it is well worth the effort to design the processing flow so that it avoids conflicts, allowing lock-free data operations.
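The validation step of optimistic concurrency control can be sketched in a few lines: each transaction records the keys it changes, takes no locks, and at commit time is checked against transactions that committed concurrently. This is an illustrative model, not Tephra’s implementation; all names are made up.

```python
# A minimal sketch of optimistic conflict detection: no locks are taken;
# each transaction tracks the keys it wrote, and its commit is validated
# against transactions that committed after it started.

class OptimisticValidator:
    def __init__(self):
        self.clock = 0
        self.committed = []            # (commit_time, change_set)

    def begin(self):
        self.clock += 1
        return self.clock              # start timestamp = snapshot point

    def try_commit(self, start_ts, change_set):
        # Conflict if a concurrent committer touched any of the same keys.
        for commit_ts, other in self.committed:
            if commit_ts > start_ts and other & change_set:
                return False           # overlap with a concurrent commit
        self.clock += 1
        self.committed.append((self.clock, set(change_set)))
        return True

v = OptimisticValidator()
t1 = v.begin()
t2 = v.begin()
v.try_commit(t2, {"row1:col1"})        # commits first -> succeeds
v.try_commit(t1, {"row1:col1"})        # same key, concurrent -> rejected
```

Note that nothing blocks: the losing transaction simply retries, which is cheap as long as conflicts stay rare, and a processing flow designed to avoid overlapping change sets pays no locking cost at all.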

Multi-version concurrency control enables optimistic concurrency control and is supported natively by many distributed data stores (like HBase, Cassandra, etc.). This made HBase an ideal first choice to support in Tephra.
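The snapshot-read behavior that multi-version storage enables can be sketched as follows: each write is kept under a version timestamp (much as HBase keeps timestamped cell versions), and a reader sees only versions at or below its snapshot. Again, this is an illustrative sketch, not Tephra’s code.

```python
# A minimal multi-version store sketch: every write is kept under its
# transaction's timestamp, and a reader sees only the newest version
# visible to its own snapshot, never in-flight newer writes.

class MVCCStore:
    def __init__(self):
        self.versions = {}             # key -> list of (ts, value)

    def write(self, key, value, ts):
        self.versions.setdefault(key, []).append((ts, value))

    def read(self, key, snapshot_ts):
        # Return the newest value whose version is visible to the snapshot.
        visible = [(ts, v) for ts, v in self.versions.get(key, [])
                   if ts <= snapshot_ts]
        return max(visible)[1] if visible else None

store = MVCCStore()
store.write("k", "old", ts=5)
store.write("k", "new", ts=10)
store.read("k", snapshot_ts=7)         # sees "old": ts=10 not yet visible
store.read("k", snapshot_ts=12)        # sees "new"
```

Because the data store already keeps the versions, the transaction system only has to hand each transaction the right snapshot timestamp, which is exactly the kind of bookkeeping a central service does well.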

Separating the transaction service allows you to scale it independently and control its resource allocation to get the desired quality of service. Different workloads may result in completely different transaction service utilization: for example, batch processing may require much less interaction with the transaction system than real-time stream processing. Being able to configure the system to utilize the right amount of resources for the right tasks is crucial to building an efficient solution.

With great power comes great responsibility

The greater control over transaction logic that Tephra provides allows you to do numerous optimizations in your applications. But you need to know what you are doing. When you write data to a table in HBase, you can control which level of conflict detection to use — column level, row level or even table level — by choosing how to report changes made during transactions within the transaction system. You can even decide to skip conflict detection for specific transactions dynamically at runtime if you know that the data processing is partitioned in a way that conflicts are simply not possible. These optimizations can boost the performance of your application dramatically.
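The effect of choosing a conflict-detection level can be sketched by how the reported change key is built: finer keys (column level) avoid false conflicts between transactions touching different columns, while coarser keys (row or table level) shrink the change set a client must track. The change_key helper below is hypothetical, purely for illustration.

```python
# Hypothetical sketch of how conflict-detection granularity works:
# the client chooses how precisely to describe its changes, and two
# transactions conflict only if their reported change keys overlap.

def change_key(table, row, col, level):
    if level == "column":
        return f"{table}/{row}/{col}"  # finest: distinct columns never clash
    if level == "row":
        return f"{table}/{row}"        # any two writes to the row clash
    return table                       # coarsest: any write to the table clashes

# Two transactions writing different columns of the same row:
a = {change_key("t", "r1", "c1", "row")}
b = {change_key("t", "r1", "c2", "row")}
a & b                                  # row level: reported as a conflict

a = {change_key("t", "r1", "c1", "column")}
b = {change_key("t", "r1", "c2", "column")}
a & b                                  # column level: no conflict
```

The trade-off is yours to make per workload: column-level keys cost more tracking but let more transactions commit in parallel, and if your processing is partitioned so that writers never overlap, skipping conflict detection entirely is the cheapest option of all.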

While experienced Hadoop engineers may benefit from utilizing Tephra directly and telling it exactly what to do, other application developers may be better off when provided with the integrated platform experience of our recently open-sourced Cask Data Application Platform (CDAP). CDAP makes interaction with the transaction system transparent for you by providing framework-level guarantees like exactly-once processing or the ability to mix real-time and batch jobs on the same datasets in a consistent way. This allows you to focus on solving business problems while allowing the platform to provide the data and processing consistency guarantees required. Then, once the solution is built, there’s room for optimization.

Where to Go Next

If you’d like to get involved, try out Tephra, join the Tephra user and developer discussion lists, and check out our open source community. Please stay tuned as we’ll cover more details in upcoming blog posts, including a deep dive into Tephra APIs.
