How Amazon Scaled E-commerce Shopping Cart Data Infrastructure


#41: Break Into Amazon Dynamo White Paper (8 minutes)

Neo Kim
Mar 26, 2024

Get my system design playbook for FREE on newsletter signup:


This post outlines the Amazon Dynamo white paper. If you want to learn more, scroll to the bottom and find the references.

  • Share this post & I'll send you some rewards for the referrals.

The holiday season, 2004.

Amazon's e-commerce platform is running the Oracle enterprise database.

Scaling a Relational Database

They set up clustering and a leader-follower replication topology for scalability. Yet the database couldn't scale well for critical services like the shopping cart, and it caused several outages during peak shopping hours with 10+ million requests a day.

They could scale reads only by adding database followers, while writes required vertically scaling the database leader. That made the leader a single point of failure for write operations.

Yet each customer must be able to view the cart and add items to it at all times; otherwise Amazon loses money. The shopping cart infrastructure should also be scalable.

So they did post-mortem debugging and found that 70% of their operations need only a simple key-value data model. That means a relational database would be inefficient for their use case.

So they modeled the shopping cart as a key-value service: the cart ID is the key, and the items in the cart are represented as a list of values.
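
As a rough illustration of that data model (the field names below are mine, not Amazon's schema), the whole cart boils down to a mapping from cart ID to a list of items:

```python
# Illustrative sketch of the shopping cart as a key-value model.
# Key: cart ID. Value: the list of items in that cart.
carts = {}

def add_item(cart_id, item_id, quantity):
    """Append an item to the cart identified by cart_id."""
    carts.setdefault(cart_id, []).append({"item_id": item_id, "quantity": quantity})

def get_cart(cart_id):
    """Return every item in the cart, or an empty list if it doesn't exist."""
    return carts.get(cart_id, [])

add_item("cart-42", "book-123", quantity=1)
add_item("cart-42", "lamp-7", quantity=2)
print(get_cart("cart-42"))
```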

They also didn't want to lose any item a customer added to the shopping cart. In other words, they wanted extremely high write availability and durability.

Amazon Dynamo Architecture

So a group of distributed systems experts designed a horizontally scalable distributed database. They created it to scale for both reads and writes.


Amazon Dynamo Architecture

They didn't create new technologies. Instead, they combined many existing distributed systems techniques to solve the problems. Here's how they did it:

1. Partitioning Data

They partitioned the data to solve the write scalability problem, because each partition could accept writes in parallel.

Partitioning Data for Write Scalability

A simple approach is to partition the data into sorted key ranges. Yet it could create a hot shard problem, as some shards might receive more traffic than others.

Regular Hashing to Partition Data

An alternative approach is regular hashing: hash the data key and take the result modulo the number of servers to find the server to interact with. It distributes data more uniformly.

But it won't scale when servers are added or removed, because most key-to-server associations break and cause large data movement.
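
A quick sketch of that remapping problem, assuming a toy cluster and MD5 as the hash function (both are illustrative choices, not what Amazon used):

```python
import hashlib

def bucket(key: str, num_servers: int) -> int:
    """Regular hashing: hash the key, then take the result modulo the server count."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % num_servers

keys = [f"cart-{i}" for i in range(10_000)]

# Which server owns each key with 4 servers versus 5 servers?
before = {key: bucket(key, 4) for key in keys}
after = {key: bucket(key, 5) for key in keys}

moved = sum(1 for key in keys if before[key] != after[key])
print(f"{moved / len(keys):.0%} of keys change servers after adding one server")
# Roughly 80% of the keys move, which is why regular hashing breaks down
# when servers are added or removed.
```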

Partitioning Data Using Consistent Hashing

So they use consistent hashing to solve this problem. Think of consistent hashing as a distributed systems technique to reduce data movement while adding or removing servers.

Consistent hashing treats the output range of the hash function as a ring. That means the largest hash value wraps around to the smallest hash value. And each server gets a random position on the hash ring.

They find the position of a server by hashing its ID and taking the result modulo the number of positions on the hash ring.

They place a data item the same way: hash its key, take the result modulo the number of positions on the ring, then traverse the ring in a clockwise direction to find the responsible server.

Put another way, a server is responsible for data that falls in the region between it and its predecessor server on the hash ring.

In the example above, the data key falls between positions 75 and 100, so the server at position 100 stores the data.

Besides, they store the server positions in a sorted data structure, so the responsible server can be found in logarithmic time.
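
Here's a minimal consistent-hash ring along those lines; the ring size, server names, and MD5 hash are illustrative assumptions, not Dynamo's actual parameters:

```python
import bisect
import hashlib

RING_SIZE = 2 ** 32  # number of positions on the ring (illustrative)

def ring_position(value: str) -> int:
    """Hash a server ID or data key onto a position on the ring."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16) % RING_SIZE

class ConsistentHashRing:
    def __init__(self):
        self._positions = []  # server positions, kept sorted for log-time lookup
        self._servers = {}    # position -> server ID

    def add_server(self, server_id: str) -> None:
        position = ring_position(server_id)
        bisect.insort(self._positions, position)
        self._servers[position] = server_id

    def server_for(self, key: str) -> str:
        """Walk clockwise from the key's position to the next server,
        wrapping from the largest position back to the smallest."""
        index = bisect.bisect_right(self._positions, ring_position(key))
        index %= len(self._positions)
        return self._servers[self._positions[index]]

ring = ConsistentHashRing()
for server in ["server-a", "server-b", "server-c"]:
    ring.add_server(server)
print(ring.server_for("cart-42"))  # the server responsible for this key
```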

Data Movement When a Server Is Added in Consistent Hashing

The data movement is low when a new server gets added to the ring. Because they move only the data in the range that falls between the new server and the preceding server. Thus making it easier to scale.

Life was better.

Until one day when they noticed an unbalanced load across the servers.

So they introduced virtual nodes. It’s a variant of consistent hashing.

With virtual nodes, a physical server doesn't map to a single position on the hash ring. Instead, each physical server is assigned many positions. That reduces the risk of hot shards and makes it easy to include heterogeneous servers, since a more powerful server can simply take more positions. A sketch of the effect is below.
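
The token count and server names here are made up, but the sketch shows how giving each physical server many ring positions evens out the load:

```python
import hashlib
from collections import Counter

def ring_position(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16) % (2 ** 32)

def build_ring(servers, tokens_per_server):
    """Each physical server claims `tokens_per_server` positions (virtual nodes)."""
    return {ring_position(f"{server}#vnode-{i}"): server
            for server in servers for i in range(tokens_per_server)}

def owner(ring, key):
    """Clockwise successor of the key's position, wrapping around the ring."""
    position = ring_position(key)
    candidates = [p for p in ring if p >= position] or list(ring)
    return ring[min(candidates)]

servers = ["server-a", "server-b", "server-c"]
keys = [f"cart-{i}" for i in range(10_000)]

for tokens in (1, 128):
    ring = build_ring(servers, tokens)
    load = Counter(owner(ring, key) for key in keys)
    print(tokens, dict(load))  # the load is far more even with many virtual nodes
```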

2. Ensuring Data Durability

But one fine day a server fails and they lose some data.

Replicating Data Across Servers for Durability

So they replicate data asynchronously across N servers for durability. That means they store a data item on the next server on the ring and also on that server's next N-1 successors, where N is the replication factor.
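
On such a ring, the N replicas for a key are simply the first N distinct servers met while walking clockwise. A self-contained sketch (N=3 and the server names are only examples):

```python
import bisect
import hashlib

def ring_position(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16) % (2 ** 32)

def preference_list(ring, key, n=3):
    """First n distinct servers met while walking clockwise from the key."""
    positions = sorted(ring)
    start = bisect.bisect_right(positions, ring_position(key))
    replicas = []
    for i in range(len(positions)):
        server = ring[positions[(start + i) % len(positions)]]
        if server not in replicas:  # skip other virtual nodes of the same server
            replicas.append(server)
        if len(replicas) == n:
            break
    return replicas

ring = {ring_position(f"{server}#vnode-{i}"): server
        for server in ["server-a", "server-b", "server-c", "server-d"]
        for i in range(8)}
print(preference_list(ring, "cart-42"))  # the key's coordinator plus 2 successors
```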

Yet they need data consistency.

So they use sloppy quorum.

Achieving Consistency Through Sloppy Quorum

At least a single server will have the latest data using sloppy quorum.

And they consider the system consistent if it satisfies the equation: R+W > N.

R represents the read quorum: the number of servers that must reply for a read operation to be considered successful.

W represents the write quorum: the number of servers that must reply for a write operation to be considered successful.

Given the equation, the set of servers involved in a read overlaps with the set involved in the latest write, so the system returns the latest data.
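
The overlap is plain arithmetic. With the commonly cited Dynamo-style values N=3, R=2, W=2 (used here only as an example):

```python
N, R, W = 3, 2, 2  # replication factor, read quorum, write quorum (example values)

# If R + W > N, every set of R readers must intersect every set of W writers,
# so at least one server in any successful read holds the latest acknowledged write.
assert R + W > N
print("guaranteed read/write overlap:", R + W - N)  # -> 1 server
```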

Life was good again.

3. Resolving Data Conflicts

Yet one day concurrent writes with sloppy quorum caused conflict on a data item.

They could resolve the conflict by applying the last write.

A simple solution to find the last write is to use the system clock, synchronized via the Network Time Protocol (NTP). But clock skew (different clocks running at different rates) is normal in distributed systems, so timestamps are unreliable.

Vector Clocks to Find the Causality of Events

So they use vector clocks to track the version history of data and merge versions during reads. In other words, they use vector clocks to find the causality between versions.

The different data versions get automatically merged if there are no conflicts. Otherwise the client logic must resolve the conflicts.

The idea behind vector clocks is simple. Each server maintains a vector of counters that start at zero, and a server increments its own counter whenever it processes an event.

The receiver of an event takes the element-wise maximum of its vector and the received vector. Event A happened before event B if every counter in A is less than or equal to the corresponding counter in B, and at least one is strictly smaller.

Also, the whole vector gets sent along with an event. That means vector clocks are passed between the system and the client: the client receives the vector on reads and sends it back on writes.
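
A minimal vector clock sketch, keyed by server ID; the class and method names are mine, not Dynamo's API:

```python
class VectorClock:
    """One counter per server ID; higher counters mean later events."""

    def __init__(self, counters=None):
        self.counters = dict(counters or {})

    def increment(self, server_id):
        """Called by a server when it handles a write."""
        self.counters[server_id] = self.counters.get(server_id, 0) + 1

    def merge(self, other):
        """Element-wise maximum, used when receiving another version."""
        for server_id, count in other.counters.items():
            self.counters[server_id] = max(self.counters.get(server_id, 0), count)

    def happened_before(self, other):
        """True if self causally precedes other: <= everywhere, < somewhere."""
        keys = set(self.counters) | set(other.counters)
        no_greater = all(self.counters.get(k, 0) <= other.counters.get(k, 0) for k in keys)
        somewhere_less = any(self.counters.get(k, 0) < other.counters.get(k, 0) for k in keys)
        return no_greater and somewhere_less

v1 = VectorClock()
v1.increment("server-x")       # first write handled by server-x
v2 = VectorClock(v1.counters)  # a client reads v1 back ...
v2.increment("server-y")       # ... and writes an update via server-y
print(v1.happened_before(v2))  # True: v2 supersedes v1, no conflict
print(v2.happened_before(v1))  # False
```

When neither version happened before the other, the versions are concurrent, and that is exactly the case the client logic has to resolve.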

They have solved another problem.

4. Handling Temporary Failures

Imagine their write quorum is 3. That means 3 servers must reply to consider a write operation successful.

Yet one morning, one of those servers was temporarily down.

Writing Data Temporarily to the Next Healthy Server Using Hinted Handoff

So they use hinted handoff: the write is stored temporarily on the next healthy server. That means instead of the next N servers, the system picks the next N healthy servers on the ring.

The temporary server hands the data back once the failed server is online again. Thus the system offers high write availability.
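
A rough sketch of the idea; the hint format and function names are assumptions for illustration, and it presumes at least one healthy server exists beyond the preference list:

```python
def write_with_hint(key, value, preference_list, is_healthy, stores, n=3):
    """Write to the first n healthy servers; a stand-in write carries a hint
    naming the intended server it is holding the data for."""
    intended = preference_list[:n]
    substitutes = iter(s for s in preference_list[n:] if is_healthy(s))
    for server in intended:
        if is_healthy(server):
            stores[server][key] = {"value": value, "hinted_for": None}
        else:
            stand_in = next(substitutes)  # next healthy server further along the ring
            stores[stand_in][key] = {"value": value, "hinted_for": server}

def hand_back(stores, recovered_server):
    """When the failed server returns, stand-ins push the hinted data to it."""
    for store in stores.values():
        for key, record in list(store.items()):
            if record["hinted_for"] == recovered_server:
                stores[recovered_server][key] = {"value": record["value"], "hinted_for": None}
                del store[key]

stores = {server: {} for server in ["a", "b", "c", "d"]}
write_with_hint("cart-42", ["book-123"], ["a", "b", "c", "d"],
                is_healthy=lambda server: server != "b", stores=stores)
hand_back(stores, "b")  # after "b" recovers, it receives the write it missed
print(stores["b"])
```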

And life was good.

5. Handling Permanent Failures

Until one day the server storing temporary data from hinted handoff also crashed, before it could hand that data back to the server that had come back online.

Synchronizing Data Using the Merkle Tree

So they use Merkle trees to synchronize data between servers in the background.

Think of it as a data structure that allows finding data differences efficiently.

They find data differences by comparing the hash values of the roots of the two Merkle trees. If the roots differ, they compare the children, and so on down the tree, descending only into the subtrees that don't match. Thus finding a differing key range takes time proportional to the tree's logarithmic depth instead of a comparison of every key.
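
A compact sketch of that comparison; each leaf is the hash of one key range, and the helper names are illustrative:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_merkle(leaves):
    """Build the tree bottom-up; levels[0] is the leaves, levels[-1] is the root."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) if i + 1 < len(prev) else h(prev[i])
                       for i in range(0, len(prev), 2)])
    return levels

def differing_leaves(a, b):
    """Descend from the roots, expanding only the subtrees whose hashes differ."""
    level = len(a) - 1
    frontier = [0] if a[level][0] != b[level][0] else []
    while level > 0 and frontier:
        level -= 1
        frontier = [child
                    for parent in frontier
                    for child in (2 * parent, 2 * parent + 1)
                    if child < len(a[level]) and a[level][child] != b[level][child]]
    return frontier  # indexes of the key ranges that must be exchanged

keys = [f"key-{i}".encode() for i in range(8)]
replica_a = build_merkle([h(key + b":v1") for key in keys])
replica_b = build_merkle([h(key + b":v1") for key in keys[:6]] +
                         [h(key + b":v2") for key in keys[6:]])
print(differing_leaves(replica_a, replica_b))  # -> [6, 7]: only two ranges differ
```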

Things are looking better.

6. Detecting Server Membership

But one day they added more servers to scale the system.

It was important to know which servers are available in the system, yet a centralized approach using heartbeats isn't scalable.

Tracking System State Information Using Gossip Protocol

So they use a gossip protocol to track the system state: each server periodically pings random servers.

On each epoch, a server transfers its entire membership list to a set of peers, and a server is considered dead if it's been unreachable for a specific number of epochs.

Gossip is a decentralized approach to service discovery and failure detection. Also, the membership state reaches eventual consistency across the system in a logarithmic number of epochs.
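
A toy sketch of such gossip rounds; the epoch count, node names, and failure threshold are made-up illustrations:

```python
import random

FAIL_AFTER = 5  # epochs without a fresher heartbeat before a peer is declared dead

class GossipNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.heartbeat = 0
        # membership list: peer ID -> (latest heartbeat seen, epoch it was seen in)
        self.members = {node_id: (0, 0)}

    def tick(self, epoch, peers):
        """One gossip round: bump own heartbeat and push the membership list
        to one randomly chosen peer."""
        self.heartbeat += 1
        self.members[self.node_id] = (self.heartbeat, epoch)
        if peers:
            random.choice(peers).receive(self.members, epoch)

    def receive(self, remote_members, epoch):
        """Merge a received membership list, keeping the freshest heartbeat per peer."""
        for peer, (beat, _) in remote_members.items():
            if beat > self.members.get(peer, (-1, 0))[0]:
                self.members[peer] = (beat, epoch)

    def alive(self, epoch):
        """Peers whose heartbeat was refreshed within the last FAIL_AFTER epochs."""
        return {peer for peer, (_, seen) in self.members.items()
                if epoch - seen <= FAIL_AFTER}

nodes = [GossipNode(f"node-{i}") for i in range(5)]
for epoch in range(20):
    for node in nodes:
        node.tick(epoch, [peer for peer in nodes if peer is not node])
print(sorted(nodes[0].alive(epoch=19)))  # all five nodes are known and alive
```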

It wasn't a perfect system. But they chose the tradeoffs carefully to solve problems.


3 years later…

They published the results in a white paper and called it Amazon Dynamo.

It laid the foundation for many NoSQL databases.

And distributed databases like Apache Cassandra, Riak, and Amazon DynamoDB are based on Amazon Dynamo principles.


Consider subscribing to get simplified case studies delivered straight to your inbox:


Author NK; System design case studies
Follow me on LinkedIn | YouTube | Threads | Twitter | Instagram

Thank you for supporting this newsletter. Consider sharing this post with your friends and get rewards. Y’all are the best.





References

  • Dynamo: Amazon’s Highly Available Key-value Store

  • A Decade of Dynamo: Powering the next wave of high-performance, internet-scale applications

  • Amazon DynamoDB – a Fast and Scalable NoSQL Database Service Designed for Internet Scale Applications

  • Amazon’s DynamoDB — 10 years later

  • Dynamo – A Followup and Re-Rebuttals

  • Consistent Hashing Explained

  • Gossip Protocol

  • Hinted Handoff

  • What Is Service Discovery?

  • Consistency Patterns

  • Image taken from Vector Clocks

