How LinkedIn Scaled to 930 Million Users

#15: Software Architecture Evolution at LinkedIn (3 minutes)

Neo Kim
Oct 17, 2023

Get my system design playbook for FREE on newsletter signup:


This post outlines how LinkedIn evolved its software architecture. I will walk you through the evolution in 11 stages, each showing how they changed the architecture to meet growing scalability needs.

If you want to learn more, scroll to the bottom and find the references.

  • Share this post and I'll send you rewards for your referrals.

Scalable Software Architecture

LinkedIn started with 2,700 users in 2003 and now has around 930 million. Here are the stages of their architectural evolution, in chronological order:

1. The Monolith

They ran a single monolithic application and a few databases that did all the work.

Scalable Software Architecture; Monolith
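
Here is a minimal sketch of the idea in Python (hypothetical tables and functions, not LinkedIn's actual code): every feature lives in one process and talks to one database.

```python
import sqlite3

# Toy monolith: one process, one database, every feature handled in one place.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE profiles (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE connections (a INTEGER, b INTEGER)")

def create_profile(name):
    cur = db.execute("INSERT INTO profiles (name) VALUES (?)", (name,))
    db.commit()
    return cur.lastrowid

def connect_members(a, b):
    db.execute("INSERT INTO connections (a, b) VALUES (?, ?)", (a, b))
    db.commit()

jane = create_profile("Jane Doe")
john = create_profile("John Doe")
connect_members(jane, john)
```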

2. Graph Service

As the number of users grew, they needed an efficient way to manage connections between users.

Scalable Software Architecture; Graph service

So they created a separate in-memory graph service and communicated with it via RPC.
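
Here is a minimal sketch of what such a service could look like, using the XML-RPC server from Python's standard library as the transport (LinkedIn's actual graph service and RPC protocol differ; the names here are hypothetical).

```python
from collections import defaultdict
from xmlrpc.server import SimpleXMLRPCServer

# Toy in-memory graph service exposed over RPC: the whole connection graph
# lives in memory as adjacency sets, keyed by member id.
class GraphService:
    def __init__(self):
        self.adjacency = defaultdict(set)  # member id -> set of connection ids

    def add_connection(self, a, b):
        self.adjacency[a].add(b)
        self.adjacency[b].add(a)
        return True

    def connections(self, member):
        return sorted(self.adjacency[member])

    def second_degree_network(self, member):
        # Connections-of-connections, a typical "people you may know" query.
        first = self.adjacency[member]
        second = set()
        for c in first:
            second |= self.adjacency[c]
        return sorted(second - first - {member})

if __name__ == "__main__":
    server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
    server.register_instance(GraphService())
    server.serve_forever()
```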

3. Scaling Database

The database became a performance bottleneck. As a workaround, they scaled it vertically by adding CPU and memory. But it soon hit its limits and became expensive.

Scalable Software Architecture; Replica database

So they added replica (follower) databases to scale further; the replicas served reads.

But this worked only in the medium term because it didn't scale writes. So they partitioned the database to scale writes.
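
A rough sketch of both steps, with hypothetical host names: a router sends reads to the replicas round-robin and sends each write to the primary that owns the key's partition.

```python
import hashlib
import itertools

# Toy router combining the two scaling steps described above:
# read replicas for read traffic, partitions (shards) for write traffic.
class DatabaseRouter:
    def __init__(self, primaries, replicas):
        self.primaries = primaries                  # one primary per partition
        self.replica_cycle = itertools.cycle(replicas)

    def route_read(self, query):
        # Reads can go to any replica; round-robin spreads the load.
        return next(self.replica_cycle), query

    def route_write(self, member_id, statement):
        # Writes go to the primary that owns this member's partition.
        digest = hashlib.md5(str(member_id).encode()).hexdigest()
        partition = int(digest, 16) % len(self.primaries)
        return self.primaries[partition], statement

router = DatabaseRouter(
    primaries=["db-primary-0", "db-primary-1"],
    replicas=["db-replica-0", "db-replica-1", "db-replica-2"],
)
print(router.route_read("SELECT * FROM profiles WHERE id = 42"))
print(router.route_write(42, "UPDATE profiles SET headline = ? WHERE id = 42"))
```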

4. Service-Oriented Architecture

They needed high availability, but the application server couldn't keep up with the growing traffic.

So they broke the monolith into many small stateless services:

  • Frontend service - presentation logic

  • Mid-tier service - API access to data models

  • Backend data service - access to the database

Scalable Software Architecture; Service oriented architecture

They kept the services stateless because stateless services are easy to scale out through replication. They also added load testing and performance monitoring.

LinkedIn had 750 services in 2015.
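
A minimal sketch of the three tiers, with hypothetical data: the real services are separate deployables, shown here as plain functions to highlight the layering and the statelessness that makes replication easy.

```python
# Toy version of the three stateless tiers.

def backend_profile_data(member_id: int) -> dict:
    # Backend data service: the only tier that talks to the database.
    return {"id": member_id, "name": "Jane Doe", "headline": "Engineer"}

def midtier_profile_api(member_id: int) -> dict:
    # Mid-tier service: exposes a data-model API, no presentation concerns.
    profile = backend_profile_data(member_id)
    return {"profile": profile, "connection_count": 512}

def frontend_profile_page(member_id: int) -> str:
    # Frontend service: presentation logic only.
    data = midtier_profile_api(member_id)
    return f"<h1>{data['profile']['name']}</h1><p>{data['profile']['headline']}</p>"

print(frontend_profile_page(42))
```

Because no tier keeps per-user state between requests, any replica can serve any request, which is what makes scaling out by replication straightforward.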

5. Caching

To meet scalability needs during hypergrowth, they added caching to take load off the services and databases.

Scalable Software Architecture; Caching

They also relied on CDNs and the browser cache.

And they stored precomputed results in the database.
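
A minimal cache-aside sketch (hypothetical keys and TTL): check the cache first, fall back to the database on a miss, and store the result for the next request.

```python
import time

# Toy cache-aside pattern with a time-to-live.
cache = {}          # key -> (timestamp, value)
TTL_SECONDS = 60

def load_profile_from_db(member_id):
    return {"id": member_id, "name": "Jane Doe"}   # stands in for a real query

def get_profile(member_id):
    key = f"profile:{member_id}"
    entry = cache.get(key)
    if entry and time.time() - entry[0] < TTL_SECONDS:
        return entry[1]                            # cache hit
    profile = load_profile_from_db(member_id)      # cache miss
    cache[key] = (time.time(), profile)
    return profile

print(get_profile(42))  # miss: hits the "database"
print(get_profile(42))  # hit: served from the cache
```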

6. Birth of Kafka

They needed data to flow into the data warehouse and Hadoop for analytics. So they created Kafka to stream and queue data.

Scalable Software Architecture; Kafka

Kafka is a distributed pub-sub messaging platform.
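
Conceptually, Kafka is an append-only, partitioned log that producers write to and consumers read from at their own offsets. The toy sketch below only illustrates that idea; it is not Kafka's actual API or storage engine.

```python
from collections import defaultdict

# Toy partitioned log: producers append records to a topic, consumers read
# from whatever offset they have reached, at their own pace.
class TopicLog:
    def __init__(self, partitions: int = 2):
        self.partitions = [[] for _ in range(partitions)]

    def append(self, key, value):
        partition = hash(key) % len(self.partitions)
        self.partitions[partition].append(value)
        offset = len(self.partitions[partition]) - 1
        return partition, offset

    def read(self, partition, offset):
        return self.partitions[partition][offset:]

topics = defaultdict(TopicLog)

# A producer publishes a page-view event; downstream consumers (the data
# warehouse, Hadoop jobs) read from their last offset whenever they want.
partition, offset = topics["page-views"].append("member-42", {"page": "/jobs"})
print(topics["page-views"].read(partition, 0))
```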

7. Scaling Engineering Teams

They paused feature development and put their entire focus on improving tooling, deployment, infrastructure, and developer productivity.

Scalable Software Architecture; Scaling engineering teams

This improved their engineering agility and helped them build scalable products.

8. Birth of Rest.li Framework

They wanted to decouple services, so they switched to RESTful APIs and sent JSON over HTTP.

Scalable Software Architecture; Rest.li framework

And they built the Rest.li web framework to abstract away many parts of the data exchange.
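
Rest.li is a Java framework, so the sketch below only illustrates the interaction style it standardizes: a resource exposed over HTTP that returns JSON. The endpoint and data are hypothetical, and only Python's standard library is used.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy REST resource: GET /profiles/<id> returns a JSON document.
class ProfileHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/profiles/"):
            member_id = int(self.path.rsplit("/", 1)[-1])
            body = json.dumps({"id": member_id, "name": "Jane Doe"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ProfileHandler).serve_forever()
```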

9. Super Blocks

The service-oriented architecture caused many downstream calls per request, and that became a problem. So they grouped backend services behind a single access API, called a super block.
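
A minimal sketch of the idea, with hypothetical services: the super block exposes one API and fans out to the backend services behind it, hiding the downstream calls from callers.

```python
# Toy backend services grouped behind a super block.
def profile_service(member_id):     return {"name": "Jane Doe"}
def connections_service(member_id): return {"count": 512}
def activity_service(member_id):    return {"recent_posts": 3}

def profile_super_block(member_id):
    # Callers make a single call; the fan-out happens inside the super block.
    return {
        "profile": profile_service(member_id),
        "connections": connections_service(member_id),
        "activity": activity_service(member_id),
    }

print(profile_super_block(42))
```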

10. Multi Data Center

They needed high availability and wanted to avoid single points of failure.

Scalable Software Architecture; Multi data center

So they replicated data across multiple data centers and redirected user requests to the nearest one.
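
A rough sketch of the routing side, with hypothetical regions and data centers: pick the nearest healthy data center and fall back to another if it is down. In practice this is typically done with GeoDNS or anycast on top of cross-data-center replication.

```python
# Toy geo-routing table: each region lists data centers in order of preference.
DATA_CENTERS = {
    "us-west": {"healthy": True},
    "us-east": {"healthy": True},
    "eu-west": {"healthy": True},
}
NEAREST = {
    "US-CA": ["us-west", "us-east"],
    "US-NY": ["us-east", "us-west"],
    "DE": ["eu-west", "us-east"],
}

def route_request(user_region):
    for dc in NEAREST.get(user_region, list(DATA_CENTERS)):
        if DATA_CENTERS[dc]["healthy"]:
            return dc
    raise RuntimeError("no healthy data center available")

print(route_request("DE"))                     # eu-west
DATA_CENTERS["eu-west"]["healthy"] = False
print(route_request("DE"))                     # falls back to us-east
```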

11. Ditch JSON

JSON data serialization became a performance bottleneck. So they moved from JSON to Protobuf to reduce latency.
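
The sketch below only illustrates why a binary encoding is smaller than JSON: it packs the same record into a fixed binary layout. It does not use Protobuf's actual wire format, which requires a .proto schema and generated classes.

```python
import json
import struct

# Same record, two encodings: JSON repeats field names in every message,
# while a binary layout relies on a schema both sides already know.
record = {"member_id": 42, "connection_count": 512, "premium": True}

as_json = json.dumps(record).encode("utf-8")
as_binary = struct.pack("<IIB", record["member_id"],
                        record["connection_count"], record["premium"])

print(len(as_json), "bytes as JSON")
print(len(as_binary), "bytes as binary")
```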


LinkedIn followed one idea: keep it simple. They changed their architecture only as needs demanded. And as of 2023, they remain the biggest professional network.


Consider subscribing to get simplified case studies delivered straight to your inbox:


Follow me on LinkedIn | YouTube | Threads | Twitter | Instagram

Thank you for supporting this newsletter. Consider sharing this post with your friends to get rewards. Y'all are the best.


References

  • https://engineering.linkedin.com/architecture/brief-history-scaling-linkedin

  • https://newsletter.systemdesign.one/p/protocol-buffers-vs-json
