Design a system that scales to millions of users on AWS
Note: This document links directly to relevant areas found in the system design topics to avoid duplication. Refer to the linked content for general talking points, tradeoffs, and alternatives.
Step 1: Outline use cases and constraints
Gather requirements and scope the problem. Ask questions to clarify use cases and constraints. Discuss assumptions.
Without an interviewer to address clarifying questions, we’ll define some use cases and constraints.
Use cases
Solving this problem takes an iterative approach: 1) Benchmark/Load Test, 2) Profile for bottlenecks, 3) address bottlenecks while evaluating alternatives and trade-offs, and 4) repeat. This is a good pattern for evolving basic designs into scalable designs.
Unless you have a background in AWS or are applying for a position that requires AWS knowledge, AWS-specific details are not a requirement. However, many of the principles discussed in this exercise apply more generally outside of the AWS ecosystem.
We’ll scope the problem to handle only the following use cases
User makes a read or write request
Service does processing, stores user data, then returns the results
Service needs to evolve from serving a small number of users to millions of users
Discuss general scaling patterns as we evolve an architecture to handle a large number of users and requests
Service has high availability
Constraints and assumptions
State assumptions
Traffic is not evenly distributed
Need for relational data
Scale from 1 user to tens of millions of users
Denote increase of users as:
Users+
Users++
Users+++
…
10 million users
1 billion writes per month
100 billion reads per month
100:1 read to write ratio
1 KB content per write
Calculate usage
Clarify with your interviewer if you should run back-of-the-envelope usage calculations.
1 TB of new content per month
1 KB per write * 1 billion writes per month
36 TB of new content in 3 years
Assume most writes are from new content instead of updates to existing ones
400 writes per second on average
40,000 reads per second on average
Handy conversion guide:
2.5 million seconds per month
1 request per second = 2.5 million requests per month
40 requests per second = 100 million requests per month
400 requests per second = 1 billion requests per month
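As a sanity check, the numbers above can be reproduced with a few lines of arithmetic. This is a minimal sketch that uses the rounded figure of 2.5 million seconds per month from the conversion guide.

```python
# Back-of-the-envelope usage calculations, using the rounded conversion guide above
SECONDS_PER_MONTH = 2.5e6           # ~30 days * ~86,400 seconds, rounded for mental math

writes_per_month = 1e9              # 1 billion writes per month
reads_per_month = 100e9             # 100 billion reads per month
kb_per_write = 1                    # 1 KB content per write

new_content_tb_per_month = writes_per_month * kb_per_write / 1e9    # 1e9 KB = 1 TB
print(f"{new_content_tb_per_month:.0f} TB of new content per month")              # 1 TB
print(f"{new_content_tb_per_month * 36:.0f} TB of new content in 3 years")        # 36 TB
print(f"{writes_per_month / SECONDS_PER_MONTH:.0f} writes per second on average") # 400
print(f"{reads_per_month / SECONDS_PER_MONTH:.0f} reads per second on average")   # 40,000
```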
Step 2: Create a high level design
Outline a high level design with all important components.
Identify and address bottlenecks, given the constraints.
Users+
[Architecture diagram]
Assumptions
Our user count is starting to pick up and the load is increasing on our single box. Our Benchmarks/Load Tests and Profiling are pointing to the MySQL Database taking up more and more memory and CPU resources, while the user content is filling up disk space.
We’ve been able to address these issues with Vertical Scaling so far. Unfortunately, this has become quite expensive and it doesn’t allow for independent scaling of the MySQL Database and Web Server.
Goals
Lighten load on the single box and allow for independent scaling
Store static content separately in an Object Store
Move the MySQL Database to a separate box
Disadvantages
These changes would increase complexity and would require the Web Server to be updated to point to the Object Store and the MySQL Database
Additional security measures must be taken to secure the new components
AWS costs could also increase, but should be weighed against the costs of managing similar systems on your own
Store static content separately
Consider using a managed Object Store like S3 to store static content (see the sketch after this list)
Highly scalable and reliable
Server side encryption
Move static content to S3
User files
JS
CSS
Images
Videos
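For illustration only, a minimal boto3 sketch of pushing one static asset to S3 with server side encryption; the bucket name and file paths are placeholders rather than part of the original design.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical example: upload a static asset to an S3 bucket (names are placeholders)
s3.upload_file(
    Filename="static/css/main.css",
    Bucket="example-static-content",
    Key="css/main.css",
    ExtraArgs={
        "ContentType": "text/css",
        "ServerSideEncryption": "AES256",   # server side encryption at rest
    },
)
```

The Web Server (and later a CDN) would then serve these assets from the bucket instead of local disk.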
Move the MySQL database to a separate box
Consider using a service like RDS to manage the MySQL Database
Simple to administer, scale
Multiple availability zones
Encryption at rest
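A hedged sketch of provisioning the managed MySQL instance with boto3; the identifier, instance class, storage size, and credentials are placeholder assumptions.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical example: a managed MySQL instance with Multi-AZ and encryption at rest
rds.create_db_instance(
    DBInstanceIdentifier="example-mysql",
    DBInstanceClass="db.m5.large",
    Engine="mysql",
    AllocatedStorage=100,                              # GB
    MasterUsername="admin",
    MasterUserPassword="example-use-a-secrets-manager",  # placeholder credential
    MultiAZ=True,                                      # standby in another availability zone
    StorageEncrypted=True,                             # encryption at rest
    PubliclyAccessible=False,                          # keep the database in the private subnet
)
```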
Secure the system
Encrypt data in transit and at rest
Use a Virtual Private Cloud (see the sketch after this list)
Create a public subnet for the single Web Server so it can send and receive traffic from the internet
Create a private subnet for everything else, preventing outside access
Only open ports from whitelisted IPs for each component
These same patterns should be implemented for new components in the remainder of the exercise
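A simplified boto3 sketch of the VPC layout above, assuming placeholder CIDR blocks and ports; it omits the internet gateway and route tables needed to make the public subnet routable.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical example: a VPC with a public subnet (Web Server) and a private subnet (everything else)
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

public_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
private_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24")

# Security group for the Web Server: only allow HTTPS from the internet
web_sg = ec2.create_security_group(
    GroupName="web-server-sg", Description="Public web traffic", VpcId=vpc_id
)
ec2.authorize_security_group_ingress(
    GroupId=web_sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Security group for the database: only allow MySQL traffic from the Web Server's group
db_sg = ec2.create_security_group(
    GroupName="mysql-sg", Description="MySQL from web tier only", VpcId=vpc_id
)
ec2.authorize_security_group_ingress(
    GroupId=db_sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": web_sg["GroupId"]}],
    }],
)
```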
Users++
Assumptions
Our Benchmarks/Load Tests and Profiling show that our single Web Server bottlenecks during peak hours, resulting in slow responses and, in some cases, downtime. As the service matures, we'd also like to move towards higher availability and redundancy.
Goals
The following goals attempt to address the scaling issues with the Web Server
Based on the Benchmarks/Load Tests and Profiling, you might only need to implement one or two of these techniques
Use Horizontal Scaling to handle increasing loads and to address single points of failure
If you are configuring your own Load Balancer, setting up multiple servers in active-active or active-passive in multiple availability zones will improve availability
Terminate SSL on the Load Balancer to reduce computational load on backend servers and to simplify certificate administration
Use multiple Web Servers spread out over multiple availability zones
Use multiple MySQL instances in Master-Slave Failover mode across multiple availability zones to improve redundancy
Separate out the Web Servers from the Application Servers so each layer can scale and be configured independently; for example, you can add Application Servers handling Read APIs while others handle Write APIs
Move static (and some dynamic) content to a Content Delivery Network (CDN) such as CloudFront to reduce load and latency
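To make the Load Balancer and SSL termination points concrete, a hedged boto3 sketch of an Application Load Balancer with an HTTPS listener; the subnet IDs, certificate ARN, and target group ARN are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical example: an internet-facing load balancer spread across two availability zones
lb = elbv2.create_load_balancer(
    Name="example-web-lb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # public subnets in two AZs
    Scheme="internet-facing",
)

# Terminate SSL at the load balancer; backend targets receive plain HTTP
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:...:certificate/example"}],      # placeholder
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/example",  # placeholder
    }],
)
```

Keeping the certificate on the Load Balancer means backend Web Servers avoid TLS overhead and certificate rotation happens in one place.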
Trade-offs, alternatives, and additional details:
See the linked content above for details
Users+++
[Architecture diagram]
Note: Internal Load Balancers not shown to reduce clutter
Assumptions
Our Benchmarks/Load Tests and Profiling show that we are read-heavy (100:1 with writes) and our database is suffering from poor performance from the high read requests.
Goals
The following goals attempt to address the scaling issues with the MySQL Database
Based on the Benchmarks/Load Tests and Profiling, you might only need to implement one or two of these techniques
Move the following data to a Memory Cache such as Elasticache to reduce load and latency (see the cache-aside sketch below):
Frequently accessed content from MySQL
First, try to configure the MySQL Database cache to see if that is sufficient to relieve the bottleneck before implementing a Memory Cache
Session data from the Web Servers
The Web Servers become stateless, allowing for Autoscaling
Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer [1]
Add MySQL Read Replicas to reduce load on the write master
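A minimal sketch of the cache-aside pattern for the frequently accessed MySQL content, assuming an ElastiCache Redis endpoint with the redis-py client and PyMySQL; the hostnames, credentials, users table, and TTL are placeholders.

```python
import json
import redis
import pymysql

# Hypothetical endpoints and credentials; replace with your ElastiCache and RDS settings
cache = redis.Redis(host="example.cache.amazonaws.com", port=6379)
db = pymysql.connect(
    host="example-mysql.rds.amazonaws.com", user="app",
    password="example-password", database="app",
)

def get_user(user_id, ttl_seconds=600):
    """Cache-aside: check the Memory Cache first, fall back to MySQL on a miss."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    with db.cursor(pymysql.cursors.DictCursor) as cursor:
        cursor.execute("SELECT id, name FROM users WHERE id = %s", (user_id,))
        row = cursor.fetchone()

    if row is not None:
        cache.setex(key, ttl_seconds, json.dumps(row))   # expire entries to bound staleness
    return row
```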
Users++++
Assumptions
Our Benchmarks/Load Tests and Profiling show that our traffic spikes during regular business hours in the U.S. and drops significantly when users leave the office. We think we can cut costs by automatically spinning up and down servers based on actual load. We're a small shop, so we'd like to automate as much of the DevOps as possible for Autoscaling and for general operations.
Goals
Add Autoscaling to provision capacity as needed
Keep up with traffic spikes
Reduce costs by powering down unused instances
Automate DevOps
Chef, Puppet, Ansible, etc
Continue monitoring metrics to address bottlenecks
Consider a managed service such as AWS Autoscaling
Create one group for each Web Server and one for each Application Server type, place each group in multiple availability zones
Set a min and max number of instances
Trigger scaling up and down through CloudWatch
Simple time of day metric for predictable loads or
Metrics over a time period:
CPU load
Latency
Network traffic
Custom metric
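As an illustration of the min/max settings and CloudWatch-driven triggers above, a hedged boto3 sketch using a target tracking policy on average CPU; the group name, launch template, subnets, and target value are placeholder assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical example: one Auto Scaling group for the Web Server tier across two availability zones
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-server-asg",
    LaunchTemplate={"LaunchTemplateName": "web-server-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",   # subnets in two AZs
)

# Scale on average CPU load; the required CloudWatch alarms are managed for you
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-server-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,   # keep the group around 60% CPU on average
    },
)
```

A similar group would be created for each Application Server type, each spanning multiple availability zones.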
Disadvantages
Autoscaling can introduce complexity
It could take some time before a system appropriately scales up to meet increased demand, or scales down when demand drops
Users+++++
[Architecture diagram]
Note: Autoscaling groups not shown to reduce clutter
Assumptions
As the service continues to grow towards the figures outlined in the constraints, we iteratively run Benchmarks/Load Tests and Profiling to uncover and address new bottlenecks.
Goals
We’ll continue to address scaling issues due to the problem’s constraints:
If our MySQL Database starts to grow too large, we might consider only storing a limited time period of data in the database, while storing the rest in a data warehouse such as Redshift
A data warehouse such as Redshift can comfortably handle the constraint of 1 TB of new content per month
With 40,000 average read requests per second, read traffic for popular content can be addressed by scaling the Memory Cache, which is also useful for handling the unevenly distributed traffic and traffic spikes
The SQL Read Replicas might have trouble handling the cache misses; we'll probably need to employ additional SQL scaling patterns
400 average writes per second (with presumably significantly higher peaks) might be tough for a single SQL Write Master-Slave, also pointing to a need for additional scaling techniques
To further address the high read and write requests, we should also consider moving appropriate data to a NoSQL Database such as DynamoDB.
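If some data does move to DynamoDB, here is a minimal boto3 sketch of a table shaped for the read-heavy access pattern; the table name, key schema, and on-demand billing choice are assumptions rather than part of the original design.

```python
import boto3

dynamodb = boto3.resource("dynamodb")

# Hypothetical example: user content keyed by user and creation time
table = dynamodb.create_table(
    TableName="user_content",
    KeySchema=[
        {"AttributeName": "user_id", "KeyType": "HASH"},      # partition key
        {"AttributeName": "created_at", "KeyType": "RANGE"},  # sort key
    ],
    AttributeDefinitions=[
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "created_at", "AttributeType": "S"},
    ],
    BillingMode="PAY_PER_REQUEST",   # pay per request instead of provisioning fixed capacity
)
table.wait_until_exists()

table.put_item(Item={"user_id": "123", "created_at": "2024-01-01T00:00:00Z", "body": "..."})
```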
We can further separate out our Application Servers to allow for independent scaling. Batch processes or computations that do not need to be done in real-time can be done Asynchronously with Queues and Workers:
For example, in a photo service, the photo upload and the thumbnail creation can be separated:
Client uploads photo
Application Server puts a job in a Queue such as SQS
The Worker Service on EC2 or Lambda pulls work off the Queue then:
Creates a thumbnail
Updates a Database
Stores the thumbnail in the Object Store
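A hedged sketch of the queue and worker flow above with SQS and boto3; the queue URL and the create_thumbnail helper are hypothetical placeholders.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/thumbnail-jobs"   # placeholder

def enqueue_thumbnail_job(photo_key):
    """Application Server: put a job on the Queue after the photo upload."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({"photo_key": photo_key}))

def create_thumbnail(photo_key):
    """Hypothetical worker step: fetch the photo from the Object Store, resize it,
    store the thumbnail back in the Object Store, and update the Database."""
    pass

def worker_loop():
    """Worker Service (EC2 or Lambda-style loop): pull jobs off the Queue and process them."""
    while True:
        response = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20   # long polling
        )
        for message in response.get("Messages", []):
            job = json.loads(message["Body"])
            create_thumbnail(job["photo_key"])
            # Delete only after successful processing so failed jobs become visible again
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```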
Trade-offs, alternatives, and additional details:
See the linked content above for details
Additional talking points
Additional topics to dive into, depending on the problem scope and time remaining.