01 May, 2025

Microservices vs Monolithic Architecture


Here’s a clear side-by-side comparison between Microservices and Monolithic architectures — from a system design and engineering perspective:


| Aspect | Monolithic Architecture | Microservices Architecture |
| --- | --- | --- |
| Definition | A single, tightly coupled codebase where all modules run as one unified application | A collection of small, independent services that communicate over the network (e.g., HTTP, gRPC) |
| Codebase | Single repository/project | Multiple repositories or modular projects per service |
| Deployment | Deployed as one unit (e.g., one WAR, JAR, EXE) | Each service is deployed independently |
| Scalability | Vertical scaling (scale entire app) | Horizontal scaling (scale services independently based on load) |
| Technology Stack | Generally a unified stack (e.g., Java/Spring, .NET) | Polyglot — different services can use different languages, databases, tools |
| Development Speed | Faster in early stages; becomes slower as app grows | Allows parallel development across teams |
| Team Structure | Centralized team ownership | Distributed team ownership; often organized by business domain (aligned with DDD) |
| Fault Isolation | A failure in one module can crash the whole application | Failures are isolated to individual services |
| Testing | Easier for unit and integration testing in one app | Requires distributed test strategy; includes contract and end-to-end testing |
| Communication | In-process function calls | Over network — usually REST, gRPC, or message queues |
| Data Management | Single shared database | Each service has its own database (DB-per-service pattern) |
| DevOps Complexity | Easier to deploy and manage early on | Requires mature CI/CD, service discovery, monitoring, orchestration (e.g., Kubernetes) |
| Change Impact | Any change requires full redeployment | Changes to one service don’t affect others (if contracts are stable) |
| Examples | Legacy ERP, early-stage startups | Amazon, Netflix, Uber, Spotify |


🚀 Use Cases

| Architecture | Best Suited For |
| --- | --- |
| Monolithic | Simple, small apps; early-stage products; teams with limited resources |
| Microservices | Large-scale apps; need for frequent releases; independent team scaling |


⚖️ When to Choose What?

| If You Need | Go With |
| --- | --- |
| Simplicity and speed | Monolith |
| Scalability, agility, resilience | Microservices |
| Quick prototyping | Monolith |
| Complex domains and team scaling | Microservices |

 


Event-Driven Architecture (EDA) vs Event Sourcing Pattern vs Domain-Driven Design (DDD)


Here’s a clear point-by-point comparison of Event-Driven Architecture (EDA), Event Sourcing Pattern, and Domain-Driven Design (DDD) in a tabular format:


| Aspect | Event-Driven Architecture (EDA) | Event Sourcing Pattern | Domain-Driven Design (DDD) |
| --- | --- | --- | --- |
| Definition | Architecture style where components communicate via events | Pattern where state changes are stored as a sequence of events | Software design approach focused on complex domain modeling |
| Primary Purpose | Loose coupling and asynchronous communication | Ensure complete audit and ability to reconstruct state from events | Align software with business domain and logic |
| Data Storage | Not the focus – events trigger actions, state stored in services | Event store maintains append-only log of events | Usually uses traditional databases; aggregates may encapsulate logic |
| Event Usage | Events trigger reactions across components | Events are the source of truth for entity state | Events may be used, but not central; focuses on domain entities |
| State Management | Handled independently in each service | Rebuilt by replaying stored events | Maintained via aggregates and entities |
| Use Cases | Microservices, IoT, real-time systems, decoupled systems | Financial systems, audit trails, CQRS-based systems | Complex business domains like banking, healthcare, logistics |
| Data Consistency | Eventual consistency between services | Strong consistency per aggregate through event replay | Consistency is modeled via aggregates and domain rules |
| Design Focus | Scalability, resilience, and responsiveness | Immutable history of changes; source of truth via events | Business logic clarity and deep understanding of domain |
| Examples | Online retail checkout process triggering shipping, billing services | Banking transaction ledger, order lifecycle events | Airline booking system, insurance claim processing |
| Tools & Tech | Kafka, RabbitMQ, Azure Event Grid, AWS SNS/SQS | EventStoreDB, Kafka, Axon Framework, custom append-only stores | DDD libraries (e.g., .NET's ValueObjects, Aggregates, Entities) |
| Challenges | Debugging, eventual consistency, complex tracing | Complex queries, data migration, replay management | Steep learning curve, overengineering for simple domains |
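To make the Event Sourcing column concrete, here is a minimal, framework-agnostic Python sketch (Event and Account are illustrative names only): state changes are appended to an event log, and the current state is rebuilt by replaying it.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Event:
    kind: str     # e.g. "Deposited", "Withdrawn"
    amount: int

@dataclass
class Account:
    events: List[Event] = field(default_factory=list)  # append-only log

    def apply(self, event: Event) -> None:
        self.events.append(event)

    def balance(self) -> int:
        # Current state is derived by replaying the full event history.
        total = 0
        for e in self.events:
            total += e.amount if e.kind == "Deposited" else -e.amount
        return total

acct = Account()
acct.apply(Event("Deposited", 100))
acct.apply(Event("Withdrawn", 30))
print(acct.balance())  # 70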

Here’s an extended table of key considerations in distributed system design, including technologies or approaches to address each one:

| Consideration | Why It's Considered | Technology / Solution Approach |
| --- | --- | --- |
| Scalability (Horizontal & Vertical) | To handle increased load by adding resources. | Kubernetes, Auto Scaling Groups (AWS/GCP/Azure), Load Balancers, Microservices |
| Fault Tolerance & Resilience | To keep the system running under failure conditions. | Circuit Breakers (Hystrix, Polly), Retries, Replication, Chaos Engineering |
| Consistency Model (CAP Theorem) | To decide trade-offs between consistency, availability, partition tolerance. | Cassandra (AP), MongoDB (CP), Zookeeper (CP), Raft/Quorum-based consensus |
| Latency and Performance | To ensure low response time and high throughput. | Caching (Redis, Memcached), CDNs, Edge Computing, Async Processing |
| Data Partitioning (Sharding) | To distribute data across multiple nodes for scalability. | Custom sharding logic, Hash-based partitioning, DynamoDB, Cosmos DB |
| Load Balancing | To evenly distribute traffic and prevent overload. | NGINX, HAProxy, AWS ELB, Azure Traffic Manager, Istio |
| Service Discovery | To locate services dynamically in changing environments. | Consul, Eureka, Kubernetes DNS, Envoy, etcd |
| Data Replication Strategy | To increase availability and reduce risk of data loss. | Master-Slave, Master-Master, Quorum-based systems (e.g., Kafka, Cassandra) |
| State Management (Stateless vs Stateful) | To improve scalability and fault recovery. | Stateless Microservices, External State Stores (Redis, DB), Sticky Sessions |
| API Design & Contracts | To define clear, reliable service boundaries. | OpenAPI (Swagger), GraphQL, REST, gRPC, Protocol Buffers |
| Security (AuthN, AuthZ, Encryption) | To protect data and services from threats. | OAuth2, OpenID Connect, TLS, JWT, Vault, Azure Key Vault, mTLS |
| Monitoring & Observability | To ensure system health, track performance and errors. | Prometheus, Grafana, ELK/EFK Stack, OpenTelemetry, Jaeger, Datadog |
| Deployment Strategy (CI/CD) | To enable fast, repeatable, safe deployments. | GitHub Actions, Azure DevOps, Jenkins, Spinnaker, ArgoCD, Helm |
| Cost Efficiency | To ensure optimal infrastructure cost for performance. | Serverless (Lambda, Azure Functions), Autoscaling, Reserved Instances, FinOps |
| Eventual vs Strong Consistency | To make trade-offs based on business need. | Eventual: Cassandra, DynamoDB. Strong: RDBMS, Spanner, CockroachDB |
| Network Topology & Latency Awareness | To reduce cross-region delays and data transfer. | Geo-distributed architecture, Anycast DNS, CDN, Multi-region deployments |
| Message Semantics (Delivery Guarantees) | To ensure reliable and ordered message handling. | Kafka, RabbitMQ, SQS, Idempotent Handlers, Deduplication strategies |
| Technology & Protocol Choices | To match communication and data needs of system components. | REST, gRPC, GraphQL, WebSockets, Protocol Buffers, Thrift |
| Compliance & Regulatory Requirements | To meet legal and security mandates. | Data encryption, audit logging, IAM policies, ISO/SOC2/GDPR toolsets |
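As an illustration of the Fault Tolerance & Resilience row, here is a minimal, hand-rolled Python sketch of retries with exponential backoff plus a simple circuit breaker; it shows the pattern only and is not the Hystrix or Polly API.

import random
import time

def call_with_retry(fn, attempts=3, base_delay=0.1):
    """Retry a flaky call with exponential backoff and a little jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.05))

class CircuitBreaker:
    """Fail fast after `threshold` consecutive failures; allow a trial call after `reset_after` seconds."""
    def __init__(self, threshold=5, reset_after=30.0):
        self.threshold = threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open; failing fast")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.time()
            raise
        self.failures = 0
        return result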



26 April, 2025

When to use REST, SOA, and Microservices

Here’s a breakdown of the core differences between REST, SOA, and Microservices and when you might choose each:

1. REST (Representational State Transfer)

What it is: REST is an architectural style for designing networked applications. It uses HTTP protocols to enable communication between systems by exposing stateless APIs.

Key Characteristics:

  • Communication: Uses standard HTTP methods (GET, POST, PUT, DELETE).

  • Data Format: Commonly JSON or XML.

  • Stateless: Every request from the client contains all the information the server needs to process it.

  • Scalability: Highly scalable due to statelessness.

  • Simplicity: Easy to implement and test.

Best Use Case:

  • For systems requiring lightweight, simple API communication (e.g., web applications or mobile apps).
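A minimal sketch of such a stateless REST API, assuming Flask is available; the /items resource and the in-memory store are purely illustrative.

from flask import Flask, jsonify, request

app = Flask(__name__)
ITEMS = {}  # in-memory store, for illustration only

@app.route("/items/<item_id>", methods=["GET"])
def get_item(item_id):
    # Every request carries all the context the server needs (stateless).
    item = ITEMS.get(item_id)
    return (jsonify(item), 200) if item else (jsonify({"error": "not found"}), 404)

@app.route("/items", methods=["POST"])
def create_item():
    payload = request.get_json()
    ITEMS[payload["id"]] = payload
    return jsonify(payload), 201

if __name__ == "__main__":
    app.run()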

2. SOA (Service-Oriented Architecture)

What it is: SOA is an architectural style where applications are composed of loosely coupled services that communicate with each other. Services can reuse components and are designed for enterprise-level solutions.

Key Characteristics:

  • Service Bus: Often uses an Enterprise Service Bus (ESB) to connect and manage services.

  • Protocol Support: Supports various protocols (SOAP, REST, etc.).

  • Centralized Logic: Often has a centralized governance structure.

  • Tightly Controlled: Services are larger and generally less independent.

  • Reusability: Focuses on reusing services across applications.

Best Use Case:

  • For large enterprise systems needing centralized coordination and integration across multiple systems (e.g., ERP systems).

3. Microservices

What it is: Microservices is an architectural style that structures an application as a collection of small, independent services that communicate with each other through lightweight mechanisms like REST, gRPC, or messaging queues.

Key Characteristics:

  • Independence: Each microservice is independently deployable and scalable.

  • Data Storage: Services manage their own databases, ensuring loose coupling.

  • Polyglot Programming: Different services can be built using different programming languages and frameworks.

  • Decentralized Logic: No central service bus; services manage their own logic.

Best Use Case:

  • For dynamic, scalable, and high-performing distributed applications (e.g., modern e-commerce platforms, video streaming services).

Comparison Table

| Aspect | REST | SOA | Microservices |
| --- | --- | --- | --- |
| Style | API design style | Architectural style | Architectural style |
| Communication | HTTP (stateless) | Mixed protocols (SOAP, REST) | Lightweight (REST, gRPC) |
| Governance | Decentralized | Centralized | Decentralized |
| Granularity | API endpoints | Coarser-grained services | Fine-grained services |
| Scalability | Horizontal scaling | Limited by ESB scaling | Horizontally scalable |
| Data Handling | Exposed via APIs | Shared and reusable | Independent databases |
| Best For | Web/mobile apps | Large enterprises | Modern cloud-native apps |

Which to Choose and Why

  1. Choose REST:

    • If your system requires lightweight and stateless API communication.

    • Ideal for building web services and mobile APIs quickly and easily.

  2. Choose SOA:

    • For large enterprises where services need to be reused across multiple systems.

    • When you need centralized management and tight integration.

  3. Choose Microservices:

    • When building a dynamic, scalable, and cloud-native application.

    • If you need flexibility to independently deploy, scale, and maintain different components.

Recommendation

For modern, scalable, and agile systems, Microservices are generally the best choice due to their modularity, independence, and ease of scaling. However, if you're working in an enterprise environment that requires centralization and reusability across legacy systems, SOA may be better. REST, on the other hand, is an architectural style for APIs rather than a full system architecture, and it can be used within both SOA and Microservices architectures.

25 April, 2025

Securing an Azure SQL Database

 Securing an Azure SQL Database is critical to protect sensitive data and ensure compliance with regulations. Here are some of the best security strategies and practices:

1. Authentication and Access Control

  • Use Microsoft Entra ID (formerly Azure AD) for centralized identity and access management.

  • Implement role-based access control (RBAC) to grant users the least privileges necessary.

  • Avoid using shared accounts and enforce multi-factor authentication (MFA) for all users.

2. Data Encryption

  • Enable Transparent Data Encryption (TDE) to encrypt data at rest automatically.

  • Use Always Encrypted to protect sensitive data, ensuring it is encrypted both at rest and in transit.

  • Enforce TLS (Transport Layer Security) for all connections to encrypt data in transit.
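A minimal connection sketch that enforces encryption in transit, assuming the pyodbc package and ODBC Driver 18 for SQL Server are installed; the server and database names are placeholders.

import pyodbc  # assumes pyodbc and "ODBC Driver 18 for SQL Server" are installed

conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net,1433;"  # placeholder server
    "Database=<your-database>;"                            # placeholder database
    "Authentication=ActiveDirectoryInteractive;"           # Microsoft Entra ID sign-in
    "Encrypt=yes;"                                         # force TLS in transit
    "TrustServerCertificate=no;"                           # validate the server certificate
)

conn = pyodbc.connect(conn_str)
print(conn.cursor().execute("SELECT 1").fetchone()[0])
conn.close()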

3. Firewall and Network Security

  • Configure server-level and database-level firewalls to restrict access by IP address.

  • Use Virtual Network (VNet) integration to isolate the database within a secure network.

  • Enable Private Link to access the database securely over a private endpoint.

4. Monitoring and Threat Detection

  • Enable SQL Auditing to track database activities and store logs in a secure location.

  • Use Advanced Threat Protection to detect and respond to anomalous activities, such as SQL injection attacks.

  • Monitor database health and performance using Azure Monitor and Log Analytics.

5. Data Masking and Row-Level Security

  • Implement Dynamic Data Masking to limit sensitive data exposure to non-privileged users.

  • Use Row-Level Security (RLS) to restrict access to specific rows in a table based on user roles.

6. Backup and Disaster Recovery

  • Enable geo-redundant backups to ensure data availability in case of regional failures.

  • Regularly test your backup and restore processes to ensure data recovery readiness.

7. Compliance and Governance

  • Use Azure Policy to enforce security standards and compliance requirements.

  • Regularly review and update security configurations to align with industry best practices.

8. Regular Updates and Patching

  • Ensure that the database and its dependencies are always up to date with the latest security patches.

By implementing these strategies, you can significantly enhance the security posture of your Azure SQL Database.


Here's a comparison of Apache Spark, Apache Flink, Azure Machine Learning, and Azure Stream Analytics, along with their use cases:

1. Apache Spark

  • Purpose: A distributed computing framework for big data processing, supporting both batch and stream processing.

  • Strengths:

    • High-speed in-memory processing.

    • Rich APIs for machine learning (MLlib), graph processing (GraphX), and SQL-like queries (Spark SQL).

    • Handles large-scale data transformations and analytics.

  • Use Cases:

    • Batch processing of large datasets (e.g., ETL pipelines).

    • Real-time data analytics (e.g., fraud detection).

    • Machine learning model training and deployment.

2. Apache Flink

  • Purpose: A stream processing framework designed for real-time, stateful computations.

  • Strengths:

    • Unified model for batch and stream processing.

    • Low-latency, high-throughput stream processing.

    • Advanced state management for complex event processing.

  • Use Cases:

    • Real-time anomaly detection (e.g., IoT sensor data).

    • Event-driven applications (e.g., recommendation systems).

    • Real-time financial transaction monitoring.

3. Azure Machine Learning

  • Purpose: A cloud-based platform for building, training, and deploying machine learning models.

  • Strengths:

    • Automated ML for quick model development.

    • Integration with Azure services for seamless deployment.

    • Support for distributed training and MLOps.

  • Use Cases:

    • Predictive analytics (e.g., customer churn prediction).

    • Image and speech recognition.

    • Real-time decision-making models (e.g., personalized recommendations).

4. Azure Stream Analytics

  • Purpose: A fully managed service for real-time stream processing in the Azure ecosystem.

  • Strengths:

    • Serverless architecture with easy integration into Azure Event Hubs and IoT Hub.

    • Built-in support for SQL-like queries on streaming data.

    • Real-time analytics with minimal setup.

  • Use Cases:

    • Real-time telemetry analysis (e.g., IoT device monitoring).

    • Real-time dashboarding (e.g., website traffic monitoring).

    • Predictive maintenance using streaming data.

Key Differences

| Feature/Tool | Apache Spark | Apache Flink | Azure Machine Learning | Azure Stream Analytics |
| --- | --- | --- | --- | --- |
| Processing Type | Batch & Stream | Stream (with Batch) | ML Model Training | Real-Time Stream |
| Latency | Moderate | Low | N/A (ML-focused) | Low |
| Integration | Hadoop, Kafka | Kafka, HDFS | Azure Ecosystem | Azure Ecosystem |
| Use Case Focus | Big Data Analytics | Real-Time Processing | Machine Learning | Real-Time Analytics |


23 April, 2025

Build a Redis-like Distributed In-Memory Cache

  This tests:

  • System design depth

  • Understanding of distributed systems

  • Trade-off navigation (CAP, consistency, latency)

  • Real-world edge case handling

Let’s go step by step and design a Redis-like cache from first principles, not using cloud-managed services.


🚀 Goal: Build a Redis-like Distributed In-Memory Cache


🧾 1. Requirements Gathering (Clarify with interviewer)

🔹 Functional

  • Support GET, SET, DEL, TTL

  • Handle concurrent reads/writes

  • Cache keys across multiple nodes

  • Optional: Support pub/sub, data structures (hash, list)

🔹 Non-Functional

  • Low latency (<1ms typical)

  • High availability & fault tolerance

  • Scalable horizontally

  • Eventual or strong consistency

  • Memory-optimized with TTL eviction

Back-of-the-envelope estimations are crucial in system design interviews: they demonstrate pragmatism, the ability to roughly size a system, and sound trade-off making.

Let’s break it down for your Redis-like Distributed In-Memory Cache System:


🧠 Scenario:

Let’s say you're designing this for an AI/ML pipeline system, like Google's CMCS ML. It caches:

  • Intermediate model data

  • Feature store results

  • Token metadata

  • Configuration data


📌 Estimation Goals:

We’ll estimate for:

| What | Example |
| --- | --- |
| 🔹 Number of keys | e.g., 100 million |
| 🔹 Size per key | e.g., average 1KB |
| 🔹 Total memory footprint | GB / TB scale |
| 🔹 QPS (Queries Per Second) | For read/write traffic |
| 🔹 Node count and distribution | |
| 🔹 Network bandwidth | |
| 🔹 TTL / Eviction rates | |

⚙️ Step-by-Step Estimation

🔹 1. Number of Keys

Let’s say each ML workflow (pipeline) generates:

  • 10k intermediate cacheable entries

  • 1M workflows per day (across all users)


10k keys/workflow × 1M workflows/day = 10B keys/day

But not all stay in memory. We retain 10% for hot data in memory:

  • 10B × 10% = 1B keys cached at peak


🔹 2. Average Key Size

Let’s assume:

  • Key name: ~100 bytes

  • Value: ~900 bytes

  • TTL/metadata: ~20 bytes overhead

Total = 1KB per key


📦 3. Total Memory Requirement

1B keys × 1KB = 1,000,000,000 KB = ~1 TB
So you’d need ~1 TB of RAM across your cluster

Let’s budget for 30% overhead (replication, GC, fragmentation):

➡️ Effective: ~1.3 TB RAM


🧵 4. QPS (Queries Per Second)

Assume:

  • Each key gets ~10 reads per day → 10B reads/day

  • 1% of keys get hit 90% of the time (Zipfian)

10B reads/day ≈ 115,740 reads/sec
Writes: 1B/day ≈ 11,500 writes/sec
Target QPS:
  • Read QPS: 100K–150K

  • Write QPS: 10K–20K


🧑‍🤝‍🧑 5. Number of Nodes

If 1 machine supports:

  • 64 GB usable memory

  • 10K QPS (to be safe)

  • 10 Gbps NIC

Then:

  • RAM: 1.3 TB / 64 GB ≈ 20 nodes

  • QPS: 150K / 10K = 15 nodes

  • Plan for ~25–30 nodes (for headroom and HA)


🔁 6. Replication Overhead

Assuming:

  • 1 replica per shard for failover

  • 2× memory and network cost

➡️ RAM required: ~2.6 TB
➡️ Bandwidth: double write traffic (~20K writes/sec × 1KB = ~20 MB/sec replication stream)


📶 7. Network Bandwidth

Let’s estimate:

  • 150K reads/sec × 1KB = 150 MB/s

  • 20K writes/sec × 1KB = 20 MB/s

  • Replication = 20 MB/s

📌 Each node should handle:

  • Read bandwidth: ~6 MB/s

  • Write + replication: ~2 MB/s

  • Easily handled by 10 Gbps NIC


⏳ 8. Eviction Rate

Assuming TTL = 1 hour, and 1B keys:

  • Evictions per second = 1B / (60×60) ≈ 277K keys/sec

Eviction algorithm must be efficient:

  • LRU clock algo or async TTL scanner needed


✅ Final Summary

| Metric | Estimation |
| --- | --- |
| Total keys | 1 billion |
| Avg size per key | 1 KB |
| Total RAM (w/ overhead) | ~2.6 TB (with replication) |
| Nodes | 25–30 (for HA, QPS, memory headroom) |
| Read QPS | ~150K/sec |
| Write QPS | ~15–20K/sec |
| Eviction rate | ~250–300K/sec |
| Network per node | ~10 MB/s total (within 10Gbps budget) |
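For reference, the same numbers can be recomputed directly from the assumptions above (10 reads per key per day, 1-hour TTL, 64 GB usable RAM and 10K QPS per node):

# Back-of-the-envelope arithmetic; all inputs are the assumptions stated in this section.
keys                 = 1_000_000_000                         # hot keys kept in memory
key_size_bytes       = 1_000                                 # ~1 KB per key incl. metadata
raw_ram_tb           = keys * key_size_bytes / 1e12          # ≈ 1.0 TB
ram_with_overhead_tb = raw_ram_tb * 1.3                      # ≈ 1.3 TB (GC, fragmentation)
ram_with_replica_tb  = ram_with_overhead_tb * 2              # ≈ 2.6 TB (1 replica per shard)

read_qps        = (10 * keys) / 86_400                       # ≈ 115,740 reads/sec
write_qps       = keys / 86_400                              # ≈ 11,574 writes/sec
evictions_per_s = keys / 3_600                               # TTL = 1h → ≈ 277K keys/sec

nodes_for_ram   = ram_with_overhead_tb * 1e12 / (64 * 1e9)   # ≈ 20 nodes
nodes_for_qps   = 150_000 / 10_000                           # = 15 nodes

print(round(raw_ram_tb, 1), round(ram_with_replica_tb, 1),
      round(read_qps), round(write_qps), round(evictions_per_s),
      round(nodes_for_ram), nodes_for_qps)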

🎯 Bonus: What Google Might Ask

What would change if you needed to support multi-tenant isolation?
→ Talk about namespacing keys, quota control, per-tenant memory buckets.

What if a single user uploads a 1GB object?
→ Chunk large values or offload to Blob storage and cache pointer.

How would you reduce memory cost?
→ TTL tuning, compression (LZ4), lazy expiration.



🧱 2. High-Level Architecture

                 +------------------------+
                 |  Client Applications   |
                 +------------------------+
                             |
                             v
                    +------------------+
                    |  Coordinator /   |
                    |  Cache Router    | (Optional)
                    +------------------+
                             |
          +------------------+------------------+
          |                                     |
      +-----------+                          +-----------+
      |  Cache    | <-- Gossip/Heartbeat --> |  Cache    |
      |  Node A   |        Protocol          |  Node B   |
      +-----------+                          +-----------+
            |                                      |
      +------------+                        +------------+
      |  Memory DB |                        |  Memory DB |
      +------------+                        +------------+

🧠 3. Core Components

🔸 a. Data Storage (In-Memory)

  • Use hash maps in memory for key-value store

  • TTLs stored with each key (for expiry eviction)

  • Optionally support data types like list, hash, etc.

store = {
  "foo": {"value": "bar", "expiry": 1681450500},
  # ...
}
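Expanding the layout above into a runnable single-node sketch (a coarse global lock is used here purely for simplicity; a real node would shard or use an event loop):

import threading
import time

class InMemoryStore:
    """Minimal single-node GET/SET/DEL store with lazy TTL expiry."""
    def __init__(self):
        self._data = {}                 # key -> (value, expiry_epoch or None)
        self._lock = threading.Lock()

    def set(self, key, value, ttl_seconds=None):
        expiry = time.time() + ttl_seconds if ttl_seconds else None
        with self._lock:
            self._data[key] = (value, expiry)

    def get(self, key):
        with self._lock:
            entry = self._data.get(key)
            if entry is None:
                return None
            value, expiry = entry
            if expiry is not None and expiry < time.time():
                del self._data[key]     # lazy expiration on read
                return None
            return value

    def delete(self, key):
        with self._lock:
            return self._data.pop(key, None) is not None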

🔸 b. Shard & Partition

  • Use consistent hashing to assign keys to nodes

  • Each key K is mapped by hash(K) onto a ring of virtual nodes and owned by the next virtual node clockwise (N virtual nodes in total)

This avoids rehashing all keys when nodes are added/removed
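A minimal consistent-hash ring with virtual nodes might look like this sketch (MD5 is used only as a convenient, stable hash; any uniform hash works):

import bisect
import hashlib

class ConsistentHashRing:
    """Hash ring with virtual nodes: adding/removing a node only remaps a small key range."""
    def __init__(self, nodes, vnodes=100):
        self._ring = []          # sorted list of (hash, node)
        self._vnodes = vnodes
        for node in nodes:
            self.add_node(node)

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node: str):
        for i in range(self._vnodes):
            self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()

    def remove_node(self, node: str):
        self._ring = [(h, n) for h, n in self._ring if n != node]

    def get_node(self, key: str) -> str:
        h = self._hash(key)
        idx = bisect.bisect(self._ring, (h, chr(0x10FFFF)))  # first vnode clockwise of h
        return self._ring[idx % len(self._ring)][1]          # wrap around the ring

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.get_node("user:42"))   # owning node; removing a node only remaps its vnode ranges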

🔸 c. Cache Router / Coordinator

  • Client can compute hash OR use a proxy router to route to correct cache node

  • Think Twemproxy or Envoy as L7 proxy

🔸 d. Replication

  • Master-Replica model

  • Writes go to master → replicate to replica (async or sync)

  • Replicas take over on master failure

Node A (Master)
  └── Replica A1

🔸 e. Eviction Strategy

  • Support TTL for automatic key expiry

  • Support LRU / LFU / random eviction when memory full

  • Track access counters for eviction ranking
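A compact LRU eviction sketch using an ordered dict (TTL handling and memory-size accounting are omitted for brevity):

from collections import OrderedDict

class LRUCache:
    """Evicts the least-recently-used key once `capacity` entries are exceeded."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)          # mark as most recently used
        return self._data[key]

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict the least recently used entry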

🔸 f. Concurrency

  • Use read-write locks or CAS for thread-safe operations

  • Each node is single-threaded (like Redis) or use event-loop


🔁 4. Replication and Fault Tolerance

Gossip/Heartbeat

  • Nodes send periodic heartbeats to detect failures

  • Use Raft or Gossip protocols for membership

Failover

  • If a master dies → elected replica takes over

  • Redis uses Redis Sentinel to coordinate failover


🧪 5. Optional Features

| Feature | Description |
| --- | --- |
| Pub/Sub | Add pub/sub channels per key prefix |
| Persistence | Periodically write snapshots to disk (RDB), or append-only logs (AOF) |
| Backpressure | Queue or reject new writes if memory full |
| Client Library | SDKs to abstract hash ring + failover |

🔁 6. CAP Trade-off

| CAP Property | Redis Default |
| --- | --- |
| Consistency | Eventual (async replication) |
| Availability | High (failover supported) |
| Partition Tolerance | Yes |

📊 Metrics to Expose

  • Hit/Miss rate

  • Memory usage

  • Eviction count

  • Node replication lag

  • Key distribution balance


🧠 Questions Google May Ask You During This Design

  • How would you handle hot keys (one key accessed by many nodes)?

  • How do you rebalance keys after node failure?

  • Would you support atomic operations (INCR, DECR)?

  • How do you prevent cache stampede (multiple clients recomputing a missing key)?

  • How do you handle large objects that exceed single node memory?


🛠️ Tech Stack (if you had to build it)

| Component | Tech |
| --- | --- |
| Cache server | C++ / Rust / Go (like Redis) |
| Hashing & comm | Consistent hashing + TCP sockets / gRPC |
| Replication | Raft or custom pub/sub protocol |
| Persistence | RocksDB / Flat file / Append logs |
| Observability | Prometheus + Grafana |

 



Let’s walk through each of these Staff-level system design questions with reasoning, trade-offs, and implementation suggestions you can use to explain your decisions clearly in the interview.


🔥 1. How would you handle hot keys (one key accessed by many nodes)?

Problem:

  • One key (like "user:token:123") gets disproportionate access → overloads one node.

✅ Solutions:

  1. Key Replication (Read Replicas):

    • Replicate hot key across multiple nodes.

    • Client-side read load balancing (randomly pick from replicas).

    • Write goes to master → async to replicas.

  2. Key Sharding (Virtual Keys):

    • Split the key logically, e.g., "video:views" → "video:views:1", "video:views:2", ...

    • Clients randomly select a shard for read/write → reduce contention.

    • Aggregate during reads (costly but effective).

  3. Request Deduplication & Caching at Edge:

    • Use edge cache (like CDN or client-side cache) for super-hot keys.

  4. Rate Limiting / Backpressure:

    • Throttle requests to that key, or queue them on overload.

Interview Tip:

Emphasize dynamic detection of hot keys (via metrics), and adaptive replication or redirection.
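A small sketch of the key-sharding idea for a hot counter; here `cache` stands for any client exposing get/set/incr, and the shard count of 8 is an arbitrary choice.

import random

SHARDS = 8  # number of sub-keys the hot key is split into

def shard_key(base_key: str) -> str:
    """Writes go to a random sub-key so no single node takes all the traffic."""
    return f"{base_key}:{random.randrange(SHARDS)}"

def read_total(cache, base_key: str) -> int:
    """Reads aggregate across all sub-keys (more expensive, but spreads the load)."""
    return sum(int(cache.get(f"{base_key}:{i}") or 0) for i in range(SHARDS))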


💡 2. How do you rebalance keys after node failure?

Problem:

  • Node failure → key space imbalance.

  • Some nodes overloaded, others underused.

✅ Solutions:

  1. Consistent Hashing + Virtual Nodes:

    • Redistribute virtual nodes (vNodes) from failed node to others.

    • Only keys for those vNodes get rebalanced — minimal movement.

  2. Auto-Failover & Reassignment:

    • Use heartbeat to detect failure.

    • Other nodes take over lost slots or ranges.

  3. Key Migration Tools:

    • Background rebalance workers move keys to even out load.

    • Ensure write consistency during move via locking/versioning.

  4. Client-Side Awareness:

    • Clients get updated ring view and re-route requests accordingly.

Interview Tip:

Talk about graceful degradation during rebalancing and minimizing downtime.


⚙️ 3. Would you support atomic operations (INCR, DECR)?

Yes — atomic operations are essential in a caching layer (e.g., counters, rate limits, tokens).

Implementation:

  1. Single-Threaded Execution Model:

    • Like Redis: handle each command sequentially on single-threaded event loop → natural atomicity.

  2. Compare-And-Swap (CAS):

    • For multi-threaded or multi-process setups.

    • Use version numbers or timestamps to detect stale updates.

  3. Locks (Optimistic/Pessimistic):

    • Apply locks on keys for write-modify-write operations.

    • Use with caution to avoid performance degradation.

  4. Use CRDTs (Advanced Option):

    • Conflict-free data types (e.g., GCounter, PNCounter) for distributed atomicity.

Interview Tip:

Highlight that simplicity, speed, and correctness are the priority. Lean toward single-threaded per-key operation for atomicity.
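A sketch of the CAS approach; get_with_version and compare_and_swap are hypothetical client methods used only to show the optimistic-retry shape.

import time

def cas_increment(cache, key, delta=1, retries=10):
    """Optimistic increment: read (value, version), write back only if the version is unchanged."""
    for _ in range(retries):
        value, version = cache.get_with_version(key)           # hypothetical API
        new_value = (value or 0) + delta
        if cache.compare_and_swap(key, new_value, expected_version=version):  # hypothetical API
            return new_value
        time.sleep(0.001)  # brief backoff before retrying on contention
    raise RuntimeError("CAS increment failed after retries")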


🧊 4. How do you prevent cache stampede (multiple clients recomputing a missing key)?

Problem:

  • TTL expires → 1000 clients query same missing key → backend DDoS.

✅ Solutions:

  1. Lock/SingleFlight:

    • First client computes and sets value.

    • Others wait for value to be written (or reused from intermediate store).

    • Go has sync/singleflight, Redis can simulate with Lua locks.

  2. Stale-While-Revalidate (SWR):

    • Serve expired value temporarily.

    • In background, refresh the cache asynchronously.

  3. Request Coalescing at API Gateway:

    • Gateway buffers duplicate requests until cache is ready.

  4. Early Refresh Strategy:

    • Monitor popular keys.

    • Proactively refresh before TTL expiry.

Interview Tip:

Describe this as a read-heavy resilience pattern. Emphasize proactive + reactive strategies.
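A minimal single-flight sketch for one process (error propagation to waiters is omitted for brevity); Go's singleflight package and Redis-based Lua locks follow the same idea.

import threading

class SingleFlight:
    """Collapse concurrent recomputations of the same missing key into one call."""
    def __init__(self):
        self._lock = threading.Lock()
        self._in_flight = {}   # key -> Event signalling completion
        self._results = {}

    def do(self, key, compute):
        with self._lock:
            event = self._in_flight.get(key)
            if event is None:                     # first caller becomes the leader
                event = threading.Event()
                self._in_flight[key] = event
                leader = True
            else:
                leader = False
        if leader:
            try:
                self._results[key] = compute()    # only the leader recomputes the value
            finally:
                event.set()
                with self._lock:
                    self._in_flight.pop(key, None)
            return self._results[key]
        event.wait()                              # followers wait for the leader's result
        return self._results.get(key)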


📦 5. How do you handle large objects that exceed single node memory?

Problem:

  • A single large key (e.g., serialized ML model, 1GB) doesn't fit in one node.

✅ Solutions:

  1. Key Chunking (Manual Sharding):

    • Split large value into multiple keys (file:1, file:2, file:3).

    • Store each chunk on different nodes.

    • Reassemble during read.

  2. Redirect to Object Store:

    • If object > X MB → store in Blob/File system (Azure Blob / GCS).

    • Cache a pointer/reference in cache instead.

  3. Use a Tiered Cache:

    • Store large objects in a slower (but scalable) cache (like disk-based).

    • Fast cache for hot small keys; slow cache for bulkier data.

  4. Compression:

    • Use lightweight compression (LZ4, Snappy) before storing.

Interview Tip:

Discuss threshold-based offloading and trade-off between latency vs. capacity.
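A sketch of the chunking approach; `cache` is any client with get/set, and the 4 MB chunk size is an arbitrary threshold.

CHUNK_SIZE = 4 * 1024 * 1024   # 4 MB per chunk (threshold is a design choice)

def put_large(cache, key: str, blob: bytes):
    """Split a large value into fixed-size chunks stored under derived keys."""
    chunks = [blob[i:i + CHUNK_SIZE] for i in range(0, len(blob), CHUNK_SIZE)]
    cache.set(f"{key}:meta", str(len(chunks)))   # remember how many chunks exist
    for i, chunk in enumerate(chunks):
        cache.set(f"{key}:{i}", chunk)           # chunks can land on different nodes

def get_large(cache, key: str) -> bytes:
    count = int(cache.get(f"{key}:meta"))
    return b"".join(cache.get(f"{key}:{i}") for i in range(count))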


21 April, 2025

Design a Global Video Streaming Service (e.g., YouTube, Netflix)

 


Question: Design a scalable and fault-tolerant video streaming platform that can:

  • Stream videos globally with low latency.

  • Allow users to upload videos.

  • Handle millions of users simultaneously.

Requirements (functional and non-functional)

Functional requirements: video upload, video streaming, handling of varying network bandwidth, and playback on different devices (mobile, smart TV, computer). Non-functional: high availability, fault tolerance.


1. High-Level Requirements

Functional Requirements:

  • Video Upload: Users can upload videos in various formats.

  • Video Streaming: Provide smooth playback with adaptive streaming for different network conditions.

  • Network Bandwidth Handling: Adjust video quality dynamically based on bandwidth.

  • Device Compatibility: Support multiple devices (e.g., mobile, smart TV, computer).

Non-Functional Requirements:

  • High Availability: The service should handle millions of concurrent viewers with minimal downtime.

  • Fault Tolerance: The system should recover gracefully from failures like server crashes or network issues.

2. High-Level Design

Here's the architectural breakdown:

  1. Frontend: Provides user interface for uploading, browsing, and watching videos.

  2. Backend Services:

    • Upload Service: Handles video uploads and metadata storage.

    • Processing Service: Transcodes videos into multiple resolutions and formats.

    • Streaming Service: Delivers videos to users with adaptive bitrate streaming.

  3. Content Delivery Network (CDN): Caches videos close to users for low-latency streaming.

  4. Database:

    • Metadata storage (e.g., title, description, resolution info).

    • User data (e.g., watch history, preferences).

  5. Storage: Distributed storage for original and transcoded videos.

  6. Load Balancer: Distributes requests across multiple servers to ensure availability.

3. Capacity Planning

Let’s estimate resource requirements for a system handling 10 million daily users:

Storage:

  • Assume 1 million uploads daily, average video size = 100 MB.

  • Original videos = 1 million x 100 MB = 100 TB/day.

  • Transcoded versions (3 resolutions) = 3 x 100 TB = 300 TB/day.

  • For 1 month of storage: 300 TB x 30 days = ~9 PB (Petabytes).

Traffic:

  • Assume 10 million users, each streaming an average of 1 hour/day.

  • Bitrate for 1080p video: 5 Mbps.

  • Total bandwidth required: 10 million x 5 Mbps = 50 Tbps.

  • A CDN can offload 80% of traffic, so backend bandwidth = 10 Tbps.

Processing:

  • Each video is transcoded into 3 resolutions.

  • Average transcoding time per video = 5 minutes.

  • Total processing required: 5 minutes x 1 million videos/day = ~83,333 hours/day.

  • If each server transcodes 50 videos/hour, you need roughly 1M ÷ 24 ÷ 50 ≈ 830 servers for transcoding.

4. Detailed Design

Upload Workflow:

  1. User uploads video.

  2. Upload Service stores the video in temporary storage (e.g., S3 bucket).

  3. Metadata (e.g., title, uploader info) is stored in a relational database like PostgreSQL.

  4. Processing Service fetches the video, transcodes it into multiple resolutions (e.g., 1080p, 720p, 480p), and stores them in distributed storage (e.g., HDFS).

Streaming Workflow:

  1. User requests a video.

  2. The Streaming Service retrieves the video metadata.

  3. CDN serves the video, reducing load on the backend.

  4. Adaptive streaming adjusts resolution based on the user’s available bandwidth.

Device Compatibility:

  • Transcode videos into formats like H.264 or H.265 to support multiple devices.

  • Use HTML5 players for web and SDKs for smart TVs and mobile devices.

5. Handling Edge Cases

Video Uploads:

  • Large Files: Use chunked uploads to handle interruptions.

  • Invalid Formats: Validate video format during upload.

Streaming:

  • Low Bandwidth: Use adaptive bitrate streaming to lower resolution for slow connections.

  • Server Outages: Use replicated storage to serve videos from a different region.

High Traffic:

  • Use CDNs to cache popular videos geographically closer to users.

  • Auto-scale backend servers to handle traffic spikes.

6. Trade-Offs

1. Storage Cost vs. Quality:

  • Storing multiple resolutions increases costs but improves device compatibility.

  • You may decide to limit resolutions for infrequently accessed videos.

2. Caching vs. Latency:

  • CDNs reduce latency but introduce cache invalidation challenges for newly uploaded videos.

3. Consistency vs. Availability:

  • For highly available systems, some metadata (e.g., view counts) may be eventually consistent.

7. Final System Diagram

Here’s what the architecture looks like:

User -> CDN -> Load Balancer -> Streaming Service -> Video Storage
       -> Upload Service -> Processing Service -> Distributed Storage
       -> Metadata DB


Scenario: Design a Scalable URL Shortener

Question: 

Imagine you are tasked with designing a system similar to Bitly that converts long URLs into short ones. The system should handle billions of URLs and millions of requests per second. Please explain how you would design this system.

Requirements:

Functional requirements: URL shortening, redirection, URL expiry.

Non-functional: high availability, fault tolerance (i.e., AP from the CAP theorem).

Step 1: High-Level Design

At a high level, the system will have the following components:

  1. Frontend Service: Handles user requests for shortening, redirection, and URL expiry.

  2. Backend Service: Processes requests, generates short URLs, manages expiration policies, and stores mappings.

  3. Database: Stores the short-to-long URL mappings.

  4. Cache: Speeds up redirection for frequently accessed URLs.

  5. Load Balancer: Distributes incoming traffic evenly across backend servers to handle high availability and fault tolerance.

Step 2: Capacity Planning

Now, let's expand on capacity planning for this system:

  1. Storage:

    • Assume 1 billion URLs in the system.

    • Average size for a record (short URL + long URL + metadata like expiry date) = 150 bytes.

    • Total storage required: 1 billion x 150 bytes = ~150 GB.

    • With 3x replication for fault tolerance, total storage: ~450 GB.

  2. Traffic:

    • Peak traffic: 10,000 requests/sec for redirection.

    • Each server can handle 1,000 requests/sec, so you'll need 10 servers at peak load.

    • Cache hit ratio: Assume 80% of requests hit the cache (Redis).

    • Only 20% of requests (2,000/sec) hit the database.

  3. Cache Size:

    • Frequently accessed URLs (~10% of all URLs): 100 million URLs.

    • Average size of a cached record: 150 bytes.

    • Total cache size: ~15 GB (enough for Redis to handle).

  4. Bandwidth:

    • Each redirection involves ~500 bytes of data transfer (request + response).

    • For 10,000 requests/sec: 500 bytes x 10,000 = ~5 MB/sec bandwidth requirement.

Step 3: Detailed Design

  1. Frontend:

    • Simple UI/API for creating short URLs and redirecting to original URLs.

    • API design:

      • POST /shorten: Accepts a long URL and returns a short URL.

      • GET /redirect/<short-url>: Redirects to the original URL.

  2. Backend:

    • URL Shortening:

      • Generate unique short URLs using Base62 encoding or a random hash.

      • Ensure collision resistance by checking the database for duplicates.

    • URL Redirection:

      • Lookup the long URL in the cache first. If not found, fetch it from the database.

    • Expiry Management:

      • Use a background job to periodically clean expired URLs from the database.

  3. Database:

    • Use a NoSQL database like Cassandra or DynamoDB for scalability.

    • Key-Value schema:

      • Key: Short URL.

      • Value: Original URL + metadata (creation time, expiry time).

    • Partitioning: Shard data based on the hash of the short URL.

  4. Cache:

    • Use Redis for caching frequently accessed URLs.

    • Implement TTL (Time-to-Live) to automatically remove expired cache entries.

  5. Load Balancer:

    • Use a load balancer (e.g., Nginx or AWS ELB) to distribute traffic across backend servers.

Step 4: Handling Edge Cases

  • Hash Collisions:

    • Handle collisions by appending random characters to the short URL.

  • Expired URLs:

    • Redirect users to an error page if the URL has expired.

  • Invalid URLs:

    • Validate input URLs before storing them.

  • High Traffic Spikes:

    • Scale horizontally by adding more backend and cache servers.

Step 5: CAP Theorem and Non-Functional Requirements

  • Consistency (C) is sacrificed since we prefer availability (A) and partition tolerance (P).

  • In case of database partitioning, short URLs may not immediately replicate globally but redirections will still work.

Final Diagram

Here’s a simple architecture for your system:

Client -> Load Balancer -> Backend Service -> Cache (Redis) -> Database
                   -> URL Expiry Job -> Clean Expired URLs in Database

This design ensures scalability, fault tolerance, and high availability; the sections below dig into the trade-offs in more detail.
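As a concrete note on the Base62 encoding mentioned in Step 3, here is a minimal sketch of turning a numeric ID into a short token (the ID source, e.g. a sequence or distributed ID generator, is assumed to exist elsewhere):

import string

ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase  # 62 characters

def base62_encode(n: int) -> str:
    """Encode a non-negative numeric ID as a compact Base62 token."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, rem = divmod(n, 62)
        out.append(ALPHABET[rem])
    return "".join(reversed(out))

print(base62_encode(123456789))  # '8m0Kx'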


Trade-Off Example: NoSQL vs. Relational Database

Context:

  • In the design, we opted for a NoSQL database (e.g., Cassandra or DynamoDB) instead of a relational database like PostgreSQL or MySQL.

Why We Chose NoSQL:

  • Scalability: NoSQL databases are horizontally scalable. They can handle billions of records and handle massive traffic by distributing data across multiple servers.

  • Write Performance: URL Shorteners primarily involve write-heavy operations (e.g., inserting short-to-long URL mappings). NoSQL databases are optimized for high-throughput writes.

Trade-Off:

  1. Consistency vs. Scalability (CAP Theorem):

    • By using a NoSQL database, we prioritize availability (A) and partition tolerance (P) but sacrifice strong consistency (C). This means:

      • Short URL redirections may not immediately reflect updates if replicas are still syncing, but the system stays highly available.

      • For example, in a rare case of a database partition, a newly shortened URL might fail temporarily for a subset of users.

  2. Flexible Queries:

    • NoSQL databases are optimized for key-value lookups (e.g., finding the long URL from a short one).

    • However, if the system later needs advanced queries (e.g., analytics: "show all URLs created in the last 7 days"), a relational database might be better suited.

    • This trade-off means we prioritize simplicity and performance for the current use case while limiting flexibility for future feature expansions.


Trade-Off Example: Cache vs. Database

Context:

  • In the design, we opted for a Redis cache to store frequently accessed URLs and reduce latency.

Why We Chose Redis Cache:

  • Speed: Redis operates in-memory, enabling near-instantaneous lookups compared to database queries.

  • Load Reduction: Redirect requests are served from the cache, offloading pressure from the database.

  • TTL: Redis supports Time-to-Live (TTL), allowing expired URLs to be removed automatically without database intervention.

Trade-Off:

  1. Cache Hit vs. Miss:

    • Hit: When the short URL is found in the cache, the lookup is fast.

    • Miss: If the URL is not in the cache, the system falls back to querying the database, which is slower.

    • Example: If the cache hit ratio drops to 50% due to infrequently accessed URLs, latency increases, and the database may face higher load.

  2. Memory Usage vs. Scalability:

    • Redis stores all data in memory, which is expensive compared to disk storage.

    • Example: If we want to cache 1 billion URLs (about 150 GB), the cost of high-memory servers for Redis becomes a concern.

    • Trade-off: We limit caching to the most frequently accessed URLs (~10% of all URLs).

  3. Consistency vs. Performance:

    • If updates are made directly to the database (e.g., URL expiry or analytics tracking), the cache may hold stale data temporarily until refreshed.

    • Trade-off: Sacrifice real-time consistency to prioritize performance for redirection requests.

Trade-Off Example: Failure Recovery Mechanisms

Context:

To ensure high availability and fault tolerance, the system should recover gracefully when components fail (e.g., a server crash or cache failure). We incorporated replication and fallback strategies in the design.

Mechanisms for Recovery:

  1. Database Replication:

    • Multiple copies (replicas) of the database ensure availability even if one server fails.

    • Trade-Off:

      • Benefit: High availability and low risk of data loss.

      • Cost: Increased storage needs and replication overhead. If data needs to replicate across multiple nodes, write latency may increase.

      • Example: Updating a short URL mapping might take milliseconds longer due to replica sync delays.

  2. Cache Fallback to Database:

    • If the Redis cache goes down, the system queries the database directly.

    • Trade-Off:

      • Benefit: Ensures continuity of service for redirection requests.

      • Cost: Database will experience increased load during cache outages, resulting in higher latency and potential bottlenecks under peak traffic.

      • Example: During a cache failure, redirection latency might increase from 1ms to 10ms.

  3. Load Balancers with Failover:

    • Load balancers redirect traffic from failed servers to healthy servers.

    • Trade-Off:

      • Benefit: Users don’t notice server outages as requests are rerouted.

      • Cost: Adding failover capabilities increases infrastructure complexity and cost.

      • Example: Keeping additional standby servers can increase operational costs by 20%.

  4. Backups for Disaster Recovery:

    • Regular backups of database and metadata ensure recovery in case of catastrophic failures (e.g., data corruption).

    • Trade-Off:

      • Benefit: Prevents permanent data loss and ensures the system is recoverable.

      • Cost: Backup systems require extra storage and may not include real-time data due to backup frequency.

      • Example: If backups occur daily, URLs created just before failure might be lost.

  5. Retry Logic and Circuit Breakers:

    • Implement retries for transient failures and circuit breakers to avoid overwhelming downstream services.

    • Trade-Off:

      • Benefit: Improves reliability for users during intermittent failures.

      • Cost: Retries add latency and may temporarily strain the system.

      • Example: If the database is slow, retry logic might delay redirections by a few milliseconds.

Google system design interview experience

To excel in a system design interview at Google India, you’ll need a structured, methodical approach while demonstrating clarity and confidence. Here’s how you can handle system design questions effectively:

1. Understand the Problem Statement

  • Before diving in, clarify the requirements:

    • Ask questions to understand functional requirements (e.g., "What features does the system need?").

    • Explore non-functional requirements like scalability, performance, reliability, and security.

  • Example: If asked to design a URL shortener, clarify if analytics tracking or expiration for URLs is required.

2. Start with a High-Level Approach

  • Begin by breaking the problem into logical components. Use simple terms initially:

    • For example: "For a URL shortener, we need to generate short URLs, store mappings, and support quick redirections."

  • Draw a rough block diagram:

    • Show user interaction, application servers, caching layers, databases, etc.

    • Use terms like "user sends request," "application generates short URL," and "database stores mapping."

3. Dive Deeper into Core Components

  • Now, drill down into the architecture:

    • Database: What type of database fits the use case? Relational vs. NoSQL?

    • Caching: When and where to add caching for performance optimization.

    • Load Balancing: How to distribute requests across servers.

    • Scalability: Vertical (adding more resources to a server) and horizontal scaling (adding more servers).

4. Capacity Planning

  • Show your ability to handle real-world use cases by estimating resource needs:

    • Storage: How much data will the system store? Estimate based on user base and data size.

    • Traffic: How many requests per second must the system handle during peak load?

    • Throughput: Calculate bandwidth requirements.

5. Address Edge Cases

  • Always include these discussions:

    • How will the system behave under high traffic?

    • What happens if a component fails? (e.g., database failure).

    • How will data integrity and consistency be maintained in distributed systems?

6. Incorporate Non-Functional Requirements

  • Discuss how your design meets:

    • Reliability: Use replication and backups.

    • Fault Tolerance: Explain failure recovery mechanisms.

    • Security: Include encryption for sensitive data and authentication for user actions.

7. Trade-Offs and Justifications

  • Google interviewers love to see pragmatic thinking:

    • Explain why you chose one database over another (e.g., "NoSQL for scalability, as this system doesn't require complex joins").

    • Discuss trade-offs like cost vs. performance or consistency vs. availability (CAP theorem).

8. Be Collaborative and Communicative

  • Keep your thought process transparent:

    • Think out loud and explain your reasoning for every step.

    • If an interviewer questions your approach, handle it constructively and adapt your design if necessary.

  • Use Google’s "smart generalist" mindset—balance depth with breadth.

9. Final Review and Summary

  • Summarize your solution briefly:

    • Reiterate key design choices and how they align with the requirements.

  • Example: "In summary, I designed a scalable URL shortener with a distributed database for storage, Redis for caching popular URLs, and load balancers for handling traffic peaks."

10. Practice Mock Interviews

  • Prepare for common system design scenarios:

    • Design a scalable chat application.

    • Build a global video streaming service.

    • Create a recommendation system for an e-commerce platform.

  • Practice with peers or mentors to refine your communication and problem-solving skills.


I'll approach these system design questions as a Google engineer, incorporating edge cases, design diagrams, capacity planning, and non-functional requirements. Let's dive in:

1. Design a URL Shortener (e.g., bit.ly)

Requirements

  • Functional: Shorten URLs, redirect to original URLs, track usage statistics.

  • Non-functional: Scalability, low latency, fault tolerance, high availability.

Design

  1. Architecture:

    • Use a hashing algorithm (e.g., Base62 encoding) to generate unique short URLs.

    • Store mappings in a distributed NoSQL database (e.g., DynamoDB or Cassandra).

    • Implement caching (e.g., Redis) for frequently accessed URLs.

    • Use load balancers to distribute traffic across servers.

  2. Capacity Planning:

    • Storage:

      • Assume 1 billion URLs with an average of 100 bytes per URL (short + original URLs combined).

      • Total storage: 100 GB for URL mappings.

      • If we store analytics (e.g., click counts), assume an additional 50 GB for statistics.

    • Traffic:

      • Peak load: 10,000 requests per second (short URL redirection).

      • Use Redis cache to handle the most frequently accessed URLs. Cache size: 20 GB.

      • Throughput: Each server can process 1,000 requests/sec. At least 10 servers needed for peak traffic.

  3. Edge Cases:

    • Collision: Handle hash collisions by appending random characters.

    • Expired URLs: Implement TTL (Time-to-Live) for temporary URLs.

    • Invalid URLs: Validate URLs before shortening.

Diagram

Client -> Load Balancer -> Application Server -> Database
       -> Cache (Redis) -> Database

2. Design a Scalable Chat Application

Requirements

  • Functional: Real-time messaging, group chats, message history.

  • Non-functional: Scalability, low latency, fault tolerance.

Design

  1. Architecture:

    • Use WebSocket for real-time communication.

    • Store messages in a distributed database (e.g., Cassandra).

    • Implement sharding based on user IDs.

    • Use message queues (e.g., Kafka) for asynchronous processing.

  2. Capacity Planning:

    • Storage:

      • Assume 10 million users, with each user sending 100 messages/day.

      • Average message size: 200 bytes.

      • Total storage per day: 200 GB.

      • For 1 year of history: 73 TB.

    • Traffic:

      • Peak load: 100,000 concurrent connections.

      • WebSocket servers: Each server handles 5,000 connections. At least 20 servers required during peak hours.

      • Use Kafka for asynchronous processing; throughput: 1 million messages/sec.

  3. Edge Cases:

    • Offline Users: Queue messages for delivery when users reconnect.

    • Message Ordering: Use sequence numbers to ensure correct ordering.

    • Spam: Implement rate limiting and spam detection.

Diagram

Client -> WebSocket Server -> Message Queue -> Database

3. Design a Ride-Sharing Service (e.g., Uber)

Requirements

  • Functional: Match riders with drivers, calculate fares, track rides.

  • Non-functional: Scalability, real-time updates, fault tolerance.

Design

  1. Architecture:

    • Use GPS-based tracking for real-time updates.

    • Implement a matching algorithm to pair riders with nearby drivers.

    • Store ride data in a relational database (e.g., PostgreSQL).

  2. Capacity Planning:

    • Storage:

      • Assume 1 million rides/day, with each ride generating 10 updates (e.g., location, fare, etc.).

      • Average update size: 500 bytes.

      • Total storage per day: 5 GB.

      • For 1 year: 1.8 TB (for historical data storage).

    • Traffic:

      • Peak load: 10,000 ride matching requests/sec.

      • Use 10 application servers, each handling 1,000 requests/sec.

      • GPS tracking: Real-time updates require 50 MB/sec bandwidth.

  3. Edge Cases:

    • Surge Pricing: Implement dynamic pricing based on demand.

    • Driver Cancellations: Reassign rides to other drivers.

    • Network Failures: Use retries and fallback mechanisms.

Diagram

Client -> Load Balancer -> Application Server -> Database
       -> GPS Tracking -> Matching Algorithm

4. Design a Distributed File Storage System (e.g., Google Drive)

Requirements

  • Functional: Upload/download files, share files, version control.

  • Non-functional: Scalability, fault tolerance, high availability.

Design

  1. Architecture:

    • Use distributed storage (e.g., HDFS) for file storage.

    • Implement replication for fault tolerance.

    • Use metadata servers to track file locations.

  2. Capacity Planning:

    • Storage:

      • Assume 1 billion files, with an average size of 1 MB.

      • Total storage: 1 PB.

      • For replication (3 copies): 3 PB.

    • Traffic:

      • Peak load: 10,000 uploads/downloads/sec.

      • Each server handles 1,000 requests/sec. At least 10 servers required.

      • Metadata size for tracking files: 100 TB.

  3. Edge Cases:

    • Large Files: Split files into chunks for efficient uploads/downloads.

    • Conflicts: Implement version control for concurrent edits.

    • Data Loss: Use replication and backups.

Diagram

Client -> Metadata Server -> Distributed Storage

5. Design a Search Engine

Requirements

  • Functional: Index web pages, return relevant results, handle queries.

  • Non-functional: Scalability, low latency, fault tolerance.

Design

  1. Architecture:

    • Use web crawlers to index pages.

    • Store indexed data in a distributed database.

    • Implement ranking algorithms (e.g., PageRank).

  2. Capacity Planning:

    • Storage:

      • Assume 1 billion web pages, with an average size of 10 KB per page.

      • Total storage: 10 TB.

      • For additional metadata (e.g., indexing terms), storage increases to 20 TB.

    • Traffic:

      • Peak load: 100,000 search queries/sec.

      • Each server handles 1,000 queries/sec. At least 100 servers required.

      • Cache size for popular queries: 5 TB.

  3. Edge Cases:

    • Spam Pages: Filter out low-quality pages using heuristics.

    • Query Failures: Provide fallback results.

    • High Traffic: Use caching for popular queries.

Diagram

Client -> Load Balancer -> Search Server -> Database
       -> Web Crawler -> Indexing

These answers incorporate edge cases, design diagrams, capacity planning, and non-functional requirements.
