Concept · ~8 min read

OLAP vs. OLTP

OLTP systems handle fast, row-level transactions for applications; OLAP systems handle complex, column-scanning analytics queries — and confusing the two leads to systems that fail at both jobs.

Why this appears in interviews

Almost every data engineering decision — database choice, schema design, query optimization — flows from understanding which type of workload you are serving.

The mental model — a bank teller vs. an accountant

OLTP is like a bank teller: it processes your deposit in milliseconds, working with one customer's data at a time. Speed on individual rows is everything.

OLAP is like an accountant doing the year-end audit: it aggregates millions of transactions, computing totals by region for every month. Speed on large aggregations is everything.

The bank's transaction system would be terrible at the year-end audit. The accounting system would be terrible at processing live deposits.

OLTP characteristics

Purpose: Power transactional applications in real time.

Query pattern: Short queries touching a small number of rows. "Get the order with ID 12345."

Optimization for: Write throughput, read latency, ACID transaction integrity.

Schema: Normalized (3NF). Many tables with foreign key relationships.

Examples: PostgreSQL, MySQL, DynamoDB.
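The OLTP access pattern can be sketched with Python's built-in `sqlite3` module standing in for a transactional database (the table and values are illustrative, not from any real system):

```python
import sqlite3

# In-memory SQLite stands in for an OLTP database; schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.execute("INSERT INTO orders VALUES (12345, 'alice', 42.50)")
conn.commit()

# A typical OLTP query: fetch a single row by primary key, via an index lookup.
row = conn.execute(
    "SELECT customer, total FROM orders WHERE id = ?", (12345,)
).fetchone()
print(row)  # ('alice', 42.5)
```

The primary-key lookup touches one row regardless of table size, which is why indexed point queries stay fast even at high write volume.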

OLAP characteristics

Purpose: Power analytics and reporting.

Query pattern: Long queries scanning millions of rows. "Total revenue by product category per month for 2025 by region."

Optimization for: Read throughput on large data, columnar compression, parallel execution.

Schema: Denormalized (star schema). Fewer tables with more columns.

Examples: Snowflake, BigQuery, Redshift, ClickHouse.
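The OLAP query shape — scan everything, then aggregate — can be sketched in plain Python (the fact rows and column names are made up for illustration):

```python
from collections import defaultdict

# Illustrative fact rows, the kind an OLAP query would scan in bulk.
sales = [
    {"category": "books", "month": "2025-01", "revenue": 100.0},
    {"category": "books", "month": "2025-01", "revenue": 50.0},
    {"category": "games", "month": "2025-02", "revenue": 75.0},
]

# "Total revenue by category per month": a full scan plus aggregation,
# the workload shape OLAP systems are built to parallelize.
totals = defaultdict(float)
for row in sales:
    totals[(row["category"], row["month"])] += row["revenue"]

print(totals[("books", "2025-01")])  # 150.0
```

Unlike the OLTP point lookup, every row contributes to the result, so throughput over the whole dataset matters more than latency on any single row.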

Why columnar storage matters

OLTP systems store data row by row: to compute total revenue, every full row must be read, including columns the query never uses. OLAP systems store data column by column: only the revenue column is read, so on a wide 100-column table roughly 1/100th of the data is touched. Columnar storage also compresses far better, because similar values are stored adjacent to one another.
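The row-versus-column layout difference can be sketched in plain Python (the records are illustrative):

```python
# Row-oriented layout: each record stored together. Summing revenue
# still iterates over full records, dragging unused fields along.
rows = [(1, "alice", 10.0), (2, "bob", 20.0), (3, "carol", 30.0)]
total_row_store = sum(r[2] for r in rows)

# Column-oriented layout: each column stored contiguously. Summing
# revenue reads exactly one list; the other columns are never touched.
columns = {
    "id": [1, 2, 3],
    "customer": ["alice", "bob", "carol"],
    "revenue": [10.0, 20.0, 30.0],
}
total_col_store = sum(columns["revenue"])

assert total_row_store == total_col_store == 60.0
```

Real columnar engines add compression and vectorized execution on top of this layout, but the I/O saving comes from the same idea: read only the columns the query names.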

Common interview mistakes

Mistake 1: Running analytics on the production OLTP database. Analytics queries scan millions of rows, slowing down the application. Analytics belong on a separate OLAP system.

Mistake 2: Using a fully normalized schema in a data warehouse. Normalization is right for OLTP, where it protects write integrity. In OLAP, denormalized star schemas deliberately trade redundancy for fewer joins, which is what keeps large aggregations fast.

Mistake 3: Not explaining why BigQuery is faster than Postgres for the same analytical query. The answer is columnar storage plus parallel execution across many workers.

Key vocabulary

  • OLTP — Online Transaction Processing — optimized for fast, row-level reads and writes. Powers applications.
  • OLAP — Online Analytical Processing — optimized for complex aggregation queries over large datasets. Powers analytics.
  • Columnar storage — Storing data column by column. Enables fast analytical queries on specific columns.
  • Normalization — Organizing data to minimize redundancy. Appropriate for OLTP. Typically avoided in OLAP schemas.