Navigating the Database Abyss: A Technical Odyssey Through Numi's Data Model Evolution

Leonard Weise

I switched our infrastructure three times while building Numi's financial data API platform. This isn't your typical "spin up a SQL DB and call it a day" story—this is about wrestling with real-time financial data at scale and living to tell the tale. If you've ever watched your carefully designed architecture crumble under actual production load, this post is for you. Let's get technical, let's get real, and let's dissect this beast together.

The Problem: Financial Data Won't Sit Still

Standing up a standard database is straightforward. Designing a cohesive data model for a financial platform that processes millions of market events in real time? That's where things get interesting. Numi, our developer-first financial data API service, needed an architecture that could ingest accelerating write volume, serve low-latency reads, and stay upright through the traffic spikes that market events bring.

Traditional approaches weren't cutting it. I wanted Numi's APIs to feel lightning-fast and scale effortlessly—goals that sound reasonable until you're debugging time-series database issues at 3 AM.

Insight #1: The real enemy isn't the technology you choose—it's the mismatch between your data's behavior and your architecture's assumptions.

Financial data doesn't just grow linearly; it accelerates, changes patterns during market events, and demands low-latency access in ways that expose every architectural shortcut. Here's how we navigated from client-side experiments to server-side salvation.

Act 1: Dexie and the Client-Cache Dream

My journey began on the client side with a bold hypothesis: aggressive client-caching could deliver instant API responses and reduce server load. I chose Dexie, a wrapper around IndexedDB that transforms the notoriously clunky web standard into something actually usable.

IndexedDB isn't relational like SQL—it's essentially a key-value store with table-like structure. Dexie smooths out the rough edges with a clean API:

import Dexie from 'dexie'

const db = new Dexie('NumiFinanceDB')

// First entry in each schema string is the primary key; the rest are indexes
db.version(1).stores({
  stockData: 'symbol, timestamp, price, volume',
  marketEvents: 'id, symbol, eventType, timestamp',
})

The real magic came from Dexie's liveQuery, which automatically updated our dashboard components when data changed. For Numi's developer dashboard, streaming new market data was as simple as appending to a time-series field—no manual render updates needed.

But there was a catch: query too much data too high in the component tree, and every price update would trigger cascading re-renders. Aggressive memoization with React's useMemo became essential for maintaining performance.
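
Here's roughly what that looked like, as a sketch rather than our exact code: it assumes the dexie-react-hooks package and that the db instance above is exported from ./db (TickerRow is an illustrative component):

import { useMemo } from 'react'
import { useLiveQuery } from 'dexie-react-hooks'
import { db } from './db' // assumes the Dexie instance above is exported

function TickerRow({ symbol }) {
  // useLiveQuery re-runs the query whenever the underlying record changes,
  // so subscribing per symbol, low in the tree, keeps re-renders contained
  const record = useLiveQuery(() => db.stockData.get(symbol), [symbol])

  // Memoize derived values so parent re-renders don't recompute them
  const label = useMemo(
    () => (record ? `${record.symbol}: ${record.price}` : 'loading'),
    [record]
  )

  return <span>{label}</span>
}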

Dexie excelled at backing virtualized ticker lists with minimal network traffic, unlike competitors whose server-side calls took seconds to load additional data points. Historical data proved trickier: to avoid schema complexity, I marked items with timeframe: "historical" instead of using separate tables, which eventually bloated the local store.
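
The cost of that shortcut shows up on every read, something like this (a sketch against the schema above):

async function readLiveQuotes() {
  // Every read filters past the historical rows in memory, and nothing
  // ever evicts them, so the local store only grows
  return db.stockData.filter(r => r.timeframe !== 'historical').toArray()
}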

Insight #2: Client-side caching trades server load for client complexity—an excellent tradeoff until your data growth outpaces browser storage limits.

Act 2: Redis and the Streaming Nightmare

Moving to the backend, streaming financial data to consumers presented new challenges. My first attempt—"Redis v0"—was delightfully naive:

  1. Serialize all ticker streams with SuperJSON
  2. Compress the result with Gzip
  3. Store the blob in Redis under the API key

Fetching reversed the process: unzip, deserialize, send. Surprisingly, this worked initially. Tests with 1,000 tickers and 2 data points each compressed to just 200 KB.
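
In code, v0 was only a few lines. A sketch, assuming Node's zlib, the superjson package, and Upstash's @upstash/redis client (storeStreams and fetchStreams are illustrative names):

import { gzipSync, gunzipSync } from 'zlib'
import superjson from 'superjson'
import { Redis } from '@upstash/redis'

const redis = Redis.fromEnv()

// v0: one giant compressed blob per API key
async function storeStreams(apiKey, streams) {
  const blob = gzipSync(superjson.stringify(streams)).toString('base64')
  await redis.set(apiKey, blob)
}

// Fetching reverses the pipeline: unzip, then deserialize
async function fetchStreams(apiKey) {
  const blob = await redis.get(apiKey)
  if (!blob) return null
  return superjson.parse(gunzipSync(Buffer.from(blob, 'base64')).toString())
}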

Then reality intervened. Enterprise clients requested 100,000+ securities across multiple exchanges. Suddenly, our compressed blobs ballooned to 3 MB, bringing the system to its knees. Time for "Redis v1": each ticker and data point got its own key-value pair, namespaced by API key (e.g., apikey123:ticker-AAPL). The data layer chunked updates and fetches, and Upstash Redis handled it—until it didn't.
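
v1's per-ticker writes, sketched with the same assumed client (the chunk size of 500 is arbitrary):

// v1: one key per ticker, namespaced by API key, written in chunks
async function storeAllTickers(apiKey, tickers) {
  const entries = Object.entries(tickers)
  for (let i = 0; i < entries.length; i += 500) {
    const pipeline = redis.pipeline()
    for (const [ticker, points] of entries.slice(i, i + 500)) {
      pipeline.set(`${apiKey}:ticker-${ticker}`, superjson.stringify(points))
    }
    await pipeline.exec()
  }
}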

Within hours of deploying to our developer community, we jumped from 50,000 keys to nearly a million. Bandwidth exploded from 70 GB over three weeks to 40 GB in a single day. Redis, optimized for small, fast lookups, buckled under the weight of bulk financial data operations. Performance plummeted.

Insight #3: Redis is a precision instrument, not a bulldozer—perfect for atomic operations, disastrous for bulk financial data retrieval.

Act 3: PlanetScale and the SQL Redemption

Enter PlanetScale, a MySQL-based solution built on Vitess—the same technology powering YouTube and Slack. I migrated from Redis key-value pairs to a proper relational schema optimized for financial data:

CREATE TABLE market_data (
  id VARCHAR(36) PRIMARY KEY,
  symbol VARCHAR(255),
  api_key VARCHAR(36),
  price DECIMAL(20,6),
  timestamp BIGINT,
  INDEX idx_api_symbol (api_key, symbol)
);

With Drizzle ORM, the transition was remarkably smooth. Our API endpoints remained unchanged, so our developer community needed no client-side modifications. A background backfill script handled legacy data, running asynchronously via Vercel's waitUntil to avoid blocking responses.
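
The Drizzle side of that schema, plus the waitUntil trick, looked roughly like this. A sketch, not our production code; loadQuotes and backfillLegacyData are hypothetical stand-ins for our actual helpers:

import { mysqlTable, varchar, decimal, bigint, index } from 'drizzle-orm/mysql-core'
import { waitUntil } from '@vercel/functions'

export const marketData = mysqlTable('market_data', {
  id: varchar('id', { length: 36 }).primaryKey(),
  symbol: varchar('symbol', { length: 255 }),
  apiKey: varchar('api_key', { length: 36 }),
  price: decimal('price', { precision: 20, scale: 6 }),
  timestamp: bigint('timestamp', { mode: 'number' }),
}, (table) => ({
  // Mirrors idx_api_symbol from the SQL above
  apiSymbolIdx: index('idx_api_symbol').on(table.apiKey, table.symbol),
}))

// Route handler: respond immediately, let the backfill run afterwards
export async function GET(request) {
  const data = await loadQuotes(request) // hypothetical query helper
  waitUntil(backfillLegacyData()) // hypothetical; runs after the response is sent
  return Response.json(data)
}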

The results? PlanetScale's $30/month tier handled the load with ease. For our 300,000 developer users, it scaled where Redis collapsed.

Insight #4: Relational databases aren't obsolete—they're waiting for the moment when you need both structure and scale simultaneously.

The Verdict: Choose Your Battles Wisely

After three database migrations in 24 hours, I've reached some hard-won conclusions. Client-caching is appealing for financial APIs, but it's rarely worth the complexity unless it's your core differentiator. Dexie works well for dashboard performance, but Redis hacks don't scale for real-time financial feeds. PlanetScale provided the stability we needed, but the journey to get there was unnecessarily painful.

For most financial data use cases (95%), you're better off with server-driven APIs and intelligent caching using tools like React Query or Next.js prefetching. Client-side storage promises the world, but it's a specialized tool that rarely accommodates high-volume financial data.
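
With React Query, for example, a server-driven quote hook with sane caching is a few lines (the endpoint path and staleTime are illustrative):

import { useQuery } from '@tanstack/react-query'

function useQuote(symbol) {
  return useQuery({
    queryKey: ['quote', symbol],
    // The server owns the data; the client only caches responses
    queryFn: () => fetch(`/api/quotes/${symbol}`).then(r => r.json()),
    staleTime: 5_000, // serve quotes up to 5s old instead of refetching
  })
}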

Insight #5: The best architecture isn't the most sophisticated—it's the one that aligns your data's characteristics with your users' expectations.

Lessons from the Financial Data Trenches

Final Insight: Building for scale isn't about picking winners—it's about recognizing when to pivot and understanding why.

This database odyssey was ultimately rewarding—despite the rage-filled 6 AM emergency merges. If you're convinced client-caching is right for your financial API, proceed with caution. Use Dexie if you're bootstrapping, PlanetScale if you're serious about reliable financial data delivery. Just remember that the simplest solution that meets your requirements is often the best one.

Now, if you'll excuse me, I've got a mobile SDK to build.