Leonard Weise


Most serverless stacks still feel like tiny virtual machines: warm‑up delays, minimum billing, arbitrary limits. But composing unikernel infrastructure with Bun's performance‑first JavaScript runtime changes all of those constraints at once.


1. Why Conventional Serverless Has Plateaued

| Pain point | Typical cloud FaaS | Impact |
| --- | --- | --- |
| Cold start | 100 ms – 2 s | Latency spikes |
| Memory floor | 50 – 100 MB | Idle cost |
| Duration cap | 15 min (AWS), 10 min (GCP) | Workarounds |
| Runtime skew | Varies per provider | Migration friction |

The root cause is architectural: a full OS image inside a container inside a hypervisor—the exact opposite of "just run my code."


2. Unikernels: Minimal OS, Millisecond Boots

A unikernel is an application compiled together with the kernel into a single bootable image. No shell, no package manager, no unused drivers—just the code paths your service needs.

  • Boot time: < 20 ms¹
  • Idle RSS: dozens of kB, not MB
  • Isolation: hardware‑enforced via microVMs
  • Scale‑to‑zero: fast enough that "cold" and "warm" are indistinguishable
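The "cold equals warm" claim is easy to check empirically by timing two consecutive requests against a scale‑to‑zero deployment. A minimal sketch (the endpoint URL is a placeholder, not part of the demo above):

```typescript
// Sketch: measure time-to-first-byte, the metric behind the cold/warm
// comparison. Point it at your own scale-to-zero endpoint.
async function timeToFirstByte(url: string): Promise<number> {
  const start = performance.now();
  const res = await fetch(url);
  await res.arrayBuffer(); // drain the body so timing covers the full response
  return performance.now() - start;
}

// Usage (endpoint is hypothetical):
//   const cold = await timeToFirstByte("https://bun-demo.example.com/"); // wakes the instance
//   const warm = await timeToFirstByte("https://bun-demo.example.com/"); // already running
// On a unikernel backend the two numbers should be nearly identical.
```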

3. Bun: Runtime Tailored for Serverless

Bun rebuilds the Node.js developer contract on JavaScriptCore and Zig:

  • Startup 2–3× faster than V8‑based Node
  • Native bundler, test runner, and package installer baked in
  • Near‑drop‑in Node compatibility → frictionless migration²
  • Bytecode emission (--compile --bytecode), shaving interpretation overhead
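One way the compatibility point plays out: request logic written against standard APIs stays runtime‑agnostic and plugs into Bun's Fetch‑style server with a few lines of glue. A sketch with an invented route:

```typescript
// Sketch: a pure, runtime-agnostic handler. It runs unchanged under Bun or
// Node; only the server wiring differs. The /health route is invented.
function route(pathname: string): { status: number; body: string } {
  if (pathname === "/health") return { status: 200, body: "ok" };
  return { status: 404, body: "not found" };
}

// Under Bun, the same function sits behind the Fetch-style server:
//   Bun.serve({
//     fetch(req) {
//       const { status, body } = route(new URL(req.url).pathname);
//       return new Response(body, { status });
//     },
//   });
```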

4. When Layers Compound

Unikernel boot × Bun startup → a multiplicative win:

| Stage | Node + container | Bun + unikernel |
| --- | --- | --- |
| Boot kernel | 150 ms | – |
| Start container | 300 ms | – |
| Runtime init | 200 ms | < 10 ms |
| Total cold start | 650 ms – 2 s | 20 – 30 ms |

Because memory pressure collapses, providers can pack instances densely → lower per‑request pricing and greener compute³.


5. Hello World in Practice

server.ts

const port = Number(process.env.PORT ?? 3000);

Bun.serve({
  port,
  fetch(req) {
    // Request objects don't serialize directly; echo the useful fields instead.
    const body = JSON.stringify({ method: req.method, url: req.url });
    return new Response(body, { headers: { 'content-type': 'application/json' } });
  },
});
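With the server running locally (bun run server.ts), a quick smoke test confirms it answers with JSON; a sketch assuming the default port:

```typescript
// Sketch: smoke-test the server above, assuming it is already listening
// locally on port 3000.
async function smokeTest(base: string): Promise<boolean> {
  const res = await fetch(`${base}/`);
  await res.text(); // drain the body
  const type = res.headers.get("content-type") ?? "";
  return res.status === 200 && type.includes("json");
}

// Usage:
//   console.log(await smokeTest("http://localhost:3000"));
```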

Kraftfile

spec: v0.6
name: bun-demo
runtime: base-compat:latest
labels:
  cloud.unikraft.v1.instances/scale_to_zero.policy: "on"
cmd: ["/usr/bin/bun", "run", "/app/server.ts"]

Build once → deploy; the platform emits a unikernel image of roughly 3 MB.

Observed metrics on a t4g.small host:

  • Cold start p95: 26 ms
  • Idle memory: 4.8 MB
  • Throughput: 55 k req/s (wrk, 16 threads)
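A p95 figure like the one above is just a rank statistic over raw latency samples; a small helper makes the computation explicit (the sample values here are invented for illustration):

```typescript
// Sketch: nearest-rank p95 over latency samples in milliseconds.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank method: the value below which p% of samples fall.
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[idx];
}

const coldStarts = [18, 21, 22, 24, 26, 19, 25, 23, 20, 27]; // made-up samples
console.log(`p95: ${percentile(coldStarts, 95)} ms`); // → p95: 27 ms
```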

6. Design Consequences

  • Granular services: sub‑20 ms spin‑up permits FaaS‑scale decomposition without a latency tax.
  • Cost model flips: paying only for active milliseconds aligns OpEx with real usage.
  • No artificial ceilings: duration, memory, and language limits disappear; your codebase decides, not the platform.

7. Adopting the Stack

  1. Swap runtime: bun install + test; most Node apps run unmodified.
  2. Container‑to‑unikernel: point Unikraft at a Dockerfile; its toolchain strips the image automatically.
  3. Tune policies: e.g., scale_to_zero.cooldown_time_ms to balance thrash vs. cost.
  4. Ship: a Git push triggers build, signing, and deployment across regions.
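For step 3, the cooldown setting can live alongside the scale‑to‑zero policy label shown in the Kraftfile earlier. The full label path and the 2000 ms value below are assumptions, sketched to show the shape:

```yaml
labels:
  cloud.unikraft.v1.instances/scale_to_zero.policy: "on"
  # Assumed label path, mirroring the policy label above; the value is illustrative.
  # Lower values scale down sooner (cheaper, more cold hits); higher values keep
  # the instance warm longer after the last request.
  cloud.unikraft.v1.instances/scale_to_zero.cooldown_time_ms: "2000"
```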

Migration effort is measured in days, not sprints.


8. Looking Ahead

Serverless 2.0 will not be about larger functions or richer dashboards—it will be about removing hidden layers until code and CPU are virtually adjacent. Unikernels and Bun are early but tangible proof: performance, economics, and developer ergonomics can improve simultaneously.

The next wave of platforms will build these principles in by default; until then, rolling your own with Unikraft + Bun gives you today what others will call table‑stakes tomorrow.


Footnotes

  1. Cold‑start measured as the Δ time between first packet in and first byte out on ARM‑based microVMs.
  2. Edge cases: native addons linked against V8 need recompilation or polyfills.
  3. Higher server density → lower watt‑hours per request; verify with your own carbon accounting.