Building PartAI: Architecture, Tech Stack & Engineering Decisions

February 11, 2026 • By Engineering Team • 12 min read

Creating a real-time multiplayer game with AI-generated content presents unique technical challenges. In this article, we'll walk through the architecture decisions, technology choices, and engineering trade-offs that went into building PartAI.

Tech Stack at a Glance

  • Frontend: Next.js 15 (App Router), React 19, TypeScript, Tailwind CSS
  • Backend: Next.js API Routes, Supabase (PostgreSQL, Auth, Realtime)
  • AI Services: OpenAI (GPT-4o, DALL-E 3), Google Gemini, Fal.ai (Flux models)
  • Real-time: Supabase Realtime (WebSocket-based)
  • Deployment: Vercel (Edge Functions)
  • Monitoring: Vercel Analytics, Supabase Logs

Why Next.js App Router?

When we started PartAI, Next.js 13 had just introduced the App Router. Despite being new, we chose it for several key reasons:

Server Components for Optimal Loading

Our lobby list page, player dashboard, and documentation all benefit from Server Components. We can fetch data directly in components without client-side loading states:

// app/lobbies/page.tsx
export default async function LobbiesPage() {
  const supabase = createSupabaseServerClient();
  const { data: lobbies } = await supabase
    .from('lobbies')
    .select('*')
    .eq('game_state', 'waiting');
  
  return <LobbyList lobbies={lobbies} />;
}

No useState, no useEffect, no loading spinners. Data is fetched on the server and streamed to the client.

Streaming & Suspense

Game pages have complex data requirements: lobby details, player list, current round, messages. With streaming, we can show the page shell immediately while data loads progressively:

<Suspense fallback={<LobbyShell />}>
  <LobbyContent lobbyId={id} />
</Suspense>

Supabase: The Backbone

Supabase isn't just a database for us—it's our entire backend infrastructure. Here's what we use:

PostgreSQL with Row Level Security (RLS)

All our data access is secured at the database level. For example, players can only see lobbies they're members of:

CREATE POLICY "Players can view their lobbies"
ON lobby_players FOR SELECT
USING (
  auth.uid() = profile_id 
  OR auth.uid() = anonymous_id
);

This means we never have to check permissions in application code. The database handles it.

Supabase Realtime for Live Updates

The hardest part of any multiplayer game is keeping all clients in sync. Supabase Realtime solved this elegantly:

const channel = supabase
  .channel(`lobby:${lobbyId}`)
  .on('postgres_changes', {
    event: '*',
    schema: 'public',
    table: 'lobbies',
    filter: `id=eq.${lobbyId}`
  }, (payload) => {
    setLobby(payload.new);
  })
  .subscribe();

When any player updates the lobby (starts game, advances round, etc.), all connected clients receive the update instantly. No polling, no manual WebSocket management.

Edge Functions via Supabase

For AI-heavy operations, we use Supabase Edge Functions (Deno-based) to:

  • Generate images asynchronously (Flux models can take 30+ seconds)
  • Process AI responses without blocking the main API
  • Handle webhooks from external AI providers
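A webhook handler along these lines typically validates and unpacks the provider's payload before writing the resulting image URL back to the lobby row. The payload shape below is a hypothetical sketch for illustration, not Fal.ai's actual schema:

```typescript
// Hypothetical shape of an image-generation webhook payload; the real
// provider payload may differ. This is an illustrative sketch only.
interface ImageWebhookPayload {
  request_id: string;
  status: 'OK' | 'ERROR';
  payload?: { images?: { url: string }[] };
}

// Extract the first image URL from a completed webhook, or null if the
// job failed or returned no images.
function extractImageUrl(body: ImageWebhookPayload): string | null {
  if (body.status !== 'OK') return null;
  return body.payload?.images?.[0]?.url ?? null;
}
```

Keeping the parsing logic in a pure function like this makes it easy to unit-test separately from the Deno request handler.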

Real-Time Architecture

Synchronizing 4-8 players in a game with sub-second latency requirements was our biggest challenge. Here's our approach:

Optimistic Updates

When a player submits a guess or prompt, we:

  1. Immediately update local UI (optimistic)
  2. Send mutation to API
  3. Wait for Realtime broadcast to confirm
  4. Reconcile if there's a conflict (rare)

This gives instant feedback while maintaining consistency. Players never see lag in their own actions.
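The four steps above can be sketched as a small buffer that tracks unconfirmed entries and reconciles them against what the server broadcasts. Names and shapes here are illustrative, not PartAI's actual store:

```typescript
type Guess = { id: string; text: string; confirmed: boolean };

// Minimal sketch of the optimistic-update flow: entries are shown
// immediately as unconfirmed, then confirmed or dropped when the
// Realtime broadcast arrives.
class GuessBuffer {
  guesses: Guess[] = [];

  // Step 1: show the guess immediately, marked unconfirmed.
  submitOptimistic(id: string, text: string) {
    this.guesses.push({ id, text, confirmed: false });
  }

  // Steps 3-4: confirm guesses the server acknowledged; drop any the
  // server rejected or never saw (the rare conflict case).
  reconcile(serverIds: string[]) {
    this.guesses = this.guesses
      .filter((g) => g.confirmed || serverIds.includes(g.id))
      .map((g) => ({ ...g, confirmed: true }));
  }
}
```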

State Management Strategy

We use Zustand for local state and Supabase Realtime as our "source of truth":

const useLobbyStore = create((set) => ({
  lobby: null,
  players: [],
  messages: [],
  
  syncFromSupabase: (data) => set(data),
  optimisticUpdate: (update) => set(update),
}));

// Realtime sync (same subscription pattern as above)
supabase
  .channel(`lobby:${lobbyId}`)
  .on('postgres_changes', {
    event: '*',
    schema: 'public',
    table: 'lobbies',
    filter: `id=eq.${lobbyId}`
  }, (payload) => {
    useLobbyStore.getState().syncFromSupabase({ lobby: payload.new });
  })
  .subscribe();

AI Integration: Multi-Provider Strategy

We deliberately support multiple AI providers for resilience and cost optimization:

  • OpenAI GPT-4o: text generation and strategy (best reasoning quality)
  • Google Gemini: fake news and AI deception (fast, cost-effective)
  • DALL-E 3: premium image quality (best prompt adherence)
  • Flux (via Fal.ai): standard images (fast, affordable)

Graceful Degradation

If an AI provider fails, we have fallback mechanisms:

async function generateImage(prompt: string) {
  try {
    return await falai.generate(prompt);
  } catch (error) {
    console.error('Fal.ai failed, falling back to DALL-E:', error);
    return await openai.images.generate({ prompt });
  }
}
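The two-provider try/catch above generalizes naturally to an ordered chain: walk the providers in priority order and return the first success. The provider functions here are placeholders for real SDK calls:

```typescript
type ImageProvider = (prompt: string) => Promise<string>;

// Try each provider in order, returning the first successful result.
// If every provider fails, rethrow the last error seen.
async function generateWithFallback(
  providers: ImageProvider[],
  prompt: string,
): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider(prompt);
    } catch (error) {
      lastError = error; // try the next provider in the chain
    }
  }
  throw lastError ?? new Error('No image providers configured');
}
```

Ordering the chain by cost keeps the cheap path (Flux) as the default while the premium path (DALL-E) absorbs outages.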

Performance Optimizations

Image CDN & Caching

Generated images are stored in Supabase Storage and cached at the edge via Vercel. This means:

  • First load: ~200ms (Supabase Storage)
  • Subsequent loads: ~50ms (Edge cache)
  • Regional CDN: Images are served from the nearest location to players
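Because a generated image never changes after upload, it can be cached aggressively at every layer. A sketch of the headers one might attach when serving such assets (the specific max-age values are illustrative choices, not PartAI's exact configuration):

```typescript
// Generated images are immutable once written, so long-lived cache
// headers are safe for both browsers and the edge cache.
function imageCacheHeaders(): Record<string, string> {
  return {
    // Browsers may keep the image for up to a year without revalidating.
    'Cache-Control': 'public, max-age=31536000, immutable',
    // Separate directive for the CDN layer (supported by Vercel).
    'CDN-Cache-Control': 'public, s-maxage=31536000',
  };
}
```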

Database Connection Pooling

We use Supabase's connection pooler (PgBouncer) in transaction mode for API routes:

// API routes create a fresh client per request; the underlying database
// connections go through the pooler rather than direct connections
const supabase = createClient(
  process.env.SUPABASE_URL,
  process.env.SUPABASE_ANON_KEY,
  { db: { schema: 'public' } }
);

This allows us to handle 1000+ concurrent API requests without exhausting database connections.

React Compiler & Automatic Memoization

With the React Compiler (enabled alongside React 19), we get automatic memoization of components and values. This eliminated roughly 40% of our manual useMemo/useCallback calls.

Security Considerations

Rate Limiting

We use Vercel's Edge Middleware for rate limiting:

  • 10 requests/minute for anonymous users
  • 100 requests/minute for authenticated users
  • Cloudflare Turnstile CAPTCHA for sensitive actions
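A minimal fixed-window limiter implementing the policy above might look like the sketch below. A production deployment would back this with a shared store such as Redis rather than per-instance memory, since Edge Middleware runs in many isolates:

```typescript
// Fixed-window rate limiter: each key gets `limit` requests per window.
class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(
    private limit: number,
    private windowMs: number = 60_000,
  ) {}

  // Returns true if the request is allowed, false if over the limit.
  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // New key, or the previous window expired: start a fresh window.
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}
```

Two instances with limits of 10 and 100 would cover the anonymous and authenticated tiers, keyed by IP or user ID respectively.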

Prompt Injection Protection

Players could try to manipulate AI responses via prompt injection. We sanitize inputs:

function sanitizePrompt(input: string): string {
  // Remove system-level instructions
  return input
    .replace(/system:|assistant:|user:/gi, '')
    .replace(/ignore previous instructions/gi, '')
    .slice(0, 500); // Max length
}

Lessons Learned

✅ What Worked
  • Supabase Realtime eliminated 90% of our sync complexity
  • Next.js App Router SSR improved SEO dramatically
  • Multi-provider AI strategy prevented downtime

⚠️ What We'd Change
  • Start with a monorepo from day one (Turborepo)
  • Implement E2E tests earlier (Playwright)
  • Use a state machine library for complex game logic (XState)

Open Source

We're considering open-sourcing parts of PartAI, particularly:

  • Our Supabase Realtime hooks library
  • Multi-provider AI abstraction layer
  • Game state management patterns

If you're interested in contributing or learning more, reach out via our About page.

Have questions about our tech stack? Join the discussion on our community forum.