MentorMe
4 min read

Supabase in Production — 10 Things Nobody Tells You

RLS footguns, connection pooling, pg_cron, edge functions — the unmarked dials on the dashboard.

Supabase · Postgres · MentorMe

Supabase is the most forgiving backend we've ever shipped on. It's also the most dangerous, because the dashboard makes everything look like a toy.

It isn't. It's Postgres. And Postgres in production punishes the things the tutorials don't teach you.

We run MentorMe on Supabase. Community, auth, payments, agent memory, the lot. Along the way we hit every footgun the docs don't warn you about. Here are the ten we wish someone had told us on day one.

First: Row Level Security is not optional. If you disable RLS to move faster, you will leak data. We've seen it on four separate client projects this year alone. Turn RLS on from the first migration, write policies per table, and test them with the anon key in a separate browser session before you ship. The policy editor in the dashboard is fine for reading, but write your policies in SQL migrations and check them into git. Otherwise you'll never know when they changed.
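If it helps to make that concrete, here is a sketch of the same out-of-band check done from a plain script instead of a second browser, hitting the project's PostgREST endpoint with the anon key. The URL, key, and `profiles` table are placeholders for your own project.

```typescript
// Sketch: verify RLS from outside the dashboard using only the anon key.
// SUPABASE_URL, SUPABASE_ANON_KEY, and the `profiles` table are
// placeholders -- substitute your own project values.

// PostgREST returns 200 with an empty array when RLS filters out every
// row, so "no error" does not mean "no leak": check the rows themselves.
export function leaksRows(status: number, rows: unknown[]): boolean {
  return status === 200 && rows.length > 0;
}

export async function checkAnonRead(url: string, anonKey: string): Promise<boolean> {
  const res = await fetch(`${url}/rest/v1/profiles?select=*`, {
    headers: { apikey: anonKey, Authorization: `Bearer ${anonKey}` },
  });
  const rows = (await res.json()) as unknown[];
  return leaksRows(res.status, rows); // true means your policy leaks
}
```

Run a check like this in CI for every table that holds user data, not just once before launch.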

Second: the wrong connection string will betray you the moment traffic spikes. Supabase gives you a session pooler and a transaction pooler. Serverless functions (Vercel, Netlify, Cloudflare) must use the transaction pooler. If you point a serverless function at the direct connection string, every cold start opens a new connection and you hit the Postgres connection limit faster than you can refresh the dashboard. Use the pooler URL. Always.
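A small sketch of how we keep this from regressing, assuming our own naming conventions (not Supabase's): one function decides which string a runtime gets, and a sanity check looks for the pooler port, since on current projects the transaction pooler listens on 6543 and the direct connection on 5432 (verify against your project's connect panel).

```typescript
// Sketch: one place that decides which connection string a runtime gets.
// The DbUrls shape and the serverless flag are our convention.
interface DbUrls {
  direct: string;      // db.<ref>.supabase.co:5432 -- migrations, long-lived servers
  transaction: string; // *.pooler.supabase.com:6543 -- serverless functions
}

export function connectionString(urls: DbUrls, serverless: boolean): string {
  // Cold starts on the direct string each burn a real Postgres slot,
  // so serverless always goes through the transaction pooler.
  return serverless ? urls.transaction : urls.direct;
}

export function isTransactionPooler(url: string): boolean {
  // Transaction pooler listens on port 6543; direct is 5432.
  return url.includes(":6543/");
}
```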

"And Postgres in production punishes the things the tutorials don't teach you."

Third: pg_cron exists and nobody talks about it. You don't need a separate worker service for scheduled jobs. You can run SQL on a cron schedule directly inside Postgres. Nightly cleanups, weekly digest emails, stale session purges — all of it runs in Supabase with one enable command and a SQL statement. We moved three cron workers into pg_cron last quarter and saved hours of maintenance.
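For concreteness, the whole setup is two statements. They belong in a checked-in migration; they appear here as string constants only so the snippet stays self-contained, and the job name, schedule, and `sessions` table are illustrative.

```typescript
// Hypothetical migration contents: enable the extension once, then
// register a job. cron.schedule(name, cron_expression, command) is
// the real pg_cron signature.
export const enablePgCron = `create extension if not exists pg_cron;`;

export const purgeStaleSessions = `
select cron.schedule(
  'purge-stale-sessions',  -- job name, must be unique
  '0 3 * * *',             -- every night at 03:00 UTC
  $$ delete from sessions where expires_at < now() $$
);`;
```

Unscheduling is `select cron.unschedule('purge-stale-sessions');`, and job history is queryable, so failed runs don't vanish silently.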

Fourth: edge functions are not the answer to everything. They're great for webhooks, Stripe signature verification, AI streaming responses. They're terrible for long-running operations. If your function takes more than a few seconds, you're using the wrong tool. Put long jobs in a proper queue (we use pg-boss, which also runs inside Postgres) and let the edge function enqueue and return fast.
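The enqueue-and-return shape looks roughly like this. `Enqueue` stands in for whatever your queue exposes (with pg-boss it would be `boss.send(queueName, payload)`); the handler signature and the job name are illustrative.

```typescript
// Sketch: the edge function does only cheap work, hands the slow part
// to a queue, and returns immediately. A separate worker drains the
// queue on its own schedule.
type Enqueue = (queue: string, payload: unknown) => Promise<void>;

export async function handleWebhook(
  body: { videoUrl: string },
  enqueue: Enqueue,
): Promise<{ status: number; body: string }> {
  // Validate, enqueue, return. Nothing here should take seconds.
  await enqueue("transcode-video", { url: body.videoUrl });
  return { status: 202, body: "queued" }; // 202: accepted, not done
}
```

The injected `enqueue` also makes the handler trivially testable without a database.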

Fifth: the storage bucket defaults are too permissive. When you create a bucket, it's public by default in a lot of templates. Check this. Set bucket policies the same way you set RLS policies. And never, ever let the client upload directly to a bucket without a signed URL. We've audited apps where anyone with the anon key could write arbitrary files to the user-avatars bucket. That is a bug bounty waiting to happen.
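One way to wire that up, assuming a `user-avatars` bucket and our own path convention: the server builds a per-user path and mints a signed upload URL, so the client never holds a bucket-wide write credential. The supabase-js call is shown in a comment because it needs a live client.

```typescript
// Sketch: never let the client pick its own storage path.
export function avatarPath(userId: string, fileName: string): string {
  // Namespacing by user id lets a storage policy restrict writes to
  // the caller's own folder; the replace strips path tricks like "../".
  const safe = fileName.replace(/[^a-zA-Z0-9._-]/g, "_");
  return `${userId}/${safe}`;
}

// Server-side, with a real supabase-js client:
//   const { data, error } = await supabase
//     .storage.from("user-avatars")
//     .createSignedUploadUrl(avatarPath(user.id, file.name));
// The client uploads to data.signedUrl and nothing else.
```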

Sixth: the database has a hard connection limit, and you'll hit it sooner than you think. On the free tier it's 60. On the small paid plan it's around 200. If you're running a Next.js app with server components, each render can open a connection. Use the pooler, yes, but also use Prisma or Drizzle with a connection limit set explicitly. We cap ours at 10 per instance. When you scale, you scale instances, not connections per instance.
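The arithmetic is worth writing down once. A rough budget helper, with numbers that are ours rather than Supabase's:

```typescript
// Back-of-envelope connection budget. The reserve covers migrations,
// dashboard sessions, and cron; adjust all three numbers to your plan.
export function maxInstances(
  dbConnectionLimit: number, // e.g. ~200 on a small paid plan
  perInstanceLimit: number,  // e.g. 10 -- set it explicitly in your ORM
  reserved = 20,             // headroom you never hand to app instances
): number {
  return Math.floor((dbConnectionLimit - reserved) / perInstanceLimit);
}
```

With Prisma the per-instance cap goes in the database URL as `connection_limit`; with Drizzle over node-postgres it's the pool's `max`. Either way, the point is making the number explicit instead of trusting defaults.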

Seventh: realtime is a separate product with its own rules. Subscribing to every table change from the client sounds cool until you realize you're pushing every row edit to every connected browser. Use channels. Filter on the server. And turn off realtime for tables that don't need it, which is most of them. The default in the dashboard is to enable it for everything, and that default is wrong for production.
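Concretely, filtering looks like this with supabase-js v2. The channel call sits in a comment because it needs a live client; the builder itself is just the `column=eq.value` filter syntax Realtime shares with PostgREST.

```typescript
// Sketch: subscribe to one room's inserts instead of the whole table.
export function eqFilter(column: string, value: string | number): string {
  return `${column}=eq.${value}`;
}

// With a real supabase-js client:
//   supabase
//     .channel("room-42")
//     .on("postgres_changes",
//         { event: "INSERT", schema: "public", table: "messages",
//           filter: eqFilter("room_id", 42) },
//         (payload) => render(payload.new))
//     .subscribe();
```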


Eighth: Supabase Auth's JWT expiration is configurable and you should configure it. The default is one hour, which is fine for most apps. But if you have long-lived sessions (we do, for the AI Operator Stack), you need refresh tokens handled correctly on the client. Next.js middleware is where this lives. Check the Supabase SSR helpers. The old auth-helpers library is deprecated. If you're still on it, migrate.
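The client-side half of this is a timing decision. A minimal sketch, with a 60-second margin that is our choice rather than a Supabase default; in Next.js the actual refresh happens in middleware via the `@supabase/ssr` helpers, this just decides when.

```typescript
// Sketch: refresh before the token expires, not after a 401.
// `expSeconds` is the JWT `exp` claim (seconds since the epoch).
export function shouldRefresh(
  expSeconds: number,
  nowMs: number = Date.now(),
  marginSeconds = 60, // refresh while there's still runway
): boolean {
  return expSeconds * 1000 - nowMs < marginSeconds * 1000;
}
```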

Ninth: back up your database, and do it outside of Supabase. Their point-in-time recovery is excellent. But it's still one account, one dashboard, one credential away from disaster. We pg_dump to S3 nightly from a separate compute environment. It takes twenty minutes to set up and it's saved us once already.
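Our nightly job is essentially one pg_dump invocation plus an upload. A sketch of the command we build, with illustrative file naming; the S3 copy is left as a comment.

```typescript
// Sketch: assemble the nightly pg_dump command. Point it at the
// direct connection string, not the pooler.
export function pgDumpArgs(directUrl: string, outFile: string): string[] {
  return [
    "pg_dump",
    "--format=custom", // compressed, restorable with pg_restore
    "--no-owner",      // portable across roles on restore
    `--file=${outFile}`,
    directUrl,
  ];
}

// Afterwards, from the same job:
//   aws s3 cp <outFile> s3://your-backup-bucket/
```

Run it from compute that is not your Supabase account, which is the whole point.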

Tenth: read the Postgres logs. The dashboard has a logs explorer. Most people never open it. Slow queries, failed auth attempts, RLS policy violations — they're all there. Set an alert on error rate. Look at the slow query log once a week. You'll find bugs you didn't know you had.
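The weekly check fits in one query. It needs the pg_stat_statements extension (enable it if your project doesn't already have it); the SQL is carried as a string constant only so the snippet stays self-contained.

```typescript
// Hypothetical weekly slow-query check: the ten queries with the
// worst mean execution time, from pg_stat_statements.
export const topSlowQueries = `
select
  calls,
  round(mean_exec_time::numeric, 1)  as mean_ms,
  round(total_exec_time::numeric, 0) as total_ms,
  query
from pg_stat_statements
order by mean_exec_time desc
limit 10;`;
```

Sort by `total_exec_time` instead when you want the queries that cost the most overall rather than the slowest single calls.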

Action step: turn on RLS for every table in your Supabase project today and write one policy for each.

Pro is $79/month or $597 one-time (Pro Lifetime). Full course library + live events + office hours.
