
Securing My Blog's Secrets with Varlock and Doppler

Last updated: 09.05.2026

Summary

Traditional .env files in Next.js projects store secrets in plain text on local machines, duplicate them across environments, and expose them through accidental commits or misconfigured deployments. Varlock replaces dotenv with a committed .env.schema file that defines each variable's type, requirements, and sensitivity without storing actual values. It integrates with Next.js as a drop-in replacement for @next/env, adding TypeScript types, startup validation, sensitivity-aware client bundling, and log redaction. Doppler serves as the secrets provider for its generous free tier, clean dashboard, per-environment configs, and especially its native Vercel sync, which keeps secrets updated without tokens or runtime API calls. Locally, Varlock's exec() calls the Doppler CLI to fetch secrets into memory; in production, Vercel's synced environment variables are used directly, with the same schema validating both paths. The result: zero secrets on disk, early configuration error detection, automatic sensitivity handling, and an AI-friendly schema. Next time, I would skip Varlock's Doppler plugin and rely solely on the CLI-based exec() approach, despite a small startup cost.

The Problem With .env Files

Every Next.js project starts the same way: you create a .env file, paste in your API keys, and add it to .gitignore. Job done, right?

Not quite. That file sits on your machine in plain text. It gets copied into CI pipelines, shared over Slack, duplicated across .env.local and .env.production. One accidental commit or a misconfigured deploy and your tokens are exposed. Even with .gitignore, the secrets live on every developer's laptop, in every backup, in every disk image. I wanted a setup where secrets never touch the filesystem at all.

What Is Varlock?

Varlock is a modern replacement for dotenv, built on the @env-spec specification. Instead of scattering environment variables across multiple .env files with no structure, you define a single .env.schema file that describes every variable your app needs: its type, whether it's required, and whether it's sensitive.

The schema is committed to version control. It contains no secret values, only the structure and resolver functions. From this schema, Varlock gives you automatic TypeScript type generation, validation at startup with clear error messages, sensitivity markers that control client-side bundling, and log redaction so sensitive values never leak into terminal output.
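As a rough sketch of what a schema like this can look like (the decorator names below follow the @env-spec conventions as I understand them, so treat them as assumptions to check against Varlock's reference; the variable name and Doppler project are the ones used later in this post):

```env-spec
# @defaultSensitive=true    # everything is secret unless marked otherwise
# ---

# @required
SANITY_API_READ_TOKEN=exec(`doppler secrets get SANITY_API_READ_TOKEN --plain -p ask-blog -c dev`)

# @sensitive=false @type=url    # safe to expose in the client bundle
NEXT_PUBLIC_SITE_URL=https://example.com
```

Note that nothing here is a secret: the file only says which variables exist, how to resolve them, and how to treat them.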

For Next.js, Varlock provides a drop-in integration that replaces the built-in @next/env loader. You configure a package manager override, wrap your Next config with their plugin, and everything just works. Your existing process.env calls keep functioning while you gain validation and type safety on top.
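In outline, the wiring looks something like this (the package name and plugin function below are assumptions from memory, not verified against the current Varlock release; consult its Next.js integration docs for the exact names):

```typescript
// next.config.ts — sketch only; the import path and plugin name are
// placeholders to illustrate the shape of the integration.
import type { NextConfig } from "next";
import { varlockNextConfigPlugin } from "@varlock/nextjs-integration";

const nextConfig: NextConfig = {
  reactStrictMode: true,
};

// Wrapping the config lets Varlock load and validate the schema
// before Next.js starts.
export default varlockNextConfigPlugin()(nextConfig);
```

This sits alongside a package-manager override that points @next/env at Varlock's loader, which is what keeps existing process.env reads working unchanged.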

Why Doppler as the Secrets Provider

Varlock supports multiple secrets backends through plugins and CLI integrations. I went with Doppler for a few reasons.

The free tier is generous: 50 service tokens, 10 projects, and 3 users. The dashboard is clean. You create a project, it gives you dev, staging, and production configs out of the box. Add your secrets, and they are available instantly through the CLI or API.
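The local setup is a short CLI session (flags as I recall them; doppler help has the authoritative list):

```bash
# One-time browser-based authentication; no plain-text token on disk
doppler login

# Point this repository at the blog's project and its dev config
doppler setup -p ask-blog -c dev

# Add a secret (prompts for the value), then read it back to confirm
doppler secrets set SANITY_API_READ_TOKEN
doppler secrets get SANITY_API_READ_TOKEN --plain
```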

The killer feature for me was the native Vercel integration. Doppler syncs secrets directly to Vercel as environment variables. No tokens needed at deploy time, no API calls during the build. When you rotate a secret in Doppler, Vercel picks it up on the next deploy automatically.

The Integration: Zero .env Files

Here is what makes this setup different from most tutorials: no .env file contains a single secret. No .env.local, no .env.production, nothing; the one .env file in my project is empty. All secrets are resolved at runtime through two paths, depending on the environment.

For local development, the .env.schema uses Varlock's exec() function to call the Doppler CLI for each secret. A line like SANITY_API_READ_TOKEN=exec(`doppler secrets get SANITY_API_READ_TOKEN --plain -p ask-blog -c dev`) tells Varlock to run that shell command and use the output as the value. The Doppler CLI authenticates through your browser login session, so no tokens or credentials are stored anywhere on disk. You run doppler login once, and every exec() call uses that session.

For production on Vercel, Doppler's native sync pushes all values directly as environment variables. When Varlock sees that a variable is already set in the environment, it skips the exec() call entirely and uses the existing value. The Doppler CLI is not even installed on Vercel, and it does not need to be. The schema validates the values regardless of where they came from.

This dual-path approach means the same .env.schema works everywhere without modification. Locally, exec() fetches from Doppler. On Vercel, environment variables are already present. Varlock validates both.

Security Benefits I Actually Got

The biggest win: truly zero secrets on disk. There is no .env file to accidentally commit, no token to leak in a screenshot, no credential file sitting in a backup. The Doppler CLI authenticates through your OS keychain via a browser-based login. Secrets exist only in memory when your app runs.

Schema validation catches configuration errors before the app starts. If a required variable is missing or a URL field contains something invalid, Varlock tells you exactly what is wrong. No more debugging a cryptic runtime crash because someone forgot to set an environment variable.

The sensitivity model is another practical win. Variables prefixed with NEXT_PUBLIC_ are automatically marked non-sensitive. Everything else is sensitive by default. Varlock redacts sensitive values in logs, so even if you accidentally log a token, it shows up as redacted characters.

Finally, the .env.schema file is AI-friendly by design. When tools like Claude Code work on my project, they can read the schema to understand what configuration exists and what types each variable expects, all without ever seeing the actual secret values. This is a real improvement over the old setup where an AI assistant might read a .env file and expose tokens in a conversation.

What I'd Do Differently

I initially tried using Varlock's Doppler plugin, which uses service tokens for authentication. It worked locally but broke on Vercel because the plugin requires an initialized connection. If the token is missing, every resolver call fails hard. I spent time trying to make it conditional before realising the exec() approach with the Doppler CLI is simpler and more robust.

If I were starting over, I would skip the plugin entirely and go straight to the CLI approach. Set up the Doppler Vercel integration first so production deploys work immediately, then configure the exec() calls for local development. The CLI approach has one tradeoff: each variable makes a separate shell call at startup, which is slightly slower than a bulk API call. In practice, it adds about a second to dev server startup, which is a price I am happy to pay for never storing secrets on disk.