A leaked API key can lead to unexpected costs, abuse of your account, or exposure of sensitive data. Properly managing secrets is not optional but a basic requirement for any AI application that wants to be taken seriously.
Of all the security mistakes we encounter in AI projects, a hardcoded API key in source code is the most common. It is understandable: it is quick and it works. But once that code is on GitHub or shared with someone, the key is compromised. This article explains how to do it right.
Environment variables are configuration values stored outside your application code and made available by the environment in which the application runs. They are typically defined as key-value pairs, such as OPENAI_API_KEY=sk-....
You read them in code via process.env.OPENAI_API_KEY (Node.js) or os.environ.get("OPENAI_API_KEY") (Python). The values are not in your code but are injected by the operating system, a deployment platform, or a secrets manager.
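Reading a key this way in Python and failing fast when it is missing might look like the sketch below (`require_env` is a hypothetical helper name, not part of any library):

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, or fail fast if it is unset."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"{name} is not set")
    return value

# Read the key at startup, not deep inside a request handler:
# api_key = require_env("OPENAI_API_KEY")
```

Failing at startup gives one clear error instead of a confusing authentication failure later.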
For local development you use a .env file in the root of your project. That file contains all the variables you need locally:
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
DATABASE_URL=postgresql://localhost:5432/mydb
Load this file with a library: dotenv in Node.js or python-dotenv in Python. Always add the file to .gitignore. Never commit it, never share it, never leave it in your codebase.
Always create a .env.example file with the names of all variables but without values. You do commit this file, so new team members know which variables they need to set.
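For the .env shown above, the corresponding .env.example contains only the names:

```
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
DATABASE_URL=
```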
In production you do not use .env files but the secrets facilities of your deployment platform, such as Vercel or Netlify environment variables, AWS Secrets Manager, or Azure Key Vault.
These systems store your secrets encrypted and inject them as environment variables when your application starts. Ensure that only the environments that need the secrets have access to them.
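One way to verify at startup that everything was injected is to check all required variables in one place. A sketch, assuming the variable names from the .env example above:

```python
import os

REQUIRED_VARS = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "DATABASE_URL"]

def check_required_env(names) -> None:
    """Raise one clear error listing every missing or empty variable."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))

# Run once at application startup:
# check_required_env(REQUIRED_VARS)
```

Reporting all missing names at once saves a restart per forgotten variable.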
API keys sometimes leak. This can happen through a public GitHub commit, a screenshot, a log file, or a compromised machine. It is good to have a recovery plan ready: revoke the compromised key immediately, generate a new one, update it in every environment, and review usage logs for signs of abuse.
Rotate keys regularly as a preventive measure, even when there is no leak. Most providers make this straightforward.
A common mistake is accidentally logging secrets. If you log a request that contains an authorization header, or if you log the full environment dump on an error, secrets can end up in your log files.
Add redaction filters to your logging configuration and be conscious of what you log. Never call console.log(process.env) or print(os.environ) in production.
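A sketch of such a filter with Python's standard logging module, assuming keys follow the common sk- prefix (adjust the pattern to your providers; it only covers the message string, not structured arguments):

```python
import logging
import re

# Matches strings that look like API keys, e.g. sk-abc123... (assumed shape).
SECRET_PATTERN = re.compile(r"sk-[A-Za-z0-9_-]{8,}")

class RedactSecretsFilter(logging.Filter):
    """Replace anything that looks like an API key before it reaches a handler."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SECRET_PATTERN.sub("[REDACTED]", str(record.msg))
        return True  # keep the record, just with the secret masked

logger = logging.getLogger("app")
logger.addFilter(RedactSecretsFilter())
```

Attaching the filter to the logger means every handler downstream sees only the redacted message.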
Many teams use CI/CD tools like GitHub Actions, GitLab CI, or Jenkins. Here too: never put secrets as plaintext in configuration files. Use the secrets facilities of the platform:
In GitHub Actions you reference a stored secret as ${{ secrets.MY_SECRET }}. Restrict which branches and jobs have access to which secrets.
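In a workflow file, for example, that reference scopes a stored secret to a single step as an environment variable (MY_SECRET is a placeholder name):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Call the API
        env:
          MY_SECRET: ${{ secrets.MY_SECRET }}
        run: ./deploy.sh
```

Setting the variable on the step rather than the whole job keeps the secret out of steps that do not need it.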
Correctly managing environment variables and secrets is one of the simplest steps you can take to make your AI application more secure. At Mach8, structured secret management is standard in every project, not an afterthought.
Want to know more about how Mach8 sets up AI applications securely and manageably? View our AI agents service or get in touch.
We help you go from strategy to implementation. Schedule a no-obligation call.