nono provides two ways to keep credentials out of the sandboxed process:
- Proxy injection (--credential) - The agent talks to a local reverse proxy that injects real API keys on the fly. The credential never enters the sandbox, not even as an environment variable. This is the recommended approach for LLM API keys.
- Environment variable injection (--env-credential) - Loads secrets from the system keystore and injects them as environment variables before the sandbox is applied. Simpler, but the secret is visible in the process environment.
Proxy Injection (Recommended)
The proxy acts as a reverse proxy for configured credential routes. The agent sends plain HTTP to localhost:<port>/<service>/... and the proxy:
- Strips the service prefix
- Injects the real credential as an HTTP header
- Forwards to the upstream over TLS
- Streams the response back
Agent sends: POST http://127.0.0.1:PORT/openai/v1/chat/completions
Proxy sends: POST https://api.openai.com/v1/chat/completions
Authorization: Bearer sk-... (injected from keystore)
The agent never sees the API key. Even if the agent is compromised, it cannot extract credentials from its own environment or memory.
Streaming responses (SSE for chat completions, MCP Streamable HTTP, A2A JSON-RPC) are forwarded without buffering.
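The prefix-stripping rewrite can be sketched in a few lines. This is illustrative Python, not nono's implementation: the route table, port, and function name are hypothetical, and the real proxy also injects headers and speaks TLS upstream.

```python
# Map an agent-side proxy URL to the upstream URL (prefix strip only).
# The routes dict is an illustrative stand-in for nono's route config.
def rewrite(agent_url: str, routes: dict) -> str:
    path = agent_url.split("/", 3)[3]        # "<service>/<rest of path>"
    service, _, rest = path.partition("/")
    return f"{routes[service]}/{rest}"

print(rewrite(
    "http://127.0.0.1:8080/openai/v1/chat/completions",
    {"openai": "https://api.openai.com"},
))
# → https://api.openai.com/v1/chat/completions
```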
Quick Start
# 1. Store credentials in the system keystore
security add-generic-password -s "nono" -a "openai_api_key" -w "sk-..." # macOS
# 2. Run with credential injection
nono run --allow-cwd --network-profile claude-code --credential openai -- my-agent
The proxy sets OPENAI_BASE_URL=http://127.0.0.1:<port>/openai in the child’s environment. Most LLM SDKs respect this variable and redirect API calls through the proxy automatically.
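Inside the sandbox, the SDK picks the base URL up from the environment. A simplified sketch of that lookup (the port here is illustrative, and real SDKs wrap this in their client constructors):

```python
import os

# Simulate the variable nono's proxy sets in the child environment.
os.environ["OPENAI_BASE_URL"] = "http://127.0.0.1:8080/openai"

# Simplified version of what SDKs such as openai-python do internally:
# use the env var when present, otherwise fall back to the public endpoint.
base_url = os.environ.get("OPENAI_BASE_URL") or "https://api.openai.com/v1"
print(base_url)
# → http://127.0.0.1:8080/openai
```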
Storing Credentials
Credentials are stored in the system keyring under the service name nono. The username/account corresponds to the credential_key defined in network-policy.json:
| CLI Service | Keyring Username | Keyring Service |
|---|---|---|
| openai | openai_api_key | nono |
| anthropic | anthropic_api_key | nono |
| gemini | gemini_api_key | nono |
| google-ai | google_generative_ai_api_key | nono |
macOS
security add-generic-password -s "nono" -a "openai_api_key" -w "sk-..."
security add-generic-password -s "nono" -a "anthropic_api_key" -w "sk-ant-..."
security add-generic-password -s "nono" -a "gemini_api_key" -w "your-gemini-key"
Linux
The keyring crate uses service, username, and target attributes. You must use these exact attribute names:
echo -n "sk-..." | secret-tool store --label="nono: openai_api_key" \
service nono username openai_api_key target default
echo -n "sk-ant-..." | secret-tool store --label="nono: anthropic_api_key" \
service nono username anthropic_api_key target default
echo -n "your-gemini-key" | secret-tool store --label="nono: gemini_api_key" \
service nono username gemini_api_key target default
Important: Use username (not account) as the attribute name. The target default attribute is required for the keyring crate to find the entry.
1Password Integration
nono supports 1Password op:// URIs as a credential source anywhere you would use a keyring account name. For CLI env injection, use --env-credential-map <op://...> <ENV_VAR>. This also works in profile-based credentials (env_credentials, custom_credentials). The 1Password CLI (op) must be installed and authenticated.
Finding Your Secret Path
op:// URIs have the format op://<vault>/<item>/<field>. Use the op CLI to discover each segment:
# 1. List your vaults
op vault list
# 2. List items in a vault
op item list --vault Development
# 3. See all fields on an item
op item get "OpenAI API Key" --vault Development
# 4. The resulting URI
op://Development/OpenAI API Key/credential
The field name depends on the item type — “Password” items have a password field, “API Credential” items typically have credential, and custom items use whatever field labels you set. Run op item get to see all available fields for a given item.
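A hypothetical parser for the three segments, shown only to make the format concrete. Note that item names may contain spaces, so only the first two slashes after the scheme act as separators:

```python
def parse_op_uri(uri: str):
    # op://<vault>/<item>/<field>; item names may contain spaces.
    assert uri.startswith("op://"), "not an op:// URI"
    vault, item, field = uri[len("op://"):].split("/", 2)
    return vault, item, field

print(parse_op_uri("op://Development/OpenAI API Key/credential"))
# → ('Development', 'OpenAI API Key', 'credential')
```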
CLI: Direct Environment Variable Injection
Pass an op:// URI to --env-credential-map with an explicit destination env var:
# Single 1Password secret
nono run --allow . --env-credential-map 'op://Development/OpenAI/credential' OPENAI_API_KEY -- my-agent
# Multiple secrets (mixed keyring + 1Password)
nono run --allow . \
--env-credential openai_api_key \
--env-credential-map 'op://Development/Anthropic/api-key' ANTHROPIC_API_KEY \
-- my-agent
Legacy syntax is still accepted for compatibility:
--env-credential 'op://vault/item/field=MY_VAR'.
Profile: Environment Credential Injection
In a profile’s env_credentials section, use an op:// URI as the key instead of a keyring account name:
{
"meta": { "name": "my-agent" },
"env_credentials": {
"op://Development/OpenAI/credential": "OPENAI_API_KEY"
}
}
nono run --profile my-agent -- my-agent
# Child process sees: OPENAI_API_KEY=sk-actual-secret-value
Profile: Proxy Credential Injection
For network API keys, proxy injection is recommended — the child process never sees the real secret. Use an op:// URI in credential_key:
{
"meta": { "name": "my-agent-secure" },
"network": {
"custom_credentials": {
"openai": {
"upstream": "https://api.openai.com/v1",
"credential_key": "op://Development/OpenAI API Key/credential",
"env_var": "OPENAI_API_KEY",
"inject_header": "Authorization",
"credential_format": "Bearer {}"
},
"anthropic": {
"upstream": "https://api.anthropic.com",
"credential_key": "op://Development/Anthropic/api-key",
"env_var": "ANTHROPIC_API_KEY",
"inject_header": "x-api-key",
"credential_format": "{}"
}
},
"credentials": ["openai", "anthropic"]
}
}
nono run --profile my-agent-secure -- my-agent
# Child process sees: OPENAI_API_KEY=nono_sess_a1b2c3... (phantom token)
# Proxy transparently swaps to real Bearer sk-... when forwarding to api.openai.com
Mixed Mode: Environment + Proxy Credentials
You can combine both injection modes in a single profile. Use env_credentials for non-network secrets (database passwords, tokens) and custom_credentials with proxy injection for API keys:
{
"meta": { "name": "mixed-example" },
"env_credentials": {
"op://Infrastructure/Database/password": "DATABASE_PASSWORD"
},
"network": {
"custom_credentials": {
"openai": {
"upstream": "https://api.openai.com/v1",
"credential_key": "op://Development/OpenAI/credential",
"env_var": "OPENAI_API_KEY",
"inject_header": "Authorization",
"credential_format": "Bearer {}"
}
},
"credentials": ["openai"]
}
}
nono run --profile mixed-example -- my-app
# Child process sees:
# - DATABASE_PASSWORD=... (from 1Password via env_credentials)
# - OPENAI_API_KEY=nono_sess_... (phantom token from credentials)
Apple Passwords Integration (macOS)
nono supports Apple Passwords entries via apple-password:// URIs in both --env-credential-map and profile credential fields (env_credentials, custom_credentials.credential_key).
URI format:
apple-password://<server>/<account>
server: website/service hostname (for example, github.com)
account: account/username for that entry (for example, alice@example.com)
nono resolves this URI using macOS security find-internet-password -s <server> -a <account> -w.
CLI: Direct Environment Variable Injection
# Single Apple Passwords secret
nono run --allow . --env-credential-map 'apple-password://github.com/alice@example.com' GITHUB_PASSWORD -- my-agent
# Mixed with keyring + 1Password
nono run --allow . \
--env-credential openai_api_key \
--env-credential-map 'op://Development/OpenAI/credential' OPENAI_API_KEY \
--env-credential-map 'apple-password://github.com/alice@example.com' GITHUB_PASSWORD \
-- my-agent
For Apple Passwords CLI env injection, use --env-credential-map so the credential reference and destination variable are unambiguous.
Profile: Environment Credential Injection
{
"meta": { "name": "my-agent" },
"env_credentials": {
"apple-password://github.com/alice@example.com": "GITHUB_PASSWORD"
}
}
Profile: Proxy Credential Injection
{
"meta": { "name": "my-agent-secure" },
"network": {
"custom_credentials": {
"github_api": {
"upstream": "https://api.github.com",
"credential_key": "apple-password://github.com/alice@example.com",
"env_var": "GITHUB_PASSWORD",
"inject_header": "Authorization",
"credential_format": "token {}"
}
},
"credentials": ["github_api"]
}
}
Environment Variables
When credential routes are configured, the proxy sets SDK-specific base URL environment variables:
| Route | Environment Variable | Value |
|---|---|---|
| openai | OPENAI_BASE_URL | http://127.0.0.1:<port>/openai |
| anthropic | ANTHROPIC_BASE_URL | http://127.0.0.1:<port>/anthropic |
Most LLM SDKs (OpenAI Python, Anthropic Python, etc.) respect these variables and redirect API calls through the proxy automatically.
Credential Route Configuration
The built-in network-policy.json defines default credential routes for the openai, anthropic, gemini, and google-ai services listed in the table above.
Note: OpenAI’s upstream includes /v1 because the OpenAI SDK expects the base URL to include the version prefix. Anthropic’s SDK adds /v1/messages automatically, so its upstream is the root URL.
Using Credentials in Profiles
User profiles can specify which credential services to enable in the network section:
{
"meta": { "name": "my-agent" },
"filesystem": {
"allow": ["$WORKDIR"]
},
"network": {
"network_profile": "claude-code",
"credentials": ["openai", "anthropic"]
}
}
Custom Credential Definitions
For APIs not covered by the built-in services, you can define custom credentials in your profile. This lets you use --credential with any API while keeping credentials out of the sandbox.
{
"meta": { "name": "my-agent" },
"network": {
"network_profile": "minimal",
"credentials": ["openai", "telegram"],
"custom_credentials": {
"telegram": {
"upstream": "https://api.telegram.org",
"credential_key": "telegram_bot_token",
"inject_header": "Authorization",
"credential_format": "Bearer {}"
}
}
}
}
| Field | Required | Default | Description |
|---|---|---|---|
| upstream | Yes | - | Upstream URL to proxy requests to (must be HTTPS, or HTTP for localhost only) |
| credential_key | Yes | - | Keystore account name (alphanumeric and underscores only), op:// URI for 1Password, or apple-password:// URI for Apple Passwords |
| inject_mode | No | header | Credential injection mode: header, url_path, query_param, or basic_auth |
| inject_header | No | Authorization | HTTP header to inject the credential into (used with header and basic_auth modes) |
| credential_format | No | Bearer {} | Format string for the credential value ({} is replaced with the credential) |
| path_pattern | Conditional | - | Required for url_path mode. URL path pattern with {} placeholder (e.g., /bot{}/) |
| path_replacement | No | - | Optional replacement pattern for url_path mode (e.g., /v2/bot{}/) |
| query_param_name | Conditional | - | Required for query_param mode. Query parameter name for credential injection (e.g., key or api_key) |
| env_var | Conditional | - | Explicit environment variable name for the phantom token. Required when credential_key is a URI manager ref (op://... or apple-password://...) |
Important: Use underscores, not hyphens, in credential names (the keys in custom_credentials). The credential name is used to generate environment variables like TELEGRAM_BASE_URL. Shell variable names cannot contain hyphens, so my-api would create MY-API_BASE_URL which cannot be referenced as $MY-API_BASE_URL in shell scripts. Use my_api instead.
Injection Modes
Custom credentials support multiple injection patterns to accommodate different API authentication schemes:
Header Mode (default)
Injects the credential as an HTTP header with optional formatting. This is the most common authentication pattern.
{
"custom_credentials": {
"telegram": {
"upstream": "https://api.telegram.org",
"credential_key": "telegram_bot_token",
"inject_mode": "header",
"inject_header": "Authorization",
"credential_format": "Bearer {}"
}
}
}
URL Path Mode
Replaces a phantom token in the URL path with the real credential. Useful for APIs like Telegram Bot API that embed authentication tokens in the path (e.g., /bot{token}/method).
{
"custom_credentials": {
"telegram": {
"upstream": "https://api.telegram.org",
"credential_key": "telegram_bot_token",
"inject_mode": "url_path",
"path_pattern": "/bot{}/",
"path_replacement": "/bot{}/"
}
}
}
The agent sends requests with a phantom token:
POST http://127.0.0.1:PORT/telegram/bot<NONO_PROXY_TOKEN>/sendMessage
The proxy validates the phantom token matches the session token, then replaces it with the real credential:
POST https://api.telegram.org/bot<REAL_TOKEN>/sendMessage
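From the agent's side, the request URL can be assembled from the session token nono exports. This is a sketch: the port and helper name are illustrative, not part of nono.

```python
import os

# In a real sandbox, nono sets NONO_PROXY_TOKEN; the default here is
# only so the sketch runs standalone.
os.environ.setdefault("NONO_PROXY_TOKEN", "nono_sess_example")

def telegram_url(method: str, port: int = 8080) -> str:
    # Embed the phantom token where the real bot token would normally go;
    # the proxy validates it and swaps in the real credential.
    token = os.environ["NONO_PROXY_TOKEN"]
    return f"http://127.0.0.1:{port}/telegram/bot{token}/{method}"

print(telegram_url("sendMessage"))
```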
Query Parameter Mode
Adds or replaces a query parameter with the credential value. Common for APIs that use URL query parameters for authentication (e.g., Google Maps API).
{
"custom_credentials": {
"google_maps": {
"upstream": "https://maps.googleapis.com",
"credential_key": "google_maps_api_key",
"inject_mode": "query_param",
"query_param_name": "key"
}
}
}
The agent sends requests with a phantom token in the query parameter:
GET http://127.0.0.1:PORT/google_maps/maps/api/geocode/json?key=<NONO_PROXY_TOKEN>&address=...
The proxy validates the phantom token, then replaces it with the real credential:
GET https://maps.googleapis.com/maps/api/geocode/json?key=<REAL_API_KEY>&address=...
The credential value is URL-encoded automatically.
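Agent-side URL construction for this mode might look like the sketch below (port and helper name are illustrative); urlencode handles encoding of the other parameters, mirroring the proxy's encoding of the swapped-in key:

```python
import os
from urllib.parse import urlencode

# In a real sandbox, nono sets NONO_PROXY_TOKEN; the default here is
# only so the sketch runs standalone.
os.environ.setdefault("NONO_PROXY_TOKEN", "nono_sess_example")

def geocode_url(address: str, port: int = 8080) -> str:
    # Phantom token goes in the "key" query parameter; the proxy validates
    # it and replaces it with the real API key when forwarding.
    params = urlencode({"key": os.environ["NONO_PROXY_TOKEN"], "address": address})
    return f"http://127.0.0.1:{port}/google_maps/maps/api/geocode/json?{params}"

print(geocode_url("1600 Amphitheatre Pkwy"))
```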
Basic Auth Mode
Injects a Base64-encoded Basic Authentication header. The credential value should be stored in username:password format in the keystore.
{
"custom_credentials": {
"private_api": {
"upstream": "https://api.example.com",
"credential_key": "example_basic_auth",
"inject_mode": "basic_auth"
}
}
}
Store the credential in username:password format:
# macOS
security add-generic-password -s "nono" -a "example_basic_auth" -w "myuser:mypassword"
# Linux
echo -n "myuser:mypassword" | secret-tool store --label="nono: example_basic_auth" \
service nono username example_basic_auth target default
The proxy automatically Base64-encodes the credential and injects it as Authorization: Basic <encoded>.
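The encoding step is standard HTTP Basic auth; a sketch of the transformation the proxy applies (the helper name is illustrative):

```python
import base64

# Base64-encode the stored "username:password" value and build the
# Authorization header value, as in basic_auth mode.
def basic_auth_header(credential: str) -> str:
    encoded = base64.b64encode(credential.encode()).decode()
    return f"Basic {encoded}"

print(basic_auth_header("myuser:mypassword"))
```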
Phantom Token Validation
For url_path and query_param modes, the agent must include the session token (NONO_PROXY_TOKEN) as a placeholder in the request. The proxy validates this phantom token before replacing it with the real credential. Invalid or missing phantom tokens result in HTTP 401 Unauthorized responses.
Store the credential in the system keystore:
# macOS
security add-generic-password -s "nono" -a "telegram_bot_token" -w "your-bot-token"
# Linux
echo -n "your-bot-token" | secret-tool store --label="nono: telegram_bot_token" \
service nono username telegram_bot_token target default
Then run with the custom credential:
nono run --profile my-agent --credential telegram -- my-bot
Custom credentials can also override built-in services. For example, to route OpenAI requests through a custom proxy:
{
"network": {
"custom_credentials": {
"openai": {
"upstream": "https://my-openai-proxy.example.com/v1",
"credential_key": "my_openai_key",
"inject_header": "Authorization",
"credential_format": "Bearer {}"
}
}
}
}
Security Validation
Custom credentials are validated at startup:
- Upstream URL must be HTTPS (HTTP is only allowed for localhost, 127.0.0.1, or ::1)
- Credential key must be alphanumeric (letters, numbers, and underscores only)
Invalid configurations will fail with a clear error message before the sandbox is applied.
Session Token Authentication
Reverse proxy requests are authenticated using an X-Nono-Token header containing the session token. The proxy generates a unique 256-bit token per session and passes it to the child via the NONO_PROXY_TOKEN environment variable. Every request to a credential route must include this header — requests without a valid token are rejected with 407 Proxy Authentication Required.
This prevents other localhost processes from accessing the credential injection routes.
Endpoint Filtering
Credential routes can be restricted to specific HTTP method+path combinations using --allow-endpoint or endpoint_rules in custom credential definitions. See Networking — Endpoint Filtering for full documentation including pattern syntax.
WSL2 Limitations
On WSL2, proxy-based credential injection (--credential) is blocked by default. The proxy itself works, but the network lockdown that prevents the child from bypassing the proxy cannot be kernel-enforced — WSL2’s seccomp notify conflict (microsoft/WSL#9548) blocks the fallback, and Landlock V4 (kernel 6.7+) is not yet available.
Environment variable injection (--env-credential) works normally on WSL2 — it does not depend on the proxy.
To opt in to proxy mode without network enforcement, set wsl2_proxy_policy: "insecure_proxy" in your profile’s security config. See Credential Proxy on WSL2 for details.
Security Properties
- Credentials never enter the sandbox - The agent process has no access to API keys, even through environment variables or memory
- Session token isolation - Reverse proxy routes require X-Nono-Token authentication; CONNECT tunnels use Proxy-Authorization
- Keystore-backed storage - Credentials are loaded from the OS keyring (Keychain on macOS, Secret Service on Linux), not from plaintext files
- Zeroized in memory - Credential values are stored in Zeroizing<String> and wiped from memory on drop
- Session-scoped - Credentials are loaded once at proxy startup and never written to disk or logged
- Header stripping - The proxy strips any Authorization or x-api-key headers from the agent's request before injecting the real credential, preventing the agent from overriding the injected value
Audit Logging
Reverse proxy requests are logged with the service name and status code, but credential values are never logged:
ALLOW REVERSE openai POST /v1/chat/completions -> 200
ALLOW REVERSE anthropic POST /v1/messages -> 200
Environment Variable Injection
For credentials that don’t need proxy-based protection (e.g., database URLs, custom tokens), you can load secrets from the system keystore and inject them as environment variables.
Quick Start
# 1. Store a secret in the system keystore
security add-generic-password -s "nono" -a "openai_api_key" -w "sk-..." # macOS
# 2. Run with env-credential injection
nono run --allow-cwd --env-credential openai_api_key -- my-agent
The secret is loaded from the keystore and injected as $OPENAI_API_KEY (uppercased account name).
How It Works
1. nono loads secrets from keystore BEFORE sandbox is applied
2. Sandbox is applied (blocks keystore access)
3. Secrets injected as environment variables
4. Command executed with secrets available
5. Secrets zeroized from memory after exec()
This ensures the sandboxed process cannot access the keystore directly — it only receives the specific secrets you authorized.
Storing Secrets
All nono secrets are stored under the service name nono in the system keystore.
macOS Keychain
# Interactive (prompts for password)
security add-generic-password -s "nono" -a "openai_api_key" -w
# Non-interactive (password on command line - less secure)
security add-generic-password -s "nono" -a "openai_api_key" -w "sk-..."
# Update an existing secret
security add-generic-password -s "nono" -a "openai_api_key" -w "new-value" -U
# Delete a secret
security delete-generic-password -s "nono" -a "openai_api_key"
# List all nono secrets
security dump-keychain | grep -A5 "nono"
You can also use the Keychain Access application:
- Open Keychain Access (search in Spotlight or find in /Applications/Utilities/)
- Select the login keychain in the sidebar
- Click File > New Password Item (or press Cmd+N)
- Fill in: Keychain Item Name: nono, Account Name: openai_api_key, Password: your API key
- Click Add
When nono accesses a secret for the first time, macOS will prompt you to allow access. Click Always Allow to avoid repeated prompts.
Linux Secret Service
Linux uses the Secret Service API, typically provided by GNOME Keyring or KWallet. You need secret-tool (part of libsecret-tools) and a running keyring daemon.
Installation:
sudo apt install libsecret-tools gnome-keyring # Debian/Ubuntu
sudo dnf install libsecret gnome-keyring # Fedora
sudo pacman -S libsecret gnome-keyring # Arch Linux
Usage:
# Store (prompts for password)
secret-tool store --label="nono: openai_api_key" service nono username openai_api_key target default
# Store non-interactively
echo -n "sk-..." | secret-tool store --label="nono: openai_api_key" service nono username openai_api_key target default
# Retrieve (for testing)
secret-tool lookup service nono username openai_api_key target default
# Delete
secret-tool clear service nono username openai_api_key target default
# List all nono secrets
secret-tool search --all service nono
Important: The target default attribute is required for the keyring crate to find the entry.
Using env-credential
CLI Flag
Specify comma-separated account names to load:
# Load single secret
nono run --allow-cwd --env-credential openai_api_key -- claude
# Load multiple secrets
nono run --allow-cwd --env-credential openai_api_key,anthropic_api_key -- claude
The environment variable name is auto-generated by uppercasing the account name:
| Account Name | Environment Variable |
|---|---|
| openai_api_key | OPENAI_API_KEY |
| anthropic_api_key | ANTHROPIC_API_KEY |
| github_token | GITHUB_TOKEN |
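The mapping is simply an uppercase of the account name; a minimal sketch (helper name is illustrative):

```python
# nono derives the variable name by uppercasing the keystore account name.
def env_var_name(account: str) -> str:
    return account.upper()

print(env_var_name("openai_api_key"))
# → OPENAI_API_KEY
```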
Profile-Based Secrets
Profiles can declare which credentials to load in the env_credentials section (the previous name secrets is still accepted):
{
"meta": { "name": "my-agent" },
"filesystem": {
"allow": ["$WORKDIR"]
},
"env_credentials": {
"openai_api_key": "OPENAI_API_KEY",
"anthropic_api_key": "ANTHROPIC_API_KEY",
"custom_token": "MY_CUSTOM_TOKEN"
}
}
Then use the profile directly:
nono run --profile my-agent -- my-agent
The env_credentials section maps keystore account names to environment variable names, giving you full control over naming.
Error Handling
Secret not found:
nono: Secret not found in keystore: openai_api_key
Store the secret first using the platform-specific commands above.
Keystore locked:
Keystore access failed for 'openai_api_key': ...
Please unlock your keystore and press Enter to retry (or Ctrl+C to abort):
Unlock your keystore (typically by entering your login password) and press Enter.
Multiple entries:
nono: Failed to access system keystore: Multiple entries (2) found for 'api_key' - please resolve manually
Delete the duplicate entries using your OS keystore manager.
Headless Linux Environments
Secret Service (GNOME Keyring) can be problematic on headless servers and SSH sessions because it requires D-Bus and a graphical login.
Option 1: Use pass (Recommended for Headless)
pass uses GPG encryption and works well in headless environments.
# Install and initialize
sudo apt install pass
gpg --gen-key
pass init "your-gpg-key-id"
# Store a secret
pass insert nono/openai_api_key
Wrapper script:
#!/bin/bash
# ~/bin/nono-with-pass
export OPENAI_API_KEY=$(pass show nono/openai_api_key)
export ANTHROPIC_API_KEY=$(pass show nono/anthropic_api_key)
exec nono run "$@"
Option 2: Environment Variables via Wrapper
For simple setups, export secrets from a protected file:
mkdir -p ~/.config/nono
touch ~/.config/nono/secrets.env
chmod 600 ~/.config/nono/secrets.env
cat > ~/.config/nono/secrets.env << 'EOF'
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
EOF
Wrapper script:
#!/bin/bash
# ~/bin/nono-env
source ~/.config/nono/secrets.env
exec nono run "$@"
File-based secrets are less secure than a proper keystore. Ensure the file has strict permissions (chmod 600) and is not backed up to insecure locations.
Option 3: Set Up Headless Keyring
If you prefer to use Secret Service in headless mode:
#!/bin/bash
# unlock-keyring.sh - Run once per session
read -s -p "Keyring password: " KEYRING_PASSWORD
echo
if [ -z "$DBUS_SESSION_BUS_ADDRESS" ]; then
eval $(dbus-launch --sh-syntax)
export DBUS_SESSION_BUS_ADDRESS
fi
echo -n "$KEYRING_PASSWORD" | gnome-keyring-daemon --unlock --components=secrets
unset KEYRING_PASSWORD
Add to ~/.bashrc or ~/.zshrc for SSH sessions:
if [ -n "$SSH_CONNECTION" ] && [ -z "$DBUS_SESSION_BUS_ADDRESS" ]; then
export $(dbus-launch)
fi
Security Considerations
What nono protects:
- Keystore file access - Sandbox blocks direct access to ~/Library/Keychains (macOS) and keyring files
- Memory exposure - Secrets wrapped in Zeroizing<String> and cleared after use
Limitations:
- Environment variable visibility - On Linux, /proc/PID/environ is readable by same-user processes. For maximum protection, use proxy injection instead.
- Malicious use of credentials - nono cannot prevent a sandboxed process from misusing legitimately obtained credentials
Best practices:
- Use unique account names (e.g., myapp_openai_key rather than api_key)
- Rotate secrets regularly
- Only grant secrets that are actually needed
- Prefer proxy injection for LLM API keys
Next Steps