

A ready-to-run example is included at the end of this page.
The Settings and Secrets API provides REST endpoints for managing agent configuration and custom secrets through a local agent server. This is the recommended pattern for frontend clients that need to:
  • Store secrets securely via the Settings API (encrypted at rest)
  • Pass encrypted secrets when starting conversations via secrets_encrypted=True
  • Never have access to plaintext secrets after initial storage

Key Concepts

Settings Endpoints

The agent server exposes settings management via REST:
  • GET /api/settings - Retrieve current settings
  • PATCH /api/settings - Update settings with a partial diff
```python
# Store LLM configuration - the API key is encrypted at rest
response = client.patch(
    "/api/settings",
    json={
        "agent_settings_diff": {
            "llm": {
                "model": "anthropic/claude-sonnet-4-5-20250929",
                "api_key": api_key,
            }
        }
    },
)
settings = response.json()
# The API key is redacted in responses by default
assert settings["agent_settings"]["llm"]["api_key"] == "**********"
```

Encrypted Secrets for Starting Conversations

Frontend clients use the X-Expose-Secrets: encrypted header to get cipher-encrypted secrets:
```python
# Get settings with cipher-encrypted secrets
response = client.get(
    "/api/settings",
    headers={"X-Expose-Secrets": "encrypted"},
)
encrypted_settings = response.json()

# Encrypted keys start with "gAAAAA" (Fernet token format)
encrypted_api_key = encrypted_settings["agent_settings"]["llm"]["api_key"]
```
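The "gAAAAA" prefix is a property of the Fernet format itself: a token is the urlsafe base64 encoding of a 0x80 version byte followed by a big-endian timestamp, IV, ciphertext, and HMAC, and the version byte plus the high (currently zero) timestamp bytes encode to exactly that prefix. A quick standard-library check:

```python
import base64

# A Fernet token is urlsafe-base64 of: 0x80 version byte || 8-byte big-endian
# timestamp || 16-byte IV || ciphertext || HMAC. The version byte plus the
# leading zero bytes of a present-day timestamp encode to "gAAAAA".
prefix = base64.urlsafe_b64encode(b"\x80\x00\x00\x00\x00").decode()
assert prefix.startswith("gAAAAA")
```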
Then use the encrypted LLM config when starting a conversation:
```python
# Extract LLM config from settings (includes encrypted api_key)
encrypted_llm = encrypted_settings["agent_settings"]["llm"]

# Start conversation with encrypted secrets
start_request = {
    "agent": {
        "kind": "Agent",
        "llm": encrypted_llm,  # Use entire LLM config from settings
        "tools": [{"name": "TerminalTool"}, {"name": "FileEditorTool"}],
    },
    "workspace": {"working_dir": "/tmp/demo"},
    "secrets_encrypted": True,  # Server will decrypt the API key
    "initial_message": {
        "role": "user",
        "content": [{"type": "text", "text": "Create a hello.txt file"}],
        "run": True,
    },
}
response = client.post("/api/conversations", json=start_request)
```
The server decrypts the secrets before using them, ensuring the frontend never has access to plaintext secrets after initial storage.
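This is the Fernet scheme, as implemented by the `cryptography` package. A throwaway round trip illustrates what happens on each side (the key below is generated locally for illustration; it is not the server's key):

```python
from cryptography.fernet import Fernet

# Illustration only: a locally generated key, not the server's.
key = Fernet.generate_key()
cipher = Fernet(key)

token = cipher.encrypt(b"sk-example-api-key")  # what the client would carry
assert token.startswith(b"gAAAAA")             # recognizable Fernet prefix
assert cipher.decrypt(token) == b"sk-example-api-key"  # server-side recovery
```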

Custom Secrets CRUD Operations

Custom secrets can be created, listed, retrieved, and deleted:
```python
# Create a secret
client.put(
    "/api/settings/secrets",
    json={
        "name": "MY_PROJECT_TOKEN",
        "value": "secret-token-abc123",
        "description": "Example project token",
    },
)

# List secrets (values not exposed)
secrets = client.get("/api/settings/secrets").json()["secrets"]

# Get secret value
value = client.get("/api/settings/secrets/MY_PROJECT_TOKEN").text

# Delete secret
client.delete("/api/settings/secrets/MY_PROJECT_TOKEN")
```

Secret Name Validation

Secret names must follow environment variable naming conventions:
  • Start with a letter (a-z, A-Z)
  • Contain only letters, numbers, and underscores
  • Be 1-64 characters long
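The rules above amount to a single regular expression; a client-side pre-check can use it directly (the server remains the authority on validity):

```python
import re

# Mirrors the stated rules: a leading letter, then letters, digits, or
# underscores, 1-64 characters total.
SECRET_NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9_]{0,63}$")

assert SECRET_NAME_RE.match("MY_PROJECT_TOKEN")
assert not SECRET_NAME_RE.match("123_invalid")   # starts with a number
assert not SECRET_NAME_RE.match("invalid-name")  # contains a hyphen
```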
Invalid names are rejected with a 422 response:
```python
# Invalid: starts with a number - returns 422
response = client.put(
    "/api/settings/secrets",
    json={"name": "123_invalid", "value": "test"},
)

# Invalid: contains a hyphen - returns 422
response = client.put(
    "/api/settings/secrets",
    json={"name": "invalid-name", "value": "test"},
)
```

Ready-to-Run Example

This example demonstrates the complete encrypted secrets workflow:
  1. Store LLM API key via PATCH /api/settings (encrypted at rest)
  2. Fetch settings with X-Expose-Secrets: encrypted header
  3. Start conversation via POST /api/conversations with secrets_encrypted=True
  4. Poll conversation state and verify agent task completion
  5. Test custom secrets CRUD operations
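Step 4's polling loop can be sketched as follows. The GET path and the `execution_status` field are assumptions inferred from the endpoint pattern above, not a documented contract; check the example script for the exact shape:

```python
import time

def wait_for_completion(client, conversation_id, timeout=120.0, interval=2.0):
    """Poll a conversation until it leaves the running state.

    Hypothetical helper: the GET path and the "execution_status" field
    are assumptions, not confirmed by this page.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = client.get(f"/api/conversations/{conversation_id}").json()
        if state.get("execution_status") in ("finished", "error"):
            return state
        time.sleep(interval)
    raise TimeoutError(f"conversation {conversation_id} did not finish in {timeout}s")
```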
examples/02_remote_agent_server/12_settings_and_secrets_api.py
<code will be auto-synced>
You can run the example code as-is.
The model name should follow the LiteLLM convention: provider/model_name (e.g., anthropic/claude-sonnet-4-5-20250929, openai/gpt-4o). The LLM_API_KEY should be the API key for your chosen provider.
ChatGPT Plus/Pro subscribers: You can use LLM.subscription_login() to authenticate with your ChatGPT account and access Codex models without consuming API credits. See the LLM Subscriptions guide for details.

Next Steps