
Configuration

All configuration lives in the .env file at the project root. This file is gitignored — it never gets committed.

The container supports two mutually exclusive authentication modes (in priority order):

  • OAuth mode — set CLAUDE_CODE_OAUTH_TOKEN only (see below).
  • LiteLLM proxy — set LITELLM_BASE_URL and LITELLM_API_KEY to route through a LiteLLM proxy (see below).

Do not combine modes — use exactly one.
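
For reference, a minimal `.env` sketch for each mode (values are placeholders; keep only one mode's variables):

```shell
# OAuth mode: the token is the only auth variable you set
CLAUDE_CODE_OAUTH_TOKEN=sk-ant-oat01-...

# LiteLLM proxy mode (instead of the above, never both):
# LITELLM_BASE_URL=https://f5ai.pd.f5net.com
# LITELLM_API_KEY=your-api-key
```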

| Variable | Description | Example |
| --- | --- | --- |
| `CLAUDE_CODE_OAUTH_TOKEN` | OAuth token from a Claude Max subscription | `sk-ant-oat01-...` |

When set, both Claude Code and OpenCode authenticate directly with Anthropic using OAuth — no API key or proxy needed. This takes priority over LITELLM_API_KEY for OpenCode configuration. Get your token from Claude Code settings or the Anthropic console.

If your LiteLLM (or other) proxy already speaks the native Anthropic Messages API — including web_search, streaming, and tool use — set LITELLM_BASE_URL to the proxy domain. The container automatically derives the provider-specific URLs:

  • Anthropic endpoint → ${LITELLM_BASE_URL}/anthropic (used by Claude Code)
  • OpenAI-compatible endpoint → ${LITELLM_BASE_URL}/api/v1 (used by OpenCode)
| Variable | Description | Example |
| --- | --- | --- |
| `LITELLM_BASE_URL` | Domain of your LiteLLM proxy (no path suffix) | `https://f5ai.pd.f5net.com` |
| `LITELLM_API_KEY` | API key for proxy authentication | `your-api-key` (defaults to `litellm-proxy`) |

Users only need to set the domain — provider path suffixes are added automatically. At runtime, the entrypoint derives ANTHROPIC_API_KEY from LITELLM_API_KEY so Claude Code can consume the proxy credential without requiring a separate user-facing variable.
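
As a sketch of the derivation described above (the variable names other than `LITELLM_*` and `ANTHROPIC_API_KEY` are illustrative, not the container's actual entrypoint code):

```shell
# Example values from the table above
LITELLM_BASE_URL="https://f5ai.pd.f5net.com"
LITELLM_API_KEY="your-api-key"

# Provider-specific endpoints derived from the bare domain
anthropic_endpoint="${LITELLM_BASE_URL}/anthropic"   # used by Claude Code
openai_endpoint="${LITELLM_BASE_URL}/api/v1"         # used by OpenCode

# Claude Code consumes the proxy credential via ANTHROPIC_API_KEY,
# falling back to the documented default when the key is unset
export ANTHROPIC_API_KEY="${LITELLM_API_KEY:-litellm-proxy}"

echo "$anthropic_endpoint"
echo "$openai_endpoint"
```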

Model names are not remapped in this mode. Claude Code sends its standard model identifiers (e.g. claude-sonnet-4-6) and LiteLLM routes them to your configured backend.

By default, Node.js validates TLS certificates for all HTTPS connections. If your upstream API provider uses a self-signed certificate (common with internal Open WebUI or LiteLLM proxies), you can disable validation by adding this to your .env file:

```shell
NODE_TLS_REJECT_UNAUTHORIZED=0
```

This is not recommended for production use. Only set this when you trust the network path between the container and your API endpoint.

The container automatically configures OpenCode based on which credentials are available. OAuth mode takes priority when both are set.

When CLAUDE_CODE_OAUTH_TOKEN is set, the entrypoint seeds:

  • ~/.config/opencode/opencode.json — Anthropic-native config with claude-opus-4-6 as the default model
  • ~/.local/share/opencode/auth.json — OAuth credentials for Anthropic

This connects OpenCode directly to Anthropic with no proxy needed.
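
The exact contents depend on the OpenCode version, but the seeded opencode.json amounts to a minimal Anthropic-native config along these lines (a sketch, not the verbatim file):

```json
{
  "model": "anthropic/claude-opus-4-6"
}
```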

When LITELLM_API_KEY and LITELLM_BASE_URL are set (and no OAuth token), the entrypoint seeds ~/.config/opencode/opencode.json with a custom provider called OpenAI Proxy that points at the upstream endpoint.
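
As a sketch of the general shape (the key names are illustrative; the actual seeded file may differ), such a custom provider entry looks something like:

```json
{
  "provider": {
    "openai-proxy": {
      "name": "OpenAI Proxy",
      "options": {
        "baseURL": "https://f5ai.pd.f5net.com/api/v1",
        "apiKey": "{env:LITELLM_API_KEY}"
      },
      "models": {
        "claude-sonnet-4-6": {},
        "claude-opus-4-6": {}
      }
    }
  }
}
```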

Available models (LiteLLM mode):

| Model ID | Description |
| --- | --- |
| `claude-sonnet-4-5` | Claude Sonnet 4.5 |
| `claude-sonnet-4-6` | Claude Sonnet 4.6 |
| `claude-opus-4-6` | Claude Opus 4.6 |
| `claude-haiku-4-5` | Claude Haiku 4.5 |

In both modes, the config is only seeded when the file doesn’t already exist, so you can customize it without it being overwritten on restart. To reset to defaults, delete ~/.config/opencode/opencode.json (and ~/.local/share/opencode/auth.json for OAuth mode) and restart the container.
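
For example, to return OpenCode to the seeded defaults (restart the container afterwards so the entrypoint re-seeds both files):

```shell
# Delete the seeded config; the entrypoint recreates it on next startup
rm -f ~/.config/opencode/opencode.json
rm -f ~/.local/share/opencode/auth.json   # only needed in OAuth mode
```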

The container includes several AI coding agents in addition to Claude Code:

| Agent | Command | Provider |
| --- | --- | --- |
| OpenCode | `opencode` | Anthropic (OAuth) or OpenAI-compatible (proxy) |
| Codex | `codex` | OpenAI-compatible |
| Aider | `aider` | Multi-provider (configure at runtime) |
| Pi | `pi` | Multi-provider (configure at runtime) |

Aider is installed via uv tool install with Python 3.12 (it requires Python <3.13) and includes the browser, help, and playwright extras. Pi is installed as a global npm package. Codex is a standalone binary that self-updates at runtime.

  • Claude Code: Uses built-in web search via the Anthropic Messages API (web_search tool)
  • OpenCode: Uses built-in Exa web search — no API key required. Enabled via OPENCODE_ENABLE_EXA=true or OPENCODE_EXPERIMENTAL=true

The container runs a virtual display stack for watching and interacting with headed browsers. See Remote Display (noVNC) for connection instructions and environment variables (ENABLE_VNC, VNC_RESOLUTION, DISPLAY, NOVNC_HOST_PORT).

The container includes a pre-configured Chrome DevTools MCP server for headless browser automation. Claude Code can navigate web pages, take screenshots, and inspect the DOM without any additional setup. See Chrome DevTools MCP for details.

| Variable | Description | How to Set |
| --- | --- | --- |
| `GIT_AUTHOR_NAME` | Name for git commits | `git config user.name` |
| `GIT_AUTHOR_EMAIL` | Email for git commits | `git config user.email` |

If git is configured on your host, auto-populate both values:

```shell
echo "GIT_AUTHOR_EMAIL=$(git config user.email)" >> .env
echo "GIT_AUTHOR_NAME=\"$(git config user.name)\"" >> .env
```
| Variable | Description | How to Set |
| --- | --- | --- |
| `SSH_PRIVATE_KEY` | Base64-encoded private key | `base64 < ~/.ssh/id_ed25519` |
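
Assuming your key is at the default `~/.ssh/id_ed25519` path (adjust for your key file), you can append the encoded value in one line:

```shell
# Base64-encode the key onto one line so it fits a single .env entry
echo "SSH_PRIVATE_KEY=$(base64 < ~/.ssh/id_ed25519 | tr -d '\n')" >> .env
```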
| Variable | Description | How to Set |
| --- | --- | --- |
| `GH_TOKEN` | GitHub personal access token for the `gh` CLI and HTTPS git | `gh auth token` (if `gh` is installed locally) |

When GH_TOKEN is set in .env, the gh CLI authenticates automatically — no need to run gh auth login. The entrypoint also runs gh auth setup-git, which configures the git credential helper so HTTPS git clone and git push work without SSH keys.

If gh is installed and authenticated on your host machine, extract the token directly:

```shell
echo "GH_TOKEN=$(gh auth token)" >> .env
```

Otherwise, create a personal access token and add it manually. Both fine-grained tokens (recommended) and classic tokens work. Fine-grained tokens let you scope access to specific repositories. Classic tokens require the repo scope for private repository access.

To verify authentication inside the container:

```shell
gh auth status
```

Every language runtime, CLI, and tool is installed directly in the Dockerfile at image build time. The pre-built image is published to ghcr.io/f5xc-salesdemos/devcontainer:latest on every push to main, so most users never need to build locally.

Running docker compose up -d pulls the pre-built image automatically. To build locally after customizing the Dockerfile, pass the build file explicitly:

```shell
docker compose -f docker-compose.yml -f docker-compose.build.yml up -d --build
```

See Local Development for details.

The Dockerfile uses a two-stage build optimized for Docker layer caching. See Local Development — Two-Stage Build Architecture for the full layer breakdown and build cache details.

The container includes baked-in configuration that prevents Claude Code from confusing tool names. Tool awareness is installed at two memory tiers for defense-in-depth:

  • Managed policy (/etc/claude-code/CLAUDE.md) — installed at Docker build time. This is the highest-priority tier in Claude Code’s memory hierarchy and is always loaded, even when a large project CLAUDE.md exists in the working directory.
  • User memory (~/.claude/CLAUDE.md) — seeded by the entrypoint on first startup. Provides a backup layer and a visible, customizable copy for users.

Claude Code loads instructions from multiple memory tiers (highest to lowest priority):

  1. Managed policy (/etc/claude-code/) — Always loaded, cannot be excluded
  2. Project memory (./CLAUDE.md) — Loaded per project directory
  3. User memory (~/.claude/CLAUDE.md) — Loaded globally for all sessions
  4. Local memory (./CLAUDE.local.md) — Personal per-project overrides

Without the Managed policy tier, tool awareness in User memory can be deprioritized when competing with a large project-level CLAUDE.md, causing Claude to lose awareness of its PascalCase tool names. The Managed tier guarantees tool awareness is always loaded at the highest priority.
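
As an illustrative sketch only (not the actual shipped file), such a tool-awareness entry might read:

```markdown
<!-- Hypothetical excerpt; the real /etc/claude-code/CLAUDE.md differs -->
## Tool names
Built-in tools use PascalCase names: Read, Write, Edit, Bash, Glob, Grep.
Refer to them by these exact names; do not invent lowercase variants.
```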

Run the built-in self-test to verify the configuration:

```shell
claude-self-test
```

This checks that all configuration files are in place and contain the expected tool references.

To add tools to the image, edit the Dockerfile and rebuild locally. See Local Development — Adding Tools for instructions on adding APT packages, npm tools, pip tools, and binary downloads.

Extensions are configured in .devcontainer/devcontainer.json under customizations.vscode.extensions and installed automatically when opening in VS Code.

All data lives in named volumes — no host directories are mounted:

| Volume | Mount Point | Contents |
| --- | --- | --- |
| `workspace` | `/workspace` | Your cloned repositories and project files |
| `home` | `/home/vscode` | Shell history, tool configs, caches, SSH keys |

Both persist across container restarts and rebuilds. Run docker compose pull to fetch the latest pre-built image before restarting.

| Action | Data |
| --- | --- |
| `docker compose down` | Preserved |
| `docker compose down -v` | Deleted |

| Host | Container | Service |
| --- | --- | --- |
| `127.0.0.1:${NOVNC_HOST_PORT:-6080}` | `6080` | noVNC remote display (localhost only, override with `NOVNC_HOST_PORT`) |

Ports are bound to 127.0.0.1 so they are only accessible from your machine, not the network.