OpenAI Codex With Your Own Model

Codex In Action With OpenResponses

Key Benefits

  • Run Codex with Custom Models: Integrate OpenAI Codex with OpenResponses to use any model of your choice
  • Extend Functionality: Enhance Codex capabilities with additional MCP tools or custom tools and integrations
  • Simple Deployment: Quick setup with no separate installation needed - just follow the quickstart guide
  • Full Control: Maintain complete ownership of your code, your data, and your model choices

Step-by-Step Setup Instructions

1. Run OpenResponses Service

Launch OpenResponses using Docker:

docker run -p 8080:8080 masaicai/open-responses:latest

For advanced configuration options, refer to the detailed quickstart guide.
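
If you prefer to keep the container running in the background, a detached run with a named container makes it easy to inspect logs and shut it down later. This is an optional variant of the same command; the container name open-responses is just a label of our choosing.

# Optional: run detached with a name so you can manage the container later
docker run -d --name open-responses -p 8080:8080 masaicai/open-responses:latest

# Follow the service logs
docker logs -f open-responses

# Stop and remove the container when finished
docker stop open-responses && docker rm open-responses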

2. Install OpenAI Codex

Install the Codex CLI globally:

npm i -g @openai/codex

For more information about Codex CLI, visit the official repository.
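
To confirm the installation succeeded, check that the codex binary is on your PATH; codex --version should print the installed version. The CLI is a Node.js tool, so a reasonably recent Node.js runtime is assumed.

# Verify the CLI is available and print its version
codex --version

# Confirm your Node.js runtime (the CLI expects a recent Node.js release)
node --version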

3. Configure and Run Codex with Your Preferred Model

Set OpenResponses as your base URL and configure your API key:

# Point to your OpenResponses instance
export OPENAI_BASE_URL=http://localhost:8080/v1

# Configure your API key (use the key from your model provider, e.g., OpenAI, Anthropic, or Google)
export OPENAI_API_KEY=your-api-key-here
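
Before starting Codex, you can optionally verify that OpenResponses is reachable with this base URL and key. The request below is a minimal smoke test assuming the service exposes an OpenAI-compatible Responses endpoint at /v1/responses; the model name is only illustrative, so substitute one your key can access.

# Optional smoke test against the OpenResponses endpoint (model name is illustrative)
curl -s "$OPENAI_BASE_URL/responses" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{"model": "openai@gpt-4o-mini", "input": "Say hello"}'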

Run Codex with a specific model using the -m flag:

Example with Locally Deployed Model

codex -m "http://mymodel_host/v1@my_model" "explain me the structure of the codebase"

Example with Claude

codex -m "claude@claude-3-5-haiku-20241022" "explain me the structure of the codebase"

Example with DeepSeek

codex -m "deepseek@deepseek-chat" "analyze this repository"

Example with Google Gemini

codex -m "google@gemini-2.0-flash" "help me understand this code"

The format for specifying a model is provider@model_name or model_endpoint@model_name (both forms are illustrated right after this list), where:

  • provider: The model provider (e.g., claude, deepseek, google, openai)
  • model_endpoint: For locally deployed models or any custom model provider, the base URL where an OpenAI-compatible chat/completions endpoint is available
  • model_name: The specific model to use from that provider or endpoint
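
Both conventions side by side, as a quick reference; the provider model name here is only illustrative, and the endpoint URL and model name in the second command are placeholders:

# provider@model_name: route through a provider listed in the table below
codex -m "groq@llama-3.3-70b-versatile" "summarize the build scripts"

# model_endpoint@model_name: point at any OpenAI-compatible chat/completions endpoint
codex -m "http://mymodel_host/v1@my_model" "summarize the build scripts"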

Supported Model Providers

OpenResponses supports the following model providers that can be used with the provider@model_name convention:

Provider    | API Endpoint
------------|----------------------------------------------------------
openai      | https://api.openai.com/v1
claude      | https://api.anthropic.com/v1
anthropic   | https://api.anthropic.com/v1
groq        | https://api.groq.com/openai/v1
togetherai  | https://api.together.xyz/v1
gemini      | https://generativelanguage.googleapis.com/v1beta/openai/
google      | https://generativelanguage.googleapis.com/v1beta/openai/
deepseek    | https://api.deepseek.com
ollama      | http://localhost:11434/v1
xai         | https://api.x.ai/v1

For any model provider not listed in the table above, you can use the model_endpoint@model_name convention by directly specifying the full API endpoint URL. This works for both locally deployed models and any third-party API services that follow the OpenAI-compatible chat/completions format.
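
For example, a locally hosted vLLM or LM Studio server that exposes an OpenAI-compatible API can be addressed directly. In the sketch below, the hosts, ports, and model names are placeholders; use whatever your server or provider actually reports.

# Locally hosted OpenAI-compatible server (host, port, and model name are placeholders)
codex -m "http://localhost:8000/v1@my-local-model" "review the error handling in this module"

# Third-party OpenAI-compatible API not listed above (URL and model name are placeholders)
codex -m "https://api.example-provider.com/v1@provider-model-name" "review the error handling in this module"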