Codex With Your Own Model
Run OpenAI Codex with OpenResponses using any model of your choice
Codex In Action With OpenResponses
Key Benefits
- Run Codex with Custom Models: Integrate OpenAI Codex with OpenResponses to use any model of your choice
- Extend Functionality: Enhance Codex capabilities with additional MCP tools or custom tools and integrations
- Simple Deployment: Quick setup with no separate installation needed - just follow the quickstart guide
- Full Control: Maintain complete ownership of your code, data, and model choices
Step-by-Step Setup Instructions
1. Run OpenResponses Service
Launch OpenResponses using Docker:
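A minimal sketch, assuming the `masaicai/open-responses` image name and the default port 8080 from the quickstart guide:

```bash
# Start the OpenResponses service and expose it on port 8080
docker run -d --name open-responses -p 8080:8080 masaicai/open-responses:latest
```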
For advanced configuration options, refer to the detailed quickstart guide.
2. Install OpenAI Codex
Install the Codex CLI globally:
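Assuming Node.js and npm are already installed, and that the CLI is published on npm as `@openai/codex`:

```bash
# Install the OpenAI Codex CLI globally
npm install -g @openai/codex
```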
For more information about Codex CLI, visit the official repository.
3. Configure and Run Codex with Your Preferred Model
Set OpenResponses as your base API and configure your API key:
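For example, assuming OpenResponses is running locally on port 8080 and exposes an OpenAI-compatible `/v1` path; the variable names below follow the standard OpenAI client convention, and the key value is a placeholder:

```bash
# Point the Codex CLI at the local OpenResponses endpoint
export OPENAI_BASE_URL=http://localhost:8080/v1
# API key for your chosen model provider (forwarded by OpenResponses)
export OPENAI_API_KEY=your_provider_api_key
```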
Run Codex with a specific model using the `-m` flag:
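The general invocation looks like this; the model identifier and prompt are placeholders:

```bash
# Route a Codex request through OpenResponses to the chosen model
codex -m provider@model-name "explain this codebase to me"
```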
Example with Locally Deployed Model
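A sketch assuming a local OpenAI-compatible server (for example, one listening on port 11434) that serves a model named `llama3`; both the endpoint URL and the model name are placeholders:

```bash
codex -m http://localhost:11434/v1@llama3 "refactor the utils module for readability"
```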
Example with Claude
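An illustrative invocation; the Claude model name is an assumption and should be replaced with whichever Anthropic model you have access to:

```bash
codex -m claude@claude-3-7-sonnet-20250219 "write unit tests for the parser module"
```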
Example with DeepSeek
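Again illustrative; `deepseek-chat` is used here as a representative model name:

```bash
codex -m deepseek@deepseek-chat "summarize the open TODOs in this repository"
```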
Example with Google Gemini
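Also illustrative; substitute the Gemini model you intend to use:

```bash
codex -m google@gemini-2.0-flash "add docstrings to the public functions in main.py"
```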
The format for specifying a model is `provider@model-name` or `model_endpoint@model-name`, where:
- `provider`: The model provider (e.g., claude, deepseek, google, openai)
- `model_endpoint`: For locally deployed models or any custom model provider, the endpoint URL where chat/completions is available
- `model-name`: The specific model to use from that provider or endpoint
Supported Model Providers
OpenResponses supports a variety of model providers that can be used with the `provider@model-name` convention. For a complete list of supported providers with detailed examples for each, see our Model Providers documentation.