Run OpenAI Codex with OpenResponses using any model of your choice
Launch OpenResponses using Docker:
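A minimal sketch, assuming a published `masaicai/open-responses` image and a default port mapping (both are placeholders; confirm the exact image name and port in the quickstart guide):

```bash
# Start OpenResponses locally in the background
# (image name and port are assumptions; see the quickstart guide for your setup)
docker run -d -p 8080:8080 masaicai/open-responses:latest
```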
For advanced configuration options, refer to the detailed quickstart guide.
Install the Codex CLI globally:
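The Codex CLI is distributed as an npm package, so a global install looks like this (assuming Node.js and npm are already available):

```bash
# Install the OpenAI Codex CLI globally via npm
npm install -g @openai/codex
```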
For more information about Codex CLI, visit the official repository.
Set OpenResponses as your base API and configure your API key:
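One way to do this, assuming Codex picks up the standard `OPENAI_BASE_URL` and `OPENAI_API_KEY` environment variables (the URL, port, and key below are placeholders; point them at your OpenResponses deployment and provider credentials):

```bash
# Point Codex at the local OpenResponses API instead of api.openai.com
# (URL and port are placeholders; match them to your OpenResponses instance)
export OPENAI_BASE_URL=http://localhost:8080/v1

# API key for the model provider you plan to route through OpenResponses
export OPENAI_API_KEY=your_provider_api_key
```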
Run Codex with a specific model using the `-m` flag:
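For instance (the model name here is illustrative, not a requirement):

```bash
# -m selects the model using the provider@model-name convention described below
codex -m openai@gpt-4o "explain this codebase to me"
```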
The format for specifying a model is `provider@model-name` or `model_endpoint@model_name`, where:

- `provider`: The model provider (e.g., claude, deepseek, google, openai)
- `model_endpoint`: For locally deployed models or any custom model provider, the endpoint URL where chat/completions is available
- `model-name`: The specific model to use from that provider or endpoint

OpenResponses supports a variety of model providers that can be used with the `provider@model_name` convention. For a complete list of supported providers with detailed examples for each, see our Model Providers documentation.
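As an illustration of both conventions (the endpoint URL and model names below are placeholders):

```bash
# Named provider: provider@model-name
codex -m claude@claude-3-5-sonnet "refactor this function"

# Locally deployed or custom provider: model_endpoint@model_name,
# where the endpoint exposes a chat/completions-compatible API
codex -m http://localhost:11434/v1@llama3 "write unit tests for utils.py"
```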