Run OpenAI Codex with OpenResponses using any model of your choice
The `-m` flag accepts the format `provider@model-name` or `model_endpoint@model-name`, where:
- `provider`: The model provider (e.g., claude, deepseek, google, openai)
- `model_endpoint`: For locally deployed models or any custom model provider, the endpoint URL where `chat/completions` is available
- `model-name`: The specific model to use from that provider or endpoint

Every supported provider follows the `provider@model-name` convention. For a complete list of supported providers with detailed examples for each, see our Model Providers documentation.
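As a minimal sketch of both forms, assuming the standard Codex `-m` invocation; the model names, prompt, and local endpoint URL below are illustrative placeholders, not values confirmed by this document:

```bash
# Hosted provider form: provider@model-name (model name is a placeholder)
codex -m openai@gpt-4o "explain the main module in this repo"

# Local/custom endpoint form: model_endpoint@model-name, assuming a server
# exposing chat/completions at http://localhost:8080/v1
codex -m http://localhost:8080/v1@llama3 "explain the main module in this repo"
```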