This guide provides step-by-step instructions for running the Open Responses service with the observability stack, which includes OpenTelemetry, Jaeger, Prometheus, and Grafana.
Navigate to the observability infrastructure directory:
```shell
cd observability/infra
```
Start the observability stack:
```shell
docker-compose up
```
This will run the following services:
- OpenTelemetry Collector
- Jaeger
- Prometheus
- Grafana
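These services are typically wired together in a compose file along these lines. This is an illustrative sketch only; the actual service names, images, and port mappings are defined in `observability/infra/docker-compose.yml`:

```yaml
# Hypothetical sketch; check observability/infra/docker-compose.yml for the real configuration.
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib
    ports:
      - "4317:4317"   # OTLP gRPC
      - "4318:4318"   # OTLP HTTP
  jaeger:
    image: jaegertracing/all-in-one
    ports:
      - "16686:16686" # Jaeger UI
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"   # Prometheus UI
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"   # Grafana UI
```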
Navigate back to the open-responses directory and run the Open Responses service with OpenTelemetry enabled:
```shell
docker-compose --profile mcp up open-responses-otel
```
Note: You might see transient export errors like the following for spans, logs, and metrics.
```
ERROR [OkHttp http://otel-collector:4318/...] [unknown] [unknown] i.o.e.internal.http.HttpExporter - Failed to export spans. The request could not be executed. Full error message: otel-collector
```
This is normal during startup.
Note: the default setup does not configure a log exporter in the otel-collector. Feel free to connect your preferred logging system.
Once all services are running properly, these errors should disappear.
Access the observability services through your browser:
Grafana comes pre-loaded with production-ready dashboards in the "Open Responses" folder at http://localhost:3000/dashboards. These include:
- Open-Responses GenAI Stats: generative AI performance and usage metrics
- Open-Responses Service Stats: service-level compute metrics such as CPU and memory usage
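Once the stack is up, the UIs are usually reachable at their default ports. The ports below are assumed defaults; verify them against the compose file if any check fails:

```shell
# Assumed default ports for the stack's UIs; adjust if docker-compose.yml maps them differently.
curl -sf -o /dev/null http://localhost:3000  && echo "Grafana up"      # Grafana
curl -sf -o /dev/null http://localhost:16686 && echo "Jaeger up"       # Jaeger UI
curl -sf -o /dev/null http://localhost:9090  && echo "Prometheus up"   # Prometheus
```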
Run any of the curl examples from the Quickstart guide a few times to generate data. You should then see statistics in the Grafana dashboards and traces in Jaeger.
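A quick way to produce several data points is to loop one of the Quickstart curl calls. This is a sketch only; the port, path, and payload below are assumptions, so copy the exact working request from the Quickstart guide:

```shell
# Hypothetical load loop; replace the URL and body with a working request from the Quickstart guide.
for i in $(seq 1 20); do
  curl -s http://localhost:8080/v1/responses \
    -H "Content-Type: application/json" \
    -d '{"model": "gpt-4o-mini", "input": "Say hello"}' > /dev/null
done
```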
You can also run examples from the OpenAI Agent SDK by following the instructions in the Quickstart guide.
To generate enough data points for meaningful dashboard visualizations, you can use the load-generation examples available in the OpenAI Agents Python Examples.

Prerequisites:
- The service should be running as described in step #3
- At least one model provider key is set (GROQ_API_KEY, OPENAI_API_KEY, or CLAUDE_API_KEY)
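Provider keys can be supplied as environment variables before starting the service; the value below is a placeholder, not a real key:

```shell
# Placeholder value; substitute a real key for the provider you use.
export OPENAI_API_KEY="sk-your-key-here"
```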