How to make requests to Claude using the OpenAI SDK

27 March 2026 · Braintrust Team
TL;DR

The Braintrust AI Gateway makes Claude accessible through the OpenAI SDK. Point the OpenAI client to https://gateway.braintrust.dev, authenticate with a Braintrust API key, and use a Claude model name such as claude-sonnet-4-6. Braintrust handles the translation between OpenAI's chat.completions format and Anthropic's messages API so you can add Claude without separately adopting Anthropic's SDK.

Why access Claude from the OpenAI SDK and how Braintrust helps

Teams already running OpenAI in production sometimes find that Claude performs better on tasks such as document analysis, multi-step reasoning, and safety-sensitive outputs. When Claude produces better results for specific tasks, the team has to decide whether to introduce Anthropic's SDK into the application or find a way to access Claude without changing the existing integration.

Adding a second SDK usually creates more work than teams expect. The OpenAI SDK returns responses as choices[0].message.content, and OpenAI streaming uses delta objects, while the Anthropic SDK returns content[0].text and Anthropic streaming uses event-based formats. Error handling, retry behavior, and timeout patterns also differ across the OpenAI and Anthropic libraries. Supporting both SDKs means adding new application logic and testing each difference before shipping changes to production. For teams that want Claude for selected use cases rather than a full integration rewrite, adding and maintaining a second SDK is often hard to justify.
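The divergence in response shapes is concrete: an application parsing OpenAI responses breaks the moment it receives an Anthropic-shaped one. A minimal sketch, using simplified dicts that stand in for the real SDK response objects, shows the two access paths and the kind of normalization code a dual-SDK integration ends up maintaining:

```python
# Simplified stand-ins for the two response shapes (not the real SDK objects).
openai_style = {"choices": [{"message": {"content": "Hello from the model"}}]}
anthropic_style = {"content": [{"type": "text", "text": "Hello from the model"}]}

def extract_text(response: dict) -> str:
    """Pull the assistant text out of either response shape."""
    if "choices" in response:  # OpenAI chat.completions shape
        return response["choices"][0]["message"]["content"]
    return response["content"][0]["text"]  # Anthropic messages shape

print(extract_text(openai_style))
print(extract_text(anthropic_style))
```

This only covers the happy path for text responses; streaming deltas, tool calls, and error types each need their own branch, which is where the maintenance cost accumulates.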

Braintrust AI Gateway gives OpenAI-based applications a way to access Claude without adopting a second SDK. Braintrust's gateway receives requests from the OpenAI client, detects whether the model is from Anthropic, converts the request to Claude's expected format, and returns the response in the OpenAI format that the application already parses. From the application's perspective, Claude fits into the same integration pattern as any other OpenAI API call.
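The kind of translation the gateway performs can be sketched in a few lines. This is an illustrative simplification, not Braintrust's actual implementation; it reflects two documented public differences between the APIs: Anthropic's messages API takes system prompts as a top-level system field rather than a system role message, and it requires max_tokens on every request.

```python
def to_anthropic_format(openai_request: dict) -> dict:
    """Simplified sketch of translating a chat.completions payload to messages format."""
    system_parts = [m["content"] for m in openai_request["messages"] if m["role"] == "system"]
    anthropic_request = {
        "model": openai_request["model"],
        # Anthropic's messages API requires max_tokens; use a fallback if the caller omitted it.
        "max_tokens": openai_request.get("max_tokens", 1024),
        "messages": [m for m in openai_request["messages"] if m["role"] != "system"],
    }
    if system_parts:
        anthropic_request["system"] = "\n".join(system_parts)
    return anthropic_request

request = {
    "model": "claude-sonnet-4-6",
    "messages": [
        {"role": "system", "content": "Be concise."},
        {"role": "user", "content": "Hello!"},
    ],
}
translated = to_anthropic_format(request)
```

The gateway performs this mapping (and the reverse mapping on the response) on every request, which is why the application never sees an Anthropic-shaped payload.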

How to make requests to Claude using the OpenAI SDK with Braintrust

Braintrust AI Gateway translates each OpenAI-formatted request into the format Claude's API expects, then returns the response in the OpenAI structure your client already handles. Here is how to set it up from scratch.

Prerequisites

1. Create a Braintrust account and generate an API key: Sign up at Braintrust, then go to Settings > Organization > API keys and click + API key. Enter a name for the key and click Create. Copy the key immediately, as Braintrust will not show it again. The key (prefixed sk-) authenticates every request your application sends through the AI Gateway.

For non-interactive environments such as production backends or CI runners, create a service token under Settings > Organization > Service tokens. Click + Service token, assign the required permission groups, and click Create. Service tokens use the bt-st- prefix and work anywhere API keys are accepted, but they are tied to a service account rather than an individual user and can be scoped to specific projects or permission levels.

2. Store your Anthropic API key in Braintrust: Go to Settings > Organization > AI providers and select Anthropic. Enter your Anthropic API key and click Save. Braintrust uses the stored key whenever the AI Gateway forwards a request to Claude. Braintrust encrypts all provider API keys using AES-256 with unique keys and nonces rather than storing them as plaintext.

If different projects in your organization use different Anthropic accounts, add a project-level key under Project Settings > AI Providers. Braintrust's gateway uses the project-level key for requests tied to that project and falls back to the organization-level key for all other requests.

3. Install the OpenAI SDK: Run npm install openai for TypeScript or JavaScript, or pip install openai for Python.

Calling Claude with the OpenAI SDK via Braintrust

OpenAI-based applications can use Claude through Braintrust with only a few configuration changes in the existing client setup. Switch the baseURL from OpenAI's API to Braintrust's gateway, update the apiKey to your Braintrust key, and set the model to claude-sonnet-4-6. When Braintrust receives the request, it translates the chat.completions payload into Anthropic's messages format, sends the request to Claude, and returns the result in the same OpenAI response structure. Developers can therefore add Claude without introducing Anthropic's SDK or rewriting an existing OpenAI integration.

typescript

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://gateway.braintrust.dev",
  apiKey: process.env.BRAINTRUST_API_KEY,
});

// Call Anthropic's Claude using the OpenAI SDK
const response = await client.chat.completions.create({
  model: "claude-sonnet-4-6",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);

Here is the equivalent in Python:

python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://gateway.braintrust.dev",
    api_key=os.environ["BRAINTRUST_API_KEY"],
)

# Call Anthropic's Claude using the OpenAI SDK
response = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)

Enabling logging and caching

Logging

Logging records each Claude request as a trace inside a Braintrust project, capturing the full lifecycle of every call. Attach each request to its trace by passing the value of span.export() in the x-bt-parent header.

typescript

import OpenAI from "openai";
import { initLogger } from "braintrust";

const logger = initLogger({ projectName: "My Project" });

await logger.traced(async (span) => {
  const client = new OpenAI({
    baseURL: "https://gateway.braintrust.dev",
    apiKey: process.env.BRAINTRUST_API_KEY,
  });

  const response = await client.chat.completions.create(
    {
      model: "claude-sonnet-4-6",
      messages: [{ role: "user", content: "Hello!" }],
    },
    {
      headers: {
        "x-bt-parent": await span.export(),
      },
    },
  );

  console.log(response.choices[0].message.content);
});

Each logged trace includes token usage, latency, estimated cost, and the full request and response payload. Organizations that already send OpenAI traffic through Braintrust can view Claude traces in the same project, then filter by model or provider without creating a separate logging workflow.
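Braintrust computes the cost estimate from the token counts in each response. For intuition, the arithmetic looks like the sketch below; the per-million-token prices here are hypothetical placeholders, since real pricing varies by model and changes over time:

```python
# Hypothetical per-1M-token prices for illustration only; check current provider pricing.
PRICES = {"claude-sonnet-4-6": {"input": 3.00, "output": 15.00}}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated USD cost of one request from its token usage."""
    p = PRICES[model]
    return (prompt_tokens * p["input"] + completion_tokens * p["output"]) / 1_000_000

cost = estimate_cost("claude-sonnet-4-6", 1_000, 500)
```

The same prompt_tokens and completion_tokens figures are available client-side in the usage field of every OpenAI-format response, so the numbers in Braintrust's dashboards can be cross-checked against what your application sees.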

Caching

Caching stores gateway responses so repeated requests with the same prompt can return immediately without making another call to Anthropic. Enable caching by adding "x-bt-use-cache": "always" to the client's defaultHeaders.

typescript

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://gateway.braintrust.dev",
  defaultHeaders: {
    "x-bt-use-cache": "always",
  },
  apiKey: process.env.BRAINTRUST_API_KEY,
});

Each cached response is AES-GCM-encrypted using a key derived from your API key, so only you can access your cached data. Cache entries remain available for one week by default. If a request needs a different retention period, use the x-bt-cache-ttl header to set the cache duration in seconds.
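For a request that should be cached for one day instead of the default week, the header set looks like this sketch (the TTL value is an example; the header takes a duration in seconds, as described above):

```python
ONE_DAY_S = 24 * 60 * 60  # 86400 seconds

# Headers for a request cached for one day instead of the default week.
headers = {
    "x-bt-use-cache": "always",
    "x-bt-cache-ttl": str(ONE_DAY_S),  # TTL in seconds, sent as a string header value
}
```

In the OpenAI Python SDK, per-request headers like these can be supplied via the extra_headers argument to chat.completions.create; in TypeScript, via the options object shown in the logging example.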

Manage every AI provider through a single SDK with Braintrust Gateway

Calling Claude via the OpenAI SDK is one use case for Braintrust AI Gateway's provider-routing layer, which translates requests across supported model providers. Developers can keep the SDK already used in the application and let Braintrust route requests to other AI providers, rather than maintaining a separate SDK integration for each provider they want to test or deploy.

Braintrust works with the OpenAI, Anthropic, and Google Gemini SDKs. After a client is pointed to Braintrust's gateway, developers can switch between supported providers by updating the model name rather than rewriting the integration for each provider.

Braintrust supports direct integrations with all major model providers, including:

LLM model providers: OpenAI, Anthropic, Gemini, Mistral, Groq, Fireworks, Together, xAI, Perplexity, Replicate, Cerebras, Baseten, and Lepton.

Cloud platform providers: AWS Bedrock, Vertex AI, Azure AI Foundry, and Databricks.

If you need models not on the list, add custom providers through Braintrust's custom provider configuration, which supports self-hosted models, fine-tuned models, and proprietary AI endpoints. Provider credentials are managed in Braintrust's organization settings rather than stored in application code, so adding a new provider to the stack takes a few clicks in the Braintrust dashboard.

Want to access multiple model providers without maintaining separate SDK integrations? Start free with Braintrust or schedule a demo.

FAQs

Do I need separate API keys for OpenAI and Anthropic when using the Braintrust AI Gateway?

Your application code only needs a Braintrust API key. OpenAI and Anthropic credentials are stored in Braintrust's organization settings under AI Providers, so they stay out of your codebase while Braintrust handles provider routing at the gateway level.

Can I compare OpenAI and Claude model outputs using Braintrust?

With logging enabled in Braintrust, you can run the same prompts against GPT and Claude via the same OpenAI SDK client, log both result sets to the same project, and compare output quality, latency, and cost in Braintrust's experiment comparison view. The view makes it easier to evaluate provider differences without building a separate comparison workflow in your application code.

How do I monitor costs for OpenAI calls vs. Claude calls?

Enable logging to a Braintrust project, and Braintrust will capture token usage and per-request cost data for each model invocation. Braintrust aggregates data by model and provider in its dashboards, allowing you to see how spending changes as traffic shifts between OpenAI and Claude.