
How to Get a ChatGPT API Key (OpenAI) — and Connect It to Your Product


February 20, 2026

If you're building with ChatGPT (OpenAI's models like GPT-4.1), you'll need an OpenAI API key.

This guide walks through:

  1. Creating your OpenAI API key
  2. Setting up billing and usage limits
  3. Testing your first API call
  4. Using OpenAI through Unified's Generative AI API
  5. Connecting OpenAI to customer SaaS tools via [Unified MCP](/mcp)

Step 1: Create an OpenAI Account

Go to:

https://platform.openai.com

Sign up or log in.

API keys are issued under an organization; OpenAI creates a default organization for you when you sign up.

Step 2: Generate an API Key

In the left sidebar:

API Keys → Create new secret key

Name it clearly (e.g., prod-backend, dev-testing).

Your key will be shown once. Copy and store it securely.

Best practice: use environment variables.

macOS / Linux

export OPENAI_API_KEY="<your_key>"

Windows (PowerShell)

setx OPENAI_API_KEY "<your_key>"

Note: setx persists the variable for future sessions only — open a new terminal for it to take effect.
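With the variable set, application code can read the key at startup instead of hardcoding it. A minimal sketch (the load_api_key helper name is ours, not part of any SDK):

```python
import os

def load_api_key() -> str:
    """Fetch the OpenAI API key from the environment, failing fast if unset."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set; export it before starting the app")
    return key
```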

Never:

  • Commit API keys to Git
  • Embed them in frontend code
  • Share them in logs or screenshots

Step 3: Set Up Billing

OpenAI's API uses usage-based pricing.

In the dashboard:

Settings → Billing

Add a payment method and configure:

  • Initial credits
  • Auto-recharge (optional)
  • Usage limits

Without billing configured, your key may be inactive.

Step 4: Set Usage Limits

To avoid unexpected charges:

Settings → Limits

Configure:

  • Soft limit (email warning)
  • Hard limit (automatic cut-off)

This is critical for early-stage AI features where usage can spike unexpectedly.
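The dashboard limits are the real enforcement point, but a lightweight in-process guard can stop a runaway loop before spend ever reaches them. A sketch — the class name and the per-token price are illustrative assumptions, not OpenAI rates:

```python
class SpendGuard:
    """Track approximate spend and stop before a hard budget is hit."""

    def __init__(self, soft_usd: float, hard_usd: float, usd_per_1k_tokens: float):
        self.soft_usd = soft_usd
        self.hard_usd = hard_usd
        self.rate = usd_per_1k_tokens  # placeholder rate; check OpenAI's pricing page
        self.spent = 0.0

    def record(self, tokens_used: int) -> None:
        """Call after each API response with the reported token count."""
        self.spent += tokens_used / 1000 * self.rate
        if self.spent >= self.hard_usd:
            raise RuntimeError(f"hard budget reached: ${self.spent:.2f}")
        if self.spent >= self.soft_usd:
            print(f"warning: soft budget passed: ${self.spent:.2f}")
```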

Step 5: Test Your API Key

OpenAI's current standard endpoint is the Responses API.

Example:

curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4.1-mini",
    "input": "Write a two sentence summary of what an API key is."
  }'

If successful, you'll receive a JSON response containing the model output.
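The text sits a few levels deep in that JSON. A sketch of pulling it out — the shape shown (output → message → content → output_text) mirrors the documented Responses API format, but verify it against the current API reference:

```python
def extract_output_text(resp: dict) -> str:
    """Concatenate output_text parts from a Responses API JSON body."""
    parts = []
    for item in resp.get("output", []):
        if item.get("type") == "message":
            for part in item.get("content", []):
                if part.get("type") == "output_text":
                    parts.append(part.get("text", ""))
    return "".join(parts)
```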

Using OpenAI in a Multi-Model AI Architecture

Calling OpenAI directly works if you only plan to use one provider.

Most AI-native SaaS teams eventually need:

  • Fallback between providers
  • Cost routing
  • Embedding portability
  • Enterprise flexibility

Instead of maintaining separate integrations for:

  • OpenAI
  • Gemini
  • Anthropic
  • Groq
  • Cohere

…you can integrate once using Unified's Generative AI API.

Build Once Across LLM Providers

Unified's Generative AI API standardizes:

  • Models
  • Prompts
  • Embeddings

Across supported providers — including OpenAI.

Core standardized objects

Model

  • id
  • max_tokens
  • temperature support

Prompt

  • model_id
  • messages
  • temperature
  • max_tokens
  • responses
  • tokens_used

Embedding

  • model_id
  • content
  • dimension
  • embeddings
  • tokens_used
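These field lists can be mirrored directly in application code. A sketch using Python dataclasses — the field names come from the objects above, but the types and defaults are our guesses, not Unified's schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Model:
    id: str
    max_tokens: int
    supports_temperature: bool = True

@dataclass
class Prompt:
    model_id: str
    messages: list
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None
    responses: list = field(default_factory=list)
    tokens_used: int = 0

@dataclass
class Embedding:
    model_id: str
    content: list
    dimension: int = 0
    embeddings: list = field(default_factory=list)
    tokens_used: int = 0
```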

This allows you to:

  • Route requests between OpenAI and other providers
  • Run the same prompt across models and compare output
  • Generate embeddings consistently
  • Keep your product logic stable

You write your GenAI integration once.
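The fallback pattern behind "route requests between providers" can be as small as an ordered list of callables behind one interface. A sketch — the provider names and the callable signature are illustrative:

```python
def generate_with_fallback(prompt: str, providers: list) -> tuple:
    """Try each (name, call) pair in order; return the first success."""
    last_err = None
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as err:
            last_err = err  # provider down or over quota; try the next one
    raise RuntimeError(f"all providers failed: {last_err}")
```

Behind a unified API, each call is the same client pointed at a different connection, which is what keeps the fallback cheap to maintain.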

Let GPT Take Action with Unified MCP

Text generation is only half the system.

Production AI features require structured, authorized reads and writes against customer SaaS platforms:

  • List CRM deals
  • Retrieve ATS candidates
  • Fetch files
  • Update records
  • Write notes

Unified's MCP server connects OpenAI models to customer integrations in a controlled, authorized way.

OpenAI + Unified MCP (Remote MCP Example)

OpenAI supports remote MCP servers through the Responses API.

Example:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.responses.create(
    model="gpt-4.1",
    tools=[{
        "type": "mcp",
        "server_label": "unifiedMCP",
        "server_url": "https://mcp-api.unified.to/mcp?token=TOKEN&connection=CONNECTION",
        "require_approval": "never"
    }],
    input="List candidates and summarize their resumes."
)

If you orchestrate tool execution yourself (instead of letting the Responses API call the remote MCP server directly, as in the example above), the loop when the model requests a tool is:

  1. Call Unified's /tools/{id}/call endpoint with the model's arguments
  2. Return the result to OpenAI
  3. Continue the response until the model produces its final answer
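The loop above can be sketched as a small driver. Everything here is illustrative: model_step stands in for reading the model's next output, and call_unified_tool for a POST to Unified's tool-call endpoint.

```python
def drive_tool_loop(model_step, call_unified_tool, max_steps: int = 10):
    """Run the request/execute/continue cycle until the model returns a final answer.

    model_step(tool_result) -> dict with either a "tool_call" or a "final" key.
    """
    tool_result = None
    for _ in range(max_steps):
        turn = model_step(tool_result)                       # continue the response
        if "final" in turn:
            return turn["final"]
        tool_result = call_unified_tool(turn["tool_call"])   # /tools/{id}/call
    raise RuntimeError("tool loop did not converge")
```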

This architecture separates responsibilities:

  • OpenAI → reasoning
  • Unified → authorized API execution
  • Your app → product logic and UX

Production Controls You Should Use

When deploying OpenAI with MCP:

  • Scope the tool set narrowly so the model isn't overwhelmed with options
  • Restrict permissions per connection
  • Use regional MCP endpoints (US/EU/AU)
  • Monitor usage and API volume
  • Keep tokens server-side only

Unified's architecture is:

  • Real-time (data fetched directly from source APIs)
  • Pass-through
  • Zero storage of customer payloads
  • Usage-based pricing aligned to API volume

Why This Matters for AI-Native SaaS Teams

Calling GPT-4.1 is simple.

Shipping:

  • AI copilots
  • Agent-based write actions
  • Embedding pipelines
  • Enterprise-grade SaaS integrations

…requires infrastructure.

Unified was built for:

  • Real-time integration access
  • Pass-through architecture
  • Zero-storage design
  • MCP-compatible AI agents
  • Usage-based scaling

OpenAI generates intelligence.

Unified connects that intelligence to structured SaaS data and authorized actions.

That's how AI features move from demo to production.

→ Start your 30-day free trial

→ Book a demo
