Unified.to

How to Get a Gemini API Key — and Connect It to Your Product


February 20, 2026

Gemini is Google's large language model family available via Google AI Studio and Google Cloud (Vertex AI).

If you want to use Gemini inside:

  • a SaaS product
  • an internal AI tool
  • an AI agent with tool-calling
  • or a multi-model GenAI system

…you'll need a Gemini API key.

This guide walks through:

  1. Creating a Gemini API key in Google AI Studio
  2. Configuring environment variables securely
  3. Testing your first request
  4. Connecting Gemini to Unified's Generative AI API
  5. Enabling Gemini tool-calling via [Unified MCP](/mcp)

Step 1: Access Google AI Studio

Go to:

https://aistudio.google.com

Sign in with your Google account.

On first access, you'll need to:

  • Accept the Generative AI terms
  • Confirm region availability
  • Review data usage policies

For development, AI Studio is the fastest path.

For enterprise production, you may later migrate to Google Cloud Vertex AI.

Step 2: Navigate to 'Get API Key'

In the left sidebar, click:

Get API key

This opens the API key management page.

If you're new:

  • Google may auto-create a default Google Cloud project
  • Or you can import or create a project manually

Each Gemini API key is tied to a Google Cloud project.

Step 3: Create Your API Key

Click:

Create API key

You'll be prompted to:

  • Create in a new project
  • Or select an existing project

Once created, the key will look like:

AIza...

Copy it immediately and store it securely.

Step 4: Store the Key Securely

Best practice: set it as an environment variable.

macOS / Linux (bash)

export GEMINI_API_KEY=<YOUR_KEY>

To persist it across sessions, add that export line to your ~/.bashrc (or ~/.zshrc), then reload:

source ~/.bashrc

Gemini SDKs automatically detect:

  • GEMINI_API_KEY
  • or GOOGLE_API_KEY (takes precedence if both are set)

Never:

  • Commit keys to Git
  • Embed them in frontend code
  • Share them in screenshots

For production, use a secret manager.

Step 5: Test Your Gemini API Key

Example using REST:

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{
    "contents": [{
      "parts": [{
        "text": "Explain how AI works in a few words"
      }]
    }]
  }'

If successful, you'll receive a generated response from Gemini.
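The same request can be built from application code using only the Python standard library. This sketch constructs the request the curl command sends; it passes the key via the `x-goog-api-key` header instead of the query string, which keeps it out of URLs and access logs:

```python
import json
import os
import urllib.request

API_URL = ("https://generativelanguage.googleapis.com/v1beta/"
           "models/gemini-2.0-flash:generateContent")

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build the same generateContent request the curl example sends."""
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "x-goog-api-key": api_key,  # header form keeps the key out of URLs
        },
        method="POST",
    )

req = build_request("Explain how AI works in a few words",
                    os.environ.get("GEMINI_API_KEY", "test-key"))
# urllib.request.urlopen(req) would send it; the generated text sits at
# candidates[0].content.parts[0].text in the response JSON.
```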

Understanding Gemini Pricing and Limits

Gemini offers a free tier with rate limits. Limits vary by:

  • Model (e.g., Flash vs Pro)
  • Requests per minute
  • Tokens per minute
  • Requests per day

For production systems:

  • Monitor usage in Google AI Studio
  • Set quota alerts
  • Implement retry/backoff logic
  • Avoid sending excessive prompt context
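The retry/backoff point can be sketched as exponential backoff with jitter; `RateLimitError` and `flaky_generate` below are stand-ins for a real API client and a 429 response, not actual SDK types:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 from the Gemini API."""

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry fn() with exponential backoff plus jitter on rate-limit errors."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Simulated flaky call: fails twice with a 429, then succeeds.
attempts = {"n": 0}
def flaky_generate():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError("429 Too Many Requests")
    return "ok"

print(call_with_backoff(flaky_generate, base_delay=0.01))  # ok
```

The jitter term spreads retries out so many clients hitting the same limit don't all retry in lockstep.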

Using Gemini in a Multi-Model AI Architecture

Most AI-native SaaS teams do not rely on a single provider long-term.

Common reasons:

  • Cost optimization
  • Fallback if one provider degrades
  • Model specialization
  • Enterprise procurement requirements

Instead of building separate integrations for:

  • Gemini
  • OpenAI
  • Anthropic
  • Groq
  • Cohere

…you can integrate once against Unified's Generative AI API.

Build Once Across Gemini and Other LLM Providers

Unified's Generative AI API standardizes:

  • Models
  • Prompts
  • Embeddings

across supported LLM providers, including Gemini.

Core objects

Model

  • id
  • name
  • max_tokens
  • temperature support

Prompt

  • model_id
  • messages
  • temperature
  • max_tokens
  • responses
  • tokens_used

Embedding

  • model_id
  • content
  • dimension
  • embeddings
  • tokens_used

This allows you to:

  • Switch between Gemini and other providers without rewriting integration code
  • Run the same prompt across providers and compare outputs
  • Route requests dynamically based on cost or availability
  • Generate embeddings consistently across providers
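One way to sketch availability-based routing: try providers in priority order and fall back when one fails. The provider callables here are placeholders for your Unified API calls, not an actual client library:

```python
from typing import Callable

def route_prompt(prompt: str,
                 providers: list[tuple[str, Callable[[str], str]]]) -> tuple[str, str]:
    """Try providers in priority order; return (provider_name, response)."""
    last_error: Exception | None = None
    for name, generate in providers:
        try:
            return name, generate(prompt)
        except Exception as exc:
            last_error = exc  # remember the failure, try the next provider
    raise RuntimeError(f"all providers failed: {last_error}")

# Hypothetical providers: gemini is down, the fallback answers.
def gemini_down(prompt: str) -> str:
    raise ConnectionError("gemini unavailable")

def fallback_ok(prompt: str) -> str:
    return f"echo: {prompt}"

provider_used, answer = route_prompt(
    "hello", [("gemini", gemini_down), ("openai", fallback_ok)])
print(provider_used)  # openai
```

Because Unified normalizes the request and response shapes, the routing layer only needs to rank providers, not translate between their payload formats.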

Your product logic stays stable.

Provider differences are abstracted at the integration layer.

Let Gemini Take Action via Unified MCP

Text generation is one layer.

Production AI features require structured, authorized reads and writes against customer SaaS platforms.

Examples:

  • List CRM deals
  • Retrieve ATS candidates
  • Fetch a file from storage
  • Update a ticket
  • Write back a note

Unified's MCP server connects Gemini (and other LLMs) to customer integrations through tool-calling.

Gemini Tool-Calling with Unified MCP

Gemini uses function_declarations for tool calling.

High-level flow:

  1. Fetch tools in Gemini format:

     GET /tools?type=gemini

  2. Provide the tools in your Gemini API call.
  3. Gemini returns a function call request:

     {
       "function_call": {
         "name": "list_candidates",
         "args": {
           "limit": "100"
         }
       }
     }

  4. Call Unified to execute it:

     POST /tools/{id}/call

  5. Return the tool result back to Gemini.
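The hand-off between steps 3 and 4 can be sketched as follows; `extract_function_call` is an illustrative helper, not part of the Gemini SDK or Unified's client:

```python
import json

def extract_function_call(gemini_response: dict) -> tuple[str, dict]:
    """Pull the tool name and args out of a Gemini-style function_call payload."""
    call = gemini_response["function_call"]
    return call["name"], call.get("args", {})

# The response shape from the flow above:
response = json.loads("""
{
  "function_call": {
    "name": "list_candidates",
    "args": {"limit": "100"}
  }
}
""")

name, args = extract_function_call(response)
# Next (not executed here): map the tool name to its Unified tool id,
# POST /tools/{id}/call with these args, and return the result to Gemini
# as a function response part.
print(name, args)  # list_candidates {'limit': '100'}
```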

This keeps responsibilities clean:

  • Gemini decides which tool to call
  • Unified executes the authorized call against the source API
  • Your app controls UX, approvals, and logging

Important MCP Controls for Production

MCP includes controls you should use intentionally:

  • permissions → restrict what tools can do
  • tools → limit available tools to reduce model overload
  • hide_sensitive=true → remove PII fields from responses
  • Regional endpoints (US/EU/AU)

LLMs have tool limits. Scoping is not optional — it's required for stable deployments.
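Scoping can be sketched as query parameters on the tool-listing request. The `type`, `tools`, and `hide_sensitive` parameter names come from the controls above; the base URL and the comma-separated `tools` value are assumptions for illustration, so check Unified's API reference for the exact shape:

```python
from urllib.parse import urlencode

BASE = "https://api.unified.to/tools"  # hypothetical endpoint for illustration

def scoped_tools_url(tool_ids: list[str], hide_sensitive: bool = True) -> str:
    """Build a tool-listing URL restricted to an explicit allowlist."""
    params = {
        "type": "gemini",
        "tools": ",".join(tool_ids),  # assumed comma-separated allowlist
    }
    if hide_sensitive:
        params["hide_sensitive"] = "true"  # strip PII fields from responses
    return f"{BASE}?{urlencode(params)}"

url = scoped_tools_url(["list_candidates", "update_ticket"])
print(url)
```

Requesting only the tools a given feature needs keeps the declarations passed to Gemini small, which matters once you approach the model's tool limit.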

Security Best Practices for Gemini + MCP

When deploying Gemini in production:

  • Keep API keys server-side
  • Rotate keys periodically
  • Use restricted API keys
  • Separate dev/staging/prod keys
  • Monitor request patterns
  • Log tool calls and responses

If the model can write to external platforms, treat that as privileged access.

Why This Matters for AI-Native SaaS Teams

Calling Gemini directly is simple.

Building:

  • Multi-model routing
  • Embedding pipelines
  • Agent-based actions
  • SaaS write-backs
  • Enterprise-grade integration security

…is not.

Unified was built for AI-native SaaS teams that need:

  • Real-time data access
  • Pass-through architecture
  • Zero storage of customer data
  • Usage-based pricing aligned with API volume
  • MCP-compatible integration infrastructure

Gemini generates reasoning.

Unified connects that reasoning to structured SaaS data and authorized actions.

That's the difference between experimenting with an LLM and shipping AI features inside a real product.

→ Start your 30-day free trial

→ Book a demo
