Mic's


"ThunderAI" Setup Guide - OpenAI Compatible API

The OpenAI Compatible API integration lets you connect ThunderAI to any server that follows the OpenAI API format.
This covers local servers like LM Studio, hosted services like DeepSeek, Grok, Mistral AI, OpenRouter, and Perplexity, as well as any custom or self-hosted endpoint.
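All of these services accept the same request shape. As a rough sketch (the model name and message contents below are placeholders, not values ThunderAI actually sends), a chat request looks like this:

```python
import json

# The chat-completions payload shared by OpenAI-compatible servers.
# The model name and messages here are placeholders for illustration.
payload = {
    "model": "your-model-name",
    "messages": [
        {"role": "system", "content": "You are a helpful email assistant."},
        {"role": "user", "content": "Summarize this email."},
    ],
}

# The client POSTs this as JSON to <API URL>/chat/completions.
body = json.dumps(payload)
```

Any server that understands this format, local or hosted, can be used with this integration.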

API key required: Depends on the service
Free option: Depends on the service (local servers are free)
Best for: Flexibility (local servers, alternative providers, custom endpoints)

Predefined configurations

ThunderAI includes ready-made presets for the most popular OpenAI-compatible services. If you are using one of these, select the preset; the only thing you need to add is your API key:

  • DeepSeek API: fast and cost-effective, with strong reasoning models
  • Grok API: xAI's Grok models
  • Mistral API: Mistral AI's hosted models
  • OpenRouter API: access to many providers through a single API key
  • Perplexity API: includes web-search-augmented models

Setup

1. Open ThunderAI Options

In Thunderbird, click the ThunderAI menu and select Options, or go to
Tools → Add-ons and Themes → ThunderAI → Preferences.

2. Select the OpenAI Compatible API integration

In the Connection section, choose OpenAI Compatible API from the integration dropdown.

3a. Using a preset

If your provider has a preset, select it from the preset list. The API URL will be filled in automatically. Enter your API key and skip to step 5.

3b. Manual configuration

If your provider is not in the preset list, configure it manually:

  • API URL: Enter the base URL of your server. For LM Studio running locally, this is typically http://localhost:1234/v1. For hosted providers, check their documentation.
  • Remove "v1" from URL: Enable this if your provider's documentation instructs you to use the base URL without the /v1 segment.
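One way to picture how these two settings combine is the following sketch (the function name is illustrative; the endpoint path follows the OpenAI API format):

```python
def chat_endpoint(base_url: str, remove_v1: bool = False) -> str:
    """Build the chat-completions endpoint from the configured API URL."""
    url = base_url.rstrip("/")
    if remove_v1 and url.endswith("/v1"):
        url = url[: -len("/v1")]  # drop the trailing /v1 segment
    return url + "/chat/completions"

# LM Studio default: requests go to http://localhost:1234/v1/chat/completions
print(chat_endpoint("http://localhost:1234/v1"))
```

So with the "Remove v1" option enabled, a URL like https://example.com/v1 is called as https://example.com/chat/completions instead.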

4. Enter your API key (if required)

Local servers like LM Studio usually do not require an API key, so you can leave this field empty or enter any placeholder value. Hosted services will require a valid key.

5. Choose a model

Select your model from the Model dropdown. Use the refresh button next to it to reload the list from the server.

If the server does not expose a models list endpoint, enable the Set model name manually option and type the model name as specified by your provider.
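For context, OpenAI-compatible servers typically expose the model list at GET <API URL>/models. A sketch of the usual response shape and how the dropdown entries come out of it (the model ids below are just sample values):

```python
import json

# Typical response shape of the OpenAI-compatible GET /v1/models endpoint.
sample = json.loads("""
{"object": "list",
 "data": [{"id": "deepseek-chat", "object": "model"},
          {"id": "deepseek-reasoner", "object": "model"}]}
""")

# A client populates its model dropdown from the "id" fields.
model_ids = [m["id"] for m in sample["data"]]
print(model_ids)
```

If your server returns nothing at this endpoint, that is exactly the case the manual option covers.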

6. Save and test

Click Save. Open any email, use the ThunderAI menu, and run a prompt to verify everything works.
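If you want to double-check the server outside Thunderbird, here is a minimal sketch of the request a client would assemble (the helper name and model are illustrative; actually sending it requires the server to be running):

```python
import json

def build_test_request(base_url, api_key, model):
    """Assemble a minimal chat-completions request for a connectivity test."""
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {"Content-Type": "application/json"}
    if api_key:  # local servers often need no key
        headers["Authorization"] = f"Bearer {api_key}"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "Say OK."}],
    })
    return url, headers, body

url, headers, body = build_test_request("http://localhost:1234/v1", "", "your-model")
```

A 200 response with a JSON body means the endpoint, key, and model name are all correct.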

LM Studio — quick setup

LM Studio provides a local server with an OpenAI-compatible API and a graphical interface to download and manage models.

  1. Download and install LM Studio from lmstudio.ai
  2. In LM Studio, go to the Local Server tab and start the server
  3. Download a model of your choice from LM Studio's model browser
  4. In ThunderAI, set the API URL to http://localhost:1234/v1, leave the API key empty, and select your model

Tips

  • OpenRouter is a great option if you want to access many different models (including Claude, Gemini, and Mistral) through a single API key and billing account.
  • When using local servers, performance depends on your hardware — see the Ollama guide for hardware tips.
  • If requests fail with a CORS error and you can't add the proper configuration on the server, use the "All URLs" optional permission in ThunderAI settings as a workaround.

Troubleshooting

"Connection refused" on a local server

Make sure the local server (e.g. LM Studio) is actually running and the server URL in ThunderAI matches the port it is listening on.

"401 Unauthorized" error

Check your API key. For local servers that do not require a key, try entering a dummy value like none. Some servers reject empty key fields.
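The reason a dummy value helps: some servers validate that an Authorization header is present and well-formed even when they ignore its value. A placeholder keeps the header intact:

```python
# A placeholder key like "none" still produces a well-formed header,
# which satisfies servers that reject requests with no Authorization at all.
api_key = "none"
headers = {"Authorization": f"Bearer {api_key}"}
print(headers["Authorization"])
```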

Empty model list

Enable the Set model name manually option and type the model name as specified by your provider or server documentation.

The "v1" in the URL causes errors

Enable the Remove "v1" from URL option in ThunderAI settings.

