"ThunderAI" Setup Guide - Ollama (Local AI)
The Ollama integration lets you run AI models locally on your own machine.
Your emails never leave your computer: there are no API keys, no cloud services, no costs beyond your hardware.
| | Ollama |
| --- | --- |
| API key required | No |
| Free option | Yes (completely free; you use your own hardware) |
| Private / Local? | Yes (everything stays on your machine) |
| Best for | Privacy-conscious users, sensitive emails, offline use |
Prerequisites
- Ollama installed and running on your machine (or accessible on your local network)
- At least one model downloaded in Ollama
- Thunderbird with ThunderAI installed
Installing Ollama and downloading a model
If you haven't installed Ollama yet:
- Download it from ollama.com and follow the installation instructions for your OS
- Open a terminal and pull a model, for example:
ollama pull llama3
Other good options for email tasks: mistral, phi3, gemma2.
- Verify Ollama is running:
ollama list
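Before wiring anything into Thunderbird, it can help to confirm the model itself responds from the terminal. A minimal sketch, assuming llama3 is the model you pulled (substitute any model shown by ollama list):

```shell
# Smoke test: ask the model for a short reply directly from the terminal.
# Assumes llama3 is installed; substitute any model shown by `ollama list`.
if command -v ollama >/dev/null 2>&1; then
  ollama run llama3 "Reply with one short sentence confirming you are working."
else
  echo "ollama is not on PATH - install it from ollama.com first"
fi
```

If this prints a sensible reply, the model side is working and any later problem is in the Thunderbird connection, not in Ollama itself.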
Configuring CORS
Thunderbird extensions run in a sandboxed environment, so Ollama needs to be configured to accept requests from browser extensions.
You need to set the OLLAMA_ORIGINS environment variable to moz-extension://*, then restart Ollama so the change takes effect.
For more detailed CORS configurations, see the Ollama CORS Information page.
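How you set the variable depends on how Ollama was installed. The sketch below follows the methods described in Ollama's documentation; treat the exact service name and commands as assumptions to check against your own install:

```shell
# macOS (menu-bar app): set the variable, then quit and reopen Ollama
launchctl setenv OLLAMA_ORIGINS "moz-extension://*"

# Linux (systemd service): run `systemctl edit ollama.service` and add:
#   [Service]
#   Environment="OLLAMA_ORIGINS=moz-extension://*"
# then reload and restart the service:
sudo systemctl daemon-reload
sudo systemctl restart ollama

# Windows (Command Prompt): persist the variable, then restart Ollama:
#   setx OLLAMA_ORIGINS "moz-extension://*"
```

In every case the restart matters: Ollama only reads OLLAMA_ORIGINS at startup.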
Setup
1. Open ThunderAI Options
In Thunderbird, click the ThunderAI menu and select Options, or go to
Tools → Add-ons and Themes → ThunderAI → Preferences.
2. Select the Ollama integration
In the Connection section, choose Ollama from the integration dropdown.
3. Set the server URL
Enter the URL of your Ollama server. The default is:
http://localhost:11434
If Ollama is running on another machine on your network, replace localhost with that machine's IP address.
4. Choose a model
Select a model from the Model dropdown. If the list is empty or outdated, click the refresh button to reload it from your Ollama server.
5. Save and test
Click Save. Open any email, use the ThunderAI menu, and run a prompt to verify the connection.
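If the test prompt fails, you can check the server URL from outside Thunderbird using Ollama's /api/tags endpoint (the same model list the Model dropdown reads). A sketch; OLLAMA_URL stands in for whatever you entered in step 3:

```shell
# Probe the Ollama HTTP API; adjust OLLAMA_URL if the server is remote.
OLLAMA_URL="${OLLAMA_URL:-http://localhost:11434}"
if curl -s --max-time 5 "$OLLAMA_URL/api/tags" >/dev/null 2>&1; then
  echo "Ollama reachable at $OLLAMA_URL"
else
  echo "No response from $OLLAMA_URL - is Ollama running?"
fi
```

If curl reaches the server but ThunderAI still fails, the problem is most likely CORS rather than connectivity.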
Tips
- Performance depends on your hardware. A modern machine with at least 16 GB of RAM will handle models like llama3 or mistral comfortably. A GPU significantly speeds things up.
- Smaller models (7B parameters) are faster but less capable; larger models (13B, 70B) produce better results but require more resources.
- For email tasks, mistral and llama3 are popular choices offering a good balance of quality and speed.
Troubleshooting
"Failed to connect" or no response
Check that Ollama is actually running (ollama list in a terminal). Verify the server URL in ThunderAI options matches your Ollama setup.
CORS error
The OLLAMA_ORIGINS variable is either not set or Ollama was not restarted after setting it. Repeat the CORS configuration steps above and restart Ollama, or grant the "All URLs" permission as a workaround.
Empty model list
Make sure you have pulled at least one model with ollama pull <modelname> and that Ollama is running.
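This check can be scripted: list the installed models and suggest a pull if none are found. A sketch assuming ollama is on PATH; llama3 is just an example model:

```shell
# List installed models, skipping the header row that `ollama list` prints.
models=$(ollama list 2>/dev/null | tail -n +2)
if [ -z "$models" ]; then
  echo "No models installed - try: ollama pull llama3"
else
  echo "Installed models:"
  echo "$models"
fi
```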