# Intelligence Settings
The Intelligence settings pane is where you configure your AI providers. Open it from Settings (Cmd+Comma) and select the Intelligence tab.
## Overview
The main view shows your available AI providers with their current status:
- Claude in Weavestream — Cloud-based AI via Anthropic
- Ollama in Weavestream — Local AI running on your Mac
Each provider shows:
- An Active badge (green) if it's your current provider
- A Set as Active button if it's configured but not selected
- A Turn On button if it hasn't been configured yet
A privacy link at the top provides more information about how data is handled.
## Configuring Claude
Click Claude in Weavestream to open its configuration:
### Status
Shows whether Claude is Configured (green checkmark) or Not Configured (red X).
### API Key
Enter your Anthropic API key. Keys start with `sk-ant-`. Click Save API Key to store it securely in your Mac's Keychain.
If you need an API key, click the link to visit Anthropic's website and create one.
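Under the hood, "store it securely in your Mac's Keychain" maps to the Security framework's SecItem API. Here's a minimal sketch of how such a save could work; the service and account names are invented for illustration and aren't Weavestream's actual identifiers.

```swift
import Foundation
import Security

/// Saves an Anthropic API key as a generic-password Keychain item.
/// "Weavestream.Claude" / "anthropic-api-key" are hypothetical labels.
func saveAPIKey(_ key: String) throws {
    // Cheap sanity check: Anthropic keys start with "sk-ant-".
    guard key.hasPrefix("sk-ant-") else {
        throw NSError(domain: "Weavestream", code: 1,
                      userInfo: [NSLocalizedDescriptionKey: "Not an Anthropic key"])
    }

    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "Weavestream.Claude",
        kSecAttrAccount as String: "anthropic-api-key",
    ]
    // Delete any existing item first so the save behaves like an upsert.
    SecItemDelete(query as CFDictionary)

    var attributes = query
    attributes[kSecValueData as String] = Data(key.utf8)
    let status = SecItemAdd(attributes as CFDictionary, nil)
    guard status == errSecSuccess else {
        throw NSError(domain: NSOSStatusErrorDomain, code: Int(status))
    }
}
```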
### Removing the Key
If you want to disconnect Claude, click Remove Key. This deletes the API key from your Keychain. Your conversations are preserved — you just won't be able to send new messages until you add a key again.
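Removal would be the mirror image: a single Keychain delete against the same (illustrative) identifiers, leaving everything else on disk alone.

```swift
import Foundation
import Security

/// Deletes the stored key. Conversation data is separate and untouched.
func removeAPIKey() {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "Weavestream.Claude",
        kSecAttrAccount as String: "anthropic-api-key",
    ]
    SecItemDelete(query as CFDictionary)
}
```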
## Configuring Ollama
Click Ollama in Weavestream to open its configuration:
### Status
Shows:
- Configured / Not Configured — Whether settings have been saved
- Connected / Cannot connect — Whether Weavestream can reach the Ollama server
### Server URL
The URL where Ollama is running. The default is `http://localhost:11434`, which is correct if Ollama is installed on the same Mac.
Click Test to verify the connection.
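You can also verify reachability yourself: an Ollama server answers a plain GET on its base URL with "Ollama is running". A minimal check along those lines (assuming macOS 12+ async URLSession; Weavestream's actual Test button may do more):

```swift
import Foundation

/// Returns true if something answers HTTP 200 at the Ollama base URL.
func testOllamaConnection(baseURL: String = "http://localhost:11434") async -> Bool {
    guard let url = URL(string: baseURL) else { return false }
    do {
        let (_, response) = try await URLSession.shared.data(from: url)
        return (response as? HTTPURLResponse)?.statusCode == 200
    } catch {
        return false  // server not running, wrong host/port, firewall, etc.
    }
}
```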
### Model
Select which Ollama model to use. Models that Weavestream detects appear in a dropdown; click Refresh to re-discover available models.
If no models appear, make sure you've downloaded at least one model with Ollama (e.g., run `ollama pull llama3` in Terminal).
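Discovery presumably relies on Ollama's GET /api/tags endpoint, which lists downloaded models as JSON. A sketch of that lookup, with the response type trimmed to the one field needed:

```swift
import Foundation

/// Subset of Ollama's GET /api/tags response.
struct TagsResponse: Decodable {
    struct Model: Decodable { let name: String }
    let models: [Model]
}

/// Returns locally available model names, e.g. ["llama3:latest"].
func listOllamaModels(baseURL: String = "http://localhost:11434") async throws -> [String] {
    guard let url = URL(string: "\(baseURL)/api/tags") else { return [] }
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode(TagsResponse.self, from: data).models.map(\.name)
}
```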
### Max Context Tokens
Controls the maximum context window size for the local model. Adjustable from 4,096 to 32,768 tokens in steps of 1,024.
- Lower values — Faster processing, less data sent per request
- Higher values — More data context, but slower and requires more memory
The default is suitable for most models. Increase it if you have a powerful Mac and want the AI to see more data at once.
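In Ollama's API, the context window is the per-request num_ctx option. Assuming Weavestream sends chat-style requests to POST /api/chat, the slider value would travel in a body shaped roughly like this (the model name and message are placeholders):

```swift
import Foundation

/// Request body for Ollama's POST /api/chat; num_ctx caps the context window.
struct ChatRequest: Encodable {
    struct Message: Encodable { let role: String, content: String }
    struct Options: Encodable { let num_ctx: Int }  // named to match the JSON key
    let model: String
    let messages: [Message]
    let options: Options
    let stream: Bool
}

let body = ChatRequest(
    model: "llama3",
    messages: [.init(role: "user", content: "Summarize my notes.")],
    options: .init(num_ctx: 8_192),  // Weavestream's slider: 4,096 to 32,768
    stream: false
)
let json = try! JSONEncoder().encode(body)  // payload for POST /api/chat
```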
### Experimental Notice
The settings pane notes that local models may return less consistent results than Claude. Quality depends heavily on which model you choose and its size.
### Setup Instructions
If you haven't installed Ollama yet, the settings page includes a link to ollama.ai with setup instructions.
Click Save Configuration when you're done.
## Switching Providers
To switch your active AI provider:
1. Make sure the provider you want to use is configured.
2. Click Set as Active next to that provider.
3. The green Active badge moves to your selection.
You can switch at any time. Your conversations are preserved regardless of which provider is active.
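That behavior suggests the active provider is just a persisted preference, separate from conversation storage. A hypothetical model of the "Set as Active" switch, with every name invented for illustration:

```swift
import Foundation

/// The two providers from the settings pane.
enum AIProvider: String {
    case claude, ollama
}

/// Switching providers only rewrites this preference; chats live elsewhere.
struct ProviderStore {
    private let key = "activeProvider"  // hypothetical UserDefaults key

    var active: AIProvider {
        get {
            let raw = UserDefaults.standard.string(forKey: key) ?? ""
            return AIProvider(rawValue: raw) ?? .claude  // fall back to Claude
        }
        set { UserDefaults.standard.set(newValue.rawValue, forKey: key) }
    }
}

var store = ProviderStore()
store.active = .ollama  // what clicking Set as Active would persist
```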