# Privacy & AI Providers
Understanding what happens to your data when you use AI analysis is important. Weavestream gives you full control over which provider handles your data and what gets sent.
## Data Storage
All your data — items, conversations, settings, and credentials — is stored locally on your Mac. Weavestream uses an on-device database and your Mac's Keychain for credentials. Nothing is synced to a cloud service by Weavestream itself.
## What Gets Sent to the AI?
When you ask a question in the AI chat, Weavestream sends:
- Your question — The message you typed
- Your custom prompt (if selected) — The analysis instructions
- Relevant data — A subset of items and fields based on smart routing
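The three pieces above can be pictured as one request payload. This is a hypothetical sketch of the shape, not Weavestream's actual wire format; the function and key names are illustrative:

```python
def build_analysis_request(question, custom_prompt=None, relevant_items=()):
    """Assemble an analysis request from the pieces listed above.

    Hypothetical payload shape: the question, the optional custom prompt,
    and only the routed subset of items.
    """
    payload = {"question": question, "items": list(relevant_items)}
    if custom_prompt is not None:
        payload["prompt"] = custom_prompt
    return payload
```

Note that nothing else (credentials, unrelated fields) is part of the payload.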
## Smart Routing
Weavestream doesn't blindly send all your data to the AI. It uses a smart routing step first:
- The AI reviews your question alongside a sample of your data's field names and values
- It identifies which fields are relevant to answering your question
- Only the relevant fields and items are sent in the full analysis request
You can control how many sample values are shown during routing in Settings → General → Sample values per field (1 to 10, default is 3).
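The routing steps above can be sketched as follows. This is a minimal illustration, not Weavestream's implementation: `pick_relevant_fields` stands in for the AI call that identifies relevant fields, and the default of 3 samples per field mirrors the Settings default mentioned above.

```python
def sample_fields(items, samples_per_field=3):
    """Collect up to N sample values per field (the Settings default is 3)."""
    samples = {}
    for item in items:
        for field, value in item.items():
            values = samples.setdefault(field, [])
            if len(values) < samples_per_field:
                values.append(value)
    return samples

def route(question, items, pick_relevant_fields, samples_per_field=3):
    """Project items onto only the fields the model deems relevant.

    `pick_relevant_fields(question, samples)` is a stand-in for the
    routing call to the AI provider.
    """
    samples = sample_fields(items, samples_per_field)
    relevant = set(pick_relevant_fields(question, samples))
    return [{f: v for f, v in item.items() if f in relevant} for item in items]
```

For example, if the routing step decides only `price` and `date` matter for a spending question, fields like `name` never appear in the full analysis request.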
## Smart Filters with Output Fields
If you're viewing a Smart Filter that has Output Fields configured, only those selected fields are sent to the AI. This gives you precise control over what the AI sees.
## Provider Comparison
### Claude (Anthropic)
When using Claude, your question and relevant data are sent to Anthropic's API over an encrypted (HTTPS) connection. Anthropic's data handling policies apply to this data. Your API key is stored locally in your Mac's Keychain and is only used to authenticate requests.
For details on Anthropic's data policies, refer to Anthropic's privacy documentation (linked in Settings → Intelligence).
### Ollama (Local)
When using Ollama, everything stays on your Mac. The data is sent to the Ollama server running locally (typically at http://localhost:11434). No internet connection is required, and no data leaves your device.
Ollama uses a chunking strategy for large datasets — if your data exceeds the model's context window, it's processed in segments. You can adjust the Max Context Tokens in the Ollama settings to tune this behavior.
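The segmenting behavior described above can be sketched like this. It is an illustration under stated assumptions, not Weavestream's actual code: `tokens_per_item` stands in for a real tokenizer estimate, and the greedy packing strategy is hypothetical.

```python
def chunk_items(items, max_context_tokens, tokens_per_item):
    """Split items into segments whose estimated token cost fits the window.

    `tokens_per_item(item)` is a stand-in for a real token estimate;
    each segment is then processed in its own local request.
    """
    segments, current, used = [], [], 0
    for item in items:
        cost = tokens_per_item(item)
        if current and used + cost > max_context_tokens:
            segments.append(current)
            current, used = [], 0
        current.append(item)
        used += cost
    if current:
        segments.append(current)
    return segments
```

Raising Max Context Tokens produces fewer, larger segments; lowering it produces more, smaller ones, trading round trips for headroom in the model's context window.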
## Choosing the Right Provider
Use Claude when:
- You need the most accurate, detailed analysis
- Your data isn't highly sensitive
- You want the best results for complex questions
Use Ollama when:
- Data privacy is a top priority
- You're working with sensitive or regulated data
- You need to work offline
- You want to avoid API costs
You can switch between providers at any time in Settings → Intelligence without losing your conversations or data.
## Credentials Security
All authentication credentials (API keys, OAuth tokens, passwords) are stored in your Mac's Keychain — Apple's encrypted credential storage. They are never included in AI analysis requests. The AI only sees item data, not your source credentials.
## Next Steps
- Getting Started with AI — Set up your AI provider
- Intelligence Settings — Configure provider details