# Privacy & AI Providers
Understanding what happens to your data when you use AI analysis is important. Weavestream gives you full control over which provider handles your data and what gets sent.
## Data Storage
All your data — items, conversations, settings, and credentials — is stored locally on your Mac. Weavestream uses an on-device database and your Mac's Keychain for credentials. Nothing is synced to a cloud service by Weavestream itself.
## What Gets Sent to the AI?
Weavestream's AI uses an agent-based system. Rather than sending all your data at once, the AI agent requests only what it needs by calling tools. Here's what gets sent over the course of a conversation:
- Your question — The message you typed
- Your custom prompt (if selected) — The analysis instructions
- A system prompt — Instructions and a summary of your available sources and endpoints (field names and item counts — not the data itself)
- Tool results — As the agent searches, filters, or inspects items, the results of those tool calls are added to the conversation context
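The pieces above can be sketched as a single request context. This is an illustration only — the field names (`system`, `messages`, the summary format) are generic assumptions, not Weavestream's actual wire format:

```python
# Sketch of the conversation context sent to a cloud provider.
# All names here are illustrative, not Weavestream's actual schema.

def build_request(question, custom_prompt, source_summary, tool_results):
    """Assemble the context for one turn of the agent loop."""
    system = (
        "You are a data-analysis agent.\n"
        f"Available sources (names and counts only): {source_summary}\n"
        + (f"Analysis instructions: {custom_prompt}\n" if custom_prompt else "")
    )
    messages = [{"role": "user", "content": question}]
    # Tool results accumulate as the agent works; raw item data only
    # enters the context through these results.
    for result in tool_results:
        messages.append({"role": "tool", "content": result})
    return {"system": system, "messages": messages}

req = build_request(
    question="Which items are critical?",
    custom_prompt=None,
    source_summary="status_feed (120 items: id, title, status)",
    tool_results=['[{"id": 7, "status": "critical"}]'],
)
```

Note that the system prompt carries only field names and counts; actual item data appears in the context only as tool results.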
## How the Agent Discovers Data
The AI doesn't receive your data upfront. Instead, it uses tools to pull in only what's relevant:
- `search_items` — Searches item titles, summaries, and raw data by keyword
- `filter_items` — Filters items by field conditions (e.g., status equals "critical")
- `get_item_details` — Fetches the full details of a single item
- `count_items` — Counts items, optionally grouped by endpoint or status
- `list_sources` — Discovers what sources, endpoints, and fields are available
- `get_field_values` — Gets the distinct values for a specific field
Each tool call returns only the data the agent asked for. Individual tool results are limited to 16,000 characters by default to keep context manageable.
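The truncation behavior can be sketched as follows. The 16,000-character default comes from the text above; the helper name is an assumption, not Weavestream's API:

```python
# Illustrative truncation of an oversized tool result so a single
# call can't flood the conversation context.
TOOL_RESULT_LIMIT = 16_000

def truncate_tool_result(text, limit=TOOL_RESULT_LIMIT):
    """Return text unchanged if it fits, else cut it to the limit."""
    if len(text) <= limit:
        return text
    marker = "\n[truncated]"
    # Reserve room for the marker so the total stays at the limit.
    return text[: limit - len(marker)] + marker

print(len(truncate_tool_result("x" * 50_000)))  # -> 16000
```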
## Scoping Limits What the Agent Can See
When you select an endpoint, source, or Smart Filter in the sidebar, the agent's tool calls are scoped to that selection. It can only search and filter within the data you've chosen. This is both a privacy control and a way to focus the analysis.
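A minimal sketch of what scoping means for a tool call — the item fields and function signature here are illustrative, not Weavestream's internals:

```python
# Sketch of scoping: the agent's search tool only ever sees items
# inside the user's sidebar selection.
items = [
    {"endpoint": "alerts", "title": "Disk full", "status": "critical"},
    {"endpoint": "alerts", "title": "CPU high", "status": "warning"},
    {"endpoint": "billing", "title": "Invoice overdue", "status": "critical"},
]

def search_items(keyword, scope=None):
    """scope: the endpoint selected in the sidebar, or None for all data."""
    visible = [i for i in items if scope is None or i["endpoint"] == scope]
    return [i for i in visible if keyword.lower() in i["title"].lower()]

# With "alerts" selected, the billing item is simply invisible to the agent.
assert search_items("overdue", scope="alerts") == []
```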
## Provider Comparison
### Claude (Anthropic)
When using Claude, your questions and the data retrieved by tool calls are sent to Anthropic's API over an encrypted (HTTPS) connection. Claude supports prompt caching — on follow-up tool calls within the same question, the system prompt and tool definitions are cached at reduced cost.
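Prompt caching can be sketched as a request body shape. This follows Anthropic's documented `cache_control` field on the Messages API, but the model name and prompt text are illustrative, not Weavestream's actual client code:

```python
# Minimal sketch of a Messages API body that marks the system prompt
# for caching (assumes Anthropic's documented cache_control field).
request_body = {
    "model": "claude-sonnet-4-5",  # illustrative model name
    "max_tokens": 1024,
    "system": [
        {
            "type": "text",
            "text": "You are a data-analysis agent. Sources: ...",
            # Cached across the follow-up tool calls of one question,
            # so it is re-sent at reduced cost rather than re-billed in full.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    "messages": [{"role": "user", "content": "Which items are critical?"}],
}
```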
Your API key is stored locally in your Mac's Keychain and is only used to authenticate requests.
For details on Anthropic's data policies, refer to Anthropic's privacy documentation (linked in Settings → Intelligence).
### Gemini (Google)
When using Gemini, data is sent to Google's Generative AI API over an encrypted connection. Newer Gemini models (2.5+, 3+) support extended thinking, which may process data through additional reasoning steps on Google's servers.
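The extended-thinking option can be sketched as a request body. This assumes the `thinkingConfig` field documented for Gemini 2.5+ `generateContent` requests; the budget value and prompt are illustrative:

```python
# Sketch of a Gemini request enabling extended thinking. The reasoning
# steps the budget pays for run on Google's servers, like the rest of
# the request.
request_body = {
    "contents": [
        {"role": "user", "parts": [{"text": "Which items are critical?"}]}
    ],
    "generationConfig": {
        "thinkingConfig": {
            # Token budget for the model's internal reasoning steps.
            "thinkingBudget": 1024,
        }
    },
}
```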
Your API key is stored locally in your Mac's Keychain.
For details on Google's data policies, refer to Google's AI privacy documentation.
### Ollama (Local)
When using Ollama, everything stays on your Mac. Data is sent to the Ollama server running locally (typically at http://localhost:11434). No internet connection is required, and no data leaves your device.
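A local request can be sketched with nothing but the standard library. The endpoint and body shape follow Ollama's documented `/api/chat` route; the model name is illustrative:

```python
import json
import urllib.request

# Sketch of a chat request to a local Ollama server; the request
# never leaves the machine.
body = {
    "model": "llama3.1",  # illustrative local model
    "messages": [{"role": "user", "content": "Which items are critical?"}],
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/chat",  # Ollama's default local address
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would return the local model's reply;
# left out so the sketch runs without a server.
```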
Ollama is marked as Experimental in Weavestream. Local models have smaller context windows, so the agent may not be able to process as much data per question as cloud providers.
## Choosing the Right Provider
Use Claude when:
- You need the most accurate, detailed analysis
- Your data isn't highly sensitive
- You want the best results for complex questions
Use Gemini when:
- You want strong analysis with flexible model options
- You're looking for a good balance of cost and capability
- You want thinking/reasoning visibility on supported models
Use Ollama when:
- Data privacy is a top priority
- You're working with sensitive or regulated data
- You need to work offline
- You want to avoid API costs
You can switch between providers at any time in Settings → Intelligence without losing your conversations or data.
## Credentials Security
All authentication credentials (API keys, OAuth tokens, passwords) are stored in your Mac's Keychain — Apple's encrypted credential storage. They are never included in AI tool calls. The AI only sees item data retrieved through its tools, not your source credentials.
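The separation described above can be sketched as follows. Everything here is illustrative — a dictionary standing in for the Keychain and invented function names — but it shows the boundary: credentials authenticate the fetch, and only item fields are serialized into tool results:

```python
# Sketch: credentials live in a separate store and never enter the
# AI's context; tool results are built from item data only.
keychain = {"source_api_key": "sk-secret"}  # stand-in for the Keychain
item = {"id": 7, "title": "Disk full", "status": "critical"}

def fetch_items(source):
    # The credential is used here to authenticate against the source...
    _token = keychain["source_api_key"]
    return [item]

def tool_result(items):
    # ...but only item fields are serialized into the tool result.
    return [
        {"id": i["id"], "title": i["title"], "status": i["status"]}
        for i in items
    ]

result = tool_result(fetch_items("status_feed"))
assert "sk-secret" not in str(result)
```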
## Next Steps
- Getting Started with AI — Set up your AI provider
- How AI Chat Works — Understand the agent system and optimization tips
- Intelligence Settings — Configure provider details