# Getting Started with AI

Weavestream includes a built-in AI chat that can analyze your data, answer questions, identify patterns, and generate summaries. The AI works as an intelligent agent — it can search, filter, and explore your data on its own to find the answers you need. You choose which AI provider to use based on your needs — cloud-based for maximum capability or local for privacy.

# Choosing an AI Provider

Weavestream supports three AI providers. You only need to set up one to get started.

# Claude (Anthropic)

The default and most capable option. Claude runs in the cloud via Anthropic's API.

Pros:

  • Most accurate and capable analysis
  • Handles large, complex datasets well
  • Excellent at following nuanced instructions

Setup:

  1. Open Settings (Cmd+Comma) and go to Intelligence
  2. Click Claude in the list of providers
  3. Enter your Anthropic API key (starts with sk-ant-)
  4. Click Save API Key

Don't have an API key? There's a link in the settings to get one from Anthropic's website.
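
If you want to sanity-check a key before pasting it in, the sk-ant- prefix mentioned in step 3 can be verified in a terminal. This is a minimal sketch; the key below is a placeholder, not a real credential:

```shell
# Check that an Anthropic API key has the expected "sk-ant-" prefix.
# Substitute your own key for the placeholder value.
API_KEY="sk-ant-placeholder"

case "$API_KEY" in
  sk-ant-*) KEY_FORMAT="ok" ;;
  *)        KEY_FORMAT="unexpected" ;;
esac
echo "Key format: $KEY_FORMAT"
```

If the format is reported as unexpected, double-check that you copied the full key from Anthropic's console.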

# Gemini (Google)

Google's Gemini models offer strong reasoning capabilities with multiple model options to balance cost and performance.

Pros:

  • Multiple model tiers (Gemini 3 Pro, 2.5 Flash, 2.5 Pro, 2.0 Flash)
  • Advanced reasoning with thinking support on newer models
  • Competitive pricing

Setup:

  1. Open Settings (Cmd+Comma) and go to Intelligence
  2. Click Gemini in the list of providers
  3. Enter your Google AI API key
  4. Select a model — Gemini 2.5 Flash is the default and offers the best balance of cost and performance
  5. Click Save
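
Before picking a model in step 4, you can optionally check which Gemini models your key has access to. This sketch uses Google's public model-listing endpoint and assumes your key is exported as GEMINI_API_KEY (an environment variable name chosen here for illustration):

```shell
# List the Gemini models available to a Google AI API key.
# Skips gracefully if GEMINI_API_KEY is not set in the environment.
if [ -n "${GEMINI_API_KEY:-}" ]; then
  curl -fsS "https://generativelanguage.googleapis.com/v1beta/models?key=${GEMINI_API_KEY}" \
    | grep -o '"name": *"models/[^"]*"'
  RESULT="listed"
else
  echo "GEMINI_API_KEY is not set; skipping model listing"
  RESULT="skipped"
fi
```
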

# Ollama (Local)

A privacy-first option that runs AI models entirely on your Mac. No data leaves your device. This provider is marked as Experimental.

Pros:

  • Complete privacy — data stays local
  • No API costs
  • Works offline

Considerations:

  • Requires Ollama to be installed and running on your Mac
  • Quality depends on the model you choose
  • May be slower than cloud options, especially for large datasets

Setup:

  1. Install Ollama from ollama.ai
  2. Download a model (e.g., ollama pull llama3)
  3. In Weavestream, open Settings → Intelligence
  4. Click Ollama in the list of providers
  5. Enter the server URL (default: http://localhost:11434)
  6. Click Test to verify the connection
  7. Select a model from the dropdown (click Refresh to discover available models)
  8. Adjust Max Context Tokens if needed (default is fine for most uses)
  9. Click Save Configuration
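
Steps 5 and 6 can also be checked from a terminal: Ollama exposes an HTTP API, and its /api/tags endpoint lists the models you've pulled. A minimal sketch, assuming the default port from step 5:

```shell
# Verify the Ollama server is reachable and list downloaded models.
# Uses Ollama's /api/tags endpoint on the default local port.
OLLAMA_URL="${OLLAMA_URL:-http://localhost:11434}"

if curl -fsS --max-time 3 "$OLLAMA_URL/api/tags" >/dev/null 2>&1; then
  STATUS="running"
  curl -fsS "$OLLAMA_URL/api/tags"   # JSON listing of pulled models
else
  STATUS="not reachable"
fi
echo "Ollama at $OLLAMA_URL: $STATUS"
```

If the server is not reachable, make sure the Ollama app is running before clicking Test in Weavestream.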

# Apple Intelligence

Apple's built-in intelligence is used for specific features like smart chat naming (automatically titling your conversations). It works alongside your primary provider rather than replacing it.

# Setting Your Active Provider

After configuring one or more providers, set your active one:

  1. Go to Settings → Intelligence
  2. Click Set as Active next to the provider you want to use

The active provider is shown with a green "Active" badge. You can switch providers at any time without losing your conversations.

# Opening the AI Chat

The AI chat panel appears below the detail view in the main window. You can toggle the chat bubble styling in Settings → Intelligence under the Chat section.

Selecting a source, endpoint, or Smart Filter in the sidebar before starting a chat scopes the AI's queries to that data — but the AI can also discover and explore your data on its own using its built-in tools.

# Your First Analysis

  1. Select a source, endpoint, or Smart Filter in the sidebar
  2. In the AI chat panel, type a question like "Summarize the key trends in this data"
  3. Press Return or click the Send button

The AI agent will automatically search and filter your data to find what's relevant, then provide its analysis. You'll see its progress in real time — including which tools it's using and what it's finding.

You can also use the quick-start buttons that appear in a new chat:

  • Summarize — Get a high-level summary of key points
  • Critical Issues — Identify the most urgent items needing attention
  • Trends — Spot patterns and trends in the data
  • Overview — Get an overview of your data across all sources

# Next Steps