Changelog
What's changed
Major and minor releases, newest first.
v1.3
April 2026 (MAS launch)
- Bundled on-device AI via Apple MLX: briefs, chat, and synthesis with no API key required
- Three-option YouTube workflow when public captions aren't available: download and import, paste a transcript, or use a bring-your-own-key (BYOK) third-party transcript API
- Optional youtube-transcript.io integration for one-click YouTube transcripts (BYOK)
- Transcription quality picker: Fast (Whisper Small), Balanced (Medium), High Quality (Large-v3-turbo)
- Local AI quality picker: Fast (Llama 3.2 1B), Balanced (3B), High Quality (Qwen2.5 7B)
- Live transcription progress: percentage and minute-of-N status during long-file Whisper runs
- Brief from notes alone — no transcript required
- Persistent chat history per video, surviving tab switches and app relaunches
- Storage settings: per-engine breakdown with one-click cleanup of unused models
- Security-scoped bookmarks for video files, so library access survives quit and relaunch
- Models stored in Application Support (durable) rather than Caches (purgeable)
- Mac App Store sandboxing throughout — no external binary dependencies
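The security-scoped bookmark item above can be sketched roughly as follows. This is a minimal illustration of the standard macOS sandbox APIs, not the app's actual code; the function names are hypothetical:

```swift
import Foundation

// Hypothetical sketch: persisting sandbox access to a user-chosen video
// file across app launches via a security-scoped bookmark (macOS only).
func makeBookmark(for url: URL) throws -> Data {
    // Requires that the user granted access (open panel or drag-and-drop).
    try url.bookmarkData(options: .withSecurityScope,
                         includingResourceValuesForKeys: nil,
                         relativeTo: nil)
}

func resolveBookmark(_ data: Data) throws -> URL? {
    var isStale = false
    let url = try URL(resolvingBookmarkData: data,
                      options: .withSecurityScope,
                      relativeTo: nil,
                      bookmarkDataIsStale: &isStale)
    // Must be balanced later with stopAccessingSecurityScopedResource().
    return url.startAccessingSecurityScopedResource() ? url : nil
}
```

If the resolved bookmark is stale, the app would typically recreate it from the resolved URL before the next launch.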
Removed
- Ollama auto-install and subprocess management (replaced by bundled MLX)
- yt-dlp subprocess for YouTube transcription (replaced by manual-caption fetch + three-option fallback)
- Press tier (live multi-stream monitoring) — deferred from MAS v1, see workplan
v1.2
April 2026 (Press tier, deferred)
- Live multi-stream monitoring (up to 5 concurrent streams)
- yt-dlp resolver for YouTube, Twitch, Kick, TikTok, Rumble and others
- ffmpeg-based live audio extraction → Whisper transcription
- Audio pinning + focus mode for the live grid
- Per-stream keyword alerts with auto-quote capture
- Manual quote and frame capture with attribution
- Named coverage sessions with history browser
- AI Journalist Brief (time-ordered, attributed)
- Coverage export bundle (brief, quotes, transcripts, captures, metadata)
- Unified FTS search across notes, transcripts, quotes, and alerts
- Retroactive coverage tagging for already-running streams
- ⌘⇧Q / ⌘⇧F keyboard shortcuts for quote / frame capture
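The per-stream keyword alert with auto-quote capture can be sketched as a scan over incoming transcript lines; `KeywordAlert` and `scanLine` are hypothetical names for illustration, not the app's actual types:

```swift
import Foundation

// Hypothetical sketch: fire an alert (capturing the line as a quote)
// whenever a watched keyword appears in a live transcript line.
struct KeywordAlert: Equatable {
    let keyword: String
    let quote: String
    let seconds: Double
}

func scanLine(_ line: String, at seconds: Double, keywords: [String]) -> [KeywordAlert] {
    keywords.compactMap { keyword in
        line.range(of: keyword, options: .caseInsensitive) != nil
            ? KeywordAlert(keyword: keyword, quote: line, seconds: seconds)
            : nil
    }
}

let alerts = scanLine("Whisper latency dropped again", at: 42.0,
                      keywords: ["whisper", "mlx"])
print(alerts.map(\.keyword)) // ["whisper"]
```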
v1.1
April 2026 (AI reliability)
- Multi-provider routing across OpenAI, Anthropic, Gemini, and Ollama
- Provider fallback chain when a key is rate-limited
- Tolerant JSON decoding for AI responses (skips malformed fields instead of failing the whole brief)
- Robust JSON extraction that strips prose/markdown fences
- Raw-response logging on decode failures
- Live transcript persistence to the FTS index
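The tolerant-decoding and fence-stripping items above can be sketched like this. It is a minimal illustration assuming the model returns a JSON array; `LossyArray` and `extractJSON` are hypothetical names, not the app's API:

```swift
import Foundation

// Hypothetical sketch: strip markdown code fences, then decode an array
// while skipping malformed elements instead of failing the whole brief.
let fence = String(repeating: "`", count: 3)

func extractJSON(from raw: String) -> String {
    raw.replacingOccurrences(of: fence + "json", with: "")
       .replacingOccurrences(of: fence, with: "")
       .trimmingCharacters(in: .whitespacesAndNewlines)
}

struct LossyArray<Element: Decodable>: Decodable {
    var elements: [Element] = []
    init(from decoder: Decoder) throws {
        var container = try decoder.unkeyedContainer()
        // Always succeeds, advancing the container past a bad element.
        struct Skip: Decodable { init(from decoder: Decoder) throws {} }
        while !container.isAtEnd {
            if let value = try? container.decode(Element.self) {
                elements.append(value)
            } else {
                _ = try? container.decode(Skip.self)
            }
        }
    }
}

struct Claim: Decodable { let text: String }

let raw = fence + "json\n" +
    #"[{"text": "claim A"}, {"text": 42}, {"text": "claim B"}]"# +
    "\n" + fence
let data = Data(extractJSON(from: raw).utf8)
let claims = try JSONDecoder().decode(LossyArray<Claim>.self, from: data).elements
print(claims.map(\.text)) // ["claim A", "claim B"]
```

The middle element has the wrong type for `text`, so it is dropped while the surrounding elements still decode.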
v1.0
March 2026 (Researcher tier)
- Cross-video synthesis (tension maps) with consensus / contested / single-source labels
- Compile Document: AI-generated research doc with full citations
- Knowledge graph with entity resolution across videos
- Spotlight indexing
- Obsidian export with YAML frontmatter + [[wikilinks]]
- Chat with any video
- Per-entity export (Markdown + Obsidian)
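The Obsidian export shape can be sketched as plain string assembly; the function and field choices here are invented for illustration, not taken from the app:

```swift
import Foundation

// Hypothetical sketch of the export shape: Markdown with YAML frontmatter
// and [[wikilinks]] for entities resolved across videos.
func obsidianNote(title: String, tags: [String], entities: [String], body: String) -> String {
    let links = entities.map { "[[\($0)]]" }.joined(separator: " ")
    return """
    ---
    title: \(title)
    tags: [\(tags.joined(separator: ", "))]
    ---

    \(body)

    Entities: \(links)
    """
}

let note = obsidianNote(title: "Talk on MLX",
                        tags: ["research", "video"],
                        entities: ["Apple MLX", "WhisperKit"],
                        body: "Notes go here.")
print(note)
```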
v0.3
March 2026 (AI Brief)
- Four-layer AI Brief (overview, claims, entities, quotes, flashcards, draft note)
- Parallel segment extraction
- Per-video reading list + open questions
- Regenerating a brief preserves previously generated versions
v0.2
February 2026 (Scholar tier)
- Whisper transcription (local, via WhisperKit)
- Frame capture + OCR on saved frames
- Region capture (crop a selected area)
- Formula → LaTeX recognition
- Swift / JavaScript code extraction
- Visual timeline (auto slide-change detection)
- Anki .apkg export
- yt-dlp caption import
v0.1
January 2026 (Foundations)
- Native macOS app (SwiftUI + AVKit)
- Local and YouTube video playback
- Timestamp-anchored notes with ⌘↩ save
- Video library with playlists
- YouTube playlist import
- SQLite + GRDB with FTS5 for notes search
- Markdown export
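The notes-search foundation above (SQLite + GRDB with FTS5) might look roughly like this. It is a sketch assuming the GRDB library, with table and column names invented for illustration:

```swift
import GRDB

// Hypothetical sketch: an FTS5 virtual table for note search via GRDB.
let dbQueue = try DatabaseQueue(path: ":memory:") // app would use a file path
try dbQueue.write { db in
    try db.create(virtualTable: "note_ft", using: FTS5()) { t in
        t.column("body")
    }
    try db.execute(sql: "INSERT INTO note_ft (body) VALUES (?)",
                   arguments: ["Whisper transcription arrived in v0.2"])
}
let hits = try dbQueue.read { db in
    try String.fetchAll(db,
        sql: "SELECT body FROM note_ft WHERE note_ft MATCH ?",
        arguments: ["whisper"])
}
print(hits.count)
```

FTS5's default tokenizer folds ASCII case, so "whisper" matches "Whisper" here.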