v2.7.0: Welcome o1 models + improved reliability
Thinkbuddy 2.7.0 introduces advanced vision capabilities, new reasoning models, important LLM reliability fixes, smarter file extraction, and significant performance upgrades to supercharge your AI-assisted workflow.
New Features
Enhanced Vision Model Capabilities
- Full Chat Visibility: The vision model now sees the entire chat history, improving context awareness.
- Claude Vision Integration: Added support for the Claude Vision model.
- New AI Models: Introduced o1-preview, o1-mini, Mixtral, and Llama 3.2.
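To give a rough sense of what full-chat visibility means in practice, here is a minimal sketch that sends the entire conversation, including an image from an earlier turn, to a vision-capable model. The OpenAI-style client, the model name, and the image URL are illustrative assumptions, not Thinkbuddy's internal wiring.

```python
# Minimal sketch (assumptions: OpenAI-style chat completions API,
# hypothetical model name and image URL; not Thinkbuddy's actual code).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Full chat history, including an image sent earlier in the conversation.
history = [
    {"role": "user", "content": [
        {"type": "text", "text": "Here is a chart of our Q3 numbers."},
        {"type": "image_url", "image_url": {"url": "https://example.com/q3-chart.png"}},
    ]},
    {"role": "assistant", "content": "Thanks, I can see the chart."},
    {"role": "user", "content": "Which month had the biggest drop?"},
]

# Passing the whole history lets the vision model answer follow-up
# questions about images from earlier turns, not just the latest message.
response = client.chat.completions.create(model="gpt-4o", messages=history)
print(response.choices[0].message.content)
```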
Advanced OCR System for File Extraction
- Flexible Input: Upload PDFs directly or provide a URL.
- AI-Powered Extraction: Uses GPT-4 Turbo with Vision for accurate text and image extraction.
- Structured Output: Extracted content formatted in Markdown for readability.
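For a rough picture of how vision-based extraction works, the sketch below renders one PDF page to an image and asks a vision model to return its content as Markdown. The pdf2image dependency, the prompt wording, and the model name are illustrative assumptions and do not reflect Thinkbuddy's internal pipeline.

```python
# Minimal sketch (assumptions: pdf2image + an OpenAI-style vision model;
# prompt and model name are illustrative, not Thinkbuddy's internals).
import base64
from io import BytesIO

from openai import OpenAI
from pdf2image import convert_from_path  # requires poppler installed

client = OpenAI()

def page_to_markdown(pdf_path: str, page_index: int = 0) -> str:
    # Render one PDF page to a PNG image in memory.
    page = convert_from_path(pdf_path, dpi=200)[page_index]
    buf = BytesIO()
    page.save(buf, format="PNG")
    image_b64 = base64.b64encode(buf.getvalue()).decode()

    # Ask a vision-capable model to transcribe the page as Markdown.
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Extract this page's text and describe any images. Return Markdown."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(page_to_markdown("report.pdf"))
```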
Light Theme (Beta)
- New option for users preferring a lighter interface.
Improvements
Settings UI Overhaul
- Redesigned buttons and icons for better user experience.
- Overall aesthetic and usability enhancements.
Performance Optimizations
- Significant CPU efficiency improvements.
- Faster PDF processing with parallel page conversion.
- Enhanced throughput with batch image processing.
- Improved scalability with asynchronous processing.
- Added a retry mechanism to handle transient failures and API rate limits (see the sketch below).
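The retry behavior can be pictured as standard exponential backoff around an API call. The sketch below is a generic illustration of that pattern; the delays, exception handling, and wrapped function are assumptions, not the shipped implementation.

```python
# Generic exponential-backoff sketch (delays, exception types, and the
# wrapped call are illustrative assumptions, not Thinkbuddy's actual code).
import random
import time

def call_with_retries(request_fn, max_attempts: int = 5):
    for attempt in range(1, max_attempts + 1):
        try:
            return request_fn()
        except Exception:  # e.g. rate-limit or transient network errors
            if attempt == max_attempts:
                raise  # give up after the final attempt
            # Exponential backoff with jitter: ~1s, 2s, 4s, ... plus noise.
            delay = (2 ** (attempt - 1)) + random.uniform(0, 0.5)
            time.sleep(delay)

# Usage: wrap any flaky API call.
# result = call_with_retries(lambda: client.chat.completions.create(...))
```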
Robust Error Handling
- Comprehensive logging and exception handling for increased reliability.
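In code terms, comprehensive logging and exception handling usually means catching failures at the boundary and recording enough context to diagnose them. The short sketch below shows that shape with Python's standard logging module; the wrapped operation and log messages are placeholders, not Thinkbuddy's code.

```python
# Logging/exception-handling sketch using only the standard library;
# the wrapped operation and logger name are placeholders.
import logging

logger = logging.getLogger("thinkbuddy")  # hypothetical logger name
logging.basicConfig(level=logging.INFO)

def safe_extract(pdf_path: str):
    try:
        logger.info("Starting extraction for %s", pdf_path)
        # ... extraction work would go here ...
        return "extracted markdown"
    except Exception:
        # Record the full traceback with context, then degrade gracefully.
        logger.exception("Extraction failed for %s", pdf_path)
        return None
```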
LLM Model Health
- General stability and performance improvements to various LLM models.
Bug Fixes
- Addressed various issues to enhance overall app stability and performance.