Choosing Your Models
Seylo lets you assign a different LLM to each agent (Planner, Navigator, and Validator) so you can balance capability against cost. The recommended configurations are listed below; a short sketch of how each one maps onto the agents follows the two lists.
Better Performance Configuration
Planner & Validator: Claude 3.7 Sonnet
Better reasoning and planning capabilities
More reliable task validation
Navigator: Claude 3.5 Haiku
Efficient for web navigation tasks
Good balance of performance and cost
Cost-Effective Configuration
Planner & Validator: Claude Haiku or GPT-4o
Reasonable performance at lower cost
May require more iterations for complex tasks
Navigator: Gemini 2.0 Flash or GPT-4o-mini
Lightweight and cost-efficient
Suitable for basic navigation tasks
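Both configurations are set in Seylo's settings UI rather than in a file, but it helps to see the mapping written out. The sketch below is illustrative only: the `provider` and `model` field names are hypothetical, and the model strings are the providers' standard API identifiers at the time of writing, which may change as new versions ship.

```ts
// Illustrative sketch only: Seylo is configured through its settings UI.
// Field names are hypothetical; model IDs are the providers' published
// API identifiers and may change over time.
type AgentName = "planner" | "navigator" | "validator";
type AgentConfig = { provider: string; model: string };

export const betterPerformance: Record<AgentName, AgentConfig> = {
  planner:   { provider: "anthropic", model: "claude-3-7-sonnet-20250219" },
  validator: { provider: "anthropic", model: "claude-3-7-sonnet-20250219" },
  navigator: { provider: "anthropic", model: "claude-3-5-haiku-20241022" },
};

export const costEffective: Record<AgentName, AgentConfig> = {
  planner:   { provider: "openai", model: "gpt-4o" },           // or Claude Haiku
  validator: { provider: "openai", model: "gpt-4o" },           // or Claude Haiku
  navigator: { provider: "google", model: "gemini-2.0-flash" }, // or gpt-4o-mini
};
```

In both cases the Planner and Validator share the stronger model, since they carry the reasoning and validation load, while the Navigator handles the higher-volume page interactions with a lighter model.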
Local Models
Setup Options:
Use Ollama or another custom OpenAI-compatible provider to run models locally (a quick connection check is sketched after the recommended models below)
Zero API costs and complete privacy with no data leaving your machine
Recommended Models:
Qwen 2.5 Coder 14B
Mistral Small 24B
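Ollama exposes every pulled model through an OpenAI-compatible API at `http://localhost:11434/v1`, which is the endpoint a custom provider entry would point at. Before wiring it into Seylo, it can help to confirm the endpoint responds. The sketch below assumes Ollama is running on its default port and that you have already pulled a model, e.g. `ollama pull qwen2.5-coder:14b` (tag names may differ in the Ollama library).

```ts
// Minimal connectivity check against a local Ollama model via its
// OpenAI-compatible endpoint. Assumes the default port and that the model
// was already pulled with `ollama pull qwen2.5-coder:14b`.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:11434/v1", // Ollama's OpenAI-compatible API
  apiKey: "ollama",                     // required by the client, ignored by Ollama
});

async function main() {
  const response = await client.chat.completions.create({
    model: "qwen2.5-coder:14b",
    messages: [{ role: "user", content: "Reply with the single word: ready" }],
  });
  console.log(response.choices[0]?.message.content);
}

main().catch(console.error);
```

If this prints a reply, the same base URL and model name are what you would enter when adding the model as a custom OpenAI-compatible provider in Seylo's settings.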
Prompt Engineering:
Local models require more specific and carefully structured prompts
Avoid high-level, ambiguous commands
Break complex tasks into clear, detailed steps (see the example after this list)
Provide explicit context and constraints
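As a made-up illustration of those last three points (the site and task below are placeholders, not part of the Seylo docs), compare a vague command with a decomposed version that a local model can follow step by step:

```ts
// Illustration only: the same request phrased two ways. The site and task
// are placeholders. Local models handle the second form far more reliably.
const vagueTask = "Find me a good laptop deal.";

const decomposedTask = [
  "1. Open https://www.example-store.com and wait for the page to load.",
  "2. Type 'laptop' into the search box and submit the search.",
  "3. Filter the results to items priced under $800.",
  "4. Sort the filtered results by customer rating, highest first.",
  "5. Return the title, price, and URL of the top three results.",
].join("\n");

console.log({ vagueTask, decomposedTask });
```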
Note: The cost-effective configuration may produce less stable outputs and require more iterations for complex tasks.