AI-powered VS Code extension with multi-agent architecture for autonomous coding assistance. Supports several providers (Gemini, Anthropic, OpenAI, DeepSeek, Qwen, Groq, GLM, XGrok) plus local models via Ollama. Features code review, refactoring, terminal integration, and intelligent codebase understanding.
```shell
# Add to your Claude Code skills
git clone https://github.com/olasunkanmi-SE/codebuddy
```

CodeBuddy is an AI-powered coding assistant for Visual Studio Code featuring multi-agent architecture, nine AI provider integrations, local model support, and intelligent codebase understanding. It functions as an autonomous pair programmer capable of planning, executing, and debugging complex development tasks.
CodeBuddy enhances developer productivity through AI-powered code assistance, providing intelligent code review, refactoring suggestions, optimization recommendations, and interactive chat capabilities. The extension supports both cloud-based and local AI models, enabling developers to choose the right balance of capability, speed, and privacy for their workflow.
CodeBuddy employs specialized agents that collaborate on complex tasks, surfaced through two interaction modes:
- **Chat Mode**: Traditional question-and-answer interaction for quick queries, code explanations, and code snippets without file modifications.
- **Agent Mode**: Autonomous execution with full tool access, including file operations, terminal commands, web search, and codebase analysis. Changes can be reviewed before they are applied.
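The multi-agent design described above can be sketched as a dispatcher that routes each task to the first agent able to handle it. The type and class names here are illustrative, not CodeBuddy's actual API:

```typescript
// Hypothetical sketch of multi-agent dispatch — names are illustrative,
// not CodeBuddy's real interfaces.
type AgentTask = { kind: "review" | "refactor" | "terminal"; input: string };

interface Agent {
  canHandle(task: AgentTask): boolean;
  run(task: AgentTask): string;
}

class ReviewAgent implements Agent {
  canHandle(t: AgentTask): boolean { return t.kind === "review"; }
  run(t: AgentTask): string { return `review: ${t.input}`; }
}

class RefactorAgent implements Agent {
  canHandle(t: AgentTask): boolean { return t.kind === "refactor"; }
  run(t: AgentTask): string { return `refactor: ${t.input}`; }
}

// The orchestrator routes each task to the first capable agent.
class Orchestrator {
  constructor(private agents: Agent[]) {}
  dispatch(task: AgentTask): string {
    const agent = this.agents.find((a) => a.canHandle(task));
    if (!agent) throw new Error(`no agent for task kind: ${task.kind}`);
    return agent.run(task);
  }
}

const orchestrator = new Orchestrator([new ReviewAgent(), new RefactorAgent()]);
```

In a real agent system the `run` step would call an AI provider and tools; the routing pattern stays the same.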
CodeBuddy supports nine AI providers:
| Provider  | Default Model           | Capabilities                      |
| --------- | ----------------------- | --------------------------------- |
| Gemini    | gemini-2.5-pro          | Long context, general coding      |
| Anthropic | claude-sonnet-4-5       | Complex architecture, refactoring |
| OpenAI    | gpt-4o                  | Reasoning, planning               |
| DeepSeek  | deepseek-chat           | Cost-effective coding             |
| Qwen      | qwen-max                | Strong open-weight performance    |
| Groq      | llama-3.1-70b-versatile | Ultra-fast inference              |
| GLM       | glm-4                   | Chinese and English support       |
| XGrok     | grok                    | Alternative reasoning             |
| Local     | qwen2.5-coder           | Privacy-first, offline capable    |
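Provider and model choice would typically live in VS Code settings. The keys below are hypothetical placeholders to show the shape of such configuration, not CodeBuddy's documented setting names:

```jsonc
{
  // Hypothetical keys for illustration only — check the extension's
  // settings UI for the actual names.
  "codebuddy.provider": "anthropic",
  "codebuddy.model": "claude-sonnet-4-5"
}
```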
Run completely offline with local models via Ollama or LM Studio:
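A minimal local setup with Ollama might look like the following, assuming Ollama is already installed and using the default local model from the table above:

```shell
# Assumes Ollama is installed locally
ollama pull qwen2.5-coder   # fetch the default local coding model
ollama serve                # start the local API (localhost:11434 by default)
```

Once the local server is running, the extension can target it instead of a cloud provider, keeping all code and prompts on your machine.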