Local Copilots in VSCode

Exploring the open-source LLM landscape

Run models locally

Further reading

Choose a model

Benchmarks and leaderboards:

Choose a VSCode extension

Example setup with Continue on a MacBook / 64GB Max Ultra

Roughly speaking, a MacBook's RAM determines the parameter count of the model you can run locally. A 34B model is enough to get the MacBook's fans spinning.
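
To put rough numbers on that rule of thumb, here is a back-of-the-envelope sketch. It is plain arithmetic, not tied to any tool, and assumes 4-bit quantization (what ollama typically pulls by default) plus roughly 20% overhead for the KV cache and runtime buffers, so treat the results as ballpark figures:

# Rough estimate of RAM needed to run a quantized model locally.
# Assumptions: 4-bit weights and ~20% overhead for KV cache and buffers.
def estimated_ram_gb(params_billion: float, bits_per_weight: int = 4, overhead: float = 1.2) -> float:
    bytes_per_weight = bits_per_weight / 8
    # params in billions * bytes per weight ~= gigabytes of weights
    return params_billion * bytes_per_weight * overhead

for size in (7, 13, 34, 70):
    print(f"{size}B model: roughly {estimated_ram_gb(size):.0f} GB of RAM")

By this estimate a 34B model at 4-bit needs on the order of 20 GB, which fits comfortably in 64 GB but already keeps the machine busy.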

  1. Pull and run the model: ollama run phind-codellama:34b-v2
  2. Install the Continue extension in VSCode
  3. Edit the Continue config file:
config.json
{
  "models": [
    {
      "title": "Phind CodeLlama",
      "provider": "ollama",
      "model": "phind-codellama-34b",
      "api_base": "http://localhost:11434"
    }
  ],
  "model_roles": {
    "default": "Phind CodeLlama"
  },
  "slash_commands": [
    {
      "name": "edit",
      "description": "Edit highlighted code",
      "step": "EditHighlightedCodeStep"
    },
    {
      "name": "comment",
      "description": "Write comments for the highlighted code",
      "step": "CommentCodeStep"
    },
    {
      "name": "share",
      "description": "Download and share this session",
      "step": "ShareSessionStep"
    },
    {
      "name": "cmd",
      "description": "Generate a shell command",
      "step": "GenerateShellCommandStep"
    }
  ],
  "custom_commands": [
    {
      "name": "test",
      "prompt": "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
      "description": "Write unit tests for highlighted code"
    }
  ]
}
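
Once the config is saved, it can be worth confirming that the ollama server Continue will talk to is actually reachable at the api_base URL and has the model available. Below is a minimal sketch using only the Python standard library against ollama's /api/tags and /api/generate endpoints; the model tag is the one pulled in step 1, so swap in whatever ollama list reports if yours differs:

# Sanity-check the local ollama server before pointing Continue at it.
import json
import urllib.request

BASE = "http://localhost:11434"          # same value as api_base in the config
MODEL = "phind-codellama:34b-v2"         # tag pulled in step 1

# List the models ollama currently has pulled.
with urllib.request.urlopen(f"{BASE}/api/tags") as resp:
    tags = json.load(resp)
print("available:", [m["name"] for m in tags.get("models", [])])

# Request a one-off, non-streaming completion from the model.
payload = json.dumps({
    "model": MODEL,
    "prompt": "Write a haiku about VSCode.",
    "stream": False,
}).encode()
req = urllib.request.Request(
    f"{BASE}/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])

If both requests succeed, Continue's chat, slash commands, and the custom /test command should work against the same endpoint. Note that the model value in the config generally has to match a tag ollama has actually pulled; if Continue reports the model as missing, compare it against the output of ollama list.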