Local Copilots in VSCode
Exploring the open-source LLM landscape
Run models locally
Further reading
Choose a model
Benchmarks and leaderboards:
Choose a VSCode extension
Example setup with Continue on a MacBook (64 GB, Max/Ultra)
Roughly speaking, a MacBook's RAM caps the parameter count of the model you can run, and 34B is enough to make the MacBook spin up its fans.
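To make that concrete: the quantized builds that ollama ships use roughly half a byte per parameter at 4-bit precision, plus a few extra gigabytes for the KV cache and the rest of the system (both figures are rough rules of thumb, not exact numbers).

```shell
# Back-of-envelope memory estimate for a 4-bit-quantized 34B model:
# ~0.5 bytes per parameter, before KV-cache and OS overhead.
awk 'BEGIN { printf "%.1f GiB\n", 34e9 * 0.5 / 2^30 }'   # prints ~15.8 GiB
```

Call it roughly 20 GB in practice, so a 64 GB machine has comfortable headroom for the 34B model: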
```shell
ollama run phind-codellama:34b-v2
```
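The first run downloads the model weights (tens of gigabytes for a 34B model), so expect a wait. Besides the interactive prompt, ollama serves a local HTTP API on port 11434, and that endpoint is what the editor extension talks to; a quick curl confirms the model answers before any VSCode wiring (the prompt below is just a placeholder):

```shell
# Smoke test against ollama's local HTTP API (default port 11434).
curl http://localhost:11434/api/generate -d '{
  "model": "phind-codellama:34b-v2",
  "prompt": "Write a Python function that reverses a string.",
  "stream": false
}'
```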
- Install Continue
- Edit the Continue config file to point it at the local ollama model, as in the sketch below:
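The exact file and schema depend on the Continue version; assuming a recent release that reads ~/.continue/config.json, an entry along these lines registers the local model (the "title" value is just a display label chosen here):

```json
{
  "models": [
    {
      "title": "Phind CodeLlama 34B (local)",
      "provider": "ollama",
      "model": "phind-codellama:34b-v2"
    }
  ]
}
```

After saving, the model should appear in Continue's model picker, and chat requests are served by the local ollama instance.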