Follow-up on "vim + llm =🔥": small things that awww
Integrating AI with Vim, Emacs, and VSCode: a small hack using a familiar workflow. Select your text, choose your LLM action, and replace your selection. A baseline for experimenting with various models.
This post is a follow-up to Joseph Thacker's shared experiment with his vim editor, “vim + llm = 🔥”. I found it very interesting because it uses a familiar workflow for vi users and extends the possibilities without the need for yet another AI plugin. It just works: not perfect, but in my opinion incredibly powerful.
For those who haven't read Joseph's post: he passes text selected in visual mode to a shell command. The shell command can be anything, like sort or uniq. What's interesting here is the process: the selected region is replaced by the output of llm, a CLI for using LLMs in the terminal.
I'm an emacs user with evil-mode, a full-featured vi layer, so it just works for me too. While it was super cool, I felt I could experiment further in several areas:
Organize my prompts using folders, e.g. code/fix or translate/english
Use either llm-cli or ollama (for testing local models) without duplicating prompts
Override temperature and model at call time
The end result is a bash utility called llm-bash that I'm proud to share today. I'm open to pull requests, especially for adding prompts that would be useful to everyone. It requires llm or ollama.
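To give an idea of how such a wrapper can be organized, here is a hypothetical sketch (not the actual llm-bash source): prompt names map to files in a folder hierarchy, and a single dispatcher targets either backend. The command line is only printed here (a dry run), so nothing is sent to a model.

```shell
#!/bin/sh
# Hypothetical sketch, not the actual llm-bash code: prompts live as
# files under prompts/, one dispatcher handles both llm and ollama.
build_cmd() {
  backend="$1"  # "llm" or "ollama"
  prompt="$2"   # e.g. "translate/english" -> prompts/translate/english.txt
  model="$3"
  case "$backend" in
    llm)    printf 'llm -m %s -s "$(cat prompts/%s.txt)"\n' "$model" "$prompt" ;;
    ollama) printf 'ollama run %s\n' "$model" ;;
  esac
}
build_cmd llm translate/english gpt-3.5-turbo
build_cmd ollama translate/english zephyr
```

Keeping the prompt text in plain files is what lets both backends share a single prompt library without duplication.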
Table of contents:
Usage on vim-like editors
Usage on VSCode and emacs (without evil-mode)
Usage in your terminal
llm-together: 54 open-source models, ready-to-use
Usage on vim-like editors
As you may have seen in the GIF at the top of this post, we leverage vi's visual mode: the text selection is used as input for your prompt. After selecting your text, you can run a shell command by entering command-line mode with a colon (:) followed by an exclamation point (!). Enter your shell command and its output will replace your selection.
Super simple with endless possibilities!
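Under the hood, vim's :'<,'>!cmd simply pipes the selected lines to the command's stdin and splices the command's stdout back in place of the selection. A quick shell sketch of that mechanic, using sort and uniq instead of llm:

```shell
#!/bin/sh
# Simulate vim's filter mechanic: the "selection" goes to the command's
# stdin, and the command's stdout becomes the replacement text.
selection='banana
apple
banana'
replacement=$(printf '%s\n' "$selection" | sort | uniq)
printf '%s\n' "$replacement"
```

Swap sort | uniq for any llm invocation and you get the workflow from Joseph's post.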
However, there are two limitations:
First, it's a blocking operation: if your command takes a long time, you won't be able to use your editor until it finishes. That's why you should use models that respond as quickly as possible. GPT-3.5-turbo is the fastest and gives consistent results for my prompts.
Second, your selection will be replaced by the llm output, so your prompt should be designed with this in mind. For instance, my code/fill prompt is designed to output the full initial text selection with the //fill keywords replaced by code logic. This approach has been challenging to perfect and currently only works with GPT-3.5. Not all models are equal, but it's great to be able to experiment immediately with various models, from OpenAI to local ones.
You can find the prompts on GitHub.
Usage on VSCode and Emacs (without evil-mode)
We are a small team, but we all use different editors: emacs, neovim, and VSCode. When possible, we like to share our tools across the team. We found a way to make it work in VSCode in the same fashion as in Vim.
You will need the Filter Text plugin. Its original goal is to select a text region and run a shell command like uniq to filter the text. Because it's not limited to specific shell commands, we can use it to make our bash utility work.
Once installed, you can run the VSCode action “Filter Text Inplace” for an experience similar to vim users.
For emacs, there's the command shell-command-on-region, which takes the region as the command's input, but it opens a new buffer with the result. I'm sure there's a plugin to replace in place, but I haven't looked for it since I don't need it.
Usage in your terminal
lm translate/english "Hey, what's up?"

Multiline with Ollama, overriding the default model with -m:

echo "$(cat)" | oll translate/english -m zephyr

then Ctrl+D after a return line to run the inference.
Usage: lm [OPTION] [PROMPT]

This is a language model bash wrapper for LLM-CLI and OLLAMA.

Options:
  -h, --help         Display this help message and exit
  -t, --temperature  Set the temperature
  -m, --model        Set the model

Prompts:
  code/comment: Add comments to code
  code/explain: Explain a piece of code by adding comments
  code/fill: Replace the //fill keywords with the missing code logic
  code/fix: Fix errors in code syntax or logic
  code/name: Rename code symbols for clarity
  email/draft: Create an email from instructions or notes
  email/reply: Reply to an email in the sender language
  personal/mealplan: Generate a meal plan for the next 7 days
  translate/english: Translate text to English
  translate/french: Translate text to French
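Combining the pieces, a call-time override of both temperature and model looks like this (the model name is just the GPT-3.5-turbo default mentioned earlier; the snippet is guarded so it's a no-op if lm isn't on your PATH):

```shell
#!/bin/sh
# Override the temperature and the model at call time with the -t and -m
# flags; guarded so this does nothing where lm isn't installed.
if command -v lm >/dev/null 2>&1; then
  echo "Hey, what's up?" | lm translate/french -t 0.2 -m gpt-3.5-turbo
fi
```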
llm-together: 54 open-source models, ready-to-use
On a side note, we created a plugin for llm-cli to support the Together backend. Together is a cloud platform for fine-tuning and running large AI models. They offer “featured” models, which are always ready for inference, like llama-2-70b-chat or CodeLlama-34b-Python. We don’t have the resources to host these models but are eager to test them. By installing this plugin, you have 54 open-source models to play with. Awesome stuff!
Together provides $25 of free credits on signup, enough to try lots of models and prompts: https://together.ai/apis
Find the github project here: https://github.com/wearedevx/llm-together
> llm install llm-together
> llm keys set together
> Enter key: <paste key here>
> llm models list
Here is the full featured-models list for Together.ai (October 13th, 2023):