
Follow-up on "vim + llm =🔥": small things that awww

Integrating AI with Vim, Emacs, and VSCode. A Small Hack Using a Familiar Workflow. Select Your Text, Choose Your LLM Action, Replace Your Selection: A Baseline for Experimenting With Various Models.

This post is a follow-up to the experiment Joseph Thacker shared with his vim editor, “vim + llm = 🔥”. I found it very interesting because it uses a familiar workflow for vi users and extends the possibilities without the need for yet another AI plugin. It just works. Not perfect, but in my opinion, incredibly powerful.

For those who haven’t read Joseph’s post: he passes the text selected in visual mode to a shell command. The shell command can be anything, like sort or uniq. What’s interesting here is the process used: the selected region is replaced by the output of llm, a CLI for using LLMs in the terminal.

I’m an emacs user with evil-mode, which is a full-featured vi layer, so it just works for me too. While it was super cool, I felt that I could experiment further in several areas:

  • Organize my prompts using folders, e.g. code/fill, translate/english

  • Use either llm-cli or ollama (for testing local models) without duplicating prompts

  • Override temperature and model at call time

The end result is a bash utility called llm-bash that I’m proud to share today. I’m open to pull requests, especially for adding prompts that would be useful for everyone. It requires llm or ollama.

Table of contents:

  • Usage in vim-like editors

  • Usage on VSCode and Emacs (without evil-mode)

  • Usage in your terminal

  • llm-together: 54 open-source models, ready to use

Usage in vim-like editors

As you may have seen in the GIF at the top of this post, we leverage vi’s visual mode: the selected text is used as input for your prompt. After selecting your text, you can run a shell command by entering command-line mode with a colon : followed by an exclamation point ! . Type your shell command and its output will replace your selection.
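
For example, with a block of code selected in visual mode and the lm script from llm-bash somewhere on your $PATH, the full command line ends up looking like this (vim pre-fills the '<,'> range for you when you press : in visual mode):

> :'<,'>!lm code/fill

The selection is piped to the command’s stdin and replaced by whatever the command prints.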

Super simple with endless possibilities!

However, there are two limitations:

First, it’s a blocking operation. If your command takes a long time, you won’t be able to use your editor until it finishes. That’s why you should use models that respond as quickly as possible. GPT-3.5-turbo is the fastest I’ve tried and gives consistent results for my prompts.

Second, your selection will be replaced by the llm output, so your prompt should be designed with this in mind. For instance, my code/fill prompt is designed to output the full initial text selection with the //fill keywords replaced by code logic. This approach has been challenging to perfect and currently only works with GPT-3.5. Not all models are equal, but it’s great to be able to experiment immediately with various models, from OpenAI’s to local ones. See the rough illustration below.
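
As a rough illustration (hypothetical input, and the kind of output the prompt aims for, not something every model will produce), a selected shell function like:

> is_even() {
>   //fill
> }

should come back as the same function with the marker replaced by actual logic, for example:

> is_even() {
>   [ $(( $1 % 2 )) -eq 0 ]
> }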

You can find the prompts on GitHub.

Usage on VSCode and Emacs (without evil-mode)

We are a small team, but we all use different editors: emacs, neovim and VSCode. When possible, we like to share our tools across the team. We found a way to make it work on VSCode in the same fashion as Vim.

You will need the Filter Text plugin. Its original purpose is to let you select a text region and run a shell command like sort or uniq to filter it. Because it’s not limited to specific shell commands, we can use it to run our bash utility.

Once installed, you can run the VSCode action “Filter Text Inplace” for an experience similar to vim users.
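
When the action prompts you for the command to run, you type the same thing as in the vim workflow, for example (assuming lm is on the PATH that VSCode inherits):

> lm code/explain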

For emacs, there’s the command shell-command-on-region that takes the region as the command’s input, but it opens a new buffer with the result. I’m sure there’s a plugin to replace in-place, but I haven’t looked for it since I don’t need it.
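
For what it’s worth, shell-command-on-region can already replace the region in place when called with a prefix argument: with the region active, press C-u M-| and type the same kind of command, e.g.:

> lm translate/english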

Usage in your terminal

Examples:

  • Single line: lm translate/english "Hey, what's up?"

  • Multiline with Ollama, overriding the default model: echo "$(cat)" | oll translate/english -m zephyr, then press Ctrl+D after a final newline to run the inference

Usage: lm [OPTION] [PROMPT]

This is a language model bash wrapper for LLM-CLI and OLLAMA.

Options:
  -h, --help           Display this help message and exit
  -t, --temperature    Set the temperature 
  -m, --model          Set the model

Prompts:

  code/comment: Add comments to code
  code/explain: Explain a piece of code by adding comments
  code/fill: Replace the //fill keywords with the missing code logic
  code/fix: Fix errors in code syntax or logic
  code/name: Rename code symbols for clarity
  email/draft: Create an email from instructions or notes
  email/reply: Reply to an email in the sender language
  personal/mealplan: Generate a meal plan for the next 7 days
  translate/english: Translate text to English
  translate/french: Translate text to French
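
For example, to override both the model and the temperature at call time (the gpt-4 alias and the flag placement here are my own assumptions, mirroring the Ollama example above):

> cat utils.sh | lm code/comment -m gpt-4 -t 0.2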

llm-together: 54 open-source models, ready to use

On a side note, we created a plugin for llm-cli to support the Together backend. Together is a cloud platform for fine-tuning and running large AI models. They offer “featured” models, which are always ready for inference, like llama-2-70b-chat or CodeLlama-34b-Python. We don’t have the resources to host these models but are eager to test them. By installing this plugin, you have 54 open-source models to play with. Awesome stuff!

Together provides $25 of free credits for new signups, enough to try lots of models and prompts: https://together.ai/apis

Find the GitHub project here: https://github.com/wearedevx/llm-together

Getting started

> llm install llm-together
> llm keys set together
> Enter key: <paste key here>
> llm models list
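
Then, assuming the plugin registers the models under their Together IDs (as listed below), you can query one directly with llm’s -m flag, for example:

> llm -m togethercomputer/llama-2-70b-chat "Say hello in French"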

Here is the full list of featured models on Together.ai (October 13th, 2023):

  • Austism/chronos-hermes-13b

  • Gryphe/MythoMax-L2-13b

  • NousResearch/Nous-Hermes-Llama2-13b

  • NumbersStation/nsql-llama-2-7B

  • OpenAssistant/llama2-70b-oasst-sft-v10

  • Phind/Phind-CodeLlama-34B-Python-v1

  • Phind/Phind-CodeLlama-34B-v2

  • SG161222/Realistic_Vision_V3.0_VAE

  • WizardLM/WizardCoder-Python-34B-V1.0

  • WizardLM/WizardLM-70B-V1.0

  • garage-bAInd/Platypus2-70B-instruct

  • huggyllama/llama-65b

  • lmsys/vicuna-13b-v1.5-16k

  • lmsys/vicuna-13b-v1.5

  • mistralai/Mistral-7B-Instruct-v0.1

  • prompthero/openjourney

  • togethercomputer/CodeLlama-13b-Instruct

  • togethercomputer/CodeLlama-13b-Python

  • togethercomputer/CodeLlama-13b

  • togethercomputer/CodeLlama-34b-Instruct

  • togethercomputer/CodeLlama-34b-Python

  • togethercomputer/CodeLlama-34b

  • togethercomputer/CodeLlama-7b-Instruct

  • togethercomputer/CodeLlama-7b-Python

  • togethercomputer/CodeLlama-7b

  • togethercomputer/GPT-JT-6B-v1

  • togethercomputer/GPT-JT-Moderation-6B

  • togethercomputer/GPT-NeoXT-Chat-Base-20B

  • togethercomputer/LLaMA-2-7B-32K

  • togethercomputer/Llama-2-7B-32K-Instruct

  • togethercomputer/Pythia-Chat-Base-7B-v0.16

  • togethercomputer/Qwen-7B-Chat

  • togethercomputer/Qwen-7B

  • togethercomputer/RedPajama-INCITE-7B-Base

  • togethercomputer/RedPajama-INCITE-7B-Chat

  • togethercomputer/RedPajama-INCITE-7B-Instruct

  • togethercomputer/RedPajama-INCITE-Base-3B-v1

  • togethercomputer/RedPajama-INCITE-Chat-3B-v1

  • togethercomputer/RedPajama-INCITE-Instruct-3B-v1

  • togethercomputer/alpaca-7b

  • togethercomputer/falcon-40b-instruct

  • togethercomputer/falcon-40b

  • togethercomputer/falcon-7b-instruct

  • togethercomputer/falcon-7b

  • togethercomputer/llama-2-13b-chat

  • togethercomputer/llama-2-13b

  • togethercomputer/llama-2-70b-chat

  • togethercomputer/llama-2-70b

  • togethercomputer/llama-2-7b-chat

  • togethercomputer/llama-2-7b

  • togethercomputer/mpt-30b-chat

  • togethercomputer/mpt-30b-instruct

  • togethercomputer/mpt-30b

  • upstage/SOLAR-0-70b-16bit
