Update README.md
jpmcb authored Dec 13, 2023
1 parent `ec485ed` · commit `a1d3d64`
Showing 1 changed file with 8 additions and 5 deletions.
_[Ollama](https://github.com/jmorganca/ollama) interfaces for Neovim: get up and running with large language models locally._

https://github.com/jpmcb/nvim-llama/assets/23109390/3e9e7248-dcf4-4349-8ee2-fd87ac3838ca

## Requirements

Docker is required to use `nvim-llama`.

And that's it! All models and clients run from within Docker to provide chat interfaces and functionality.
This platform-agnostic approach works on macOS, Linux, and Windows.

## Installation

Use your favorite package manager to install the plugin:

With packer:

```lua
use 'jpmcb/nvim-llama'
```

With vim-plug:

```vim
Plug 'jpmcb/nvim-llama'
```

## Setup & configuration

In your `init.vim`, set up the plugin:
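The exact options depend on the plugin version; a minimal sketch, assuming the plugin follows the common Neovim `setup` convention (Lua shown; wrap it in a `lua << EOF ... EOF` block if you configure from `init.vim`):

```lua
-- Minimal sketch: call the plugin's setup function with defaults.
-- The option names accepted here are assumptions; check the plugin's
-- help docs for the authoritative list.
require('nvim-llama').setup {}
```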

When the `model` setting is configured, the specified model is downloaded automatically.
> Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models. 70B parameter models require upwards of 64 GB of RAM (if not more).
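For instance, on a 16 GB machine a 7B model is a reasonable ceiling. A hedged sketch (the `model` option name and the `llama2:7b` tag are assumptions; any tag from the Ollama model library should work the same way):

```lua
-- Sketch: pick a 7B model to stay within roughly 16 GB of RAM.
-- `model` is an assumed option name; see the plugin docs.
require('nvim-llama').setup {
  model = 'llama2:7b',
}
```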

## Usage

The `:Llama` command opens a `Terminal` window where you can start chatting with your LLM.

To exit `Terminal` mode, which by default locks focus to the terminal buffer, use the key sequence `Ctrl-\ Ctrl-n`.
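If you leave terminal mode often, a mapping can help; a sketch using Neovim's standard keymap API (mapping `<Esc>` here is a common convention, not part of the plugin):

```lua
-- Map <Esc> in terminal mode to the built-in exit sequence <C-\><C-n>.
vim.keymap.set('t', '<Esc>', [[<C-\><C-n>]], { desc = 'Exit terminal mode' })
```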
