Windows preview · Ollama Blog

Ollama is now available on Windows in preview, making it possible to pull, run and create large language models in a new native Windows experience. Ollama on Windows includes built-in GPU acceleration, access to the full model library, and the Ollama API including OpenAI compatibility.
Hardware acceleration
Ollama accelerates running models using NVIDIA GPUs as well as modern CPU instruction sets such as AVX and AVX2 if available. No configuration or virtualization required!
Full access to the model library
The full Ollama model library is available to run on Windows, including vision models. When running vision models such as LLaVA 1.6, images can be dragged and dropped into ollama run to add them to a message.
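Dropping an image onto the terminal inserts its file path into the prompt, and Ollama picks the image up from there. As a rough sketch, with a placeholder image path, a session might look like this:

ollama run llava

>>> What is in this image? C:\Users\you\Pictures\sunset.png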
Always-on Ollama API
Ollama’s API automatically runs in the background, serving on http://localhost:11434. Tools and applications can connect to it without any additional setup.
For example, here’s how to invoke Ollama’s API using PowerShell:
(Invoke-WebRequest -Method POST -Body '{"model":"llama2", "prompt":"Why is the sky blue?", "stream": false}' -Uri http://localhost:11434/api/generate).Content | ConvertFrom-Json
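The parsed result is a JSON object whose response field holds the generated text, alongside timing and token-count metadata, so appending .response to the expression above prints just the completion:

((Invoke-WebRequest -Method POST -Body '{"model":"llama2", "prompt":"Why is the sky blue?", "stream": false}' -Uri http://localhost:11434/api/generate).Content | ConvertFrom-Json).response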
Ollama on Windows also supports the same OpenAI compatibility as on other platforms, making it possible to use existing tooling built for OpenAI with local models via Ollama.
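For example, here’s a minimal sketch of calling the OpenAI-compatible chat completions endpoint from PowerShell, reusing the llama2 model from above:

(Invoke-WebRequest -Method POST -ContentType "application/json" -Body '{"model": "llama2", "messages": [{"role": "user", "content": "Why is the sky blue?"}]}' -Uri http://localhost:11434/v1/chat/completions).Content | ConvertFrom-Json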
Get started
To get started with the Ollama on Windows Preview:
- Download Ollama on Windows
- Double-click the installer, OllamaSetup.exe
- After installing, open your favorite terminal and run ollama run llama2 to run a model
Ollama will prompt for updates as new releases become available. We’d love your feedback! If you encounter any issues, please let us know by opening an issue or by joining the Discord server.