Ollama on Intel Arc A770 using Vulkan

Vulkan support in Ollama is still experimental

Intro

I have an Intel Arc A770 and it was always a pain to run anything AI on it - Intel has its own stack, oneAPI, that you need to pull from them, with its own special compiler and so on. Nonsense.

Vulkan

Enter Vulkan.

Vulkan is the “new” API from Khronos, derived from Mantle, AMD’s once-upon-a-time reinvention of the graphics API. Khronos took Mantle as a base and produced Vulkan. It was generally a revolution in graphics - it gives applications much more direct control of the hardware. “New” here still means years younger than OpenGL.

One would have thought that people would jump on it, being open and all. But no. Lazy people stuck with CUDA, so Nvidia can have a 1000% margin on anything they sell.

Great.

Vulkan, as said, is an alternative to oneAPI, CUDA and ROCm - and one that everyone supports. At the time of writing, Vulkan support is still experimental in Ollama, so a special flag has to be set via an environment variable. Installation follows.

Note: Vulkan itself is not experimental; the experimental label only applies to Ollama’s Vulkan backend.
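
Before touching Ollama at all, it is worth confirming that the Vulkan driver for the card actually works. On Fedora the Intel Vulkan driver (ANV) ships with Mesa, and the vulkan-tools package provides vulkaninfo; the package name is a Fedora assumption and differs on other distros.

# Fedora package name; other distros differ
sudo dnf install vulkan-tools
# should list the Arc A770 as a Vulkan physical device
vulkaninfo --summary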

Installation

The official install of Ollama is sudo-executing a script from the internet. What could go wrong. I hate this approach, so I decided to do things manually, and to be honest, nothing complicated happens here. I'm doing this on Fedora, which at the time of writing is version 43.

Download

wget https://ollama.com/download/ollama-linux-amd64.tgz
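
Optionally, peek inside the archive first: at the time of writing it contains a bin/ directory with the ollama binary and a lib/ directory with the GPU backends, which explains the slightly odd paths in the next step.

tar --list --gzip --file=ollama-linux-amd64.tgz | head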

Install

sudo tar --extract --gzip --directory=/usr/local/bin --file=ollama-linux-amd64.tgz
sudo ln -sf /usr/local/bin/bin/ollama /usr/local/bin/ollama
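
A quick sanity check that the binary runs (it will complain that no server is running yet, which is fine at this point):

/usr/local/bin/ollama --version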

Set up the ollama user

sudo useradd --system --shell /bin/false --user-group --create-home --home-dir /usr/share/ollama ollama
sudo usermod -a --groups render ollama
sudo usermod -a --groups video ollama
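
The render and video groups are what give the service user access to the GPU device nodes under /dev/dri. A quick way to check that the group membership took and that a render node exists:

id ollama
ls -l /dev/dri/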

Systemd service

cat <<EOF | sudo tee /etc/systemd/system/ollama.service >/dev/null
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=$PATH"
Environment="OLLAMA_VULKAN=1"

[Install]
WantedBy=default.target
EOF
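
If you later want to tweak more Ollama settings (for example OLLAMA_HOST to make the server listen on all interfaces), a systemd drop-in keeps the main unit file clean; this is plain systemd, nothing Ollama-specific, and the value shown is just an example.

sudo systemctl edit ollama
# then add, for example:
# [Service]
# Environment="OLLAMA_HOST=0.0.0.0:11434"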

Start the service

sudo systemctl daemon-reload
sudo systemctl enable --now ollama
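
To check that the service is up and that the Vulkan backend was actually picked up, look at the status and the logs; the exact wording of the Vulkan log line depends on the Ollama version, so treat the grep as a rough filter.

systemctl status ollama
journalctl -u ollama --no-pager | grep -i vulkan
curl http://127.0.0.1:11434/api/version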

open-webui

Open-webui, being a web app, is more involved to install manually, so I skipped that and just run it via podman.

podman run -d \
    --network host \
    -v open-webui:/app/backend/data \
    --name open-webui \
    -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
    --rm \
    ghcr.io/open-webui/open-webui:latest
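
Note that --rm together with -d means the container disappears when stopped and has to be started with the same podman run command again; the open-webui named volume keeps the data either way. To confirm it is running:

podman ps
podman logs -f open-webui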

Result

Open http://localhost:8080 and enjoy open-webui connected to the locally running Ollama server.
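
For a quick end-to-end test from the CLI, pull a small model and watch the GPU while it generates; llama3.2 here is just an example model name from the Ollama library, and intel_gpu_top comes from the igt-gpu-tools package on Fedora (the package name may differ elsewhere).

ollama run llama3.2 "Say hello in one sentence."
# in a second terminal, watch the Arc A770 do the work
sudo intel_gpu_top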