Download
curl -fsSL https://ollama.com/install.sh | sh
Or download and install the binary manually:
sudo curl -L https://ollama.com/download/ollama-linux-amd64 -o /usr/bin/ollama
sudo chmod +x /usr/bin/ollama
Create a user for Ollama:
sudo useradd -r -s /bin/false -m -d /usr/share/ollama ollama
Create a service file in /etc/systemd/system/ollama.service:
[Unit]
Description=Ollama Service
After=network-online.target
[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
[Install]
WantedBy=default.target
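If you need to change the server's defaults, environment variables can be added to the [Service] section of this unit file. For example (OLLAMA_HOST and OLLAMA_MODELS are the variables Ollama reads for its listen address and model storage directory; adjust the values to your setup):

```ini
[Service]
# Listen on all interfaces instead of the default 127.0.0.1:11434
Environment="OLLAMA_HOST=0.0.0.0"
# Store downloaded models in a custom directory
# (default for the ollama service user is /usr/share/ollama/.ollama/models)
Environment="OLLAMA_MODELS=/usr/share/ollama/.ollama/models"
```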
Then reload systemd, start the service, and enable it at boot:
sudo systemctl daemon-reload
sudo systemctl start ollama
sudo systemctl enable ollama
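To confirm the server came up, you can probe its listen address. This is a minimal sketch assuming the default address of 127.0.0.1:11434; a running server answers a plain HTTP request there:

```shell
# Minimal health check, assuming Ollama's default listen address.
# A running server responds to plain HTTP on 127.0.0.1:11434.
if curl -fsS --max-time 2 http://127.0.0.1:11434 >/dev/null 2>&1; then
  status="running"
else
  status="not running"
fi
echo "ollama server: $status"
```

If it reports not running, check the service logs with journalctl -u ollama.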
Install CUDA drivers (optional, for NVIDIA GPUs). Verify that the drivers are installed by running nvidia-smi, which should print details about your GPU.

Example models that can be downloaded and run:
| Model | Parameters | Size | Download |
| --- | --- | --- | --- |
| Llama 3 | 8B | 4.7GB | `ollama run llama3` |
| Llama 3 | 70B | 40GB | `ollama run llama3:70b` |
| Mistral | 7B | 4.1GB | `ollama run mistral` |
| Dolphin Phi | 2.7B | 1.6GB | `ollama run dolphin-phi` |
| Phi-2 | 2.7B | 1.7GB | `ollama run phi` |
| Neural Chat | 7B | 4.1GB | `ollama run neural-chat` |
| Starling | 7B | 4.1GB | `ollama run starling-lm` |
| Code Llama | 7B | 3.8GB | `ollama run codellama` |
| Llama 2 Uncensored | 7B | 3.8GB | `ollama run llama2-uncensored` |
| Llama 2 13B | 13B | 7.3GB | `ollama run llama2:13b` |
| Llama 2 70B | 70B | 39GB | `ollama run llama2:70b` |
| Orca Mini | 3B | 1.9GB | `ollama run orca-mini` |
| LLaVA | 7B | 4.5GB | `ollama run llava` |
| Gemma | 2B | 1.4GB | `ollama run gemma:2b` |
| Gemma | 7B | 4.8GB | `ollama run gemma:7b` |
| Solar | 10.7B | 6.1GB | `ollama run solar` |
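As a rule of thumb, you should have at least 8 GB of RAM available to run the 7B models, 16 GB for the 13B models, and 32 GB for the 33B models. That guideline can be sketched as a shell helper (min_ram_gb is a hypothetical name, and the thresholds are only the rough guideline above, not exact requirements):

```shell
# Hypothetical helper encoding the rough RAM guideline:
# >= 8 GB for up to ~7B parameters, >= 16 GB for ~13B, >= 32 GB for ~33B.
min_ram_gb() {
  # $1 is the parameter count in whole billions, e.g. 7, 13, 33
  if [ "$1" -le 7 ]; then echo 8
  elif [ "$1" -le 13 ]; then echo 16
  elif [ "$1" -le 33 ]; then echo 32
  else echo "more than 32"   # larger models need considerably more
  fi
}

min_ram_gb 7    # prints 8
min_ram_gb 13   # prints 16
min_ram_gb 33   # prints 32
```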
Uninstall
Remove the ollama service:
sudo systemctl stop ollama
sudo systemctl disable ollama
sudo rm /etc/systemd/system/ollama.service
Remove the ollama binary from your bin directory (either /usr/local/bin, /usr/bin, or /bin):
sudo rm $(which ollama)
Remove the downloaded models and Ollama service user and group:
sudo rm -r /usr/share/ollama
sudo userdel ollama
sudo groupdel ollama