Linux Operating System Setup
- Pop!_OS (https://pop.system76.com/)
Ollama Documentation
OpenWebUI Documentation
Windows Setup
Terminal:
- Open Command Prompt (CMD)
- wsl --install
- Reboot your system
- When it is done you will be prompted to set up a
- Username
- Password
- sudo apt update
- sudo apt upgrade -y
- Once the above is completed then run the next command
- curl -fsSL https://ollama.com/install.sh | sh
Installing and Using Ollama
Web-Based:
- Open http://127.0.0.1:11434
- If it was successful you should see "Ollama is running" in the body of the webpage
Terminal:
- ollama pull llama2
- ollama run llama2
- Give a prompt that you want it to answer
- /bye (will exit)
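The same check can be done against Ollama's REST API from the terminal; a minimal sketch, assuming the server is on its default port 11434 and the llama2 model has been pulled:

```shell
# Query the Ollama REST API directly (default port 11434).
OLLAMA_URL="http://127.0.0.1:11434"

# Non-streaming generation request against the llama2 model pulled above.
curl -s "$OLLAMA_URL/api/generate" \
  -d '{"model": "llama2", "prompt": "Why is the sky blue?", "stream": false}' \
  || echo "Ollama is not reachable at $OLLAMA_URL"

# List the models currently available on the server.
curl -s "$OLLAMA_URL/api/tags" || true
```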
- Make sure Docker is installed; if not, follow the Installing Docker section below
- sudo docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
- sudo docker ps
Web-Based:
- Sign up via http://localhost:8080/auth (this will automatically be the admin account)
- Select llama2 in the Select a model dropdown
- If you need to change the Connections URL, click on your profile name (bottom left of the sidebar)
- Select Connections and update the url
Installing Docker
Terminal:
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
- sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
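A quick sanity check after the install; the `have` helper is just a convenience for this sketch:

```shell
# have() reports whether a command is on PATH; a small helper for this check.
have() { command -v "$1" >/dev/null 2>&1; }

if have docker; then
  docker --version                     # prints the installed version
  # Pull and run the official test image to confirm the daemon works.
  sudo docker run --rm hello-world || echo "could not run hello-world (check the docker service)"
else
  echo "docker not found - re-run the install steps above"
fi
```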
Adding more Models
Web-Based:
- Go to https://ollama.com/library to see what is available
Terminal:
- ollama pull codegemma (example)
Web-Based:
- Login via http://localhost:8080/auth
- Select codegemma (or whichever model you pulled) in the Select a model dropdown
- Type @ to mention another model and have it respond to your prompt as well
- Alternatively run
- ollama pull llama2 && ollama pull llava && ollama pull mistral && ollama pull codegemma
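The chained pull above can also be written as a loop, which is easier to extend with more model names:

```shell
# Pull several models in one go; extend MODELS as needed.
MODELS="llama2 llava mistral codegemma"
for m in $MODELS; do
  echo "pulling $m"
  ollama pull "$m" || echo "failed to pull $m (is Ollama installed?)"
done
```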
Uploading Files
Terminal:
- ollama pull llava
Web-Based:
- Login via http://localhost:8080/auth
- Select llava in the Select a model dropdown
Admin Settings
Web-Based:
- Login via http://localhost:8080/auth (use your admin account)
- Click on Admin Settings
Add New User
- Login via http://localhost:8080/auth (use your admin account)
- Click on Admin Settings
- Go to the Users Tab
User Permissions
- You can allow chat deletion
- Or allow a user to only use a specific model (Model Whitelisting)
Create Custom Model
Web-Based:
- Login via http://localhost:8080/auth
- Go to http://localhost:8080/admin
- Then the Modelfiles Tab
- Click Create a Modelfile
Add Stable Diffusion | Image Support
Install PyENV and Python
Terminal:
- sudo apt install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev python3-openssl git
- curl https://pyenv.run | bash
- Add the following lines to the end of ~/.bashrc:
- export PYENV_ROOT="$HOME/.pyenv"
- [[ -d $PYENV_ROOT/bin ]] && export PATH="$PYENV_ROOT/bin:$PATH"
- eval "$(pyenv init -)"
- source ~/.bashrc
- pyenv -h (verifies the installation is working)
- pyenv install 3.10
- pyenv global 3.10
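To confirm the shims are active, a guarded check (assumes the default PYENV_ROOT of ~/.pyenv from the lines above):

```shell
# Confirm pyenv shims now resolve python (assumes default ~/.pyenv root).
if command -v pyenv >/dev/null 2>&1; then
  pyenv versions        # the active version is marked with '*'
  python --version      # should report Python 3.10.x
  command -v python     # should point inside ~/.pyenv/shims
  STATUS="pyenv-active"
else
  echo "pyenv not on PATH - re-check the PYENV_ROOT lines in ~/.bashrc"
  STATUS="pyenv-missing"
fi
```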
Install AUTOMATIC1111
Terminal:
- mkdir stablediff
- cd stablediff
- wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
- ls (make sure it downloaded the file)
- chmod +x webui.sh
- ./webui.sh
Web-Based:
- Go to http://localhost:7860
Integrate it into your Ollama instance
Terminal:
- ./webui.sh --listen --api
Web-Based:
- Login via http://localhost:8080/auth
- Click on your profile name (bottom left of the sidebar)
- Select Images and update the url with http://localhost:7860
- Press the refresh option
- Turn on Experimental
- Enter a prompt
- Press on the Image icon
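The image endpoint OpenWebUI talks to can also be exercised directly; a sketch against the AUTOMATIC1111 txt2img API (the prompt text is just an example):

```shell
# Call the AUTOMATIC1111 txt2img endpoint directly (assumes webui.sh
# was started with --listen --api on the default port 7860).
SD_URL="http://localhost:7860"
curl -s "$SD_URL/sdapi/v1/txt2img" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a lighthouse at sunset", "steps": 20}' \
  || echo "Stable Diffusion API not reachable at $SD_URL"
# The JSON response carries base64-encoded PNG data in its "images" field.
```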
Add Files | Documents Support
Web-Based:
- Login via http://localhost:8080/auth
- Click on your profile name (bottom left of the sidebar)
- Select Documents
- Reference an added document in a chat by typing #
Implementing Ollama with Obsidian
- Go to Settings (Cog icon in the left bottom corner of the application)
- Select Community Plugins
- Search for BMO Chatbot
- Install and Enable it
- Go back to Settings and you will find BMO Chatbot under the Community Plugins sidebar section
- Under Ollama Connection add the url to your Ollama Server
- Under General, select the model you want to use
Exposing the Server to the outside
Terminal:
- sudo docker stop open-webui
- sudo docker rm open-webui
- sudo docker run -d -p 8585:8080 -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main (change the ports as you wish)
- Wait about 5 minutes
Web-Based:
- Sign up via http://localhost:8585/auth (this will automatically be the admin account)
- Admin Panel > Connections change the Ollama API url to http://host.docker.internal:11434
- Reinstall the models
- Use DuckDNS example http://aiserver.duckdns.org:8585 (https://www.duckdns.org/)
- or
- Access it via http://your-public-ip:8585
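If the host runs a firewall such as ufw, the published port has to be opened as well; a sketch, assuming the 8585 port from the docker run command above:

```shell
# Open the published OpenWebUI port in ufw, if ufw is present.
PORT=8585
if command -v ufw >/dev/null 2>&1; then
  sudo ufw allow "${PORT}/tcp" || echo "could not update ufw rules"
  sudo ufw status || true
else
  echo "ufw not installed; open port ${PORT} in your distro's firewall instead"
fi
```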
Customize the OpenWebUI Site
Access the Docker container
- sudo docker exec -it open-webui /bin/bash
Install nano if not already installed
- apt-get update
- apt-get install nano
Edit the configuration file
- nano /app/backend/config.py
- exit
Restart the Docker container to apply changes
- sudo docker restart open-webui
Excellent Models
- codegemma
- llama2
- llava
- mistral