Ollama
Description
Get up and running with large language models. Upstream project: https://github.com/ollama/ollama
Environment Modules
Run module spider ollama to find out what environment modules are available for this application.
System Variables
- HPC_OLLAMA_DIR - installation directory
Additional Information
HiPerGator users should be aware that the State of Florida prohibits the use of DeepSeek models, like R1. Please consult the list of prohibited applications. This prohibition extends to HiPerGator and all state-owned devices.
Interactive OLLAMA use
Users need to start an interactive HiPerGator Desktop session on a GPU node via Open OnDemand (https://ood.rc.ufl.edu/) and launch two terminals: one to start the Ollama server and the other to chat with LLMs.
In terminal 1, load the ollama module and start the server with either default or custom environment settings:
1. Default settings (use the default environment variables):
$ ml ollama
$ ollama serve
2. Custom settings (pass environment variables to the server):
$ ml ollama
$ env {options} ollama serve
For example, to set a custom path for downloaded models:
$ env OLLAMA_MODELS=/blue/group/$USER/ollama/models ollama serve
In terminal 2, pull a model and start chatting. For example, llama3.2:
$ ml ollama
$ ollama pull llama3.2
$ ollama run llama3.2
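The running server can also be queried over its HTTP API, which is handy for quick scripted tests from the second terminal. The snippet below is a minimal sketch, assuming the server was started with default settings (listening on 127.0.0.1:11434), that llama3.2 has already been pulled, and that the requests package is available in your Python environment:

import requests

# Ollama's default listen address; adjust if you set OLLAMA_HOST.
URL = "http://127.0.0.1:11434/api/generate"

payload = {
    "model": "llama3.2",             # the model pulled above
    "prompt": "why is the sky blue",
    "stream": False,                 # return one JSON object instead of a stream
}

r = requests.post(URL, json=payload, timeout=300)
r.raise_for_status()

# With stream=False the reply is a single JSON object whose "response"
# field holds the generated text.
print(r.json()["response"])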
OLLAMA as a Slurm job
#!/bin/bash
#SBATCH --job-name=ollama
#SBATCH --output=ollama_%j.log
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=20gb
#SBATCH --partition=gpu
#SBATCH --gpus=a100:1
#SBATCH --time=01:00:00

date;hostname;pwd

module load ollama

# Add a conda environment that has langchain installed to the path
env_path=/my/conda/env/bin
export PATH=$env_path:$PATH

# Start the Ollama server in the background and give it a few seconds
# to come up before pulling the model
ollama serve &
sleep 10

ollama pull mistral
python my_ollama_python_script.py >> my_ollama_output.txt
Example Python script:
# Note: newer LangChain releases move this class to langchain_community.llms;
# if this import fails, try: from langchain_community.llms import Ollama
from langchain.llms import Ollama

ollama = Ollama(model="mistral")
print(ollama.invoke("why is the sky blue"))
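If LangChain is not required, the official ollama Python client is a lighter-weight alternative that talks to the same local server. The following is a sketch, assuming the ollama package has been installed into the conda environment added to PATH above and that the server from the job script is running:

import ollama

# Assumes the "ollama" Python client package is installed in the active
# environment, an Ollama server is running on this node with default
# settings, and the model has already been pulled (ollama pull mistral).
reply = ollama.chat(
    model="mistral",
    messages=[{"role": "user", "content": "why is the sky blue"}],
)
print(reply["message"]["content"])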