Bug 954399 - [guru] sci-ml/ollama-0.6.5-r1 cannot use amd gpu even with rocm use flag enabled
Summary: [guru] sci-ml/ollama-0.6.5-r1 cannot use amd gpu even with rocm use flag enabled
Status: RESOLVED FIXED
Alias: None
Product: GURU
Classification: Unclassified
Component: Package issues
Hardware: All
OS: Linux
Importance: Normal normal
Assignee: Paul Zander
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2025-04-25 13:54 UTC by neilgechn
Modified: 2025-04-30 12:02 UTC
CC List: 0 users

See Also:
Package list:
Runtime testing required: ---


Attachments
Patch file for ollama-0.6.5-r1.ebuild (file_954399.txt, 150 bytes, patch)
2025-04-25 13:59 UTC, neilgechn

Description neilgechn 2025-04-25 13:54:58 UTC
Problem Description:
Ollama runs exclusively in CPU mode even though it detects the AMD GPU. When executing `ollama ps`, the GPU is listed as the processor, but system monitoring shows no GPU usage.

Ollama is expected to use the AMD GPU to accelerate running LLMs, but it does not.


Installation Details:
The Ollama package was installed with the following USE flags and build settings:
```
USE="rocm -blas -cuda -mkl"
AMDGPU_TARGETS="gfx1100 -gfx90a -gfx803 -gfx900 -gfx906 -gfx908 -gfx940 -gfx941 -gfx942 -gfx1010 -gfx1011 -gfx1012 -gfx1030 -gfx1031 -gfx1101 -gfx1102" CPU_FLAGS_X86="avx avx2 avx512_bf16 avx512_vnni avx512f avx512vbmi f16c fma3 -amx_int8 -amx_tile -avx_vnni"
```
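For reference, these settings would typically be applied through Portage configuration along the following lines (the file paths and exact layout below are illustrative assumptions, not taken from the report):
```
# /etc/portage/make.conf (illustrative; only the enabled targets/flags are shown)
AMDGPU_TARGETS="gfx1100"
CPU_FLAGS_X86="avx avx2 avx512_bf16 avx512_vnni avx512f avx512vbmi f16c fma3"

# /etc/portage/package.use/ollama (illustrative)
sci-ml/ollama rocm -blas -cuda -mkl
```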


Cause of the Issue:
According to the ebuild file at https://gitweb.gentoo.org/repo/proj/guru.git/tree/sci-ml/ollama/ollama-0.6.5-r1.ebuild, Ollama is compiled with:
```
CMAKE_HIP_ARCHITECTURES="$(get_amdgpu_flags)"
CMAKE_HIP_PLATFORM="amd"
```

However, these settings are not recognized during the configuration and compilation process. As a result, the file `/usr/lib64/ollama/rocm/libggml-hip.so` is missing after installing the Ollama package.
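For context, a condensed sketch of where these arguments would sit in the ebuild, assuming the usual cmake.eclass src_configure/mycmakeargs pattern (not copied verbatim from the ebuild):
```
src_configure() {
	local mycmakeargs=(
		# ... other options elided ...
		-DCMAKE_HIP_ARCHITECTURES="$(get_amdgpu_flags)"
		-DCMAKE_HIP_PLATFORM="amd"
	)
	cmake_src_configure
}
```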


Solution Investigated and Tested:
In the existing ebuild file:
1. Remove -DCMAKE_HIP_ARCHITECTURES and -DCMAKE_HIP_PLATFORM (line 234 and line 345).
2. Replace them with `-DAMDGPU_TARGETS="$(get_amdgpu_flags)"`
(as done in https://gitlab.archlinux.org/archlinux/packaging/packages/ollama/-/blob/main/PKGBUILD)

This modification successfully generates `/usr/lib64/ollama/rocm/libggml-hip.so` with rocblas, and Ollama can now utilize the GPU at runtime (e.g., start the server with `ollama serve` and run an LLM with `ollama run qwen2.5:32b`).
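A minimal sketch of the change described above (the attached patch in comment 1 is authoritative; the comments here only summarize the observed behaviour):
```
# Before: not picked up by the build, so libggml-hip.so is never produced
-DCMAKE_HIP_ARCHITECTURES="$(get_amdgpu_flags)"
-DCMAKE_HIP_PLATFORM="amd"

# After: matches the Arch Linux PKGBUILD approach and builds the ROCm backend
-DAMDGPU_TARGETS="$(get_amdgpu_flags)"
```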
Comment 1 neilgechn 2025-04-25 13:59:49 UTC
Created attachment 926180 [details, diff]
Patch file for ollama-0.6.5-r1.ebuild
Comment 2 Larry the Git Cow gentoo-dev 2025-04-30 12:02:51 UTC
The bug has been closed via the following commit(s):

https://gitweb.gentoo.org/repo/proj/guru.git/commit/?id=e92cad7ec294d72b3435e5f3aaa52a48da3c57a5

commit e92cad7ec294d72b3435e5f3aaa52a48da3c57a5
Author:     Paul Zander <negril.nx+gentoo@gmail.com>
AuthorDate: 2025-04-27 14:20:05 +0000
Commit:     Paul Zander <negril.nx+gentoo@gmail.com>
CommitDate: 2025-04-28 18:48:24 +0000

    sci-ml/ollama: add 0.6.6
    
    Closes: https://bugs.gentoo.org/954399
    Signed-off-by: Paul Zander <negril.nx+gentoo@gmail.com>

 sci-ml/ollama/Manifest            |   2 +
 sci-ml/ollama/ollama-0.6.6.ebuild | 287 ++++++++++++++++++++++++++++++++++++++
 sci-ml/ollama/ollama-9999.ebuild  |   3 +
 3 files changed, 292 insertions(+)