Gentoo's Bugzilla – Attachment 897436 Details for Bug 935842
[guru] app-misc/ollama: fails to compile when CUDAToolkit is detected (GCC-14 SYSTEM)
Description: Build log
Filename: build.log
MIME Type: text/x-log
Creator: Lucio Sauer
Created: 2024-07-10 23:53:47 UTC
Size: 54.54 KB
> * Package: dev-ml/ollama-9999:0 > * Repository: guru > * Maintainer: zdanevich.vitaly@ya.ru > * USE: abi_x86_64 amd64 elibc_glibc kernel_linux > * FEATURES: network-sandbox preserve-libs sandbox userpriv usersandbox > >>>> Unpacking source... > * Repository id: ollama_ollama.git > * To override fetched repository properties, use: > * EGIT_OVERRIDE_REPO_OLLAMA_OLLAMA > * EGIT_OVERRIDE_BRANCH_OLLAMA_OLLAMA > * EGIT_OVERRIDE_COMMIT_OLLAMA_OLLAMA > * EGIT_OVERRIDE_COMMIT_DATE_OLLAMA_OLLAMA > * > * Fetching https://github.com/ollama/ollama.git ... >git fetch https://github.com/ollama/ollama.git +HEAD:refs/git-r3/HEAD >git symbolic-ref refs/git-r3/dev-ml/ollama/0/__main__ refs/git-r3/HEAD > * Repository id: ggerganov_llama.cpp.git > * To override fetched repository properties, use: > * EGIT_OVERRIDE_REPO_GGERGANOV_LLAMA_CPP > * EGIT_OVERRIDE_BRANCH_GGERGANOV_LLAMA_CPP > * EGIT_OVERRIDE_COMMIT_GGERGANOV_LLAMA_CPP > * EGIT_OVERRIDE_COMMIT_DATE_GGERGANOV_LLAMA_CPP > * > * Fetching https://github.com/ggerganov/llama.cpp.git ... >git fetch https://github.com/ggerganov/llama.cpp.git --prune +refs/heads/*:refs/heads/* +refs/tags/*:refs/tags/* +refs/notes/*:refs/notes/* +refs/pull/*/head:refs/pull/* +HEAD:refs/git-r3/HEAD a8db2a9ce64cd4417f6a312ab61858f17f0f8584 >From https://github.com/ggerganov/llama.cpp > * branch a8db2a9ce64cd4417f6a312ab61858f17f0f8584 -> FETCH_HEAD >git update-ref --no-deref refs/git-r3/dev-ml/ollama/0/llama_cpp/__main__ a8db2a9ce64cd4417f6a312ab61858f17f0f8584 > * Repository id: nomic-ai_kompute.git > * To override fetched repository properties, use: > * EGIT_OVERRIDE_REPO_NOMIC_AI_KOMPUTE > * EGIT_OVERRIDE_BRANCH_NOMIC_AI_KOMPUTE > * EGIT_OVERRIDE_COMMIT_NOMIC_AI_KOMPUTE > * EGIT_OVERRIDE_COMMIT_DATE_NOMIC_AI_KOMPUTE > * > * Fetching https://github.com/nomic-ai/kompute.git ... >git fetch https://github.com/nomic-ai/kompute.git --prune +refs/heads/*:refs/heads/* +refs/tags/*:refs/tags/* +refs/notes/*:refs/notes/* +refs/pull/*/head:refs/pull/* +HEAD:refs/git-r3/HEAD 4565194ed7c32d1d2efa32ceab4d3c6cae006306 >From https://github.com/nomic-ai/kompute > * branch 4565194ed7c32d1d2efa32ceab4d3c6cae006306 -> FETCH_HEAD >git update-ref --no-deref refs/git-r3/dev-ml/ollama/0/llama_cpp/kompute/__main__ 4565194ed7c32d1d2efa32ceab4d3c6cae006306 > * Checking out https://github.com/ollama/ollama.git to /var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999 ... >git checkout --quiet refs/git-r3/HEAD >GIT update --> > repository: https://github.com/ollama/ollama.git > at the commit: 2d1e3c32291239137fe7763431fc094dc809da28 > * Checking out https://github.com/ggerganov/llama.cpp.git to /var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999/llm/llama.cpp ... >git checkout --quiet a8db2a9ce64cd4417f6a312ab61858f17f0f8584 >GIT update --> > repository: https://github.com/ggerganov/llama.cpp.git > at the commit: a8db2a9ce64cd4417f6a312ab61858f17f0f8584 > * Checking out https://github.com/nomic-ai/kompute.git to /var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999/llm/llama.cpp/ggml/src/kompute ... 
>git checkout --quiet 4565194ed7c32d1d2efa32ceab4d3c6cae006306 >GIT update --> > repository: https://github.com/nomic-ai/kompute.git > at the commit: 4565194ed7c32d1d2efa32ceab4d3c6cae006306 >go mod vendor >go: downloading github.com/stretchr/testify v1.9.0 >go: downloading github.com/google/uuid v1.1.2 >go: downloading github.com/spf13/cobra v1.7.0 >go: downloading golang.org/x/sys v0.20.0 >go: downloading golang.org/x/crypto v0.23.0 >go: downloading github.com/containerd/console v1.0.3 >go: downloading github.com/mattn/go-runewidth v0.0.14 >go: downloading github.com/olekukonko/tablewriter v0.0.5 >go: downloading golang.org/x/term v0.20.0 >go: downloading github.com/d4l3k/go-bfloat16 v0.0.0-20211005043715-690c3bdd05f1 >go: downloading github.com/nlpodyssey/gopickle v0.3.0 >go: downloading github.com/pdevine/tensor v0.0.0-20240510204454-f88f4562727c >go: downloading github.com/x448/float16 v0.8.4 >go: downloading golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa >go: downloading google.golang.org/protobuf v1.34.1 >go: downloading github.com/gin-gonic/gin v1.10.0 >go: downloading golang.org/x/text v0.15.0 >go: downloading github.com/emirpasic/gods v1.18.1 >go: downloading github.com/agnivade/levenshtein v1.1.1 >go: downloading github.com/google/go-cmp v0.6.0 >go: downloading github.com/gin-contrib/cors v1.7.2 >go: downloading golang.org/x/sync v0.3.0 >go: downloading github.com/davecgh/go-spew v1.1.1 >go: downloading github.com/pmezard/go-difflib v1.0.0 >go: downloading gopkg.in/yaml.v3 v3.0.1 >go: downloading github.com/inconshreveable/mousetrap v1.1.0 >go: downloading github.com/spf13/pflag v1.0.5 >go: downloading github.com/rivo/uniseg v0.2.0 >go: downloading github.com/pkg/errors v0.9.1 >go: downloading github.com/apache/arrow/go/arrow v0.0.0-20211112161151-bc219186db40 >go: downloading github.com/chewxy/hm v1.0.0 >go: downloading github.com/chewxy/math32 v1.10.1 >go: downloading github.com/google/flatbuffers v24.3.25+incompatible >go: downloading go4.org/unsafe/assume-no-moving-gc v0.0.0-20231121144256-b99613f794b6 >go: downloading gonum.org/v1/gonum v0.15.0 >go: downloading gorgonia.org/vecf32 v0.9.0 >go: downloading gorgonia.org/vecf64 v0.9.0 >go: downloading github.com/gin-contrib/sse v0.1.0 >go: downloading github.com/mattn/go-isatty v0.0.20 >go: downloading golang.org/x/net v0.25.0 >go: downloading golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 >go: downloading github.com/gogo/protobuf v1.3.2 >go: downloading github.com/golang/protobuf v1.5.4 >go: downloading github.com/xtgo/set v1.0.0 >go: downloading github.com/go-playground/validator/v10 v10.20.0 >go: downloading github.com/pelletier/go-toml/v2 v2.2.2 >go: downloading github.com/ugorji/go/codec v1.2.12 >go: downloading github.com/bytedance/sonic v1.11.6 >go: downloading github.com/goccy/go-json v0.10.2 >go: downloading github.com/json-iterator/go v1.1.12 >go: downloading github.com/gabriel-vasile/mimetype v1.4.3 >go: downloading github.com/go-playground/universal-translator v0.18.1 >go: downloading github.com/leodido/go-urn v1.4.0 >go: downloading github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd >go: downloading github.com/modern-go/reflect2 v1.0.2 >go: downloading github.com/go-playground/locales v0.14.1 >go: downloading github.com/cloudwego/base64x v0.1.4 >go: downloading golang.org/x/arch v0.8.0 >go: downloading github.com/bytedance/sonic/loader v0.1.1 >go: downloading github.com/twitchyliquid64/golang-asm v0.15.1 >go: downloading github.com/klauspost/cpuid/v2 v2.2.7 >go: downloading 
github.com/cloudwego/iasm v0.2.0 ># github.com/agnivade/levenshtein v1.1.1 >## explicit; go 1.13 >github.com/agnivade/levenshtein ># github.com/apache/arrow/go/arrow v0.0.0-20211112161151-bc219186db40 >## explicit; go 1.15 >github.com/apache/arrow/go/arrow >github.com/apache/arrow/go/arrow/array >github.com/apache/arrow/go/arrow/bitutil >github.com/apache/arrow/go/arrow/decimal128 >github.com/apache/arrow/go/arrow/endian >github.com/apache/arrow/go/arrow/float16 >github.com/apache/arrow/go/arrow/internal/cpu >github.com/apache/arrow/go/arrow/internal/debug >github.com/apache/arrow/go/arrow/memory >github.com/apache/arrow/go/arrow/memory/internal/cgoalloc >github.com/apache/arrow/go/arrow/tensor ># github.com/bytedance/sonic v1.11.6 >## explicit; go 1.16 >github.com/bytedance/sonic >github.com/bytedance/sonic/ast >github.com/bytedance/sonic/decoder >github.com/bytedance/sonic/encoder >github.com/bytedance/sonic/internal/caching >github.com/bytedance/sonic/internal/cpu >github.com/bytedance/sonic/internal/decoder >github.com/bytedance/sonic/internal/encoder >github.com/bytedance/sonic/internal/jit >github.com/bytedance/sonic/internal/native >github.com/bytedance/sonic/internal/native/avx >github.com/bytedance/sonic/internal/native/avx2 >github.com/bytedance/sonic/internal/native/neon >github.com/bytedance/sonic/internal/native/sse >github.com/bytedance/sonic/internal/native/types >github.com/bytedance/sonic/internal/resolver >github.com/bytedance/sonic/internal/rt >github.com/bytedance/sonic/option >github.com/bytedance/sonic/unquote >github.com/bytedance/sonic/utf8 ># github.com/bytedance/sonic/loader v0.1.1 >## explicit; go 1.16 >github.com/bytedance/sonic/loader >github.com/bytedance/sonic/loader/internal/abi >github.com/bytedance/sonic/loader/internal/rt ># github.com/chewxy/hm v1.0.0 >## explicit >github.com/chewxy/hm ># github.com/chewxy/math32 v1.10.1 >## explicit; go 1.13 >github.com/chewxy/math32 ># github.com/cloudwego/base64x v0.1.4 >## explicit; go 1.16 >github.com/cloudwego/base64x ># github.com/cloudwego/iasm v0.2.0 >## explicit; go 1.16 >github.com/cloudwego/iasm/expr >github.com/cloudwego/iasm/x86_64 ># github.com/containerd/console v1.0.3 >## explicit; go 1.13 >github.com/containerd/console ># github.com/d4l3k/go-bfloat16 v0.0.0-20211005043715-690c3bdd05f1 >## explicit; go 1.17 >github.com/d4l3k/go-bfloat16 ># github.com/davecgh/go-spew v1.1.1 >## explicit >github.com/davecgh/go-spew/spew ># github.com/emirpasic/gods v1.18.1 >## explicit; go 1.2 >github.com/emirpasic/gods/containers >github.com/emirpasic/gods/lists >github.com/emirpasic/gods/lists/arraylist >github.com/emirpasic/gods/utils ># github.com/gabriel-vasile/mimetype v1.4.3 >## explicit; go 1.20 >github.com/gabriel-vasile/mimetype >github.com/gabriel-vasile/mimetype/internal/charset >github.com/gabriel-vasile/mimetype/internal/json >github.com/gabriel-vasile/mimetype/internal/magic ># github.com/gin-contrib/cors v1.7.2 >## explicit; go 1.18 >github.com/gin-contrib/cors ># github.com/gin-contrib/sse v0.1.0 >## explicit; go 1.12 >github.com/gin-contrib/sse ># github.com/gin-gonic/gin v1.10.0 >## explicit; go 1.20 >github.com/gin-gonic/gin >github.com/gin-gonic/gin/binding >github.com/gin-gonic/gin/internal/bytesconv >github.com/gin-gonic/gin/internal/json >github.com/gin-gonic/gin/render ># github.com/go-playground/locales v0.14.1 >## explicit; go 1.17 >github.com/go-playground/locales >github.com/go-playground/locales/currency ># github.com/go-playground/universal-translator v0.18.1 >## explicit; go 1.18 
>github.com/go-playground/universal-translator ># github.com/go-playground/validator/v10 v10.20.0 >## explicit; go 1.18 >github.com/go-playground/validator/v10 ># github.com/goccy/go-json v0.10.2 >## explicit; go 1.12 >github.com/goccy/go-json >github.com/goccy/go-json/internal/decoder >github.com/goccy/go-json/internal/encoder >github.com/goccy/go-json/internal/encoder/vm >github.com/goccy/go-json/internal/encoder/vm_color >github.com/goccy/go-json/internal/encoder/vm_color_indent >github.com/goccy/go-json/internal/encoder/vm_indent >github.com/goccy/go-json/internal/errors >github.com/goccy/go-json/internal/runtime ># github.com/gogo/protobuf v1.3.2 >## explicit; go 1.15 >github.com/gogo/protobuf/gogoproto >github.com/gogo/protobuf/proto >github.com/gogo/protobuf/protoc-gen-gogo/descriptor ># github.com/golang/protobuf v1.5.4 >## explicit; go 1.17 >github.com/golang/protobuf/proto ># github.com/google/flatbuffers v24.3.25+incompatible >## explicit >github.com/google/flatbuffers/go ># github.com/google/go-cmp v0.6.0 >## explicit; go 1.13 >github.com/google/go-cmp/cmp >github.com/google/go-cmp/cmp/internal/diff >github.com/google/go-cmp/cmp/internal/flags >github.com/google/go-cmp/cmp/internal/function >github.com/google/go-cmp/cmp/internal/value ># github.com/google/uuid v1.1.2 >## explicit >github.com/google/uuid ># github.com/inconshreveable/mousetrap v1.1.0 >## explicit; go 1.18 >github.com/inconshreveable/mousetrap ># github.com/json-iterator/go v1.1.12 >## explicit; go 1.12 >github.com/json-iterator/go ># github.com/klauspost/cpuid/v2 v2.2.7 >## explicit; go 1.15 >github.com/klauspost/cpuid/v2 ># github.com/kr/text v0.2.0 >## explicit ># github.com/leodido/go-urn v1.4.0 >## explicit; go 1.18 >github.com/leodido/go-urn >github.com/leodido/go-urn/scim/schema ># github.com/mattn/go-isatty v0.0.20 >## explicit; go 1.15 >github.com/mattn/go-isatty ># github.com/mattn/go-runewidth v0.0.14 >## explicit; go 1.9 >github.com/mattn/go-runewidth ># github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd >## explicit >github.com/modern-go/concurrent ># github.com/modern-go/reflect2 v1.0.2 >## explicit; go 1.12 >github.com/modern-go/reflect2 ># github.com/nlpodyssey/gopickle v0.3.0 >## explicit; go 1.17 >github.com/nlpodyssey/gopickle/pickle >github.com/nlpodyssey/gopickle/pytorch >github.com/nlpodyssey/gopickle/types ># github.com/olekukonko/tablewriter v0.0.5 >## explicit; go 1.12 >github.com/olekukonko/tablewriter ># github.com/pdevine/tensor v0.0.0-20240510204454-f88f4562727c >## explicit; go 1.22.0 >github.com/pdevine/tensor >github.com/pdevine/tensor/internal/execution >github.com/pdevine/tensor/internal/serialization/fb >github.com/pdevine/tensor/internal/serialization/pb >github.com/pdevine/tensor/internal/storage >github.com/pdevine/tensor/native ># github.com/pelletier/go-toml/v2 v2.2.2 >## explicit; go 1.16 >github.com/pelletier/go-toml/v2 >github.com/pelletier/go-toml/v2/internal/characters >github.com/pelletier/go-toml/v2/internal/danger >github.com/pelletier/go-toml/v2/internal/tracker >github.com/pelletier/go-toml/v2/unstable ># github.com/pkg/errors v0.9.1 >## explicit >github.com/pkg/errors ># github.com/pmezard/go-difflib v1.0.0 >## explicit >github.com/pmezard/go-difflib/difflib ># github.com/rivo/uniseg v0.2.0 >## explicit; go 1.12 >github.com/rivo/uniseg ># github.com/spf13/cobra v1.7.0 >## explicit; go 1.15 >github.com/spf13/cobra ># github.com/spf13/pflag v1.0.5 >## explicit; go 1.12 >github.com/spf13/pflag ># github.com/stretchr/testify v1.9.0 >## explicit; go 
1.17 >github.com/stretchr/testify/assert >github.com/stretchr/testify/require ># github.com/twitchyliquid64/golang-asm v0.15.1 >## explicit; go 1.13 >github.com/twitchyliquid64/golang-asm/asm/arch >github.com/twitchyliquid64/golang-asm/bio >github.com/twitchyliquid64/golang-asm/dwarf >github.com/twitchyliquid64/golang-asm/goobj >github.com/twitchyliquid64/golang-asm/obj >github.com/twitchyliquid64/golang-asm/obj/arm >github.com/twitchyliquid64/golang-asm/obj/arm64 >github.com/twitchyliquid64/golang-asm/obj/mips >github.com/twitchyliquid64/golang-asm/obj/ppc64 >github.com/twitchyliquid64/golang-asm/obj/riscv >github.com/twitchyliquid64/golang-asm/obj/s390x >github.com/twitchyliquid64/golang-asm/obj/wasm >github.com/twitchyliquid64/golang-asm/obj/x86 >github.com/twitchyliquid64/golang-asm/objabi >github.com/twitchyliquid64/golang-asm/src >github.com/twitchyliquid64/golang-asm/sys >github.com/twitchyliquid64/golang-asm/unsafeheader ># github.com/ugorji/go/codec v1.2.12 >## explicit; go 1.11 >github.com/ugorji/go/codec ># github.com/x448/float16 v0.8.4 >## explicit; go 1.11 >github.com/x448/float16 ># github.com/xtgo/set v1.0.0 >## explicit >github.com/xtgo/set ># go4.org/unsafe/assume-no-moving-gc v0.0.0-20231121144256-b99613f794b6 >## explicit; go 1.11 >go4.org/unsafe/assume-no-moving-gc ># golang.org/x/arch v0.8.0 >## explicit; go 1.18 >golang.org/x/arch/x86/x86asm ># golang.org/x/crypto v0.23.0 >## explicit; go 1.18 >golang.org/x/crypto/blowfish >golang.org/x/crypto/chacha20 >golang.org/x/crypto/curve25519 >golang.org/x/crypto/curve25519/internal/field >golang.org/x/crypto/internal/alias >golang.org/x/crypto/internal/poly1305 >golang.org/x/crypto/sha3 >golang.org/x/crypto/ssh >golang.org/x/crypto/ssh/internal/bcrypt_pbkdf ># golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa >## explicit; go 1.20 >golang.org/x/exp/maps ># golang.org/x/net v0.25.0 >## explicit; go 1.18 >golang.org/x/net/html >golang.org/x/net/html/atom >golang.org/x/net/http/httpguts >golang.org/x/net/http2 >golang.org/x/net/http2/h2c >golang.org/x/net/http2/hpack >golang.org/x/net/idna ># golang.org/x/sync v0.3.0 >## explicit; go 1.17 >golang.org/x/sync/errgroup >golang.org/x/sync/semaphore ># golang.org/x/sys v0.20.0 >## explicit; go 1.18 >golang.org/x/sys/cpu >golang.org/x/sys/plan9 >golang.org/x/sys/unix >golang.org/x/sys/windows ># golang.org/x/term v0.20.0 >## explicit; go 1.18 >golang.org/x/term ># golang.org/x/text v0.15.0 >## explicit; go 1.18 >golang.org/x/text/encoding >golang.org/x/text/encoding/internal >golang.org/x/text/encoding/internal/identifier >golang.org/x/text/encoding/unicode >golang.org/x/text/encoding/unicode/utf32 >golang.org/x/text/internal/language >golang.org/x/text/internal/language/compact >golang.org/x/text/internal/tag >golang.org/x/text/internal/utf8internal >golang.org/x/text/language >golang.org/x/text/runes >golang.org/x/text/secure/bidirule >golang.org/x/text/transform >golang.org/x/text/unicode/bidi >golang.org/x/text/unicode/norm ># golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 >## explicit; go 1.11 >golang.org/x/xerrors >golang.org/x/xerrors/internal ># gonum.org/v1/gonum v0.15.0 >## explicit; go 1.21 >gonum.org/v1/gonum/blas >gonum.org/v1/gonum/blas/blas64 >gonum.org/v1/gonum/blas/cblas128 >gonum.org/v1/gonum/blas/gonum >gonum.org/v1/gonum/floats >gonum.org/v1/gonum/floats/scalar >gonum.org/v1/gonum/internal/asm/c128 >gonum.org/v1/gonum/internal/asm/c64 >gonum.org/v1/gonum/internal/asm/f32 >gonum.org/v1/gonum/internal/asm/f64 >gonum.org/v1/gonum/internal/cmplx64 
>gonum.org/v1/gonum/internal/math32 >gonum.org/v1/gonum/lapack >gonum.org/v1/gonum/lapack/gonum >gonum.org/v1/gonum/lapack/lapack64 >gonum.org/v1/gonum/mat ># google.golang.org/protobuf v1.34.1 >## explicit; go 1.17 >google.golang.org/protobuf/encoding/prototext >google.golang.org/protobuf/encoding/protowire >google.golang.org/protobuf/internal/descfmt >google.golang.org/protobuf/internal/descopts >google.golang.org/protobuf/internal/detrand >google.golang.org/protobuf/internal/editiondefaults >google.golang.org/protobuf/internal/editionssupport >google.golang.org/protobuf/internal/encoding/defval >google.golang.org/protobuf/internal/encoding/messageset >google.golang.org/protobuf/internal/encoding/tag >google.golang.org/protobuf/internal/encoding/text >google.golang.org/protobuf/internal/errors >google.golang.org/protobuf/internal/filedesc >google.golang.org/protobuf/internal/filetype >google.golang.org/protobuf/internal/flags >google.golang.org/protobuf/internal/genid >google.golang.org/protobuf/internal/impl >google.golang.org/protobuf/internal/order >google.golang.org/protobuf/internal/pragma >google.golang.org/protobuf/internal/set >google.golang.org/protobuf/internal/strs >google.golang.org/protobuf/internal/version >google.golang.org/protobuf/proto >google.golang.org/protobuf/reflect/protodesc >google.golang.org/protobuf/reflect/protoreflect >google.golang.org/protobuf/reflect/protoregistry >google.golang.org/protobuf/runtime/protoiface >google.golang.org/protobuf/runtime/protoimpl >google.golang.org/protobuf/types/descriptorpb >google.golang.org/protobuf/types/gofeaturespb ># gopkg.in/yaml.v3 v3.0.1 >## explicit >gopkg.in/yaml.v3 ># gorgonia.org/vecf32 v0.9.0 >## explicit; go 1.13 >gorgonia.org/vecf32 ># gorgonia.org/vecf64 v0.9.0 >## explicit; go 1.13 >gorgonia.org/vecf64 >>>> Source unpacked in /var/tmp/portage/dev-ml/ollama-9999/work >>>> Preparing source in /var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999 ... >>>> Source prepared. >>>> Configuring source in /var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999 ... >>>> Source configured. >>>> Compiling source in /var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999 ... >go generate ./... 
>+ set -o pipefail >+ echo 'Starting linux generate script' >Starting linux generate script >+ '[' -z '' ']' >+ '[' -x /usr/local/cuda/bin/nvcc ']' >++ command -v nvcc >+ export CUDACXX=/opt/cuda/bin/nvcc >+ CUDACXX=/opt/cuda/bin/nvcc >+ COMMON_CMAKE_DEFS='-DBUILD_SHARED_LIBS=off -DCMAKE_POSITION_INDEPENDENT_CODE=on -DGGML_NATIVE=off -DGGML_AVX=on -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DGGML_OPENMP=off' >++ dirname ./gen_linux.sh >+ source ./gen_common.sh >+ init_vars >+ case "${GOARCH}" in >+ ARCH=x86_64 >+ LLAMACPP_DIR=../llama.cpp >+ CMAKE_DEFS= >+ CMAKE_TARGETS='--target ollama_llama_server' >+ echo '' >+ grep -- -g >+ CMAKE_DEFS='-DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' >+ case $(uname -s) in >++ uname -s >+ LIB_EXT=so >+ WHOLE_ARCHIVE=-Wl,--whole-archive >+ NO_WHOLE_ARCHIVE=-Wl,--no-whole-archive >+ GCC_ARCH= >+ '[' -z '' ']' >+ CMAKE_CUDA_ARCHITECTURES='50;52;61;70;75;80' >+ git_module_setup >+ '[' -n '' ']' >+ '[' -d ../llama.cpp/gguf ']' >+ git submodule init >Submodule 'llama.cpp' (https://github.com/ggerganov/llama.cpp.git) registered for path '../llama.cpp' >+ git submodule update --force ../llama.cpp >Submodule path '../llama.cpp': checked out 'a8db2a9ce64cd4417f6a312ab61858f17f0f8584' >+ apply_patches >+ grep ollama ../llama.cpp/CMakeLists.txt >+ echo 'add_subdirectory(../ext_server ext_server) # ollama' >++ ls -A ../patches/01-load-progress.diff ../patches/02-clip-log.diff ../patches/03-load_exception.diff ../patches/04-metal.diff ../patches/05-default-pretokenizer.diff ../patches/06-qwen2.diff ../patches/07-embeddings.diff ../patches/08-clip-unicode.diff ../patches/09-pooling.diff >+ '[' -n '../patches/01-load-progress.diff >../patches/02-clip-log.diff >../patches/03-load_exception.diff >../patches/04-metal.diff >../patches/05-default-pretokenizer.diff >../patches/06-qwen2.diff >../patches/07-embeddings.diff >../patches/08-clip-unicode.diff >../patches/09-pooling.diff' ']' >+ for patch in ../patches/*.diff >++ grep '^+++ ' ../patches/01-load-progress.diff >++ cut -f2 '-d ' >++ cut -f2- -d/ >+ for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) >+ cd ../llama.cpp >+ git checkout common/common.cpp >Updated 0 paths from the index >+ for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) >+ cd ../llama.cpp >+ git checkout common/common.h >Updated 0 paths from the index >+ for patch in ../patches/*.diff >++ grep '^+++ ' ../patches/02-clip-log.diff >++ cut -f2 '-d ' >++ cut -f2- -d/ >+ for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) >+ cd ../llama.cpp >+ git checkout examples/llava/clip.cpp >Updated 0 paths from the index >+ for patch in ../patches/*.diff >++ grep '^+++ ' ../patches/03-load_exception.diff >++ cut -f2 '-d ' >++ cut -f2- -d/ >+ for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) >+ cd ../llama.cpp >+ git checkout src/llama.cpp >Updated 0 paths from the index >+ for patch in ../patches/*.diff >++ grep '^+++ ' ../patches/04-metal.diff >++ cut -f2 '-d ' >++ cut -f2- -d/ >+ for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) >+ cd ../llama.cpp >+ git checkout ggml/src/ggml-metal.m >Updated 0 paths from the index >+ for patch in ../patches/*.diff >++ grep '^+++ ' ../patches/05-default-pretokenizer.diff >++ cut -f2 '-d ' >++ cut -f2- -d/ >+ for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) >+ cd ../llama.cpp >+ git checkout src/llama.cpp >Updated 0 paths from the index >+ for patch in ../patches/*.diff >++ grep '^+++ ' 
../patches/06-qwen2.diff >++ cut -f2 '-d ' >++ cut -f2- -d/ >+ for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) >+ cd ../llama.cpp >+ git checkout src/llama.cpp >Updated 0 paths from the index >+ for patch in ../patches/*.diff >++ grep '^+++ ' ../patches/07-embeddings.diff >++ cut -f2 '-d ' >++ cut -f2- -d/ >+ for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) >+ cd ../llama.cpp >+ git checkout src/llama.cpp >Updated 0 paths from the index >+ for patch in ../patches/*.diff >++ grep '^+++ ' ../patches/08-clip-unicode.diff >++ cut -f2 '-d ' >++ cut -f2- -d/ >+ for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) >+ cd ../llama.cpp >+ git checkout examples/llava/clip.cpp >Updated 0 paths from the index >+ for patch in ../patches/*.diff >++ grep '^+++ ' ../patches/09-pooling.diff >++ cut -f2- -d/ >++ cut -f2 '-d ' >+ for file in $(grep "^+++ " ${patch} | cut -f2 -d' ' | cut -f2- -d/) >+ cd ../llama.cpp >+ git checkout src/llama.cpp >Updated 0 paths from the index >+ for patch in ../patches/*.diff >+ cd ../llama.cpp >+ git apply ../patches/01-load-progress.diff >+ for patch in ../patches/*.diff >+ cd ../llama.cpp >+ git apply ../patches/02-clip-log.diff >+ for patch in ../patches/*.diff >+ cd ../llama.cpp >+ git apply ../patches/03-load_exception.diff >+ for patch in ../patches/*.diff >+ cd ../llama.cpp >+ git apply ../patches/04-metal.diff >+ for patch in ../patches/*.diff >+ cd ../llama.cpp >+ git apply ../patches/05-default-pretokenizer.diff >+ for patch in ../patches/*.diff >+ cd ../llama.cpp >+ git apply ../patches/06-qwen2.diff >+ for patch in ../patches/*.diff >+ cd ../llama.cpp >+ git apply ../patches/07-embeddings.diff >+ for patch in ../patches/*.diff >+ cd ../llama.cpp >+ git apply ../patches/08-clip-unicode.diff >+ for patch in ../patches/*.diff >+ cd ../llama.cpp >+ git apply ../patches/09-pooling.diff >+ init_vars >+ case "${GOARCH}" in >+ ARCH=x86_64 >+ LLAMACPP_DIR=../llama.cpp >+ CMAKE_DEFS= >+ CMAKE_TARGETS='--target ollama_llama_server' >+ echo '' >+ grep -- -g >+ CMAKE_DEFS='-DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' >+ case $(uname -s) in >++ uname -s >+ LIB_EXT=so >+ WHOLE_ARCHIVE=-Wl,--whole-archive >+ NO_WHOLE_ARCHIVE=-Wl,--no-whole-archive >+ GCC_ARCH= >+ '[' -z '50;52;61;70;75;80' ']' >+ '[' -z '' -o '' = static ']' >+ init_vars >+ case "${GOARCH}" in >+ ARCH=x86_64 >+ LLAMACPP_DIR=../llama.cpp >+ CMAKE_DEFS= >+ CMAKE_TARGETS='--target ollama_llama_server' >+ echo '' >+ grep -- -g >+ CMAKE_DEFS='-DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' >+ case $(uname -s) in >++ uname -s >+ LIB_EXT=so >+ WHOLE_ARCHIVE=-Wl,--whole-archive >+ NO_WHOLE_ARCHIVE=-Wl,--no-whole-archive >+ GCC_ARCH= >+ '[' -z '50;52;61;70;75;80' ']' >+ CMAKE_TARGETS='--target llama --target ggml' >+ CMAKE_DEFS='-DBUILD_SHARED_LIBS=off -DGGML_NATIVE=off -DGGML_AVX=off -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DGGML_OPENMP=off -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' >+ BUILD_DIR=../build/linux/x86_64_static >+ echo 'Building static library' >Building static library >+ build >+ cmake -S ../llama.cpp -B ../build/linux/x86_64_static -DBUILD_SHARED_LIBS=off -DGGML_NATIVE=off -DGGML_AVX=off -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DGGML_OPENMP=off -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off >-- The C compiler identification is GNU 14.1.1 >-- The CXX compiler identification is GNU 14.1.1 >-- Detecting C compiler ABI info >-- Detecting C compiler ABI info - done >-- 
Check for working C compiler: /usr/bin/cc - skipped >-- Detecting C compile features >-- Detecting C compile features - done >-- Detecting CXX compiler ABI info >-- Detecting CXX compiler ABI info - done >-- Check for working CXX compiler: /usr/bin/c++ - skipped >-- Detecting CXX compile features >-- Detecting CXX compile features - done >-- Found Git: /usr/bin/git (found version "2.45.2") >-- Performing Test CMAKE_HAVE_LIBC_PTHREAD >-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success >-- Found Threads: TRUE >-- Using ggml SGEMM >-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF >-- CMAKE_SYSTEM_PROCESSOR: x86_64 >-- x86 detected >-- Configuring done (1.0s) >-- Generating done (0.3s) >-- Build files have been written to: /var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999/llm/build/linux/x86_64_static >+ cmake --build ../build/linux/x86_64_static --target llama --target ggml -j8 >[ 0%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-backend.c.o >[ 33%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-alloc.c.o >[ 33%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml.c.o >[ 50%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o >[ 50%] Building CXX object ggml/src/CMakeFiles/ggml.dir/sgemm.cpp.o >[ 66%] Linking CXX static library libggml.a >[ 66%] Built target ggml >[ 66%] Building CXX object src/CMakeFiles/llama.dir/unicode.cpp.o >[100%] Building CXX object src/CMakeFiles/llama.dir/llama.cpp.o >[100%] Building CXX object src/CMakeFiles/llama.dir/unicode-data.cpp.o >[100%] Linking CXX static library libllama.a >[100%] Built target llama >[100%] Built target ggml >+ init_vars >+ case "${GOARCH}" in >+ ARCH=x86_64 >+ LLAMACPP_DIR=../llama.cpp >+ CMAKE_DEFS= >+ CMAKE_TARGETS='--target ollama_llama_server' >+ echo '' >+ grep -- -g >+ CMAKE_DEFS='-DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' >+ case $(uname -s) in >++ uname -s >+ LIB_EXT=so >+ WHOLE_ARCHIVE=-Wl,--whole-archive >+ NO_WHOLE_ARCHIVE=-Wl,--no-whole-archive >+ GCC_ARCH= >+ '[' -z '50;52;61;70;75;80' ']' >+ '[' -z '' ']' >+ '[' -n '' ']' >+ COMMON_CPU_DEFS='-DBUILD_SHARED_LIBS=off -DCMAKE_POSITION_INDEPENDENT_CODE=on -DGGML_NATIVE=off -DGGML_OPENMP=off' >+ '[' -z '' -o '' = cpu ']' >+ init_vars >+ case "${GOARCH}" in >+ ARCH=x86_64 >+ LLAMACPP_DIR=../llama.cpp >+ CMAKE_DEFS= >+ CMAKE_TARGETS='--target ollama_llama_server' >+ echo '' >+ grep -- -g >+ CMAKE_DEFS='-DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' >+ case $(uname -s) in >++ uname -s >+ LIB_EXT=so >+ WHOLE_ARCHIVE=-Wl,--whole-archive >+ NO_WHOLE_ARCHIVE=-Wl,--no-whole-archive >+ GCC_ARCH= >+ '[' -z '50;52;61;70;75;80' ']' >+ CMAKE_DEFS='-DBUILD_SHARED_LIBS=off -DCMAKE_POSITION_INDEPENDENT_CODE=on -DGGML_NATIVE=off -DGGML_OPENMP=off -DGGML_AVX=off -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' >+ BUILD_DIR=../build/linux/x86_64/cpu >+ echo 'Building LCD CPU' >Building LCD CPU >+ build >+ cmake -S ../llama.cpp -B ../build/linux/x86_64/cpu -DBUILD_SHARED_LIBS=off -DCMAKE_POSITION_INDEPENDENT_CODE=on -DGGML_NATIVE=off -DGGML_OPENMP=off -DGGML_AVX=off -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off >-- The C compiler identification is GNU 14.1.1 >-- The CXX compiler identification is GNU 14.1.1 >-- Detecting C compiler ABI info >-- Detecting C compiler ABI info - done >-- Check for working C compiler: /usr/bin/cc - skipped 
>-- Detecting C compile features >-- Detecting C compile features - done >-- Detecting CXX compiler ABI info >-- Detecting CXX compiler ABI info - done >-- Check for working CXX compiler: /usr/bin/c++ - skipped >-- Detecting CXX compile features >-- Detecting CXX compile features - done >-- Found Git: /usr/bin/git (found version "2.45.2") >-- Performing Test CMAKE_HAVE_LIBC_PTHREAD >-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success >-- Found Threads: TRUE >-- Using ggml SGEMM >-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF >-- CMAKE_SYSTEM_PROCESSOR: x86_64 >-- x86 detected >-- Configuring done (1.0s) >-- Generating done (0.3s) >-- Build files have been written to: /var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999/llm/build/linux/x86_64/cpu >+ cmake --build ../build/linux/x86_64/cpu --target ollama_llama_server -j8 >[ 0%] Generating build details from Git >[ 7%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-alloc.c.o >[ 15%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml.c.o >-- Found Git: /usr/bin/git (found version "2.45.2") >[ 15%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-backend.c.o >[ 23%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o >[ 23%] Building CXX object ggml/src/CMakeFiles/ggml.dir/sgemm.cpp.o >[ 30%] Building CXX object common/CMakeFiles/build_info.dir/build-info.cpp.o >[ 30%] Built target build_info >[ 38%] Linking CXX static library libggml.a >[ 38%] Built target ggml >[ 38%] Building CXX object src/CMakeFiles/llama.dir/unicode.cpp.o >[ 46%] Building CXX object src/CMakeFiles/llama.dir/llama.cpp.o >[ 53%] Building CXX object src/CMakeFiles/llama.dir/unicode-data.cpp.o >[ 53%] Linking CXX static library libllama.a >[ 53%] Built target llama >[ 53%] Building CXX object common/CMakeFiles/common.dir/common.cpp.o >[ 61%] Building CXX object examples/llava/CMakeFiles/llava.dir/llava.cpp.o >[ 61%] Building CXX object examples/llava/CMakeFiles/llava.dir/clip.cpp.o >[ 61%] Building CXX object common/CMakeFiles/common.dir/console.cpp.o >[ 69%] Building CXX object common/CMakeFiles/common.dir/sampling.cpp.o >[ 76%] Building CXX object common/CMakeFiles/common.dir/grammar-parser.cpp.o >[ 84%] Building CXX object common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o >[ 84%] Building CXX object common/CMakeFiles/common.dir/train.cpp.o >[ 92%] Building CXX object common/CMakeFiles/common.dir/ngram-cache.cpp.o >In file included from /var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999/llm/llama.cpp/examples/llava/clip.cpp:21: >/var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999/llm/llama.cpp/examples/llava/../../common/stb_image.h: In function 'int stbi__parse_png_file(stbi__png*, int, int)': >/var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999/llm/llama.cpp/examples/llava/../../common/stb_image.h:5450:31: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=[https://gcc.gnu.org/onlinedocs/gcc-14.1.0/gcc/Warning-Options.html#index-Wno-stringop-overflow]] > 5450 | tc[k] = (stbi_uc)(stbi__get16be(s) & 255) * > | ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > 5451 | stbi__depth_scale_table[z->depth]; // non 8-bit images will be larger > | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >/var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999/llm/llama.cpp/examples/llava/../../common/stb_image.h:5326:28: note: at offset 3 into destination object 'tc' of size 3 > 5326 | stbi_uc has_trans = 0, tc[3] = {0}; > | ^~ >[ 92%] Built target llava >[ 92%] Linking CXX static 
library libcommon.a >[ 92%] Built target common >[ 92%] Building CXX object ext_server/CMakeFiles/ollama_llama_server.dir/server.cpp.o >[100%] Linking CXX executable ../bin/ollama_llama_server >[100%] Built target ollama_llama_server >+ compress >+ echo 'Compressing payloads to reduce overall binary size...' >Compressing payloads to reduce overall binary size... >+ pids= >+ rm -rf '../build/linux/x86_64/cpu/bin/*.gz' >+ for f in ${BUILD_DIR}/bin/* >+ pids+=' 607' >+ gzip -n --best -f ../build/linux/x86_64/cpu/bin/ollama_llama_server >+ '[' -d ../build/linux/x86_64/cpu/lib ']' >+ echo > >+ for pid in ${pids} >+ wait 607 >+ echo 'Finished compression' >Finished compression >+ '[' x86_64 == x86_64 ']' >+ '[' -z '' -o '' = cpu_avx ']' >+ init_vars >+ case "${GOARCH}" in >+ ARCH=x86_64 >+ LLAMACPP_DIR=../llama.cpp >+ CMAKE_DEFS= >+ CMAKE_TARGETS='--target ollama_llama_server' >+ echo '' >+ grep -- -g >+ CMAKE_DEFS='-DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' >+ case $(uname -s) in >++ uname -s >+ LIB_EXT=so >+ WHOLE_ARCHIVE=-Wl,--whole-archive >+ NO_WHOLE_ARCHIVE=-Wl,--no-whole-archive >+ GCC_ARCH= >+ '[' -z '50;52;61;70;75;80' ']' >+ CMAKE_DEFS='-DBUILD_SHARED_LIBS=off -DCMAKE_POSITION_INDEPENDENT_CODE=on -DGGML_NATIVE=off -DGGML_OPENMP=off -DGGML_AVX=on -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' >+ BUILD_DIR=../build/linux/x86_64/cpu_avx >+ echo 'Building AVX CPU' >Building AVX CPU >+ build >+ cmake -S ../llama.cpp -B ../build/linux/x86_64/cpu_avx -DBUILD_SHARED_LIBS=off -DCMAKE_POSITION_INDEPENDENT_CODE=on -DGGML_NATIVE=off -DGGML_OPENMP=off -DGGML_AVX=on -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off >-- The C compiler identification is GNU 14.1.1 >-- The CXX compiler identification is GNU 14.1.1 >-- Detecting C compiler ABI info >-- Detecting C compiler ABI info - done >-- Check for working C compiler: /usr/bin/cc - skipped >-- Detecting C compile features >-- Detecting C compile features - done >-- Detecting CXX compiler ABI info >-- Detecting CXX compiler ABI info - done >-- Check for working CXX compiler: /usr/bin/c++ - skipped >-- Detecting CXX compile features >-- Detecting CXX compile features - done >-- Found Git: /usr/bin/git (found version "2.45.2") >-- Performing Test CMAKE_HAVE_LIBC_PTHREAD >-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success >-- Found Threads: TRUE >-- Using ggml SGEMM >-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF >-- CMAKE_SYSTEM_PROCESSOR: x86_64 >-- x86 detected >-- Configuring done (1.0s) >-- Generating done (0.3s) >-- Build files have been written to: /var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999/llm/build/linux/x86_64/cpu_avx >+ cmake --build ../build/linux/x86_64/cpu_avx --target ollama_llama_server -j8 >[ 15%] Building CXX object common/CMakeFiles/build_info.dir/build-info.cpp.o >[ 15%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml.c.o >[ 23%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-alloc.c.o >[ 30%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o >[ 30%] Building CXX object ggml/src/CMakeFiles/ggml.dir/sgemm.cpp.o >[ 30%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-backend.c.o >[ 30%] Built target build_info >[ 38%] Linking CXX static library libggml.a >[ 38%] Built target ggml >[ 38%] Building CXX object src/CMakeFiles/llama.dir/unicode.cpp.o >[ 53%] Building CXX 
object src/CMakeFiles/llama.dir/unicode-data.cpp.o >[ 53%] Building CXX object src/CMakeFiles/llama.dir/llama.cpp.o >[ 53%] Linking CXX static library libllama.a >[ 53%] Built target llama >[ 53%] Building CXX object examples/llava/CMakeFiles/llava.dir/llava.cpp.o >[ 53%] Building CXX object common/CMakeFiles/common.dir/common.cpp.o >[ 61%] Building CXX object examples/llava/CMakeFiles/llava.dir/clip.cpp.o >[ 69%] Building CXX object common/CMakeFiles/common.dir/sampling.cpp.o >[ 69%] Building CXX object common/CMakeFiles/common.dir/console.cpp.o >[ 76%] Building CXX object common/CMakeFiles/common.dir/grammar-parser.cpp.o >[ 84%] Building CXX object common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o >[ 84%] Building CXX object common/CMakeFiles/common.dir/train.cpp.o >[ 92%] Building CXX object common/CMakeFiles/common.dir/ngram-cache.cpp.o >In file included from /var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999/llm/llama.cpp/examples/llava/clip.cpp:21: >/var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999/llm/llama.cpp/examples/llava/../../common/stb_image.h: In function 'int stbi__parse_png_file(stbi__png*, int, int)': >/var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999/llm/llama.cpp/examples/llava/../../common/stb_image.h:5450:31: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=[https://gcc.gnu.org/onlinedocs/gcc-14.1.0/gcc/Warning-Options.html#index-Wno-stringop-overflow]] > 5450 | tc[k] = (stbi_uc)(stbi__get16be(s) & 255) * > | ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > 5451 | stbi__depth_scale_table[z->depth]; // non 8-bit images will be larger > | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >/var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999/llm/llama.cpp/examples/llava/../../common/stb_image.h:5326:28: note: at offset 3 into destination object 'tc' of size 3 > 5326 | stbi_uc has_trans = 0, tc[3] = {0}; > | ^~ >[ 92%] Built target llava >[ 92%] Linking CXX static library libcommon.a >[ 92%] Built target common >[ 92%] Building CXX object ext_server/CMakeFiles/ollama_llama_server.dir/server.cpp.o >[100%] Linking CXX executable ../bin/ollama_llama_server >[100%] Built target ollama_llama_server >+ compress >+ echo 'Compressing payloads to reduce overall binary size...' >Compressing payloads to reduce overall binary size... 
>+ pids= >+ rm -rf '../build/linux/x86_64/cpu_avx/bin/*.gz' >+ for f in ${BUILD_DIR}/bin/* >+ pids+=' 854' >+ '[' -d ../build/linux/x86_64/cpu_avx/lib ']' >+ echo > >+ for pid in ${pids} >+ wait 854 >+ gzip -n --best -f ../build/linux/x86_64/cpu_avx/bin/ollama_llama_server >+ echo 'Finished compression' >Finished compression >+ '[' -z '' -o '' = cpu_avx2 ']' >+ init_vars >+ case "${GOARCH}" in >+ ARCH=x86_64 >+ LLAMACPP_DIR=../llama.cpp >+ CMAKE_DEFS= >+ CMAKE_TARGETS='--target ollama_llama_server' >+ echo '' >+ grep -- -g >+ CMAKE_DEFS='-DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' >+ case $(uname -s) in >++ uname -s >+ LIB_EXT=so >+ WHOLE_ARCHIVE=-Wl,--whole-archive >+ NO_WHOLE_ARCHIVE=-Wl,--no-whole-archive >+ GCC_ARCH= >+ '[' -z '50;52;61;70;75;80' ']' >+ CMAKE_DEFS='-DBUILD_SHARED_LIBS=off -DCMAKE_POSITION_INDEPENDENT_CODE=on -DGGML_NATIVE=off -DGGML_OPENMP=off -DGGML_AVX=on -DGGML_AVX2=on -DGGML_AVX512=off -DGGML_FMA=on -DGGML_F16C=on -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' >+ BUILD_DIR=../build/linux/x86_64/cpu_avx2 >+ echo 'Building AVX2 CPU' >Building AVX2 CPU >+ build >+ cmake -S ../llama.cpp -B ../build/linux/x86_64/cpu_avx2 -DBUILD_SHARED_LIBS=off -DCMAKE_POSITION_INDEPENDENT_CODE=on -DGGML_NATIVE=off -DGGML_OPENMP=off -DGGML_AVX=on -DGGML_AVX2=on -DGGML_AVX512=off -DGGML_FMA=on -DGGML_F16C=on -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off >-- The C compiler identification is GNU 14.1.1 >-- The CXX compiler identification is GNU 14.1.1 >-- Detecting C compiler ABI info >-- Detecting C compiler ABI info - done >-- Check for working C compiler: /usr/bin/cc - skipped >-- Detecting C compile features >-- Detecting C compile features - done >-- Detecting CXX compiler ABI info >-- Detecting CXX compiler ABI info - done >-- Check for working CXX compiler: /usr/bin/c++ - skipped >-- Detecting CXX compile features >-- Detecting CXX compile features - done >-- Found Git: /usr/bin/git (found version "2.45.2") >-- Performing Test CMAKE_HAVE_LIBC_PTHREAD >-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success >-- Found Threads: TRUE >-- Using ggml SGEMM >-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF >-- CMAKE_SYSTEM_PROCESSOR: x86_64 >-- x86 detected >-- Configuring done (1.0s) >-- Generating done (0.3s) >-- Build files have been written to: /var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999/llm/build/linux/x86_64/cpu_avx2 >+ cmake --build ../build/linux/x86_64/cpu_avx2 --target ollama_llama_server -j8 >[ 7%] Building CXX object common/CMakeFiles/build_info.dir/build-info.cpp.o >[ 7%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-backend.c.o >[ 15%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml.c.o >[ 15%] Building CXX object ggml/src/CMakeFiles/ggml.dir/sgemm.cpp.o >[ 30%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o >[ 30%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-alloc.c.o >[ 30%] Built target build_info >[ 38%] Linking CXX static library libggml.a >[ 38%] Built target ggml >[ 38%] Building CXX object src/CMakeFiles/llama.dir/unicode.cpp.o >[ 46%] Building CXX object src/CMakeFiles/llama.dir/llama.cpp.o >[ 53%] Building CXX object src/CMakeFiles/llama.dir/unicode-data.cpp.o >[ 53%] Linking CXX static library libllama.a >[ 53%] Built target llama >[ 61%] Building CXX object common/CMakeFiles/common.dir/sampling.cpp.o >[ 61%] Building CXX object examples/llava/CMakeFiles/llava.dir/llava.cpp.o >[ 69%] Building CXX object 
examples/llava/CMakeFiles/llava.dir/clip.cpp.o >[ 76%] Building CXX object common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o >[ 76%] Building CXX object common/CMakeFiles/common.dir/common.cpp.o >[ 84%] Building CXX object common/CMakeFiles/common.dir/console.cpp.o >[ 84%] Building CXX object common/CMakeFiles/common.dir/grammar-parser.cpp.o >[ 84%] Building CXX object common/CMakeFiles/common.dir/train.cpp.o >[ 92%] Building CXX object common/CMakeFiles/common.dir/ngram-cache.cpp.o >In file included from /var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999/llm/llama.cpp/examples/llava/clip.cpp:21: >/var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999/llm/llama.cpp/examples/llava/../../common/stb_image.h: In function 'int stbi__parse_png_file(stbi__png*, int, int)': >/var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999/llm/llama.cpp/examples/llava/../../common/stb_image.h:5450:31: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=[https://gcc.gnu.org/onlinedocs/gcc-14.1.0/gcc/Warning-Options.html#index-Wno-stringop-overflow]] > 5450 | tc[k] = (stbi_uc)(stbi__get16be(s) & 255) * > | ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > 5451 | stbi__depth_scale_table[z->depth]; // non 8-bit images will be larger > | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >/var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999/llm/llama.cpp/examples/llava/../../common/stb_image.h:5326:28: note: at offset 3 into destination object 'tc' of size 3 > 5326 | stbi_uc has_trans = 0, tc[3] = {0}; > | ^~ >[ 92%] Built target llava >[ 92%] Linking CXX static library libcommon.a >[ 92%] Built target common >[ 92%] Building CXX object ext_server/CMakeFiles/ollama_llama_server.dir/server.cpp.o >[100%] Linking CXX executable ../bin/ollama_llama_server >[100%] Built target ollama_llama_server >+ compress >+ echo 'Compressing payloads to reduce overall binary size...' >Compressing payloads to reduce overall binary size... >+ pids= >+ rm -rf '../build/linux/x86_64/cpu_avx2/bin/*.gz' >+ for f in ${BUILD_DIR}/bin/* >+ pids+=' 1101' >+ gzip -n --best -f ../build/linux/x86_64/cpu_avx2/bin/ollama_llama_server >+ '[' -d ../build/linux/x86_64/cpu_avx2/lib ']' >+ echo > >+ for pid in ${pids} >+ wait 1101 >+ echo 'Finished compression' >Finished compression >+ '[' -z '' ']' >+ '[' -d /usr/local/cuda/lib64 ']' >+ '[' -z '' ']' >+ '[' -d /opt/cuda/targets/x86_64-linux/lib ']' >+ CUDA_LIB_DIR=/opt/cuda/targets/x86_64-linux/lib >+ '[' -z '' ']' >+ CUDART_LIB_DIR=/opt/cuda/targets/x86_64-linux/lib >+ '[' -z '' -a -d /opt/cuda/targets/x86_64-linux/lib ']' >+ echo 'CUDA libraries detected - building dynamic CUDA library' >CUDA libraries detected - building dynamic CUDA library >+ init_vars >+ case "${GOARCH}" in >+ ARCH=x86_64 >+ LLAMACPP_DIR=../llama.cpp >+ CMAKE_DEFS= >+ CMAKE_TARGETS='--target ollama_llama_server' >+ echo '' >+ grep -- -g >+ CMAKE_DEFS='-DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off ' >+ case $(uname -s) in >++ uname -s >+ LIB_EXT=so >+ WHOLE_ARCHIVE=-Wl,--whole-archive >+ NO_WHOLE_ARCHIVE=-Wl,--no-whole-archive >+ GCC_ARCH= >+ '[' -z '50;52;61;70;75;80' ']' >++ head -1 >++ ls /opt/cuda/targets/x86_64-linux/lib/libcudart.so.12 /opt/cuda/targets/x86_64-linux/lib/libcudart.so.12.5.39 >++ cut -f3 -d. 
>+ CUDA_MAJOR=12 >+ '[' -n 12 ']' >+ CUDA_VARIANT=_v12 >+ '[' x86_64 == arm64 ']' >+ '[' -n '' ']' >+ CMAKE_CUDA_DEFS='-DGGML_CUDA=on -DCMAKE_CUDA_FLAGS=-t8 -DGGML_CUDA_FORCE_MMQ=on -DCMAKE_CUDA_ARCHITECTURES=50;52;61;70;75;80 -DCMAKE_LIBRARY_PATH=/usr/local/cuda/compat' >+ CMAKE_DEFS='-DBUILD_SHARED_LIBS=off -DCMAKE_POSITION_INDEPENDENT_CODE=on -DGGML_NATIVE=off -DGGML_AVX=on -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DGGML_OPENMP=off -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off -DGGML_CUDA=on -DCMAKE_CUDA_FLAGS=-t8 -DGGML_CUDA_FORCE_MMQ=on -DCMAKE_CUDA_ARCHITECTURES=50;52;61;70;75;80 -DCMAKE_LIBRARY_PATH=/usr/local/cuda/compat' >+ BUILD_DIR=../build/linux/x86_64/cuda_v12 >+ EXTRA_LIBS='-L/opt/cuda/targets/x86_64-linux/lib -lcudart -lcublas -lcublasLt -lcuda' >+ build >+ cmake -S ../llama.cpp -B ../build/linux/x86_64/cuda_v12 -DBUILD_SHARED_LIBS=off -DCMAKE_POSITION_INDEPENDENT_CODE=on -DGGML_NATIVE=off -DGGML_AVX=on -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DGGML_OPENMP=off -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off -DGGML_CUDA=on -DCMAKE_CUDA_FLAGS=-t8 -DGGML_CUDA_FORCE_MMQ=on '-DCMAKE_CUDA_ARCHITECTURES=50;52;61;70;75;80' -DCMAKE_LIBRARY_PATH=/usr/local/cuda/compat >-- The C compiler identification is GNU 14.1.1 >-- The CXX compiler identification is GNU 14.1.1 >-- Detecting C compiler ABI info >-- Detecting C compiler ABI info - done >-- Check for working C compiler: /usr/bin/cc - skipped >-- Detecting C compile features >-- Detecting C compile features - done >-- Detecting CXX compiler ABI info >-- Detecting CXX compiler ABI info - done >-- Check for working CXX compiler: /usr/bin/c++ - skipped >-- Detecting CXX compile features >-- Detecting CXX compile features - done >-- Found Git: /usr/bin/git (found version "2.45.2") >-- Performing Test CMAKE_HAVE_LIBC_PTHREAD >-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success >-- Found Threads: TRUE >-- Using ggml SGEMM >-- Found CUDAToolkit: /opt/cuda/targets/x86_64-linux/include (found version "12.5.40") >-- CUDA found >-- Using CUDA architectures: 50;52;61;70;75;80 >CMake Error at /usr/share/cmake/Modules/CMakeDetermineCompilerId.cmake:814 (message): > Compiling the CUDA compiler identification source file > "CMakeCUDACompilerId.cu" failed. > > Compiler: /opt/cuda/bin/nvcc > > Build flags: -t8 > > Id flags: --keep;--keep-dir;tmp -v > > > > The output was: > > 255 > > #$ _NVVM_BRANCH_=nvvm > > #$ _SPACE_= > > #$ _CUDART_=cudart > > #$ _HERE_=/opt/cuda/bin > > #$ _THERE_=/opt/cuda/bin > > #$ _TARGET_SIZE_= > > #$ _TARGET_DIR_= > > #$ _TARGET_DIR_=targets/x86_64-linux > > #$ TOP=/opt/cuda/bin/.. 
> > #$ CICC_PATH=/opt/cuda/bin/../nvvm/bin > > #$ CICC_NEXT_PATH=/opt/cuda/bin/../nvvm-next/bin > > #$ NVVMIR_LIBRARY_DIR=/opt/cuda/bin/../nvvm/libdevice > > #$ LD_LIBRARY_PATH=/opt/cuda/bin/../lib: > > #$ > PATH=/opt/cuda/bin/../nvvm/bin:/opt/cuda/bin:/usr/lib/go/bin:/usr/lib/portage/python3.12/ebuild-helpers/xattr:/usr/lib/portage/python3.12/ebuild-helpers:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin:/usr/lib/llvm/18/bin:/usr/lib/llvm/17/bin:/opt/cuda/bin > > > #$ INCLUDES="-I/opt/cuda/bin/../targets/x86_64-linux/include" > > #$ LIBRARIES= "-L/opt/cuda/bin/../targets/x86_64-linux/lib/stubs" > "-L/opt/cuda/bin/../targets/x86_64-linux/lib" > > #$ CUDAFE_FLAGS= > > #$ PTXAS_FLAGS= > > #$ gcc -D__CUDA_ARCH_LIST__=520 -D__NV_LEGACY_LAUNCH -E -x c++ -D__CUDACC__ > -D__NVCC__ "-I/opt/cuda/bin/../targets/x86_64-linux/include" > -D__CUDACC_VER_MAJOR__=12 -D__CUDACC_VER_MINOR__=5 > -D__CUDACC_VER_BUILD__=40 -D__CUDA_API_VER_MAJOR__=12 > -D__CUDA_API_VER_MINOR__=5 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include > "cuda_runtime.h" -m64 "CMakeCUDACompilerId.cu" -o > "tmp/CMakeCUDACompilerId.cpp4.ii" > > /var/tmp/portage/dev-ml/ollama-9999/temp/tmpxft_000004c2_00000000-3_280a110_stdout > 2>/var/tmp/portage/dev-ml/ollama-9999/temp/tmpxft_000004c2_00000000-3_280a110_stderr > > > #$ gcc -D__CUDA_ARCH__=520 -D__CUDA_ARCH_LIST__=520 -D__NV_LEGACY_LAUNCH -E > -x c++ -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ > "-I/opt/cuda/bin/../targets/x86_64-linux/include" -D__CUDACC_VER_MAJOR__=12 > -D__CUDACC_VER_MINOR__=5 -D__CUDACC_VER_BUILD__=40 > -D__CUDA_API_VER_MAJOR__=12 -D__CUDA_API_VER_MINOR__=5 > -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 > "CMakeCUDACompilerId.cu" -o "tmp/CMakeCUDACompilerId.cpp1.ii" > > /var/tmp/portage/dev-ml/ollama-9999/temp/tmpxft_000004c2_00000000-3_280adb0_stdout > 2>/var/tmp/portage/dev-ml/ollama-9999/temp/tmpxft_000004c2_00000000-3_280adb0_stderr > > > # --error 0x1 -- > > In file included from > /opt/cuda/bin/../targets/x86_64-linux/include/cuda_runtime.h:82, > > from <command-line>: > > /opt/cuda/bin/../targets/x86_64-linux/include/crt/host_config.h:143:2: > error: #error -- unsupported GNU version! gcc versions later than 13 are > not supported! The nvcc flag '-allow-unsupported-compiler' can be used to > override this version check; however, using an unsupported host compiler > may cause compilation failure or incorrect run time execution. Use at your > own risk. > > 143 | #error -- unsupported GNU version! gcc versions later than 13 are not supported! The nvcc flag '-allow-unsupported-compiler' can be used to override this version check; however, using an unsupported host compiler may cause compilation failure or incorrect run time execution. Use at your own risk. > | ^~~~~ > > # --error 0x1 -- > > In file included from > /opt/cuda/bin/../targets/x86_64-linux/include/cuda_runtime.h:82, > > from <command-line>: > > /opt/cuda/bin/../targets/x86_64-linux/include/crt/host_config.h:143:2: > error: #error -- unsupported GNU version! gcc versions later than 13 are > not supported! The nvcc flag '-allow-unsupported-compiler' can be used to > override this version check; however, using an unsupported host compiler > may cause compilation failure or incorrect run time execution. Use at your > own risk. > > 143 | #error -- unsupported GNU version! gcc versions later than 13 are not supported! 
The nvcc flag '-allow-unsupported-compiler' can be used to override this version check; however, using an unsupported host compiler may cause compilation failure or incorrect run time execution. Use at your own risk. > | ^~~~~ > > > > > > Compiling the CUDA compiler identification source file > "CMakeCUDACompilerId.cu" failed. > > Compiler: /opt/cuda/bin/nvcc > > Build flags: > > Id flags: --keep;--keep-dir;tmp -v > > > > The output was: > > 1 > > #$ _NVVM_BRANCH_=nvvm > > #$ _SPACE_= > > #$ _CUDART_=cudart > > #$ _HERE_=/opt/cuda/bin > > #$ _THERE_=/opt/cuda/bin > > #$ _TARGET_SIZE_= > > #$ _TARGET_DIR_= > > #$ _TARGET_DIR_=targets/x86_64-linux > > #$ TOP=/opt/cuda/bin/.. > > #$ CICC_PATH=/opt/cuda/bin/../nvvm/bin > > #$ CICC_NEXT_PATH=/opt/cuda/bin/../nvvm-next/bin > > #$ NVVMIR_LIBRARY_DIR=/opt/cuda/bin/../nvvm/libdevice > > #$ LD_LIBRARY_PATH=/opt/cuda/bin/../lib: > > #$ > PATH=/opt/cuda/bin/../nvvm/bin:/opt/cuda/bin:/usr/lib/go/bin:/usr/lib/portage/python3.12/ebuild-helpers/xattr:/usr/lib/portage/python3.12/ebuild-helpers:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin:/usr/lib/llvm/18/bin:/usr/lib/llvm/17/bin:/opt/cuda/bin > > > #$ INCLUDES="-I/opt/cuda/bin/../targets/x86_64-linux/include" > > #$ LIBRARIES= "-L/opt/cuda/bin/../targets/x86_64-linux/lib/stubs" > "-L/opt/cuda/bin/../targets/x86_64-linux/lib" > > #$ CUDAFE_FLAGS= > > #$ PTXAS_FLAGS= > > #$ rm tmp/a_dlink.reg.c > > #$ gcc -D__CUDA_ARCH_LIST__=520 -D__NV_LEGACY_LAUNCH -E -x c++ -D__CUDACC__ > -D__NVCC__ "-I/opt/cuda/bin/../targets/x86_64-linux/include" > -D__CUDACC_VER_MAJOR__=12 -D__CUDACC_VER_MINOR__=5 > -D__CUDACC_VER_BUILD__=40 -D__CUDA_API_VER_MAJOR__=12 > -D__CUDA_API_VER_MINOR__=5 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include > "cuda_runtime.h" -m64 "CMakeCUDACompilerId.cu" -o > "tmp/CMakeCUDACompilerId.cpp4.ii" > > In file included from > /opt/cuda/bin/../targets/x86_64-linux/include/cuda_runtime.h:82, > > from <command-line>: > > /opt/cuda/bin/../targets/x86_64-linux/include/crt/host_config.h:143:2: > error: #error -- unsupported GNU version! gcc versions later than 13 are > not supported! The nvcc flag '-allow-unsupported-compiler' can be used to > override this version check; however, using an unsupported host compiler > may cause compilation failure or incorrect run time execution. Use at your > own risk. > > 143 | #error -- unsupported GNU version! gcc versions later than 13 are not supported! The nvcc flag '-allow-unsupported-compiler' can be used to override this version check; however, using an unsupported host compiler may cause compilation failure or incorrect run time execution. Use at your own risk. > | ^~~~~ > > # --error 0x1 -- > > > > > >Call Stack (most recent call first): > /usr/share/cmake/Modules/CMakeDetermineCompilerId.cmake:8 (CMAKE_DETERMINE_COMPILER_ID_BUILD) > /usr/share/cmake/Modules/CMakeDetermineCompilerId.cmake:53 (__determine_compiler_id_test) > /usr/share/cmake/Modules/CMakeDetermineCUDACompiler.cmake:131 (CMAKE_DETERMINE_COMPILER_ID) > ggml/src/CMakeLists.txt:271 (enable_language) > > >-- Configuring incomplete, errors occurred! >llm/generate/generate_linux.go:3: running "bash": exit status 1 > * ERROR: dev-ml/ollama-9999::guru failed (compile phase): > * go generate ./... failed > * > * Call stack: > * ebuild.sh, line 136: Called src_compile > * environment, line 2574: Called ego 'generate' './...' 
> * environment, line 1085: Called die
> * The specific snippet of code:
> * "$@" || die -n "${*} failed"
> *
> * If you need support, post the output of `emerge --info '=dev-ml/ollama-9999::guru'`,
> * the complete build log and the output of `emerge -pqv '=dev-ml/ollama-9999::guru'`.
> * The complete build log is located at '/var/tmp/portage/dev-ml/ollama-9999/temp/build.log'.
> * The ebuild environment file is located at '/var/tmp/portage/dev-ml/ollama-9999/temp/environment'.
> * Working directory: '/var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999'
> * S: '/var/tmp/portage/dev-ml/ollama-9999/work/ollama-9999'
>
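
Note on the failure above: it is CUDA 12.5's host-compiler check. /opt/cuda/targets/x86_64-linux/include/crt/host_config.h rejects GCC releases newer than 13, so CMake's CUDA compiler identification aborts before the cuda_v12 runner is configured, even though the CPU-only variants built fine with GCC 14.1.1. A minimal local workaround sketch follows; this is not what the ebuild does, and the slotted g++-13 path and the NVCC_APPEND_FLAGS variable are assumptions about the build host:

    # Assumption: a slotted GCC 13 is still installed next to GCC 14.
    # CMake reads CUDAHOSTCXX to choose the CUDA host compiler.
    export CUDAHOSTCXX=/usr/bin/x86_64-pc-linux-gnu-g++-13

    # Or bypass nvcc's version check, as the error text itself suggests
    # (unsupported; use at your own risk).
    export NVCC_APPEND_FLAGS='-allow-unsupported-compiler'

    # Then re-run the configure step that failed above, e.g. from llm/generate:
    cmake -S ../llama.cpp -B ../build/linux/x86_64/cuda_v12 -DGGML_CUDA=on \
          '-DCMAKE_CUDA_ARCHITECTURES=50;52;61;70;75;80' -DCMAKE_CUDA_FLAGS=-t8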