Does llama.cpp support CUDA through HIP? #10892
MarioIshac started this conversation in General
Replies: 0 comments
I am already running llama.cpp using HIP for ROCm and it is great, but I had the above question out of curiosity. If HIP is a thin wrapper over CUDA on CUDA machines, does that mean that following this AMD guide would produce a `llama-server` artifact that is GPU-accelerated on an Nvidia machine?
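For context on the premise: HIP is designed so that the same runtime calls can be lowered to either ROCm or CUDA at compile time. Below is a minimal single-source sketch, not part of llama.cpp, assuming a standard `hipcc` install; on AMD it would build as-is, and on an NVIDIA box with the CUDA toolkit present, something like `HIP_PLATFORM=nvidia hipcc hip_probe.cpp` should make `hipcc` hand the compile off to `nvcc`.

```cpp
// hip_probe.cpp -- minimal HIP program. The same source targets both
// backends: on AMD these calls go to the ROCm runtime, while on the
// NVIDIA platform the HIP headers forward them to the CUDA runtime
// (e.g. hipMalloc becomes a thin wrapper around cudaMalloc).
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    hipError_t err = hipGetDeviceCount(&count);
    if (err != hipSuccess) {
        std::fprintf(stderr, "hipGetDeviceCount failed: %s\n",
                     hipGetErrorString(err));
        return 1;
    }
    std::printf("HIP sees %d device(s)\n", count);

    // Allocate and free a small device buffer through the HIP API.
    float *buf = nullptr;
    if (hipMalloc((void **)&buf, 1024 * sizeof(float)) == hipSuccess) {
        std::printf("device allocation through HIP succeeded\n");
        hipFree(buf);
    }
    return 0;
}
```

That covers the single-source premise for plain HIP code; whether the llama.cpp build from the AMD guide carries over unchanged presumably also depends on its HIP backend being compiled for the NVIDIA platform rather than only for ROCm GPU targets.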