 I noticed llama.cpp supports ROCm (amdgpu) now! I can sample the 8B-parameter Llama 3 model with my 8GB VRAM graphics card! It's fast! Local AI ftw.

https://cdn.jb55.com/s/rocm-llama-2.mp4 
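For anyone curious what this looks like in code, here is a minimal sketch using the llama-cpp-python bindings (an assumption on my part; the clip above runs llama.cpp directly). It assumes the bindings were compiled against llama.cpp's ROCm/hipBLAS backend and that the model is a ~4-bit quantized GGUF, which is what fits in 8 GB of VRAM. The model path and prompt are placeholders.

```python
# Minimal sketch, not the exact setup from the video: llama-cpp-python
# bindings, assumed to be built with llama.cpp's ROCm/hipBLAS backend
# so layers can be offloaded to the AMD GPU.
from llama_cpp import Llama

llm = Llama(
    # Hypothetical filename: a ~4-bit quantized GGUF of Llama 3 8B.
    model_path="Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload every layer to the GPU
    n_ctx=2048,       # modest context window to leave VRAM headroom
)

out = llm("Q: Why run models locally? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

Peak memory scales with the quantization level: a Q4_K_M build of an 8B model is roughly 5 GB, which leaves room for the KV cache on an 8GB card.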
 Nice bro! 
 Did you use any specific guides?
 nah