 IIRC they all have CUDA, but the Quadro CAD cards have more cores 
d | 1 year ago
 I see these dirt-cheap Tesla K40 cards on eBay and it's pretty tempting to buy one to screw around with GPGPU programming 
 ...but I already have a 3060, so if I just want to dip my toe in, presumably I can use that 
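 A minimal sketch of that kind of toe-dip on a card you already own, assuming a CUDA build of PyTorch is installed (sizes and timings here are just illustrative):

    # Minimal GPGPU toe-dip: run a matrix multiply on the GPU and compare to CPU.
    # Assumes a CUDA-enabled PyTorch install; works on a 3060 or most CUDA cards.
    import time
    import torch

    assert torch.cuda.is_available(), "no CUDA device visible"
    print(torch.cuda.get_device_name(0))

    a = torch.randn(4096, 4096)
    b = torch.randn(4096, 4096)

    t0 = time.time()
    a @ b
    print("CPU matmul:", time.time() - t0, "s")

    a_gpu, b_gpu = a.cuda(), b.cuda()   # copy the operands into VRAM
    torch.cuda.synchronize()            # wait for the copies to finish
    t0 = time.time()
    a_gpu @ b_gpu
    torch.cuda.synchronize()            # GPU matmul is async; sync before timing
    print("GPU matmul:", time.time() - t0, "s")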
 @d @Rance :gentoo: i have a Quadro RTX 4000 in my desktop and it's way faster than my old 1070 at Stable Diffusion, but i vaguely remember seeing some benchmarks of game performance where the two weren't that far apart 
 @09085299 @Rance :gentoo: @d 

I am getting this video card for LlamaGPT and for some AI artwork.

Nvidia Quadro P1000 4GB GDDR5

How good will it be? 
 @verita84 :Debian_logo: :firefox: :bing: :android: ... @Rance :gentoo: @d @09085299 Absolutely awful; the VRAM size matters for AI work. 
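 As rough back-of-envelope sizing (my arithmetic, not from the thread): weight memory is roughly parameter count times bytes per parameter, before activations and runtime overhead, which is why 4 GB is so limiting:

    # Back-of-envelope VRAM needed just for model weights (illustrative numbers).
    def weight_gib(params_billions, bytes_per_param):
        return params_billions * 1e9 * bytes_per_param / 2**30

    # A 7B-parameter LLM:
    print(weight_gib(7, 2))    # fp16  -> ~13.0 GiB, far over a 4 GB card
    print(weight_gib(7, 0.5))  # 4-bit -> ~3.3 GiB, still tight on 4 GB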
 @Salastil @Rance :gentoo: @d @09085299 

Will it be functional but just take a little longer? 
 @verita84 :Debian_logo: :firefox: :bing: :android: ... @Rance :gentoo: @d @09085299 It won't be functional; PyTorch has to reserve memory right off the bat, and when you're memory-strapped it won't work. I can't run many models on my 1060 6GB because it doesn't have enough memory on the card. 
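 A sketch of what that failure looks like in practice, assuming a recent PyTorch (the 8 GiB allocation is just a stand-in for a model that doesn't fit):

    # What "not enough memory on the card" looks like: PyTorch raises an OOM
    # error as soon as an allocation doesn't fit in VRAM.
    import torch

    free, total = torch.cuda.mem_get_info()   # bytes free/total on device 0
    print(f"free {free/2**30:.1f} GiB of {total/2**30:.1f} GiB")

    try:
        # Try to grab an 8 GiB tensor; on a 4-6 GB card this fails immediately.
        big = torch.empty(8 * 2**30 // 4, dtype=torch.float32, device="cuda")
    except torch.cuda.OutOfMemoryError as e:
        print("CUDA OOM:", e)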
 @Salastil @Rance :gentoo: @d @09085299 

oh fuck me. 
 The Quadro series is good for parallel transcoding of video; it's the entire gimmick of the line. 
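 For a Jellyfin-style transcode, that means leaning on the card's NVENC block; a hedged sketch shelling out to ffmpeg (the flags are standard ffmpeg NVENC options, the file paths are placeholders):

    # Hedged sketch: hardware-accelerated transcode via ffmpeg's NVENC encoder.
    # Requires an ffmpeg build with NVENC support; paths are placeholders.
    import subprocess

    subprocess.run([
        "ffmpeg",
        "-hwaccel", "cuda",            # decode on the GPU where possible
        "-i", "input.mkv",             # placeholder input
        "-c:v", "h264_nvenc",          # encode on the GPU's NVENC block
        "-preset", "p5",               # NVENC quality/speed preset
        "-c:a", "copy",                # pass audio through untouched
        "output.mp4",                  # placeholder output
    ], check=True)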
 @Salastil @Rance :gentoo: @d @09085299 

well it will help the Jellyfin server at least 
 how come you opted for the 4 GB? was it 10 bucks or something? 
 @Nozahagmelons @Salastil @Rance :gentoo: @d @09085299 

I paid $80 each, I think... best I could do for the OptiPlex 3050
 @Nozahagmelons @Salastil @Rance :gentoo: @d @09085299 

Any alternative to Stable Diffusion that I can use and self-host?
 Not to my knowledge, but KoboldAI is pretty cool for text generation 
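 For self-hosted image generation the usual route is just running Stable Diffusion itself locally; a minimal sketch using Hugging Face diffusers, where the checkpoint ID and prompt are only example values:

    # Minimal self-hosted Stable Diffusion sketch using Hugging Face diffusers.
    # Needs several GB of VRAM; model ID and prompt are example values.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # example checkpoint
        torch_dtype=torch.float16,         # halves VRAM use vs float32
    ).to("cuda")

    image = pipe("a low-profile GPU in an old Dell").images[0]
    image.save("out.png")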
 What? You're too good to just walk into a Microcenter and steal an RTX 4090 Ti? You'll never get ahead in Biden's America until you start playing by the new rules.

https://media.salastil.com/media/b247a6e82ca21ed1fda4dd9ee783e7c1d37c5805644afc96e2f4c415e456e9ec.png 
 Even my old-ass Quadro M4000 has 8 GB of VRAM and can absolutely own any modern game on ultra settings, even shit-optimized games like Fallout 76 
 @Salastil @Rance :gentoo: @d @09085299 

So if you use AI with a CPU, it loads the whole model into RAM from the start. When you use a GPU, are you saying it loads it all into the GPU's memory instead of RAM?
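 Roughly, yes: in PyTorch, moving a model to the GPU copies its weights out of system RAM into the card's VRAM. A toy sketch showing the allocation move (layer size is arbitrary):

    # Toy demonstration: weights start in system RAM, then .to("cuda") copies
    # them into VRAM, where torch.cuda sees them as allocated.
    import torch

    model = torch.nn.Linear(8192, 8192)   # ~256 MiB of fp32 weights, in RAM
    print(torch.cuda.memory_allocated())  # 0 bytes on the GPU so far

    model = model.to("cuda")              # copy the parameters into VRAM
    print(torch.cuda.memory_allocated())  # now ~268 MB allocated on-card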
d | 1 year ago
 you can pick up these Tesla K40 12GB cards I'm talking about for like $40

they're strictly for compute though, no video output 
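 One caveat worth checking before buying (my note, not from the thread): the K40 is an old Kepler part, and its compute capability is below what current prebuilt PyTorch wheels target, so query it first:

    # Check a card's CUDA compute capability before counting on framework
    # support; old Tesla parts like the K40 (Kepler, CC 3.5) sit below what
    # current prebuilt PyTorch wheels are compiled for.
    import torch

    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(props.name, f"compute capability {props.major}.{props.minor}")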
 @d @Rance :gentoo: @09085299 

it's an old Dell from 2015/2016 with low-profile slots