@verita84 :Debian_logo: :firefox: :bing: :android: ... @Rance :gentoo: @d @09085299 Absolutely awful; RAM size matters for AI work.
 @Salastil @Rance :gentoo: @d @09085299 

Will it be functional but just take a little longer? 
@verita84 :Debian_logo: :firefox: :bing: :android: ... @Rance :gentoo: @d @09085299 It won't be functional. PyTorch has to reserve memory right off the bat, and when you're memory-strapped it won't work. I can't run many models on my 1060 6GB because the card doesn't have enough memory.
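For anyone wondering what that looks like in practice: a minimal PyTorch sketch (the toy layer is just a stand-in for a real model) that checks free VRAM and catches the hard out-of-memory error a too-small card throws; you don't get a slowdown, you get a failed allocation.

```python
import torch

# Real API: torch.cuda.mem_get_info() returns (free, total) bytes on the card.
free, total = torch.cuda.mem_get_info()
print(f"VRAM: {free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")

try:
    # Moving weights onto the GPU is where a 6 GB card like a 1060 falls over:
    # the whole model (plus CUDA context overhead) must fit in VRAM at once.
    model = torch.nn.Linear(8192, 8192).to("cuda")  # stand-in for a big model
except torch.cuda.OutOfMemoryError:  # dedicated exception since PyTorch 1.13
    print("Not enough VRAM: the load fails outright instead of running slowly.")
```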
 @Salastil @Rance :gentoo: @d @09085299 

oh fuck me. 
The Quadro series is good for parallel transcoding of video; it's the entire gimmick of the series.
 @Salastil @Rance :gentoo: @d @09085299 

well it will help the Jellyfin server at least 
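Hardware transcoding is what Jellyfin would use it for, and under the hood that's ffmpeg talking to the card's NVENC/NVDEC blocks. A rough sketch of that invocation driven from Python (file names are placeholders; it assumes an NVENC-enabled ffmpeg build, and preset names vary by ffmpeg version):

```python
import subprocess

# Decode and encode both happen on the NVIDIA card, leaving the CPU mostly idle.
subprocess.run([
    "ffmpeg",
    "-hwaccel", "cuda",    # decode on the GPU (NVDEC)
    "-i", "input.mkv",     # placeholder source file
    "-c:v", "h264_nvenc",  # encode on the GPU's NVENC block
    "-preset", "p5",       # speed/quality preset (newer ffmpeg builds)
    "-c:a", "copy",        # pass the audio through untouched
    "output.mp4",          # placeholder output file
], check=True)
```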
how come you opted for the 4 GB? was it 10 bucks or something?
 @Nozahagmelons @Salastil @Rance :gentoo: @d @09085299 

I paid $80 each, I think... Best I could do for the OptiPlex 3050
 @Nozahagmelons @Salastil @Rance :gentoo: @d @09085299 

Any alternative to Stable Diffusion that I can use and self-host?
 not to my knowledge, but KoboldAI is pretty cool for text generation 
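For what it's worth, Stable Diffusion itself self-hosts fine when the card is big enough; a minimal sketch using Hugging Face's diffusers library with the usual low-VRAM switches (the model ID and prompt are examples, and repo availability can change):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example public SD 1.5 checkpoint
    torch_dtype=torch.float16,         # fp16 roughly halves the VRAM footprint
)
pipe.enable_attention_slicing()        # trades some speed for lower peak VRAM
pipe = pipe.to("cuda")

image = pipe("a cabin in the woods, oil painting").images[0]  # example prompt
image.save("out.png")
```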
What? You're too good to just walk into a Microcenter and steal an RTX 4090 Ti? You'll never get ahead in Biden's America until you start playing by the new rules.

https://media.salastil.com/media/b247a6e82ca21ed1fda4dd9ee783e7c1d37c5805644afc96e2f4c415e456e9ec.png 
Even my old-ass Quadro M4000 has 8 GB of VRAM and can absolutely own any modern game on ultra settings, even shit-optimized games like Fallout 76
 @Salastil @Rance :gentoo: @d @09085299 

So if you use AI with a CPU, it loads all the models into RAM from the start. When you use a GPU, are you saying it loads it all into the GPU's VRAM instead of system RAM?
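In PyTorch, at least, that's exactly the distinction: weights are allocated in system RAM first, and .to("cuda") copies every tensor into the card's own VRAM. A tiny sketch that makes it visible (the layer is illustrative):

```python
import torch

model = torch.nn.Linear(4096, 4096)     # parameters allocated in system RAM
print(next(model.parameters()).device)  # -> cpu

model = model.to("cuda")                # copies all the weights into VRAM
print(next(model.parameters()).device)  # -> cuda:0
print(f"{torch.cuda.memory_allocated() / 2**20:.0f} MiB now held in VRAM")
```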