Out with old. #buildapc #grownostr https://i.nostr.build/6G40n.jpg https://i.nostr.build/oMK5x.jpg https://i.nostr.build/zdwjy.jpg nostr:nevent1qqsf33z8a6mhfp4lsagmsnmgeqnf36t4ng6zf692k7ys46cx07ld3nqprfmhxue69uhkummnw3ezu6rpwpc8jarpwejhym3wvdhsygqk7xspqr2vllaucs3sarswg2gvckzfcxkuvntx2076qlqrrvg8fvpsgqqqqqqsxe7pzw
RGB stands for Really Good, Buy. https://i.nostr.build/WGKyv.jpg nostr:nevent1qqswfekl7rxh4qy53ey59urq47v28plf7zq0a6wjfwerq4m3h52d6fcprfmhxue69uhkummnw3ezu6rpwpc8jarpwejhym3wvdhsygqk7xspqr2vllaucs3sarswg2gvckzfcxkuvntx2076qlqrrvg8fvpsgqqqqqqs32sadd
Nice! Make sure you update to the latest BIOS for the microcode.
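(As a side note, here's a minimal sketch of how you might check which microcode revision is actually loaded before and after a BIOS update. It just parses /proc/cpuinfo, so it assumes an x86 Linux host; the comparison against your board vendor's release notes is on you.)

```python
# Rough sketch, assuming an x86 Linux host: read the currently loaded CPU
# microcode revision from /proc/cpuinfo. Compare this value against your
# motherboard vendor's BIOS release notes to see whether an update landed.
def microcode_revision(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            # The line looks like: "microcode\t: 0x12b"
            if line.startswith("microcode"):
                return line.split(":", 1)[1].strip()
    return None  # field not present (e.g. non-x86 kernel)

if __name__ == "__main__":
    print("Loaded microcode revision:", microcode_revision())
```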
She's been up and running for about 6 months now
For those of you that didn't see, this was when I built Marvin: nostr:nevent1qqs292essl0dcsl9098lzerk66fd63035la9ykghfgg7l0xd8axq7pcprfmhxue69uhkummnw3ezu6rpwpc8jarpwejhym3wvdhsygqk7xspqr2vllaucs3sarswg2gvckzfcxkuvntx2076qlqrrvg8fvpsgqqqqqqszud8vk Here is a post showing off Pipboy, who will be my new primary Windows server: nostr:nevent1qqsvx8wqtqyggzmdja48l8r82xy9xqzal023qavavdxwx0q2gz5er6cpzemhxue69uhhyetvv9ujuurjd9kkzmpwdejhgq3qzmc6qyqdfnllhnzzxr5wpepfpnzcf8q6m3jdveflmgruqvd3qa9sxpqqqqqqz945s5n (Still running most of my actual services on Ubuntu LTS. That server doesn't have a name yet) nostr:nevent1qqs9cehzsnq5d4pfcrcumqknmc4nw82x5y558xy07hl0cqxgsws37dcpz3mhxue69uhhyetvv9ujuerpd46hxtnfdupzq9h35qgq6n8ll0xyyv8gurjzjrx9sjwp4hry6ejnlks8cqcmzp6tqvzqqqqqqy6jgqpw
It's a beaut
Never built a PC. Where would you recommend I start? I want to run large local LLMs.
Well, I can help you if you have questions. But running large local LLMs still won't achieve what LLMs at data centers can deliver. @utxo the webmaster 🧑‍💻 has more experience building a rig specifically for this, with 3 2070s if I remember right. He may have something to say on how well that can realistically perform.
Thank you. What about just a kickass gaming PC? like just roughly what motherboard & CPU?
The GPU is pretty much all that matters, or more specifically, the amount of video RAM on the card. I happened to get an older 16GB NVIDIA 3060 before I learned about LLMs, and it can do quite a lot for $300. I'd say you want 16GB minimum; 24GB is the best you can do on normal consumer cards.
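(To make the 16GB vs 24GB point concrete, here's a rough back-of-the-envelope sketch, not something from the thread: it estimates how much VRAM a model's weights need at a given precision. The 1.2x overhead factor for KV cache and activations is an assumption.)

```python
# Back-of-the-envelope VRAM estimate: billions of parameters times bytes per
# parameter is roughly gigabytes of weights; pad with an assumed overhead
# factor for KV cache and activations.
def weights_vram_gb(params_billions, bytes_per_param, overhead=1.2):
    return params_billions * bytes_per_param * overhead

for name, params in [("7B", 7), ("13B", 13), ("33B", 33)]:
    fp16 = weights_vram_gb(params, 2.0)  # 16-bit weights
    q4 = weights_vram_gb(params, 0.5)    # 4-bit quantized weights
    print(f"{name}: ~{fp16:.0f} GB at fp16, ~{q4:.0f} GB at 4-bit")
```

(Under those assumptions a 7B model at fp16 already brushes against 16GB, while 4-bit quantization lets a 33B model squeeze into a 24GB card, which is roughly why those two capacities are the thresholds people quote.)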
Excellent, I'm on the right path. My BOM at the moment is a few GPUs (3070s maybe), plus ~64GB of DDR5 RAM and NVMe M.2 storage, with 8-12 TB of SSD plus a NAS for backup. Probably overkill, yes, but it's already budgeted for. I just don't know what motherboard or CPU to pick for something like this.
What's your bare-metal LLM setup like, @utxo the webmaster 🧑‍💻?
I have 3x RTX 3090s; they're used to power diffusion.to. Basically, it's an AI server. https://i.nostr.build/mlMXK.jpg
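(For anyone curious what a multi-card box like that looks like from the software side, here's a hypothetical sketch that enumerates the visible CUDA devices and their VRAM using PyTorch. It assumes torch is installed with CUDA support; on the 3x 3090 rig above it would list three 24GB cards.)

```python
# Sketch: enumerate visible CUDA GPUs and report per-card VRAM.
# Assumes PyTorch with CUDA support is installed.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        vram_gb = props.total_memory / 1024**3
        print(f"cuda:{i} {props.name} {vram_gb:.1f} GB")
else:
    print("No CUDA devices visible")
```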
That's cool and all, but have you ever tried using a potato as a server? It's surprisingly effective and definitely more eco-friendly! 🥔 #ThinkOutsideTheBox