honestly so hyped to finally get DeepSeek V3 running on my own hardware instead of paying those API fees every month. I'm stuck between two paths right now for my home lab setup in the garage.
I've got about $3,500 to spend on this by next Friday for my local coding assistant project. Do you guys think the dual 3090 setup is still the king for VRAM-heavy stuff like V3, or is the 4090's raw speed better for a single user?
In my experience building these rigs over the years, 24GB is just gonna frustrate you with V3. Since you have 3500 bucks, maybe skip the dual setup and look for a used NVIDIA RTX A6000 48GB GDDR6. It pulls way less power than dual 3090s and keeps everything on one bus. Check r/hardwareswap or local liquidators... you can usually find them within your budget if you dig around.
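If you want to sanity-check whether a given card can even hold a model's weights, here's a quick back-of-envelope sketch. The parameter count and quant level in the example are illustrative assumptions, not official specs for any particular model, and it ignores KV cache and runtime overhead, so treat the result as a floor:

```python
def weights_vram_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate GB needed just to hold the model weights.

    Ignores KV cache, activations, and framework overhead, so the
    real requirement is always somewhat higher.
    """
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# Example: a hypothetical 70B dense model at 4-bit quantization
# needs ~35 GB for weights alone -- already too big for one 24GB card,
# but it fits in a 48GB pool with room left for context.
print(weights_vram_gb(70, 4))
```

Running the numbers like this before buying is usually more useful than arguing about benchmarks, since it tells you outright which cards are disqualified on capacity alone.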
Just catching up on this... honestly I'm pretty satisfied with my multi-GPU rig and it handles these massive models like a champ. For DeepSeek-V3, you gotta prioritize VRAM over raw speed or you're gonna have a bad time.
> grabbing 2 used 3090s off ebay so i can get that 48gb vram pool but im worried about the power draw

Unfortunately, running a dual setup on older residential circuits is frequently a recipe for disaster. I had significant issues with power spikes tripping my breakers last year, and it was not as stable as expected. Before you commit, how many amps is that garage circuit actually rated for? Also, are you prioritizing inference speed or the total context window?
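To make the breaker question concrete, here's a rough sketch of the math. The wattage figures are ballpark assumptions (check your actual cards' spec sheets), the 0.8 derating is the common rule of thumb for continuous loads on a circuit, and the transient multiplier is a guess at Ampere-era power spikes:

```python
def circuit_budget_watts(amps: float, volts: float = 120.0,
                         continuous_factor: float = 0.8) -> float:
    """Usable watts for a sustained load on a residential circuit.

    The 0.8 factor is the usual derating rule of thumb for
    continuous loads; don't plan to run a circuit at 100%.
    """
    return amps * volts * continuous_factor

def rig_draw_watts(gpu_watts: float, n_gpus: int,
                   rest_of_system: float = 250.0,
                   transient_factor: float = 1.3) -> float:
    """Estimated peak draw, with a rough margin for transient spikes.

    gpu_watts, rest_of_system, and transient_factor are all
    assumptions here -- substitute your own hardware's numbers.
    """
    return (gpu_watts * n_gpus + rest_of_system) * transient_factor

budget = circuit_budget_watts(15)   # a typical 15A garage circuit
draw = rig_draw_watts(350, 2)       # two ~350W 3090s (assumed TDP)
print(budget, draw, draw <= budget)
```

Even when the numbers technically fit, transient spikes can trip a breaker that the average draw never would, which matches the experience described above, so leave yourself real headroom or get a dedicated 20A circuit run to the garage.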