
What are the best system prompts for DeepSeek coding tasks?

Topic starter

Hey everyone! I’ve been switching over to DeepSeek-V2 for my daily coding tasks, specifically for React and Python development. While the model is incredibly smart, I’m struggling to find a system prompt that consistently delivers clean, production-ready code without too much fluff. I’ve tried a few basic 'expert coder' templates, but I still run into issues with overly verbose explanations or missing edge cases. I really want to optimize it for better error handling and modularity during complex refactoring. Does anyone have a tried-and-tested system prompt that works specifically well with DeepSeek’s architecture to keep the output concise and technically accurate? I’d love to see what’s working for you guys!


5 Answers

yo, i totally feel you on this. I've been using DeepSeek-V2 for months now and honestly, those standard 'you are a senior dev' prompts are pretty disappointing... they always end up explaining why they chose a specific list comprehension instead of just giving me the clean production code. After 5 or 6 years of messing with different LLMs, I've realized that generic templates don't work well with DeepSeek-Coder-V2-Instruct because it's tuned to be a bit too 'chatty' by default. So I compared a few approaches over the last year:
- Generic Expert: it's what everyone uses, but it's honestly mediocre. I had issues with it hallucinating older React patterns.
- Constraints-heavy: basically just a list of 'don'ts.' It's better, but the model feels kind of lobotomized and loses some logic.
- The 'Thought-Block' method: this is the best option I've found. Use a system prompt that forces it to separate its reasoning from the code. I tell it: 'Use a hidden thought process. Only output the final modular code. Prioritize Python type hinting and React functional components with clean error boundaries.'

Even with the newer DeepSeek-V2.5, it still occasionally misses edge cases in complex Python refactoring. It's not as good as I expected for massive migrations, but for daily React tasks it's solid if you keep the prompt strict. Also, literally tell it 'NO PROSE' in the system instructions. It sounds harsh, but it's the only way to kill the fluff. Anyway, hope that helps your workflow... gl!
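If it helps, here's roughly how I wire that 'Thought-Block' style up in a script. This is just a sketch: it assumes an OpenAI-style `messages` array (which DeepSeek's chat API accepts), and `build_messages` is my own little helper, not part of any SDK.

```python
# Sketch of the 'Thought-Block' style setup. The prompt wording is
# from the post above; the helper name is my own invention.

THOUGHT_BLOCK_SYSTEM = (
    "Use a hidden thought process. Only output the final modular code. "
    "Prioritize Python type hinting and React functional components "
    "with clean error boundaries. NO PROSE."
)

def build_messages(task: str) -> list[dict]:
    """Pair the strict system prompt with a single user task."""
    return [
        {"role": "system", "content": THOUGHT_BLOCK_SYSTEM},
        {"role": "user", "content": task},
    ]

# This is the list you'd pass as `messages` to the chat endpoint.
payload = build_messages("Refactor this function to use type hints: ...")
```

Keeping the prompt in one constant makes it easy to A/B against other styles later.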



> I really want to optimize it for better error handling and modularity during complex refactoring.

Honestly, I've been there too! I'm still figuring things out myself, but I've found that the way you frame the prompt for DeepSeek-V2 makes a huge difference. It's so smart, but sometimes it just won't stop talking lol. For your situation, I've compared two main styles:

1. The 'Code-Only' Constraint: you basically tell it 'Zero prose, only code blocks.' Pros: super fast, zero fluff. Cons: it tends to skip the error handling you're looking for because it's rushing.
2. The 'Modular Refactor' Framework: this is what I use now. I tell it to 'think step-by-step about edge cases first, then provide modular Python/React code.' It's a bit more text, but the code quality is so much better, and it actually thinks about the stuff that usually breaks.

The 'Modular' one usually gives me better results for Python refactoring specifically. It feels more like a real dev wrote it... anyway, just my two cents! gl!
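To make the two styles concrete, here they are as plain strings with a tiny helper for switching between them. The wording is just my paraphrase of the approach above, and `pick_style` is a made-up convenience function, not a tested benchmark.

```python
# The two prompt styles from the comparison above, as plain strings.
# Exact wording is illustrative only.

CODE_ONLY = "Zero prose, only code blocks."

MODULAR_REFACTOR = (
    "Think step-by-step about edge cases first, "
    "then provide modular Python/React code."
)

def pick_style(needs_error_handling: bool) -> str:
    """Heuristic from the post: prefer the modular style when error
    handling matters, the terse one for quick throwaway snippets."""
    return MODULAR_REFACTOR if needs_error_handling else CODE_ONLY
```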



I totally agree with the idea that those massive prompts actually make the code harder to trust. I am still pretty new to the DeepSeek scene myself, but I have been trying to see where it fits compared to the other big names I have used for work. Basically, looking at the market right now, DeepSeek seems to handle raw logic differently than the others.

  • Claude 3.5 Sonnet usually feels more natural for React components but can be expensive
  • GPT-4o is fast for basic Python things but gives me way too much fluff, like you guys mentioned
  • DeepSeek-V2 really shines for complex refactoring if you just give it a strict checklist instead of a roleplay story

Honestly, why do we even use those senior dev templates? I have found it works better when I just tell it to follow a specific style guide. Does anyone else feel like it performs better when you tell it to think in steps? I am still learning the best way to word things, but short instructions seem to stop it from rambling so much.
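Here's a sketch of what I mean by a strict checklist instead of a roleplay story. The individual items are only examples; you'd swap in your own style guide.

```python
# A checklist-style system prompt instead of a 'senior dev' persona.
# The four rules below are example entries, not a canonical list.

CHECKLIST_PROMPT = "\n".join([
    "Follow this checklist for every answer:",
    "1. Use functional React components only.",
    "2. Add Python type hints to every function.",
    "3. Handle the null/empty-input case explicitly.",
    "4. No explanations outside code comments.",
])
```

Numbered rules seem to be easier for the model to follow (and for you to audit) than a paragraph of persona text.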



Honestly, I kind of disagree that we should be looking for one master system prompt to fix everything. I am still pretty new to this, but it seems like the more stuff we cram into the system prompt, the more the model gets confused about what is a rule and what is the actual task. Instead of using those long templates everyone talks about, I have been trying a more DIY approach where I keep the system prompt totally empty or just one sentence, like 'You are a technical assistant.' Then I put all my specific technical specs into a separate text file that I paste in as the first message. It feels way more reliable because you are basically giving DeepSeek-V2 a specific manual to follow for that one job. Here are a couple of quick things that worked for me:

  • Just feed it the raw documentation for your specific React hooks or Python libs so it does not have to guess.
  • Write out a simple list of your own personal coding standards, like 'no comments' or 'use type hints', and paste that every time.

Doing it yourself this way instead of relying on a pre-made prompt gives way more control over the output logic.
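In code, the DIY setup looks something like this. It's a sketch, not a real project: the one-line system prompt is the one mentioned above, the `specs.md` file name and its contents are made up for the demo, and `first_message` is my own helper.

```python
# DIY approach: near-empty system prompt, project specs pasted in
# as the first user message. File name/content are demo placeholders.
from pathlib import Path
import tempfile

SYSTEM = "You are a technical assistant."

def first_message(spec_path: Path, task: str) -> list[dict]:
    """Prepend the project's own standards file to the actual task."""
    specs = spec_path.read_text()
    return [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": f"{specs}\n\nTask: {task}"},
    ]

# Demo with a throwaway spec file.
with tempfile.TemporaryDirectory() as d:
    spec = Path(d) / "specs.md"
    spec.write_text("Coding standards:\n- no comments\n- use type hints\n")
    msgs = first_message(spec, "Refactor utils.py")
```

Keeping the standards in a file means you version-control your "manual" instead of retyping it every session.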



Just found this thread today and yeah, I am kinda in the same boat. I am still a bit of a beginner with this whole AI coding thing, but my biggest fear is always putting something live that just... breaks. Honestly, when I first started using DeepSeek, I tried those really long prompts but I realized they made the code way too complex for me to even verify if it was safe. I have been playing around with my current setup and focusing way more on reliability than just getting the feature done fast. One thing I learned the hard way was that if I did not emphasize safety, the model would give me these cool looking Python functions that totally failed when the data was a bit messy. Now I basically just try to make it focus on:

  • validating all the inputs first
  • adding basic error blocks so it doesn't crash the whole app
  • making sure it stays away from super complex logic I can't test

I am still trying to figure out if there is a better way to make sure it handles edge cases in React, because sometimes it misses those simple null checks and the whole UI crashes. It is definitely a learning process for me, but I am finding that being a bit paranoid about reliability helps a lot more than trying to be super fancy with the prompts. Does anyone else worry about the code just randomly failing in production, or is it just me?

