Which web UI is best for self-hosting DeepSeek?

8 Posts
9 Users
0 Reactions
142 Views
0
Topic starter

Hey everyone! I’ve finally managed to set up a dedicated home server with enough VRAM to run the DeepSeek-R1 models locally. Now that I’ve got the backend sorted out—I'm currently using Ollama for the heavy lifting—I’m looking for the perfect web interface to actually interact with it.

I’ve been seeing a lot of buzz around Open WebUI because of its ChatGPT-like feel, but I’ve also heard people mention things like LibreChat and Text-Generation-WebUI (Oobabooga). My main goal is to find something that is snappy and handles the specific "thinking" tokens of DeepSeek-R1 properly. I’d love a UI that can cleanly separate the reasoning process from the final response so it doesn't just look like one massive wall of text.

I’m also interested in features like built-in RAG support for my local documents and maybe something that works well on mobile browsers so I can use it around the house. I'm a bit overwhelmed by the options and don't want to spend hours configuring something that ends up being clunky.

For those of you already self-hosting DeepSeek, which web UI has provided the smoothest experience for you? Is Open WebUI the clear winner, or is there a hidden gem I should try instead?


8 Answers
11

For your situation, I would suggest Open WebUI. Honestly, I had issues with LibreChat—it just wasn't as good as expected for those specific DeepSeek reasoning tokens. Open WebUI actually handles the `<think>` tags natively now, which is basically what you need to avoid that wall of text. Plus, the RAG implementation is built-in and much smoother than setting up Text-Generation-WebUI. It works great as a PWA on mobile too... so yeah, definitely the winner imo. gl!
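For anyone curious what "handling the thinking tags" actually means under the hood, here's a rough Python sketch of how a UI could split R1's reasoning block from the final answer. The `split_thinking` helper name is mine, and real UIs also have to deal with streamed tokens, which this single-pass version ignores:

```python
import re

def split_thinking(response: str) -> tuple[str, str]:
    """Split DeepSeek-R1 output into (reasoning, answer).

    R1 wraps its chain of thought in <think>...</think>; a UI can
    render that part in a collapsible box instead of inline.
    Hypothetical helper for illustration only.
    """
    match = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
    if match is None:
        # No reasoning block found; treat everything as the answer.
        return "", response.strip()
    thinking = match.group(1).strip()
    answer = response[match.end():].strip()
    return thinking, answer

thinking, answer = split_thinking(
    "<think>User wants a short reply. Keep it brief.</think>Hello there!"
)
```

Obviously the polished UIs do this (plus streaming and markdown rendering) for you, which is the whole point of not rolling your own.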


10

yo, honestly i wasted sooo much time trying Text-Generation-WebUI (Oobabooga) and it was such a letdown... literally couldn't handle R1 reasoning without looking like a mess. i switched to Open WebUI v0.5.0 and it's highkey the winner:

- Open WebUI handles those 'thinking' tags perfectly
- RAG is basically built-in
- mobile layout actually works

unfortunately, others are kinda clunky, so just stick with this. gl!


3

Late to the party but seconding the recommendation above! Honestly, Open WebUI is highkey the best *value* since it's free. I mean, I've been self-hosting for years and it works with DeepSeek reasoning tags natively. Pro tip: run it via Docker—it makes updates a breeze so you don't waste time. It's super light on resources too, basically perfect for a home server budget. No complaints here! Cheers!
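In case it saves someone a search, this is roughly the Docker one-liner from the Open WebUI docs that I use (the host port and volume name are whatever you prefer; the `--add-host` line is what lets the container reach an Ollama instance running on the host):

```shell
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

Then updating is just pulling the new image and recreating the container, since all your data lives in the volume.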


2

Yep, this is the way


2

.


1

So im pretty new to setting up home servers - but i spent the last week doing a bit of market research on different UIs to see what actually fits the 'enterprise' feel vs just a basic chat box. Since everyone already mentioned the big one, i looked into some alternatives that focus more on the RAG and data side...

- AnythingLLM: This seems like a massive contender if you're serious about the document side of things. It has its own built-in vector database - basically it handles the RAG stuff way more 'professionally' than a simple plugin. I think it works with Ollama backends easily, though im still testing how it handles the DeepSeek reasoning blocks.
- LobeChat: Honestly, this one has the most 'modern' market feel. It supports a lot of plugins and has a really clean mobile interface. It's very snappy, but im not 100% sure if the reasoning tags look exactly how you want them yet - might need a custom CSS tweak?
- Jan.ai: This is more of a local-first desktop app but they have been moving into the server space. It's super clean, though maybe a bit too simple if you want deep RAG features.

Has anyone tried indexing like... thousands of files with AnythingLLM? Im curious if it stays snappy with DeepSeek-R1 or if it starts to lag.


1

Building on the earlier suggestion, I'm pretty satisfied with how Open WebUI v0.5 has evolved lately, especially for the R1 models. But I gotta ask, what kind of scale are you looking at for the RAG stuff? Like, are we talking ten docs or ten thousand? Reason I ask is that while Open WebUI handles the thinking tags perfectly, it can get a bit bogged down if you're doing heavy-duty document management. If you're going for a massive library, AnythingLLM Desktop might actually serve you better even if the interface feels a bit different. Personally, I've been running Open WebUI on my home server for months now and I have no complaints. It stays snappy and the mobile PWA is actually usable, unlike some other tools I've tried over the years. Just stick to Docker and you'll save yourself a lot of configuration gray hairs.
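To give a sense of why scale matters here: most of these UIs do RAG by chunking your documents before embedding them into a vector store, so ten thousand docs means a *lot* of chunks to search through. A toy Python sketch of that chunking step (the size and overlap numbers are made up for illustration, not what any particular UI actually uses):

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping character chunks.

    Toy illustration of the pre-embedding step in a RAG pipeline;
    real tools tune chunk size/overlap and then embed each chunk
    into a vector database for retrieval.
    """
    chunks = []
    step = size - overlap  # consecutive chunks share `overlap` characters
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break  # last chunk already reached the end of the text
    return chunks

# A 1000-character doc at size=500/overlap=50 becomes 3 chunks.
chunks = chunk_text("abcdefghij" * 100)
```

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk, which is part of why big libraries balloon in index size.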


1

Would love to know this too

