What are you guys actually using for research papers this year that doesn't just hallucinate half the citations? I've been in academia for about six years now, so I know my way around a Zotero library and basic LaTeX formatting, but honestly the sheer amount of new AI stuff hitting the market since January is making my head spin. I'm currently working on a pretty heavy literature review for a paper on synaptic plasticity, and it's getting exhausting trying to keep track of what tool does what. I tried using the standard GPT-4 stuff for a while to help bridge the gap between my raw notes and a decent first draft, but the hallucination problem is still so bad with specific citations that I end up spending more time fact-checking than if I'd just written it from scratch.
I'm looking for something more specialized for 2024. I've heard people mention things like Jenni AI or Paperpal, or even Scite for the verification side of things, but I don't want to waste money on five different subscriptions that all do the same thing. My budget is probably around 25 bucks a month tops since I'm on a grad student stipend, and I really need something that can handle complex technical jargon without stripping the nuance out of the science.
Does anything actually exist yet that can take a folder of my own PDFs and help me synthesize the info without making stuff up? I tried a tool last week (can't remember the name now) that completely misread a p-value in one of my source papers, and it nearly ruined a whole section of my discussion. Is there a consensus on what's actually reliable for high-level academic work right now, or are we still in the wild-west phase where everything is half-baked? I'd love to hear what your workflows look like these days, especially if you're dealing with STEM stuff where the details actually matter...
I've been sticking to a pretty tight budget lately. Honestly, I'm super satisfied with this simple setup:
I recently switched my lab workflow to the SciSpace Premium Academic Plan for meta-analyses. I don't trust general models for this.
Interested in this too
Disagreeing slightly on the subscription front. Over the years, I've found that Google NotebookLM AI Research Assistant handles a folder of PDFs better than paid wrappers because it anchors responses strictly to your text. It won't just invent synaptic mechanisms out of thin air.
Look, if you want accuracy in a field as dense as synaptic plasticity, stop using general-purpose models for the heavy lifting. I have found a two-pronged setup works best to stay under that 25 dollar mark while ensuring technical precision.
Building on the earlier suggestion about keeping responses grounded strictly in your own files, I've been really satisfied with a custom local indexing setup lately. No complaints here! It's honestly a total game changer for technical work because you can force the system to show you the exact page and line it pulled from every single time. Going the DIY route has some big perks.
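To make the page-grounding idea concrete, here's a minimal stdlib-only sketch of the core trick: every chunk of text is indexed with its (document, page) location, so any answer you build from a hit can be traced back to the exact page. This assumes you've already extracted per-page text from your PDFs with whatever library you prefer; the function names (`build_index`, `lookup`) and the toy documents are just illustrative, not any particular tool's API.

```python
from collections import defaultdict
import re

def build_index(docs):
    """docs maps doc_name -> list of page texts.
    Returns an inverted index: term -> set of (doc_name, page_number)."""
    index = defaultdict(set)
    for name, pages in docs.items():
        for page_no, text in enumerate(pages, start=1):
            # index each unique term on the page with its exact location
            for term in set(re.findall(r"[a-z0-9\-]+", text.lower())):
                index[term].add((name, page_no))
    return index

def lookup(index, query):
    """Return the (doc, page) locations containing every query term,
    so each claim can be verified at the source."""
    terms = re.findall(r"[a-z0-9\-]+", query.lower())
    if not terms:
        return set()
    hits = set(index.get(terms[0], set()))
    for t in terms[1:]:
        hits &= index.get(t, set())
    return hits

# toy example: two "papers" with per-page text (stand-ins for extracted PDFs)
docs = {
    "smith2023.pdf": ["long-term potentiation in CA1", "p = 0.03 for LTP induction"],
    "lee2024.pdf": ["synaptic scaling review", "homeostatic plasticity mechanisms"],
}
index = build_index(docs)
print(lookup(index, "LTP induction"))  # every hit carries its exact (doc, page)
```

Obviously a real setup would use embeddings rather than exact term matching, but the design point is the same: retrieval returns locations, not just text, so nothing reaches your draft without a page you can open and check.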