
How do you implement custom tools as skills in LangChain?

Topic starter

I've been banging my head against the wall for about three hours now trying to get a simple custom tool to actually register as a skill in my LangChain agent. I followed the docs exactly, but I keep getting a weird Pydantic validation error, or the agent just ignores the tool entirely and hallucinates its own answer. Honestly, I'm fed up with how many breaking changes there are. I'm building a property lookup bot for a local real estate client and I need it done by Friday, but at this rate I'm going to miss the deadline. Does anyone actually know the right way to implement custom tools as skills so the agent actually uses them properly?



^ This. Also, I totally feel your pain with the Pydantic shift. I spent a whole weekend trying to get a similar real estate bot running on a DigitalOcean Droplet (4 GB RAM, 2 vCPUs) and it kept crashing on those same schema errors. In my experience, picking the wrong implementation method is exactly why the LLM starts ignoring your tool. Over the years I've tried plenty of ways to do this, and it usually comes down to three choices:

  • The @tool decorator: the simplest for sure. I used it for a quick prototype against the OpenAI GPT-4o-mini API and it worked okay, but it gets flaky once your inputs get complex. If your function's docstring is even slightly vague, the agent just goes rogue.
  • Subclassing BaseTool: my go-to for anything professional. You manually define the args_schema and the _run method. It's more typing, but it basically forces the agent to follow the rules; it stopped all my hallucination issues when I was pulling real-time pricing data for a client last year.
  • StructuredTool.from_function: the middle ground, where you wrap an existing function but still give it a strict Pydantic schema. If you're on a tight deadline, honestly just switch to the BaseTool approach. It's more reliable than the decorator when you're fighting weird validation errors; it saved me from missing a big launch date back in March.



tl;dr: use the @tool decorator and define a specific Pydantic schema for your tool arguments to stop the validation errors. Honestly, those Pydantic issues are usually caused by the shift from v1 to v2 in langchain core. To get your real estate bot working properly, define a clear Pydantic class for your inputs; that keeps the data types strict so the agent doesn't mess up the property lookup. I've also found that the OpenAI GPT-4o API gives much more reliable tool calling than older models because it follows schemas more strictly. Basically you just need:

  • a Pydantic class for the args
  • the @tool decorator
  • a descriptive docstring: the docstring is what the agent uses to decide when to trigger the tool. If your description is too vague, the LLM will just try to guess the answer, so be very explicit there. It's a solid approach for real-world usage.



To add to the point above: I'd be careful with decorators for real estate schemas. StructuredTool classes are probably the safer bet, and they've helped me keep things lean on a small box like a Vultr Cloud Compute High Performance 2GB instance.

