Once you know your assessment metrics, you must learn how to monitor them. If you have a complex graph with many nodes and LLM calls, it can be hard to track down errors when your agent isn’t behaving as you want it to. You have several tools to help with this, though.
Logging
Printing output messages in the notebook is the most direct way to gauge what’s going on in your AI agent workflow. So far in this module, you’ve been using print statements. However, you can also use the standard Python logging library for more granular control. This lets you set various logging levels or send output to a file instead of the console.
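For example, here's a minimal sketch of that kind of setup. The log file name, logger name, and example messages are just placeholders; use whatever fits your project:

import logging

# Send log output to a file instead of the console, and include a timestamp
# and severity level on each line.
logging.basicConfig(
    filename="agent_run.log",
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)

logger = logging.getLogger("agent")
logger.debug("Entering the planning node")
logger.info("LLM call finished")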
LangGraph natively supports streaming output. This allows you to see what’s happening at each step along the way. Rather than calling app.invoke, you call app.stream:
for output in app.stream(state, thread, stream_mode="values"):
    print(output)
A stream_mode of values, which is the default, means the app will stream the full state at each node. Another option is updates, which only streams the state changes. Finally, you also have debug, which will tell you more than you ever wanted to know.
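For instance, assuming the same app, state and thread objects as above, switching to updates mode looks like this sketch:

for update in app.stream(state, thread, stream_mode="updates"):
    # Each item maps a node name to only the state keys that node changed.
    print(update)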
Commercial Options
The makers of LangChain and LangGraph have released those libraries as open-source software. However, they also provide commercial products to help debug your AI agent app.
LangSmith
LangSmith lets you see information about the various nodes your graph traverses during execution in a nice visual format.
LangSmith isn't difficult to set up. You sign up, get an API key, and then add the key to your project similarly to how you dealt with your OpenAI API key. After that, the data automatically appears on the LangSmith web dashboard. You'll get to try this out in the demo project later.
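If you'd like a preview of what that setup looks like, here's a rough sketch that uses environment variables; the project name is made up, and you'd substitute your real LangSmith key:

import os

# Turn on tracing so LangChain/LangGraph send run data to LangSmith.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
# Hypothetical project name; runs appear under this project in the dashboard.
os.environ["LANGCHAIN_PROJECT"] = "agent-monitoring-demo"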
LangGraph Studio
LangGraph Studio is still in Beta and doesn’t support all platforms, so this lesson won’t cover it in depth. However, it looks like a promising way to interact with your graph more visually and intuitively. The following is a clip from one of their documentation images:
LangGraph Studio
User Feedback
Low-tech monitoring solutions are just as important or even more important than high-tech ones. You should be collecting feedback from your users about where the pain points are with your agent:
How natural does this explanation feel in your language? Does it feel like a native application?
How good was the content the program gave you? Could it do everything you wanted it to?
How did it respond to the customer service users? Can it handle growth in documents at a later point in time?
As developers, it's easy to live in a cave, but it's important to get human feedback about what you're building.