Once you know your assessment metrics, you must learn how to monitor them. If you have a complex graph with many nodes and LLM calls, it can be hard to track down errors when your agent isn’t behaving as you want it to. You have several tools to help with this, though.
Logging
Printing output messages in the notebook is the most direct way to gauge what’s going on in your AI agent workflow. So far in this module, you’ve been using print statements. However, you can also use the standard Python logging library for more granular control. This lets you set various logging levels or send output to a file instead of the console.
For example, you could set the log level to DEBUG and write the log output to a file named app.log.
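If you haven't configured logging before, a minimal setup looks something like this. It's only a sketch: the logger name and messages below are placeholders, not part of the course's sample project.

import logging

# Capture everything from DEBUG up and write it to a file
# instead of the notebook console.
logging.basicConfig(
    filename="app.log",
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)

logger = logging.getLogger("agent")
logger.debug("Entering the planning node")  # placeholder message
logger.info("LLM call finished")            # placeholder message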
Step-by-Step Execution
JupyterLab has a built-in debugger that lets you set breakpoints. You can certainly experiment with it. However, stepping through individual nodes can be more cumbersome when everything runs through LangGraph.
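If you'd rather stay in code, one alternative is Python's built-in breakpoint(), which drops you into pdb when the node runs. This is only a sketch: plan_node and its state dictionary are hypothetical, not part of the lesson's project.

def plan_node(state):
    # Execution pauses here; inspect `state` interactively in pdb,
    # then type `c` to continue running the graph.
    breakpoint()
    # ...the node's real work would go here...
    return {"messages": state["messages"]}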
Streaming Output
LangGraph natively supports streaming output. This allows you to see what’s happening at each step along the way. Rather than calling app.invoke, you call app.stream:
for output in app.stream(state, thread, stream_mode="values"):
    print(output)
A stream_mode of values, which is the default, means the app will stream the full state at each step. Another option is updates, which only streams the state changes. Finally, you also have debug, which will give you more than you ever wanted to know.
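For example, switching the mode to updates prints only what each node changed. This quick sketch reuses the same app, state and thread values from the snippet above:

for chunk in app.stream(state, thread, stream_mode="updates"):
    # Each chunk maps a node name to the state keys that node updated.
    print(chunk)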
Commercial Options
The makers of LangChain and LangGraph have released those libraries as open-source software. However, they also provide commercial products to help debug your AI agent app.
LangSmith
LangSmith lets you see information about the various nodes your graph traverses during execution in a nice visual format.
LangSmith
LangSmith isn't difficult to set up. You sign up, get an API key, and then add the key to your project variables just as you would with an OpenAI API key. After that, the code calls automatically appear in the LangSmith web dashboard. You'll get to try this out in the demo project later.
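Enabling it usually comes down to a couple of environment variables. The names below are the ones the LangSmith docs use at the time of writing; double-check them against the current docs, and treat the key and project name as placeholders.

import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"                 # turn tracing on
os.environ["LANGCHAIN_API_KEY"] = "your-langsmith-api-key"  # placeholder key
os.environ["LANGCHAIN_PROJECT"] = "agent-demo"              # optional: groups runs in the dashboard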
LangGraph Studio
LangGraph Studio is still in Beta and doesn’t support all platforms, so this lesson won’t cover it in depth. However, it looks like a promising way to interact with your graph more visually and intuitively. The following is a clip from one of their documentation images:
LangGraph Studio
User Feedback
Low-tech monitoring solutions are just as important as high-tech ones, or even more so. You should be collecting feedback from your users about where the pain points are with your agent:
How natural does the interaction feel in your language? Does it feel like a native application?
How well did the chatbot solve the problem for you? Could it do everything you wanted it to?
How fun is talking to the customer service agent? Can it handle nuance or ambiguity as a human being does?
As developers, it's easy to hide in a cave, but it's important to get human feedback about what you're building.