LangGraph with Streamlit Intersection
Building intelligent applications that offer meaningful user interactions can be challenging in the evolving field of AI and Large Language Models (LLMs). LangGraph is a framework for managing multi-agent workflows through a graph-based architecture, enabling complex AI-driven task orchestration. Representing agents as nodes in a graph facilitates dynamic, conditional interactions between specialized agents and integrates features like tool calls, memory, and structured outputs. This makes it ideal for building intelligent, multi-agent applications that require advanced coordination and context-awareness. However, the complexity of these models often calls for an intuitive interface that lets users interact with them.
This is where Streamlit comes in. As an open-source framework designed to simplify the creation of interactive web applications, Streamlit allows developers to easily build interfaces that display real-time data and AI-driven insights. When combined with LangGraph, Streamlit provides a seamless way to make sophisticated NLP models accessible and engaging. Together, these tools empower developers to create powerful, user-friendly applications that bridge complex AI models with intuitive, interactive experiences.
There are three overarching paradigms within LangGraph that I will go over across the two intersections — callback handlers, asynchronous stream events, and dynamic interrupts. But first, we have to understand Streamlit and its mechanisms. There are demos throughout the article and an open-source code repo at the bottom :D.
Streamlit Rerun Mechanism
Streamlit’s rerun mechanism is central to maintaining the interactivity of its applications. Each time a user interacts with the app — whether by changing a widget value (like a slider or button), uploading a file, or adjusting parameters — Streamlit automatically triggers a rerun of the entire script. This design ensures the app remains responsive and dynamic by continuously updating the interface in response to user input.
Streamlit adopts a “declarative” approach: you describe what the UI should look like based on the current state, rather than how to update it step by step. As a result, every time an interaction happens, the script starts from the top and re-executes all of its code. Streamlit keeps track of widget states, so even though the script reruns from scratch, the app’s interface maintains continuity. For example, if a user sends a new message, the app re-renders based on the new input while preserving the rest of the widget settings. This keeps the development experience simple: developers can manage state explicitly for persistence if they want to, but it is not required. The rerun mechanism ensures that changes in inputs are immediately reflected in the app’s output (much like a ReactJS re-render), making it ideal for real-time data visualization and interactive AI applications.
Streamlit In-Order Rendering Mechanism
Streamlit’s in-order rendering mechanism is a key feature that contributes to its simplicity and intuitive user experience. In Streamlit, the layout and output of the application are determined by the order in which the functions and commands are written in the script. This means that as the script executes from top to bottom, each component — whether it’s a text output, chart, or widget — renders in the sequence it appears in the code. This linear structure allows developers to easily control the flow of the application’s interface, as they can visually and logically arrange elements to match the desired user experience. For example, if a developer wants to display a title followed by a status bar and then a chat message, they simply write those commands in that order. As the script reruns with each interaction, Streamlit retains this order, ensuring that the interface remains consistent and predictable.
While Streamlit’s in-order rendering mechanism simplifies app development by maintaining a straightforward, top-to-bottom execution flow, it can pose challenges when you need to handle dynamic or conditional content that should be rendered later in the script. Since Streamlit executes the entire script sequentially, it becomes difficult to insert future UI components that depend on interactions or data not yet available at the time of execution. For example, a tool call acknowledgment for the most recent message can only be displayed after the user sends a message that triggers the LLM to make that tool call. The rigid in-order rendering can make it tricky to conditionally place these elements.
To work around this limitation, Streamlit provides tools like `st.empty()` and `st.container()`, which act as placeholders that can be updated later in the script. These placeholders allow developers to "reserve" space for UI components in advance and then conditionally render or modify them during reruns based on user input or external data. For instance, you can initialize an empty slot with `st.empty()` at the top of the script and later fill it with text once the necessary conditions are met. This approach lets you update the interface dynamically without disrupting the sequential flow of the app, making it a powerful method for handling conditional rendering in Streamlit applications. We will be using this "trick" a lot throughout the intersections; it helped me to think in terms of the CSS Flexible Box Layout when aligning Streamlit components to my expectations.
Now let's dive into LangGraph and see how we work through both of these mechanisms.
The demos below all use the same example prompt:

> what is the weather like in the coolest cities and Asian food in those cities?
Callback Handlers in LangGraph
Using LangChain, we can tap into a core callback system that allows us to subscribe to events broadcast as the graph passes through its nodes. There are many events, but for these examples we are most interested in `on_llm_new_token` and `on_tool_start`. We use the `BaseCallbackHandler` mixin, which lets us selectively implement only the specific methods we want to react to. Do check out the code that follows each example; I added comments throughout explaining exactly what's going on.
Simple Streaming — no tools
Tool calling and Streaming using Streamlit’s official CallbackHandler
Tool calling and Streaming via CustomCallbackHandler
Asynchronous Stream Events in LangGraph
Using LangChain’s asynchronous streaming events, we can achieve the same visual result as callbacks through another paradigm. However, I will note that `astream_events` exposes slightly fewer events than the callback system does.
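A sketch of the streaming loop via `astream_events` (v2), assuming `graph` is a compiled LangGraph and `slot` is something like an `st.empty()` placeholder; the function name is illustrative:

```python
import asyncio

async def stream_graph_to_slot(graph, inputs, slot):
    """Consume astream_events and re-render `slot` as tokens arrive."""
    text = ""
    async for event in graph.astream_events(inputs, version="v2"):
        kind = event["event"]
        if kind == "on_chat_model_stream":
            # each event carries one streamed chunk from the model
            text += event["data"]["chunk"].content or ""
            slot.markdown(text)
        elif kind == "on_tool_start":
            slot.markdown(f"Calling tool: {event['name']}...")
    return text
```

The loop would typically run via `asyncio.run(...)` from the Streamlit script, updating the placeholder in place on each event.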
LangGraph’s Dynamic Interrupts
Dynamic interrupts are a LangGraph concept where you can conditionally pause a graph based on a condition you set against the graph's state. You update the state, then resume the run, and the graph re-checks the condition before continuing as normal. This is dynamic because the graph will not always stop at a specific node: whether it pauses depends on the state, and if the condition passes, execution continues through the graph. Let us see it in action.
Let’s Put it all Together
To bring it all together: we have seen several distinct approaches for mixing two amazing open-source libraries to achieve seamless UI/UX interactions between LangGraph and Streamlit. Each example I've gone over is in the GitHub repo below, with comments throughout the codebase describing the potentially unclear implementations and giving more information on the external reasoning.
Thanks for the read!