Designing an AI Assistant Builder
Rasa is a conversational AI platform that offers a powerful framework for teams to build assistants. Trusted by enterprises like Telekom, Autodesk, Adobe, and Accenture, Rasa initially focused on technical teams building assistants with code. However, as competitors began offering UI tools that enabled non-technical users to build assistants, we started losing potential customers. To expand our customer base and make Rasa accessible to a broader audience—including conversational designers, business analysts, and content editors—we needed to introduce a UI product to support the bot-building experience.
I was the lead designer for Flow Builder, the core feature of Rasa Studio — a completely new product built to empower non-technical users to create conversational assistants without writing code. Following a major company-wide layoff and strategic shift, Rasa made the bold decision to sunset its previous UI product, which focused on optimizing assistants rather than building them, and to build a new platform from scratch. I was actively involved in this reboot from its earliest stages. This wasn’t just about designing a feature — it was about shaping the foundation of an entirely new product. I collaborated closely with a product manager, engineers, and machine learning researchers throughout the discovery and design phases. My responsibilities spanned the full design process: conducting UX research, defining user flows, wireframing, prototyping, and delivering high-fidelity UI designs.
RasaMainImage.png
What’s hard, for whom, and why: understanding assistant-building experiences in Rasa
To gain deeper insights into user needs and pain points when building assistants with Rasa—and to identify which problems to prioritize—we conducted several UX interviews with both technical users (engineers and data scientists) and less-technical users (conversational designers and content editors). We also wanted to understand which tasks should be handled by whom, and how Rasa could better support users throughout the assistant-building journey.
Interviews.png
We interviewed many customers and analyzed the findings in Dovetail. 
Key learning 1: some of Rasa’s concepts are too complex for less-technical users to understand
In other platforms, building AI assistants feels like creating a simple flowchart — a visual and intuitive process of defining "if A, then do B." But Rasa relied on a primitive called Stories. Stories are sequences of interactions that train a machine learning model. Unlike rule-based systems, stories allow the assistant to learn from data, making behavior more flexible but less predictable. Stories predict the next best action based on conversation context and history — not a fixed decision tree. This requires a solid understanding of machine learning concepts, not just basic logic. To train a high-quality assistant, users also need to craft a diverse set of well-designed stories that don’t overlap excessively, ensuring enough variation to capture different scenarios while maintaining balance in the training data — a key requirement rooted in ML best practices. Rasa also offered a primitive called Rules for simpler, deterministic logic, but users often found it difficult to know when to use Stories vs. Rules — or how they worked together.
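To make the distinction concrete, here is roughly what the two primitives look like in Rasa's YAML training-data format (a minimal sketch; the intent and response names are hypothetical):

```yaml
version: "3.1"

# A rule: deterministic logic that always runs the same way.
rules:
- rule: Always greet the user back
  steps:
  - intent: greet          # when the user greets...
  - action: utter_greet    # ...always send this response

# A story: one training example that an ML policy generalizes from.
# The assistant can deviate from it depending on conversation context.
stories:
- story: happy path for checking a balance
  steps:
  - intent: check_balance
  - action: utter_account_balance
```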
Key learning 2: building an assistant with Rasa is time-consuming
Writing stories takes a lot of time: when there are many possible scenarios, users have to copy and repeat the same information over and over. It’s also hard to keep these stories up to date—if something changes, users have to go back and fix every story that includes that part. As a result, building and managing stories manually becomes really difficult as the assistant grows. The image below illustrates this challenge.
Stories.png
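In YAML terms, the repetition looks roughly like this: two stories whose opening steps are identical and must be copied verbatim (a sketch with hypothetical names):

```yaml
stories:
- story: transfer money, then check balance
  steps:
  - intent: transfer_money
  - action: utter_ask_recipient
  - intent: inform_recipient
  - action: utter_ask_amount
  - intent: inform_amount
  - action: utter_confirm_transfer
  - intent: check_balance           # only the ending differs
  - action: utter_account_balance

- story: transfer money, then say goodbye
  steps:
  - intent: transfer_money          # the six steps above, copied again;
  - action: utter_ask_recipient     # any change to them has to be
  - intent: inform_recipient        # repeated in every story that
  - action: utter_ask_amount        # contains them
  - intent: inform_amount
  - action: utter_confirm_transfer
  - intent: goodbye
  - action: utter_goodbye
```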
Key learning 3: content writers need a better content management system
As the assistant grows, so does the content. Many of the bot’s responses and elements are reused across different parts of the assistant, making them difficult to track and manage. Content writers and editors need an easy way to map out and understand how each piece of content is used, so that when they make a change, the meaning and context still work correctly in every place the content is reused.
Sharing insights with the team
I documented these insights and created a customer journey map for less-technical users to highlight where they struggled most. Meanwhile, our UX researcher developed a team workflow map to understand how different roles within cross-functional teams collaborated and where responsibilities could be clarified or supported.
DocumentInsights.png
I documented the insights from the study and shared them with the team.
Screenshot 2025-05-28 at 15.55.01.png
The customer journey map I created for the conversational designer role.
Ideate solutions with the team
Based on insights from user interviews, I created several “How might we” questions to guide our design process. I shared these questions with the product manager to align on which ones to prioritize, and I facilitated a workshop with the PM, engineers, and CTO to sketch potential solutions for the problems defined in the "How might we" prompts.
Following the workshop, I created early prototypes based on the ideas generated and used them to gather feedback from customers. 

Some of the How Might We questions we prioritized were:
  • How might we help the team reduce the time to build stories and rules?  
  • How might we help the team reduce the time to update stories and rules?
  • How might we enable UX Leads (non-technical users) to build stories without requiring any machine learning knowledge?
Screenshot 2025-05-28 at 15.53.55.png
Design ideation sessions with the PM, engineers, and CTO
Solution Proposal 1: Don't worry about implementation, first focus on defining conversations between the user and the assistant
From our research, we learned that conversation designers almost always begin by creating a “Sample Script” — a dialogue between the assistant and the user. Ultimately, their goal is to craft seamless, natural conversations, and starting with a sample script helps achieve that by avoiding distractions like branching logic or backend implementation details.
Inspired by this, I designed a solution that allows less-technical users to first define the conversation between the assistant and the end-user, without needing to worry about implementation complexities such as branching logic or backend structure.
Additionally, I introduced a flowchart view so users can visually see the end-user journey in context.
DefineConversations.png
In this UI, users can enter what the end-user or the bot says, alongside a flowchart where the current path or branch being focused on is highlighted.
Once the conversation is defined, users can either manually specify which parts should be stories or rules, or let the system automatically determine this and generate additional stories to enhance the diversity and quantity of the training dataset. The generated code is then presented for users to review and understand. This approach addresses common user challenges, including knowing what should be implemented as stories versus rules and ensuring there is enough varied training data for the assistant to perform well.
GenerateStories.png
In this UI, users can allow the system to generate a variety of stories, ensuring there is enough training data for the assistant to perform well.
While Rasa supports several advanced concepts beyond stories and rules, we chose to focus the first version of this feature on covering just the core conversational logic. More advanced configurations — such as storing user-provided information (e.g., capturing and saving a user’s name for later use), or adding logic to check user status (e.g., “if the user is logged out, do A; if logged in, do B”) — could still be handled by engineers outside of Studio. That’s why I added a feature allowing users to insert notes within the conversation to specify what additional work needs to be done.

This approach minimized the engineering effort required for the initial release while still empowering non-technical users to shape the assistant’s behavior without needing a deep understanding of Rasa concepts like stories or rules. The idea was that approximately 80% of the assistant could be built and maintained within Studio, while the remaining 20% would be managed by engineers.
Notes.png
In this UI, users can add a note to specify what needs to be implemented by engineers.
Solution Proposal 2: Update in one place, update all stories
I also proposed a solution that allows users to diverge conversations by adding tabs or paths. This enables them to update content in a single place, rather than having to modify every story that contains the same information—a major pain point expressed by users who were frustrated with repeatedly copying and pasting content, and later having to go back and fix each story individually.
Tabs.png
In this UI, users can click on ‘Add path’ to create another scenario or story. Updating any content above this path will automatically update all the stories.
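One plausible way to back this "update once" behavior with Rasa's existing format is story checkpoints, which let several stories share a single prefix; a sketch under that assumption, with hypothetical names:

```yaml
stories:
- story: shared prefix, written once
  steps:
  - intent: transfer_money
  - action: utter_ask_recipient
  - checkpoint: recipient_asked       # the branch point ("Add path" in the UI)

- story: path 1, user names a recipient
  steps:
  - checkpoint: recipient_asked       # continues from the shared prefix
  - intent: inform_recipient
  - action: utter_ask_amount

- story: path 2, user cancels
  steps:
  - checkpoint: recipient_asked
  - intent: cancel
  - action: utter_transfer_cancelled
```

Editing the shared prefix then updates every path that branches from it.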
Solution Proposal 3: A CMS for the assistant's content
I added a CMS for the assistant’s content to make it easier for content writers and copywriters to search and quickly see where specific responses are used when making updates. This feature addresses a common pain point expressed by content editors, who often struggle to understand where a bot response is reused and how a change might impact other parts of the assistant.
CMS.png
In this UI, users can search for, view, and edit the assistant’s messages. They can also see which “skills” (the capabilities defined by users when building each part of the assistant) the message is used in.
Collecting feedback on the solutions
There were many learnings from the feedback sessions, but here are the most important takeaways:
 
  • Users wanted to build almost everything directly within the UI, including technical configuration tasks. The initial idea of handling 80% in the UI and 20% in the CLI wasn’t sufficient to drive adoption among most users.
  • Non-technical users also requested a drag-and-drop-style bot builder with a flowchart interface, enabling them to directly define all branching logic and handle backend configurations without writing code.
However, representing all implementation details as flowcharts is challenging with Rasa because flowcharts are designed to show fixed, step-by-step paths—like “If A happens, then always do B, then C.” In contrast, Rasa uses machine learning to manage conversations, which means the path the assistant takes is not fixed, but based on probabilities and context. As mentioned earlier, users can—and should—create many different training examples that reflect how real conversations can vary in order. For example, one user might start by saying, “I want to transfer money to Karin,” while another might say, “I want to transfer money,” prompting the assistant to then ask, “Who do you want to send it to?” Each story captures one of the many ways a conversation leading to the same goal might unfold. Because of this variability, it’s not possible to show all the different scenarios in a single, simple flowchart.
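Written out, the two openings from the example above become two separate stories, each capturing one valid path toward the same goal (hypothetical names):

```yaml
stories:
- story: user names the recipient up front
  steps:
  - intent: transfer_money
    entities:
    - recipient: "Karin"
  - action: utter_ask_amount        # recipient is already known

- story: user gives no details at first
  steps:
  - intent: transfer_money
  - action: utter_ask_recipient     # the assistant has to ask
  - intent: inform_recipient
  - action: utter_ask_amount
```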

This makes it difficult to create a UI that accurately represents or builds these interactions without losing the nuance of how ML models work. This was a complex concept that non-technical users struggled to grasp.

As we spoke with more customers, we found that many business users—especially in regulated industries like banking and insurance—prioritized assistants behaving exactly as defined, even at the cost of conversational flexibility. While stories offer adaptability, these users valued predictability and control over naturalness.

At this point, it became clear that this wasn’t a problem we could solve with design alone. To truly make the assistant-building experience accessible to a wider range of users, we had to simplify the complexity at the framework level. 
Designing a simpler, more visual framework
To support a wider range of teams and use cases, we set out to redefine how assistants could be built—without sacrificing the power and flexibility Rasa is known for. A product manager led this effort, working closely with me and customer success engineers to identify common assistant use cases across the key industries we were targeting.
Screenshot 2025-05-28 at 17.43.20.png
Some of the common assistant use cases
Once these patterns were defined, engineers started exploring a new, simpler, yet powerful Rasa framework. I collaborated closely with them to design how new backend primitives would be surfaced in the UI—using a more intuitive, flowchart-style format that non-technical users had been asking for. This was a true product-engineering partnership, where the system was shaped collaboratively from both ends.

In the new UI, we enabled users to specify every detail of the implementation and build an assistant without ever leaving the tool.
FinalDesign.png
The canvas view allows users to define all the business logic needed to handle a specific topic or request from the assistant’s end users.
With the new framework, we empowered users to:
  • Create business logic to ensure the assistant behaves consistently and predictably—especially important for regulated industries.
  • Leverage pre-built patterns that come with Rasa to handle the unexpected scenarios Stories were good at, such as digressions, unsupported requests, or multi-intent messages.
FlowList.png
List of 'Flows' that describe the logical steps the AI assistant uses to complete a task. Custom flows are those users create themselves, while system flows are the pre-built patterns that come with Rasa to handle unexpected scenarios.
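To make this concrete, here is a minimal sketch of what a flow might look like in the new framework's YAML format, reusing the money-transfer example from earlier (the slot and action names are hypothetical):

```yaml
flows:
  transfer_money:
    description: Send money to a recipient the user names.
    steps:
      - collect: recipient              # ask for and store the recipient
      - collect: amount                 # ask for and store the amount
      - action: execute_transfer        # custom action that calls the backend
      - action: utter_transfer_complete
```

Unlike a story, the flow is the business logic itself: the steps run predictably in order, while the pre-built system flows catch digressions and other deviations.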
Abstract Rasa's concepts
Enabling non-technical users to build nearly every part of an AI assistant required a solution that made it easy for them to understand each component and how to use it—without spending hours reading documentation to learn Rasa’s concepts. It was important to abstract these concepts as much as possible so that non-technical users could grasp them quickly.

For example, Rasa includes a concept called a “Slot,” which stores information provided by users so the assistant can use it to complete tasks. If the assistant asks how much money a user wants to transfer, the slot named “transaction amount” captures that value and sends it to the backend system for processing. In the UI, rather than exposing this concept as “Slot,” I chose to simplify it by labeling it “Collect information.” Adding this step to the flowchart allows users to specify what kind of information they want to collect and what message should be shown to the user to gather it.
Collect.png
The Collect Information step allows users to define which information (or slot) to collect and specify the message the assistant should use to prompt the end user for that information.
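Behind the "Collect information" step, the underlying primitives are still there: a slot that stores the value and a response the assistant uses to ask for it. A rough sketch of what the UI could write on the user's behalf (hypothetical names; details such as slot mappings omitted):

```yaml
slots:
  transaction_amount:
    type: float                        # the value being stored

responses:
  utter_ask_transaction_amount:        # the prompt shown to the end user
  - text: "How much would you like to transfer?"

flows:
  transfer_money:
    description: Transfer money to another account.
    steps:
      - collect: transaction_amount    # "Collect information" in the UI
```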
Flowchart UI designed for easy scalability
Customer-designed flowcharts were often messy and hard to follow. This was a significant pain point for engineering teams who had to collaborate on assistant development using these complex diagrams. As flows grew larger, they became increasingly difficult to manage.
flowchart.png
This flowchart was created by a customer. It has been intentionally blurred to protect confidential information, but it still conveys how complex such diagrams can become—and how challenging they can be to interpret visually.
To solve this, I introduced an auto-layout system that keeps the flowchart clean and well-organized, regardless of how much content is added.
I also chose a vertical layout, aligned with the natural flow of chat UIs, allowing users to easily follow both the chat and the flowchart in parallel.
autolayout.png
Clicking on one of the '+' icons in the flowchart allows users to select a step or node type to add. Once added, the flowchart automatically updates its layout, enabling users to modify the flow without manually rearranging other nodes.
Impact of LLM
While we were working on the new builder, ChatGPT launched—dramatically shifting expectations around what AI assistants could do. To better understand our customers’ evolving needs, I sent out surveys to gather insights on how they imagined using large language models (LLMs) in their assistant-building workflows.
Screenshot 2025-05-28 at 23.55_edited.jp
Survey Results: The survey was sent to 21 participants.
For our first iteration, we chose to implement a generative feature that creates a flow based on the description provided by the user. This was a popular request from both survey participants and internal stakeholders. It served as a natural starting point—helping users accelerate the building process while aligning with our limited team capacity and tight development timeline. This approach allowed us to deliver value quickly and laid the foundation for more advanced LLM-driven features in the future.
GenAI.png
Users can specify the information they want to collect through the flow. Based on this input, Rasa will generate a tailored flow, which users can then review, revise, and edit as needed.
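As an entirely hypothetical illustration, a description like "help users report a lost card; collect the card's last four digits" might yield a generated flow along these lines, ready for the user to review and edit:

```yaml
flows:
  report_lost_card:
    description: Help users report a lost card and block it.
    steps:
      - collect: card_last_four_digits   # generated from the user's input
      - action: block_card               # backend call, to be wired up later
      - action: utter_card_blocked
```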
Validating the design
I conducted internal usability tests with team members who had not been involved in developing the new Rasa concept. Additionally, I shared the designs with select customers to gather feedback. The overall response to the new flow builder feature was very positive, leading us to move forward with the proposal. However, the generative feature for creating flows—while sounding promising—received less enthusiasm. Many customers have complex, highly customized flows tailored to their specific business logic. As a result, they felt that automatic flow generation, although helpful, wouldn’t significantly reduce their workload. Given our limited engineering capacity and tight timelines, we strategically decided to prioritize the manual flow builder, which addressed a clearer and more immediate need. The generative capability remained on our roadmap as a future enhancement once the manual builder is fully established.
Selecting a library for the flowchart canvas and beginning implementation
As engineers evaluated several libraries to implement the flowchart, I contributed by defining a set of selection criteria—both for immediate needs and future scalability. Based on this, we selected a library that would not only support the initial implementation but also scale with our long-term vision for the product. We then started implementing the designs.
🤰🏻 Taking a pause for baby, then back to Rasa
While I was excited to stay involved throughout the implementation and QA phases—as I typically am—I was also preparing for an exciting personal milestone: the arrival of my baby boy. I went on maternity leave shortly thereafter.

When I returned, the flow builder feature had been fully implemented, and we had successfully onboarded 9 additional customers using the new system. I’m now part of the Platform Team, where I focus on ensuring that our two core products—Rasa Pro (CLI/backend) and Rasa Studio (UI)—work together seamlessly, delivering a unified and frictionless experience for our users.
