The Engineering Realities of AI Chatbot Development

Chatbots are everywhere these days. Since ChatGPT burst onto the scene in late 2022, companies like ours have been exploring how to leverage this exciting new technology. Our goal is to help businesses make better use of their internal data, and building a chatbot emerged as one of the most effective ways to give users a wide range of options. Today, we work with various large language models and can construct multifaceted AI chatbots using the agents approach within the LangChain framework.

Interacting via chatbot offers an intriguing new way for people to engage with applications. While some may see chatbots as a novel concept, they have experienced multiple waves of popularity. This recent surge likely won’t be the last. We’re just at the start of discovering the many ways conversational AI can enhance how we obtain information and accomplish tasks.

The History of Chatbots

The earliest examples of chatbots trace back to the 1960s, when Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory developed ELIZA. Using only simple pattern matching and word substitution, ELIZA simulated conversation by responding to typed input. Although limited by today’s standards, ELIZA represented an exciting early milestone in conversational AI.

A fresh wave of enthusiasm came in the mid-2010s, when chatbots emerged as a hot tech trend. Companies eagerly adopted them for customer service, marketing, and integration into platforms like Facebook Messenger. Chatbots promised smoother engagement and exciting new capabilities compared to previous technologies. Yet many users soon grew bored or frustrated with these early bots’ limited skills. Without powerful natural language processing, they struggled with tasks requiring memory, context understanding, and even basic conversational ability. After a few years, the hype faded as consumers realized the technology remained immature.

However, those periods of excitement around chatbots laid the foundations for the future. Researchers treated the failures as lessons and incrementally advanced the underlying tech. Chatbots have evolved tremendously thanks to those pioneering efforts. With today’s neural networks enabling deeper language processing, chatbots can finally hold meaningful, multi-turn conversations that go beyond simple pattern recognition. As the technology matures, chatbots gain the potential to transform how we humans interact with machines. But it took many steps over decades to reach this point. Each wave of progress built on previous innovations, moving closer to the goal of fluid, natural conversations.

The Rise of Modern Chatbots

The chatbot landscape looks much different nowadays. Big companies are eagerly integrating AI chatbots into products, from Bank of America and H&M to Pizza Hut, Uber, Lemonade, KLM, Starbucks, Intuit, Microsoft, Google, Facebook, and SoftBank. With the influx of new transformer-powered large language models, like OpenAI’s GPT, Google’s Bard, Anthropic’s Claude, and Meta’s LLaMA, both building and using chatbots feel smoother. But I don’t want to overstate things and say the process is now “easy” or “straightforward.”

While creating a robust chatbot experience that can handle many use cases is more enjoyable with modern AI, plenty of challenges remain. The user experience has improved, too, yet frustration points persist. As someone who has built chatbots, I can say it takes a lot of effort to craft something that truly connects with users and feels natural. The barriers to a great experience aren’t entirely gone, but the landscape looks brighter than in previous eras, with new opportunities always opening up.

The Engineering Behind Chatbots

While LLMs can provide a tolerable answer to most questions, it’s still your job as an engineer to gather and supply the proper context. Use agents and tools to avoid recreating the painful dialog trees of years past (check out LangChain’s docs if you’re curious). The agent can choose the right tool based on the conversation so far and each tool’s description. However, supporting various use cases brings nuance.
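
To make the tool-routing idea concrete, here is a minimal pure-Python sketch. It is not LangChain’s actual implementation: the tool names, descriptions, and the keyword-overlap heuristic are all invented for illustration.

```python
# Hypothetical sketch of agent-style tool routing: pick the tool whose
# description best overlaps the user's prompt. A real agent delegates
# this choice to the LLM, which reads the tool descriptions.

def make_tool(name, description, func):
    return {"name": name, "description": description, "func": func}

def pick_tool(tools, prompt):
    # Naive keyword overlap between prompt and description; an LLM-driven
    # agent makes a far more nuanced choice, but the principle is the same.
    words = set(prompt.lower().split())
    return max(tools, key=lambda t: len(words & set(t["description"].lower().split())))

tools = [
    make_tool("doc_search", "search documents for relevant text",
              lambda q: f"docs for: {q}"),
    make_tool("sql_numbers", "compute numbers with a sql query",
              lambda q: f"numbers for: {q}"),
]

tool = pick_tool(tools, "What were the sales numbers last quarter?")
print(tool["name"])  # the numeric tool wins on keyword overlap
```

The sketch also shows why well-written tool descriptions matter: they are the only signal the router (or the LLM) has for deciding where a prompt should go.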

Let’s say you need to work with a company’s data warehouse to answer questions using documents and numbers. You’d need at least two tools.

The first tool would search the database for relevant documents and send them, along with the prompt, to the LLM for an answer. Vector search isn’t a silver bullet; for many use cases, old-fashioned full-text search works better.
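
As a rough sketch of that first tool, here is a toy keyword search standing in for full-text search. The documents and the scoring are invented; a production system would use a proper full-text index or a vector store.

```python
# Toy document store and keyword-overlap retrieval (illustrative only).
# The top matches plus the user's prompt would then go to the LLM.

DOCS = {
    "refund-policy": "Refunds are issued within 14 days of purchase.",
    "shipping": "Standard shipping takes 3-5 business days.",
    "warranty": "All hardware ships with a one-year warranty.",
}

def search_docs(query, k=1):
    # Score each document by how many query words it shares, best first.
    words = set(query.lower().split())
    scored = sorted(
        DOCS.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

print(search_docs("how long does shipping take"))
```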

The second tool would handle numbers. It must pull the data from the database, perform the required calculations, and present the result to the user. To spice things up, you can generate a SQL query from the user’s prompt, execute it to get the correct numbers, and visualize the results as charts and graphs.
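
The second tool can be sketched with an in-memory SQLite table. The schema and figures here are invented, and the SQL is hard-coded where the real flow would have the LLM generate it from the user’s prompt.

```python
# Sketch of the "numbers" tool: execute a (normally LLM-generated) SQL
# query against the warehouse and return rows ready for charting.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EU", 120.0), ("EU", 80.0), ("US", 300.0)])

def run_numbers_query(sql):
    # In the real flow, `sql` comes from the LLM and should be validated
    # (read-only access, allow-listed tables) before execution.
    return conn.execute(sql).fetchall()

rows = run_numbers_query(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region")
print(rows)  # [('EU', 200.0), ('US', 300.0)]
```

Validating the generated SQL before running it is the important design point here: the model’s output is untrusted input to your database.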

The Complexity of Capabilities

The chatbot’s capabilities are built by its engineers. You can add more tools as needed: one could retrieve data from the web or request it from a third-party data provider, and another could process uploaded files for context on the fly. Each tool poses its own engineering challenge of preparing the correct data for the LLM. But the more tools and prompt types you enable, the harder it gets to maintain a quality user experience that feels like an “intelligence.” This is a typical engineering tradeoff: more features make it tougher to keep the whole system running smoothly.

The magic of AI is the result of engineering work by the software developers who build these products. For example, ChatGPT is a web and mobile chatbot that uses an LLM, GPT, under the hood. The chatbot keeps track of your questions and uses them as context for the LLM. It also has features like plugins, which gather and prepare data and submit it with your prompt.
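
That context-keeping behavior can be sketched in a few lines. The message format mirrors common chat APIs, but the model call itself is omitted, and the fixed turn limit is an invented stand-in for a real token budget.

```python
# Sketch of conversation memory: prior turns are stored and replayed
# as context with each new prompt, trimmed to fit the context window.

MAX_TURNS = 4  # stand-in for a token budget

history = []

def build_request(user_prompt):
    history.append({"role": "user", "content": user_prompt})
    # Only the newest MAX_TURNS messages are sent to the model.
    return history[-MAX_TURNS:]

build_request("What is our refund policy?")
history.append({"role": "assistant", "content": "Refunds within 14 days."})
request = build_request("And how long does shipping take?")
print(len(request))  # 3 messages: both questions plus the answer between them
```

This is the product-level work the LLM cannot do for you: the model sees only what each request contains, so follow-up questions feel coherent only because the application replays the history.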

The LLM itself is just a statistical function: give it input, get output. It’s not AI as most people imagine it, something that can think or act on its own. The magic of ChatGPT comes from the engineering work of building a product that successfully harnesses the LLM.

I personally love the chatbot as an interface to a complex system. Done well, it gives users a lot of power over a company’s data. That’s why applications exist, after all: users want to input, manipulate, and fetch data, and make sense of it. Many things that are now possible thanks to LLM-powered chatbots would require months, if not years, of engineering effort to build as a traditional application.

But there’s no magic in AI chatbots. They are the result of solving complex engineering problems. I’ve barely scratched the surface in this post, touching on a few of the highest-level challenges. The deeper you go, the more problems you find. But it’s an enjoyable and rewarding process, like most software development.
