Setting Up Your Company's Internal AI Chatbot
- paulepp
- Aug 2
- 4 min read
Below are some observations from my experience helping a large organization set up an LLM-based AI chatbot designed to give staff a more efficient way to access information. We learned that a successful rollout has three major components, all of which work together.
Setup: Keep it Simple.
The project took the organization’s policy library and an internal informational website about the organization and made both of them knowledge sources for an LLM-powered chatbot, which was offered to all staff through the organization’s cloud-based business productivity suite (a very common one that most of you will know). A couple of the organization’s technical staff had taken an interest in LLMs many months prior. They gave the chatbot a few other knowledge sources and configured greetings and other responses to guide how people used the system. That included creating paths to reach a human being (escalating certain issues) and forbidding certain types of queries that weren’t practical for the work environment. Then they published it through an app that every staff member used every day.
It was a great idea and a perfect way to get started. They chose an out-of-the-box chatbot configuration tool that lived in an environment whose security setup they already knew and understood. They combined the conversational interface of an LLM with knowledge bases that were very detailed and, in many cases, highly specific. And then they published in a place that was familiar and easy for staff to find. This was a big win and a solid foundation for everything that followed.
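For readers who want a concrete picture, here is a minimal sketch of the kind of configuration involved. The structure and names below are hypothetical - the actual tool was a low-code interface inside the productivity suite, not a Python file - but it shows the moving parts: knowledge sources, a greeting, an escalation path to a human, blocked query types, and a publishing channel.

```python
# Hypothetical sketch of the chatbot's configuration. The real setup used a
# low-code configuration tool inside a cloud productivity suite; this just
# makes the moving parts explicit.

BOT_CONFIG = {
    "knowledge_sources": [
        {"name": "policy_library", "type": "document_library"},   # assumed types
        {"name": "internal_info_site", "type": "internal_website"},
    ],
    "greeting": "Hi! I can answer questions about our policies and the "
                "organization. What would you like to know?",
    # Certain issues should always reach a human being.
    "escalation": {
        "trigger_topics": ["payroll dispute", "formal complaint"],
        "handoff_message": "This is best handled by a person. Connecting you now...",
    },
    # Query types that aren't practical for the work environment.
    "blocked_topics": ["medical advice", "legal advice", "personal finance"],
    # Published where every staff member already works each day.
    "publish_channel": "team_chat_app",
}

if __name__ == "__main__":
    print(BOT_CONFIG["greeting"])
```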
Adoption: “Is it just like Google (search) except better?”
This is the stage where I got involved. As soon as the service was released, I combined an experimentation regimen with a lot of research about chatbots and LLMs. I dove into the configuration tool and learned how it could use conversational flows, topics, actions, and triggers - all without having to write any code. I learned about hard-coding trigger phrases with classic orchestration and how to incorporate generative orchestration, which actually lets AI decide which topics, tools, and knowledge sources will be used to produce the output. I started to see how such an application could be used in all sorts of beneficial ways. It could reduce unnecessary questions to staff leadership about important yet highly procedural items, leaving more time for richer interactions and deeper work. It could give staff a more fun way to learn about the organization in which they worked, reducing burnout and frustration. It could boost practical AI literacy among staff interested in using AI to improve their work overall.
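To make the contrast concrete, here is a hedged sketch of the two orchestration styles. Everything in it is hypothetical - TRIGGERS, TOPICS, and the llm_choose stub are stand-ins, not the tool’s actual API - but the shape is the same: classic orchestration matches hard-coded trigger phrases, while generative orchestration lets a model choose among topics, tools, and knowledge sources.

```python
# Contrast between the two orchestration styles described above. This is a
# hypothetical sketch; `llm_choose` stands in for a real model call.

TOPICS = {
    "leave_policy": "Answers drawn from the policy library.",
    "org_overview": "Answers drawn from the internal informational site.",
    "escalate": "Hands the conversation to a human.",
}

# Classic orchestration: hard-coded trigger phrases mapped to topics.
TRIGGERS = {
    "vacation": "leave_policy",
    "sick day": "leave_policy",
    "who do I talk to": "escalate",
}

def classic_route(query: str) -> str:
    for phrase, topic in TRIGGERS.items():
        if phrase in query.lower():
            return topic
    return "escalate"  # fall back to a human when nothing matches

def llm_choose(query: str, options: list[str]) -> str:
    """Stand-in for a generative model picking the best topic.

    A real implementation would prompt an LLM with the query and the
    available topics, tools, and knowledge sources.
    """
    return options[0]  # placeholder decision

def generative_route(query: str) -> str:
    return llm_choose(query, list(TOPICS))

print(classic_route("How many vacation days do I get?"))   # -> leave_policy
print(generative_route("Explain our remote-work policy"))  # model-chosen topic
```

The practical difference is maintenance: trigger phrases must be enumerated and updated by hand, while generative orchestration absorbs new phrasings for free but gives you less direct control over routing.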
After working with the aforementioned technical staff to fix connections and adjust knowledge sources, the chatbot began working. It was returning outputs that saved time for those who used it. And we had a way to see how people were using it - what their queries were and what responses were given back. Even though it worked incredibly well for certain types of questions (policies and procedures especially), not many people were using it. That made it hard to see what people thought the tool actually did, and it made the tool hard to improve. I was also meeting weekly with a rotating group of staff at this time to provide general education about AI chatbots and applications of LLMs to various types of work.
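Even simple aggregation over that usage log was enough to answer both questions: what people asked, and how few people were asking. A small sketch, assuming a hypothetical log format:

```python
from collections import Counter

# Hypothetical query log; the real tool exposed queries and responses
# through its own analytics view.
log = [
    {"query": "How do I submit a travel claim?", "topic": "policies"},
    {"query": "What does the finance team do?", "topic": "org_overview"},
    {"query": "How do I submit a travel claim?", "topic": "policies"},
]

by_topic = Counter(entry["topic"] for entry in log)
print(by_topic.most_common())         # which topics actually get used
print(len(log), "queries this week")  # low totals were the adoption signal
```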
I quickly realized that the concept of a “research assistant that never grows tired” was a multi-faceted one that required time, explanation, and lots of practice for many people. As a result, I met with staff regularly and showed them different ways the chatbot could be used. I also encouraged them to update the knowledge sources being used as source material, because this would let them use the chatbot in the ways they wanted - to showcase the type of work they did or to help answer common questions they received from other departments. It became clear that this was a natural part of the process of AI adoption, and that the use of these tools was becoming part of a knowledge ecosystem that would develop over time.
Maintenance: Chatbots are part of ecosystems that grow.
Ask any AI engineer how they build AI models (if you happen to spot one on a street corner) and they will tell you that the models are “grown” rather than “built,” a nod to the self-supervised learning at the heart of LLM pre-training and the tuning phases that follow. But the metaphor also extends to the ecosystems that are created when knowledge sources are connected to the type of chatbot we implemented for this organization. Just as the frontier AI models are trained on the corpus of information that is the internet, a chatbot like this one draws its answers from the knowledge sources you give it - it is grounded in them rather than retrained on them. And those knowledge sources change over time, developing in ways we can’t always predict. You might find, for example, that a chatbot designed to educate staff about certain policies ends up helping staff improve their own performance, because they can cross-reference their responsibilities with certain policies or procedures.
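To make that grounding point concrete, here is a toy sketch of retrieval-based answering. Keyword overlap stands in for the embedding search a real system would use; the takeaway is that updating a document changes the chatbot’s answers immediately, with no retraining.

```python
# Toy retrieval-grounded answering. A real system would use embeddings and a
# vector index; keyword overlap stands in for that here.

KNOWLEDGE = {
    "leave_policy.md": "Staff accrue 1.5 vacation days per month of service.",
    "expenses.md": "Travel claims must be submitted within 30 days.",
}

def retrieve(query: str) -> str:
    """Return the document sharing the most words with the query."""
    words = set(query.lower().split())
    return max(KNOWLEDGE.values(),
               key=lambda doc: len(words & set(doc.lower().split())))

def answer(query: str) -> str:
    context = retrieve(query)
    # A real chatbot would pass `context` to an LLM; this just shows the
    # grounding step that makes updated documents immediately visible.
    return f"Based on our docs: {context}"

print(answer("How many vacation days do staff accrue?"))
```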
As you find new uses for your chatbot, you can give your staff the ability to explore and expand the knowledge sources, empowering them to come up with new ways to create value for your organization through improved access to knowledge. This decentralized approach not only helps ensure that the information your chatbot references is useful for your staff (since they are the ones helping to create it), but it also helps them gain a hands-on understanding of how these tools can be used from day to day.
If you want to create a culture that embraces AI at your company, get started now. Keep it simple, but do something. Chatbots can be an incredibly valuable tool, and they are still under-utilized because they can only be understood through experience and practice. As your company’s information ecosystem grows, so will the knowledge of your workforce - and your people will be your greatest asset in the flywheel of innovation that starts the moment you begin.