How did Rasa get started, and what kind of value proposition do you deliver today?
Three and a half years ago, I undertook a project with my co-founder and Rasa CTO Alan Nichol to build an AI assistant. In the process, we realized that the tools required to create great interactive conversational AI were not available to developers. We wanted to build the tools and infrastructure that we as developers would like to use—the result was Rasa. We open sourced our work, and it eventually grew to become the company we know today. We’ve seen tremendous growth over the years, with more than three million downloads and a large developer base. We have also been fortunate to work with many Fortune 500 companies along the way.
Early on, we decided to invest heavily in our in-house research. Alan holds a Ph.D. in machine learning and leads our research team, who publish their work in peer-reviewed academic journals. We’re able to get the latest research into the product very quickly, which allows Rasa users to benefit from the most cutting-edge NLU and dialogue management.
The decision to go open source has certainly been a factor in our growth. Unlike software teams that work in isolation, our collaboration with thousands of users means we’re able to innovate very quickly. For companies using Rasa to build AI assistants, open source also means that the full code is available to build upon and customize. This has the added benefit of transparency and explainability. If the model isn’t returning a result you expected, it’s possible to see why, unlike the “black box” problem developers sometimes face with other machine learning solutions.
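To make that transparency concrete, here is a minimal sketch of inspecting an NLU prediction in Python, assuming a Rasa 1.x-style API (rasa.nlu.model.Interpreter) and a hypothetical path to an unpacked trained model; the example message and intent names are illustrative only. The parse result exposes the predicted intent, its confidence, and the full intent ranking, which is the kind of visibility a closed black-box service typically hides.

from rasa.nlu.model import Interpreter

# Hypothetical path to an unpacked, trained NLU model directory.
interpreter = Interpreter.load("./models/nlu")

# Parse a user message and inspect why the model predicted what it did.
result = interpreter.parse("I want to transfer money to my savings account")

print(result["intent"])  # predicted intent with its confidence score
for candidate in result["intent_ranking"]:
    # every intent the model considered, ranked by confidence
    print(candidate["name"], candidate["confidence"])
print(result["entities"])  # extracted entities and which component found them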
What kinds of challenges are large companies facing in the chatbot space? How is Rasa addressing those issues?
A challenge many companies face as they adopt conversational AI is avoiding repeat work and protecting the investment they’ve already made. Many enterprises start with a simple use case like an FAQ assistant, or an internal application, like an IT help desk assistant. Later, as companies validate the technology, they want to scale up and create additional assistants for other lines of business or use cases.
Rasa’s models can be trained for any specialized domain or spoken language, so customers can scale up as they create additional assistants across their organization.
We also see companies looking for a conversational AI solution that won’t create a weak point in their security posture. For regulated industries like financial services or healthcare, SaaS isn’t an option. We deploy on-premises or in a private cloud, so messages sent between the user and the assistant never leave the customer’s infrastructure, allowing companies to fully control their data and intellectual property.
What kind of principles underlie your conversational AI framework? What features and functionalities does it provide to clients?
Our core product offering is Rasa Open Source, which is the open-source framework that handles natural language understanding (NLU) and dialogue management. Rasa Open Source stands out because the dialogue management engine and NLU use machine learning. This means you can train a Rasa assistant to recognize conversation patterns from training data. These patterns are used to predict the assistant’s response at each conversation turn. This allows a Rasa assistant to learn by example instead of relying on hard-coded rules to tell it what to do. On top of Rasa Open Source, we offer Rasa X, a free toolset for improving and managing a Rasa assistant, and Rasa Enterprise, a subscription to help large companies ship and scale AI assistants.
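As an illustration of how that prediction loop meets developer code, below is a minimal sketch of a custom action written with the Rasa SDK for Python (rasa_sdk); the action name, slot name, and reply text are hypothetical. At each turn the dialogue management model predicts which action to run next, and a custom action like this is where a team attaches its own business logic in ordinary Python.

from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ActionCheckOrderStatus(Action):
    # Hypothetical custom action; the dialogue model decides when to run it.

    def name(self) -> Text:
        # Must match the action name declared in the assistant's domain.
        return "action_check_order_status"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        # Slots are filled from the conversation by the NLU and dialogue layers.
        order_id = tracker.get_slot("order_id")  # hypothetical slot name
        dispatcher.utter_message(text=f"Order {order_id} is on its way.")
        return []  # events such as slot updates can be returned here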
There’s a lot of hype around AI assistants, but we believe in taking a more practical view. Our product principles guide us toward building powerful and flexible tools for developers, even if it means it’s not “easy.” Drag-and-drop solutions are great for teams who are experimenting or just getting started. But when you’re ready to build an assistant that’s mission-critical for the business, you need a tool that developers can customize and configure. Really good AI assistants aren’t built in a few hours.
At the same time, we want to lower the barrier to entry, so that developers who don’t specialize in machine learning (or data scientists who aren’t developers) can build AI assistants that perform well. We have an amazing community of developers who contribute to Rasa, and there are a lot of resources out there for developers who are just getting started. We also focus on building sensible default configurations within the product that work for the majority of users, so you don’t have to be an expert to get good results. If you’re a more advanced user, the machine learning algorithms are accessible for you to tune.
What differentiates your company from other solution providers in the market? What steers you ahead of the competition?
Rasa frequently appears on conversational AI vendor shortlists alongside some of the biggest names in tech: Google, Microsoft, Amazon, IBM. Part of the reason Rasa competes at this level is that conversational AI is all that we do, unlike providers who offer APIs for conversational AI as an add-on service to their cloud hosting.
I think the critical thing is that Rasa is an open-source solution. After download, developers are in full control of both their application code and their data, and they have the freedom to tweak the underlying machine learning to get the best performance for their data set and use case. Another thing is that we invest a lot of resources in machine learning research and have a rapid research-to-product cycle. Sometimes it only takes a few days for our research team to discover something that improves the efficiency of the model and ship it to Rasa Open Source. The third thing is the Rasa community. If you are a developer new to the conversational AI space, a supportive community provides resources you can leverage to shorten the learning curve. We are fortunate to have an amazing community worldwide that contributes knowledge and product improvements.
What’s the next big step for Rasa? Any product enhancements or expansion plans on the horizon?
Over the last year, we’ve seen explosive growth, with downloads increasing 6x to over three million. To meet that growth, our team is expanding too: we’ve hired people across the world to support our enterprise user base and open source community.
We also continue to focus heavily on product development. We’ve just released a new NLU architecture that outperforms the current state of the art. We’re also making it easier for teams to incorporate engineering best practices into the way they build AI assistants. DevOps for machine learning applications has been a hot topic, and we want those best practices to be the default in Rasa. We recently released Integrated Version Control in Rasa X, which allows teams to version-control training data and hook into downstream workflows like CI/CD.
We are doubling down on what works and what provides value for our users and bringing those enhancements into the product. So we are excited about what the future holds.