Dispelling the Myths of AI
Q&A with Charli AI's VP of Data Science, Elham Alipour
Just like “in the cloud” had its over-usage frenzy in the mid-2000s, “AI” has become the tech buzzword of the past several years. As the usage of the term has gone up, its clarity has gone down. Anyone building software right now had better find a way to massage AI into their marketing or risk looking antiquated.
At Charli, AI has been core to our architecture from day one; there are no bolt-on solutions here. Our Data Science team works in tandem with engineering, which is pretty unique and not something you’ll see at many other software companies. I wanted to help dispel some of the confusion around AI by sitting down with our VP of Data Science, Elham Alipour, to talk about what AI (actually) means today in the world of tech and to bust some of the more common myths you hear in the news.
KC: At its core, what is artificial intelligence (AI) anyway?
Elham: I think the biggest mistake people make when thinking about AI is thinking of it as a single “thing”. The truth is, AI is a complex and sophisticated orchestration of various algorithms and processes. It’s the sum of its various parts.
KC: Also, there’s a tendency to assume that we’re trying to replicate the human brain with AI. But that’s not it either because, to be frank, the human brain isn’t that good! 🤔🙃
At its core, AI is a series of processes that can learn, adapt, and apply their knowledge to new scenarios.
Elham: I would agree. At Charli, for instance, our AI is a collection of models and algorithms that line up with the capabilities we offer our users.
That means we’re applying different approaches at different stages of the game to enable the intelligent and coordinated processing of content and data all the way down the line.
KC: How are machine learning (ML) and AI related?
EA: This is often where people start to conflate the terminology. In essence, ML is one method that can be used when developing the complete AI process. Think of it as one type of railcar when you’re assembling a train.
In other words, AI is an umbrella term that includes various technologies such as machine learning, deep learning, transfer learning, computer vision, natural language processing (NLP), recommender systems, and more.
KC: At Charli, we use ML in the development of our AI – in addition to a lot of other methods that Elham just listed. We also have our own proprietary techniques.
Machine learning essentially allows machines to learn from data without being explicitly programmed.
It can become highly complex but is only one method used when developing AI.
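To make that concrete, here’s a toy illustration of “learning from data” in a few lines of plain Python. The data and labels are entirely hypothetical (not Charli’s actual models or content types); the point is simply that nobody hand-codes the classification rule — it’s inferred from labeled examples.

```python
# Toy ML illustration: a nearest-centroid classifier learns a rule
# from labeled examples instead of having it programmed directly.
# All data and labels here are made up for illustration.

def train_centroids(examples):
    """Compute the mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Nowhere below do we write a rule like "invoice if amount > X" --
# the decision boundary comes entirely from the examples.
training = [([120.0, 1.0], "invoice"), ([95.0, 1.0], "invoice"),
            ([3.0, 0.0], "memo"), ([5.0, 0.0], "memo")]
model = train_centroids(training)
print(predict(model, [100.0, 1.0]))  # -> invoice
```

Real systems replace the centroid math with far more sophisticated models, but the principle — rules inferred from data rather than written by hand — is the same.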
KC: I’ve heard people say you can get AI “off the shelf”. Is that true?
EA: If only it were that simple! Long story short: no, you can’t (and wouldn’t) buy AI off the shelf. You can buy ML algorithms off the shelf, but with AI it just wouldn’t work. If you want true automation built into your app or enterprise, AI requires extensive assembly, orchestration, and continuous learning.
KC: I think it’s also important to note that these off-the-shelf capabilities for ML have real limitations on their completeness and accuracy. The algorithms are trained to be “general purpose”, meaning they’re designed to serve a broad audience.
Because of that, it’s highly unlikely they will address the specific needs within your organization or app – especially when it comes to designing automation and AI.
KC: Along a similar line of questioning, can you bolt AI onto an app?
EA: You can definitely bolt an ML algorithm onto your app or product. But, as we discussed, ML is not AI.
For AI to be incorporated into your solution, you must design it from the ground up. You need to understand where AI is going to be applied, how you will collect the necessary data, how you will train the models, how you will orchestrate the process and how you will enable continuous learning.
KC: That’s right. In many cases, bolting on a solution has value. But it simply won’t address the need for automation and AI unless you take a long, hard look at the foundational elements of your solution and integrate other AI methods.
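One of the design questions Elham mentioned — enabling continuous learning — can be sketched in a few lines. This is a hypothetical illustration only, not Charli’s actual pipeline: predictions are served, user corrections accumulate as new training data, and the model is periodically retrained on that feedback.

```python
# Hypothetical continuous-learning loop (illustrative names only):
# serve predictions, collect user corrections, retrain periodically.

class ContinuousLearner:
    def __init__(self, model, retrain_fn, batch_size=2):
        self.model = model            # callable: features -> label
        self.retrain_fn = retrain_fn  # callable: feedback -> new model
        self.feedback = []            # (features, corrected_label) pairs
        self.batch_size = batch_size

    def predict(self, features):
        return self.model(features)

    def record_correction(self, features, label):
        """Store user feedback; retrain once enough has accumulated."""
        self.feedback.append((features, label))
        if len(self.feedback) >= self.batch_size:
            self.model = self.retrain_fn(self.feedback)
            self.feedback = []

def retrain(examples):
    # Trivial stand-in for "training": predict the most common label.
    labels = [label for _, label in examples]
    best = max(set(labels), key=labels.count)
    return lambda features: best

learner = ContinuousLearner(lambda f: "unknown", retrain, batch_size=2)
print(learner.predict([1.0]))             # -> unknown
learner.record_correction([1.0], "report")
learner.record_correction([2.0], "report")
print(learner.predict([3.0]))             # -> report
```

The retraining step here is deliberately trivial; the structural point is the feedback loop itself — it has to be designed in from the start, which is why it can’t be bolted on afterward.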
KC: There’s a lot of discussion about bias in AI and we’ve talked about it internally as well. Do you believe there is bias and if so how do we overcome it?
EA: You’re absolutely right: AI is inherently biased. Let me explain. AI is dependent on models, orchestration, and training data.
These are developed by people who are biased, and so that trickles into the AI. Can we overcome the bias?
That’s a bigger question. At Charli, we approach this in two ways: using the bias to our users’ advantage when it makes sense to do so and being very conscious about introducing diversity. Just like we value diversity in business, we value it in AI. We know it gives us a natural advantage, and so we see it not only as the right thing to do but also the best thing.
KC: Adding to that, something we’ve spent a lot of time grappling with is how diversity adds a layer of complication to the design and implementation of Charli.
We want Charli’s AI to understand each individual user, so their personal bias becomes an advantage. For instance, if a user has a tendency to treat a certain type of content in a certain way, we want Charli to notice that and start doing it automatically.
However, when it comes to scaling Charli’s AI, one person’s individual bias might not be the same as another's. In that case, biases limit Charli and so by injecting outside influence and coaching, we can ensure that Charli’s AI will work for all users. It comes down to designing a scalable approach that enables the transfer of learning but still allows for individualization.
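One way to picture that “shared learning plus individualization” design is a shared preference table that transfers to every user, with per-user overrides layered on top so one user’s habits never leak into another’s experience. This is a simplified, hypothetical sketch — the names and structure are illustrative, not Charli’s actual implementation.

```python
# Hypothetical sketch: global preferences transfer to all users;
# per-user overrides capture individual habits. Illustrative only.

class PreferenceModel:
    def __init__(self, global_prefs):
        self.global_prefs = dict(global_prefs)  # learned across users
        self.user_prefs = {}                    # user_id -> overrides

    def observe(self, user_id, content_type, action):
        """Record an individual habit for one user."""
        self.user_prefs.setdefault(user_id, {})[content_type] = action

    def suggest(self, user_id, content_type):
        """Prefer the user's own habit; fall back to the shared default."""
        personal = self.user_prefs.get(user_id, {})
        return personal.get(content_type,
                            self.global_prefs.get(content_type, "ask"))

model = PreferenceModel({"receipt": "file"})
model.observe("alice", "receipt", "forward_to_accounting")
print(model.suggest("alice", "receipt"))  # -> forward_to_accounting
print(model.suggest("bob", "receipt"))    # -> file
```

Alice’s habit shapes only Alice’s suggestions, while everyone benefits from the shared default — the transfer-of-learning-with-individualization idea in miniature.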
To learn more about Charli and try it today for free, please visit: https://www.charli.ai/
Tell us what you think by hitting the comment button below, and don’t forget to share if you enjoyed this newsletter.