Analysts reported this week that ChatGPT had reached 100 million users just two months after its launch.
According to data analysis firm Similarweb, ChatGPT recorded a total of 590 million visits from 100 million unique users in January, growth described as “unprecedented for a consumer app” by investment bank UBS.
In the same week, Google’s unveiling of its rival to ChatGPT, known as Bard, caused Alphabet’s market value to plummet by more than $100bn after promotional material showed the chatbot giving an incorrect answer to a question. When experts pointed out the error, Google was quick to respond, saying the chatbot still required “rigorous testing” before it was ready for public release.
The mistake by Google highlights the fact that the tech giant appears to be gradually falling behind Microsoft, a key backer of OpenAI’s ChatGPT. Microsoft has also announced the launch of a version of its Bing search engine powered by the popular chatbot’s technology.
What is the science behind Bard and ChatGPT?
Both chatbots are based on large language models, a type of neural network inspired by the cell structures found in the brains and nervous systems of animals. These models are trained on huge datasets taken from the internet to create plausible-sounding text responses to questions. Since its release, ChatGPT has continued to amaze users with its ability to write job applications, complete essays and even compose poetry.
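The core idea, stripped of the neural network, is simply predicting the likeliest next word from patterns in training text. The toy Python sketch below illustrates this with a word-frequency count over a made-up corpus; real large language models use billions of learned parameters rather than raw counts, so this is only an illustrative analogy, not how ChatGPT or Bard is actually implemented.

```python
from collections import Counter, defaultdict

# A made-up miniature "training set" (real models use vast swathes
# of internet text).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def likeliest_next(word):
    """Return the most frequent continuation seen in the corpus."""
    return following[word].most_common(1)[0][0]

print(likeliest_next("the"))  # "cat" follows "the" most often here
```

The sketch also shows why such systems can be confidently wrong: the model returns whatever continuation was most common in its data, with no notion of whether that continuation is true.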
What caused the Bard error?
Experts have noted that these datasets can contain errors, which the chatbot then repeats. AI models are built on huge, open-source datasets that include flaws: the sources carry biases and inaccuracies that the model inherits, a problem that is yet to be resolved.
Michael Wooldridge, professor of computer science at the University of Oxford, said: “The networks don’t have any concept of what is ‘true’ or ‘false’. They simply produce the likeliest text they can in response to the questions or prompts they are given. As a consequence, large language models often get things wrong.”
ChatGPT users have also encountered incorrect, biased, and offensive responses, with experts quick to explain that a chatbot will inevitably reflect biases in the huge amount of text it is provided. Wooldridge added: “Any biases contained in that text will inevitably be reflected in the program itself, and this represents a huge ongoing challenge for AI – identifying and mitigating these.”
Why does this matter?
With ChatGPT gathering more momentum by the day, are chatbots and AI-powered search being overhyped? The response to ChatGPT reaching 100 million users in two months shows that there is considerable public appetite for this type of online service, but will OpenAI, ChatGPT’s developer, along with Microsoft and Google, have the expertise to tackle these ongoing issues?
OpenAI is currently working on the accuracy of its language models by adding billions more parameters, the settings used to help them predict words, but more parameters do not guarantee better accuracy. With the release of Bard, Google also risks damaging its relationships with web publishers, who rely on the company’s search page to bring clicks to their sites.
Other big tech firms such as Meta Platforms Inc. and Amazon.com Inc. are also making moves to create their own large language models, which could potentially threaten publishers of writing, music, video and much more.
Whatever the future holds, one thing is certain: the AI tech race is set to continue at full throttle.

Author: spike.digital