Artificial Intelligence is on the rise, but so are its flaws


With artificial intelligence quickly making its mark on the world, the race is on to properly study and address the capabilities and dangers of existing AI systems. This week, we’ve also witnessed the start of legal jostling between rival AI players.

AI experts call for a pause on creation of giant AI models

More than 1,000 artificial intelligence experts, researchers and backers are calling for a six-month pause on the creation of giant AIs so that the capabilities and dangers of existing AI systems such as GPT-4 can be properly studied and improved.

An open letter signed by many of the major AI players, including Elon Musk, Steve Wozniak and a number of engineers from Amazon, DeepMind, Google, Meta and Microsoft, cited a post from OpenAI’s co-founder Sam Altman as the reason for the pause.

Altman’s post from February argued that it will be important for future AI systems to receive an independent review before training begins, and that it may become necessary to limit the rate at which the compute used to create new models grows. The letter’s signatories argue that the time for that independent review is now.

The experts are calling for governments to “step in” if researchers will not voluntarily pause what they describe as “the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

Labour have joined the criticism, with shadow culture secretary Lucy Powell suggesting the government are “letting down their side of the bargain” and warning that regulation could take months, possibly years, to come into effect.

Midjourney stops free trials

AI image generator Midjourney has stopped free trials of its service, citing a sudden influx of users and “trial abuse.” Midjourney founder and CEO David Holz said that massive numbers of people were creating throwaway accounts to obtain free images, which coincided with a temporary GPU shortage.

The firm was quick to confirm that the free trial did not include access to the latest version 5, which creates the most realistic images so far and has recently been in the news for viral pictures of celebrities and public figures, including fake images of Donald Trump being arrested, the Pope looking fashionable and Elon Musk holding hands with Alexandria Ocasio-Cortez.

Despite the viral images causing upset around the world, Midjourney’s response has been minimal, with no significant overhaul of its moderation policies. It continues to maintain a list of banned words, which it has recently expanded in the wake of the fake images.

Was Bard trained with ChatGPT data?

Google’s Bard hasn’t had the smoothest of debuts, and now an AI researcher has reportedly spoken out against Google’s use of data from a website called ShareGPT. It has been reported that Google went as far as training Bard on data from OpenAI’s ChatGPT that had been taken from the ShareGPT website.

Google has firmly denied that the data was used: “Bard is not trained on any data from ShareGPT or ChatGPT,” spokesperson Chris Pappas told The Verge. However, it has been reported that Google AI engineer Jacob Devlin left the company to join rival OpenAI after warning Google that using ChatGPT data would violate OpenAI’s terms of service.

Why does this matter?

The future success of AI technology hangs in the balance. While rival companies fight to be at the forefront of the technology, ethics are clearly being left behind. Since the release of GPT-4, OpenAI has been dealing with a “capability overhang” – the issue that its own systems have become more powerful than it can currently manage.

Over the coming weeks and months, researchers are likely to uncover new ways of “prompting” AI systems that improve their ability to solve difficult problems, but in the interim there are no clear regulations in place to manage them.

In response to these growing issues, the government is focussing on coordinating existing regulators, such as the Competition and Markets Authority and the Health and Safety Executive, around five “principles” through which they should think about AI. But will this be enough to convince businesses to use and trust AI technology in the future?

Author: spike.digital