
Elon Musk and other leaders in the tech world recently signed an open letter calling on AI labs to pause the training of powerful AI systems, citing concerns that AI will continue to advance at a breakneck pace without proper safety protocols in place, and Italy has temporarily banned ChatGPT, meaning a ChatGPT VPN is necessary to access the service in the country. If this has you concerned, you're not alone. We are already seeing these concerns play out in front of us: Midjourney, the popular AI art generator, has already halted free trials over concerns that people were abusing the platform to generate deepfake images that could be mistaken for real photos and go viral. ChatGPT is conversational by nature, and if you cannot determine whether the text you are reading was created by a GPT-5-powered AI or by a human, that is a game-changer, for better or worse. On the one hand, this would greatly improve what people can do with the AI chatbot, creating more engaging and realistic written content with ease. On the other hand, the acceleration into a post-truth world would go into overdrive, and it would become reasonable to question whether anything was written by a human or by an AI.

So then the question becomes: how could GPT-5 make the leap to a "strong AI", or AGI? The answer is a hotly debated topic, but a lot of people would point to the Turing test. The Turing test was developed by Alan Turing to determine whether a machine can exhibit intelligent behavior. In this test, there are three participants: a human, a machine, and a judge (who is also human). The judge evaluates a text-only conversation between the human and the machine and tries to determine which participant is the human and which is the machine. If the judge cannot reliably tell which is which, the machine is considered to have passed the test. Given ChatGPT's chatbot design, it would be a perfect candidate for the Turing test, and passing would suggest, at least to some, that it can think and is therefore an AGI.

(Image credit: Gabby Jones/Bloomberg via Getty Images)
