A team of researchers at Carnegie Mellon University has made a worrying discovery, The New York Times reports: guardrails set in place by the likes of OpenAI and Google to keep their AI chatbots in check can easily be circumvented.
In a report released this week, the team showed how anybody can easily transform chatbots like OpenAI’s ChatGPT or Google’s Bard into highly efficient misinformation-spewing machines, despite those companies’ deep-pocketed efforts to rein the systems in.
The process…