AI models don’t always improve in accuracy over time, a recent Stanford study shows—a big potential turnoff for the Pentagon as it experiments with large language models like ChatGPT and tries to predict how adversaries might use such tools.
The study, which came out last week, looked at how two different versions of OpenAI’s ChatGPT—specifically GPT-3.5 and GPT-4—performed from March to June. GPT-4 is the most recent version of the popular AI, released in March. OpenAI…
Read the full article here