Every day, it seems, a new large language model (LLM) is announced with breathless commentary — from both its creators and academics — on its extraordinary abilities to respond to human prompts. It can fix code! It can write a reference letter! It can summarize an article!
From my perspective as a political and data scientist who uses and teaches about such models, scholars should be wary. The most widely touted LLMs are proprietary and closed: run by companies…