ChatGPT's work lacks transparency and that is a problem
Large Language Models (LLMs) like ChatGPT are trained on millions of texts and use that training to estimate the most likely next word, phrase, or sentence to follow a prompt from a user. While ChatGPT provides a good start on a summary of key takeaways from the COVID-19 pandemic, its lack of transparency is a problem.
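The prediction mechanism is easier to see in miniature. The sketch below is a deliberately simplified, hypothetical illustration: it counts which word follows which in a tiny made-up corpus and picks the most frequent continuation. Real systems like ChatGPT instead use transformer neural networks over subword tokens trained on vast corpora, but the basic idea of predicting a likely continuation is the same.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus standing in for the millions of texts an LLM trains on.
corpus = (
    "the pandemic strained public health systems "
    "the pandemic exposed gaps in data sharing "
    "public health systems need better funding"
).split()

# Count how often each word follows each other word (a simple bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def most_likely_next(word):
    """Return the most frequent continuation seen in training, if any."""
    continuations = next_word_counts.get(word)
    return continuations.most_common(1)[0][0] if continuations else None

print(most_likely_next("pandemic"))  # -> 'strained' (ties broken by insertion order)
```

Even in this toy version, the limitation the article describes is visible: the model can only echo patterns present in its training text, and it has no way to point back to where a given continuation came from.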
Even with more prompting, ChatGPT could not provide concrete data or citations to back up its points. Its response is an amalgamation of the writing on COVID-19 in its training pool, text on lessons learned, and the general rules of language drawn from the full corpus. When prompted for more details about the specifics of the points presented, however, ChatGPT may not have the appropriate details in its corpus and cannot necessarily predict the best information to provide.
When asked for sources, ChatGPT produced three references that appear to be fictitious. While its five points provide a useful start for a summary of key takeaways, they are not without their shortcomings. The lack of transparency in LLMs calls for modernized data literacy. Developers need to be more transparent about their algorithms and data sources so that people can assess inherent sources of bias or other problems with the approach.
While LLMs like ChatGPT have many uses, providing deep commentary or useful policy analysis is not one of them, at least for now. Users of LLMs may find them a convenient shortcut for drafting material, but they should be wary of the factual claims these tools make and read their output with a careful and critical eye.