Building on the earlier discussion of a potentially controversial proposal (link above), recent research advances in the corporate sector suggest that the proposal is less contentious than it first appeared. Given the difficulty academia faces in keeping pace with rapid developments in industry, it makes increasing sense for academia to devote more effort to scrutinizing corporate research activity. https://www.economist.com/business/2022/06/23/in-eys-split-fortune-may-favour-the-dull
The image above is no coincidence: it is taken from a compelling article in The Atlantic by Jonathan Haidt, Professor of Ethical Leadership at New York University. An excerpt from that article is reproduced below:
“...artificial intelligence is close to enabling the limitless spread of highly believable disinformation. The AI program GPT-3 is already so good that you can give it a topic and a tone and it will spit out as many essays as you like, typically with perfect grammar and a surprising level of coherence. In a year or two, when the program is upgraded to GPT-4, it will become far more capable. In a 2020 essay titled ‘The Supply of Disinformation Will Soon Be Infinite,’ Renée DiResta, the research manager at the Stanford Internet Observatory, explained that spreading falsehoods...will quickly become inconceivably easy.” https://www.theatlantic.com/magazine/archive/2022/05/social-media-democracy-trust-babel/629369/