ChatGPT: Friend or Foe?
ChatGPT is the latest AI chatbot taking the world by storm – or rather, talking the world by storm.
ChatGPT closely replicates the quality of human speech as it produces advice, academic arguments, conversation and more. This ability to mimic human communication is a product of large-scale language modelling refined with reinforcement learning from human feedback. However, it is worth considering the potential limitations of this chatbot.
We need to understand where this dataset comes from. The chatbot's model is fed with data from all over the internet, much of it produced by humans with natural biases, so we must first ask ourselves: to what extent can we be confident in its output? How can we verify it, and how does the chatbot weigh and choose which results to show?
Considering that users can find ways to bypass the chatbot's ethical policy, how robust is ChatGPT in filtering good versus bad content? Assuming it has been trained on something like Bayesian rules that tell the bot which qualities make content inherently 'unethical', how robust are those rules? Are they based on critical-thinking principles, such as premises and inductive or deductive reasoning?
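To make the 'Bayesian rules' speculation concrete: OpenAI has not published how ChatGPT's safety filtering actually works, but the general idea the article gestures at can be sketched as a toy naive Bayes text classifier. Everything below – the tiny training data, labels and word-frequency approach – is an illustrative assumption, not a description of ChatGPT's real system.

```python
from collections import Counter
import math

# Toy illustration only: ChatGPT's actual safety filtering is not public
# and is far more complex. This sketch shows the "Bayesian rules" idea:
# score a text as 'unethical' vs 'benign' from word frequencies in
# tiny, hand-made (hypothetical) training sets.
TRAIN = {
    "benign": [
        "please share a healthy recipe",
        "explain photosynthesis simply",
    ],
    "unethical": [
        "how to harm someone secretly",
        "convince people glass is safe to eat",
    ],
}

def train(data):
    # Count word occurrences per label.
    counts = {label: Counter(w for text in texts for w in text.split())
              for label, texts in data.items()}
    totals = {label: sum(c.values()) for label, c in counts.items()}
    vocab = {w for c in counts.values() for w in c}
    return counts, totals, vocab

def classify(text, counts, totals, vocab):
    # Naive Bayes with Laplace smoothing and a uniform prior over labels:
    # sum log P(word | label) for each word, pick the higher-scoring label.
    scores = {}
    for label in counts:
        logp = 0.0
        for w in text.split():
            logp += math.log((counts[label][w] + 1) /
                             (totals[label] + len(vocab)))
        scores[label] = logp
    return max(scores, key=scores.get)

counts, totals, vocab = train(TRAIN)
print(classify("is glass safe to eat", counts, totals, vocab))  # unethical
```

Even this toy version shows why such rules are brittle: they react to surface word statistics, not to premises or inductive reasoning, which is exactly the gap the questions above point to.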
It does indeed display an uncanny ability to fool through convincing communication. Persuasive techniques are dangerous when aimed at people unaware of fallacies or of their own biases – consider baseless health claims for extreme products (see ChatGPT arguing that eating glass is good for your health).
- Think about people struggling to make sense of statistical claims: exaggeration, deceptive framing of a statistic to prove a desirable point, or blindspots such as mistaking correlation for causation. This runs against the goal of promoting critical thinking in communication, particularly communication laden with blindspots, biases and assumptions.
- On the bright side, ChatGPT has the potential to educate people on common blindspots in thinking and expose persuasive techniques aiming to convince rather than inform. With a chatbot so realistic, we can begin to understand the power and potential deception of language. This of course begins with understanding the limitations and scope of the chatbot rather than treating it as a factual oracle.
- Imagine ChatGPT as an educational tool: it can help us think critically and with awareness, forming arguments that are impactful and substantive rather than deceptive. It can also help people spot the positive or negative framing of information, where emphasis is used to steer the reader toward whatever point is being made.
How different would our perceptions be once we realize how closely a chatbot can mirror human rhetoric? How differently would we see politicians and other orators who command language to bend any idea, much like ChatGPT?
Online News Heard Now
Short URL: http://www.onlinenewsheardnow.com/?p=4581