Written on the 2nd of June 2024.
The yes-and problem with artificial intelligence
In an era where artificial intelligence (AI) plays an ever-increasing role in our daily lives, we face a concerning phenomenon: AI's tendency to generate false information. This issue does not necessarily stem from malicious intent but from the very nature of AI's design. Large language models, especially those meant to serve as assistants, are developed to provide answers to virtually any question posed to them. When they lack sufficient information or understanding, they do not hesitate; instead, they fabricate.
AI and the expectation of perfection
When it comes to AI, the expectation to know everything is extremely pronounced. AI systems, particularly those used in customer service, education, and information retrieval, are often seen as infallible sources of knowledge. Users trust these systems to provide accurate, reliable answers. However, when an AI encounters a question it cannot answer, it is programmed to generate a plausible response rather than admit ignorance.
This approach stems from the design philosophy of AI developers, who aim to create systems that can handle a wide range of queries seamlessly. Yet, the consequence is that AI can, and often does, produce misinformation. Users might not always be able to discern when an AI is fabricating information, leading to the unintentional spread of falsehoods.
A large part of the issue is that most people do not understand how computers work, let alone the software that runs on them. This lack of understanding means that users have no way of accurately judging the information's veracity. For instance, the reader of this article might not know whether it is AI-generated. (It isn't, but how certain can you be?)
The case of Google's AI search answers
A notable example of this issue is Google's AI search answers feature. Despite the technology's impressive capabilities, it was found that the feature often provided incorrect answers with an air of authority, misleading users who relied on it for accurate information. This problem arises not because the AI hallucinates novel information but because it cannot judge the accuracy of its sources and simply accepts them as truth, even more uncritically than people do.
For example, it claimed that a dog had actually played basketball in the NBA, presumably because of the Air Bud film franchise. In worse cases it suggested users add glue to their food, eat rocks, and even drink liters of urine daily, all because unreliable sources were trusted without question.
Google is aware of these issues and has acknowledged them publicly. However, their response often amounts to treating the problem as just a patch away from being fixed. This overlooks the fundamental flaws in the system's design. As long as large language models focus on predicting the most likely next token in the text, they will continue to "yes, and" the input, meaning false inputs will continue to generate false outputs.
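To make that mechanism concrete, here is a minimal sketch of greedy next-token decoding continuing a false premise. It assumes the Hugging Face transformers library and the small GPT-2 model, neither of which Google's feature actually uses; the point is only that the underlying objective rewards a plausible continuation, not a correction.

```python
# Minimal sketch: next-token prediction "yes-ands" whatever premise it is given.
# Assumes the Hugging Face `transformers` library and the small GPT-2 model;
# production assistants are tuned further, but the core objective is the same.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The prompt embeds a false premise; nothing in the objective rewards rejecting it.
prompt = "The first dog to play basketball in the NBA was"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding picks the most likely next token at every step, so the output
# confidently extends the premise instead of pointing out that it never happened.
output = model.generate(**inputs, max_new_tokens=25, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The specific model does not matter here; what matters is that "most likely continuation" and "true statement" are not the same thing, and only the former is optimized for.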
Incidents like these underscore the importance of scrutinizing artificially generated outputs and holding the systems that produce them to higher standards of accuracy and transparency than is currently the case.
Why we should demand more from AI
Accepting AI's tendency to fabricate information poses significant risks. Unlike human interactions, where one can interrogate claims with relative ease, misinformation from AI systems can be harder to investigate. These systems can operate at an unmatched scale and speed, amplifying errors rapidly and affecting large numbers of people simultaneously.
Trust is crucial for the effective and ethical use of AI. For AI to be considered a tool in critical areas such as healthcare, law, and education, it must provide accurate and verifiable information, which it currently struggles to do.
How do we as people compare
All this behaviour shown by AI mirrors a broader societal trend: the expectation that everyone, human and machine, should always be informed on every topic. Modern society values knowledge and expertise highly, sometimes unrealistically so. People are expected to have an opinion on a wide array of subjects, from politics to science, regardless of their actual expertise.
This expectation is pervasive, affecting social interactions, media discourse, and even workplace dynamics. When an individual admits ignorance, they might face criticism, dismissal, or even ridicule, especially if their uncertainty conflicts with the prevailing opinions of their peers.
The phenomenon is magnified in the digital age, where the rapid exchange of information often prioritizes speed over accuracy. We are so used to having all information at the ready that it seems strange for someone not to know something: the latest news is one click away, and an entire encyclopedia sits at our fingertips this very moment. This cultural pressure can lead individuals to feign knowledge or hastily form opinions based on incomplete or inaccurate information. Such behaviour perpetuates a cycle where misinformation spreads and superficial understanding is mistaken for true expertise.
The agreeableness dilemma
One underlying factor is our inherent desire to be agreeable. Humans are social creatures, wired to seek acceptance and avoid conflict. Agreeing with others, even when we are unsure or uninformed, is often a way to maintain harmony in social interactions.
This tendency is not inherently negative; it can foster cooperation and cohesion. However, it also has a downside: it discourages critical thinking and honest discourse. In many contexts, being agreeable means aligning with the dominant narrative or the opinions of influential individuals. This can stifle dissent and suppress diverse perspectives, ultimately limiting our collective understanding of complex issues. The same agreeableness that promotes social harmony can thus contribute to the spread of misinformation and the devaluation of genuine expertise.
The importance of critical thinking
Critical thinking is becoming increasingly important in a world saturated with information. The ability to analyze, evaluate, and synthesize information is crucial for navigating any issue and making an informed decision. When we generate false information, it undermines our ability to work together, think critically and assess the validity of the information we author and promote.
Encouraging ourselves to admit when we don't know something can promote a culture of critical thinking and continuous learning among ourselves and others. It can serve as a reminder that it is okay not to know everything and that seeking accurate information is a valuable pursuit.
Furthermore, the reduction in exposure to differing opinions exacerbates this issue. For instance, when social media algorithms prioritize certain types of information, they can create echo chambers that reinforce existing beliefs and discourage critical examination of alternative viewpoints. This phenomenon contributes to the rise of conspiratorial thinking, as people are less likely to encounter and consider dissenting perspectives. Similarly, when using a search engine we phrase our query with an inherent bias that the search algorithm will gladly use to serve us exactly the answer we were looking for, regardless of whether it is the right one.
This is not just a matter of algorithms, but also of how we, as individuals, approach information. Cognitive biases, such as confirmation bias, lead us to favour information that supports our pre-existing beliefs and dismiss information that contradicts them. This inclination can result in a distorted view of reality, where our understanding is shaped more by our preferences than by objective facts.
We should hold others accountable
Misinformation can have real-world consequences, from spreading false health advice to influencing political opinions. Ensuring that individuals provide accurate information can mitigate these risks. For instance, inaccurate health information can lead to harmful practices, while misleading political information can sway public opinion and cause real harm. Therefore, holding ourselves accountable for the accuracy of the information we share is crucial.
Each of us has a responsibility to verify the information we share. This means taking the time to check the sources of our information and understanding the reasoning behind the claims we encounter. Transparency in our own actions can help others discern the reliability of the information we share and make more informed decisions.
We must cultivate a habit of critical evaluation in ourselves and encourage it in others. This involves questioning the credibility of sources, cross-referencing information with reputable references, and being open to differing viewpoints. By doing so, we can reduce the spread of misinformation and promote a culture of truthfulness and intellectual integrity.
Holding ourselves accountable means acknowledging when we are wrong and correcting our mistakes. It involves a commitment to ethical conduct in all our communications, whether online or offline. This also includes avoiding the temptation to share sensational or unverified information for the sake of attention or influence.
Trust is built through transparency and honesty. By being open about our sources and the limitations of our knowledge, we can foster a more trustworthy information environment. This also means being willing to engage in respectful dialogue and considering the impact of our words on others.
As a society, we should develop and adhere to high standards of accuracy and reliability. This involves promoting digital literacy and critical thinking skills, so individuals are better equipped to navigate the complexities of the information landscape. Educational institutions, media organizations, and community leaders all play a role in setting these standards and encouraging adherence to them.
Creating a culture where accountability is valued requires collective effort. We should support initiatives that promote fact-checking and responsible information sharing. Community-based efforts, such as local fact-checking groups or online forums dedicated to debunking myths, can be effective in fostering this culture.
Social norms play a significant role in shaping our behaviour. By setting positive examples and calling out misinformation when we encounter it, we can influence others to do the same. Encouraging a norm where truthfulness and accountability are prized can lead to a more informed and discerning public.
What can we do about it
Misinformation is a pervasive issue that affects us all, driven in part by the expectation that everyone must have an informed opinion on everything. This expectation pushes both humans and computers to produce information, often without regard for accuracy. By holding ourselves and each other accountable for the information we share, we can mitigate the harmful effects of misinformation and contribute to a more informed and responsible society. This requires commitment, transparency, and continuous learning.
All that being said, this does not fill me with much hope, though I will stop short of getting outright political. This article should not be read as saying "the problem is the humans", but as a starting point for thinking about which incentives are at play and who is being exploited here.