AI has been a hot topic around the world lately. And rightfully so. Artificial intelligence is a technological development that we have all heard about, and it has been growing rapidly for the last decade. It was only a few years ago that my classes' syllabi started including statements on the use of AI after students were repeatedly caught submitting work they had not completed themselves. Since then, AI has become more and more integrated into every part of our lives. Most major search engines now have AI built in, and you can hardly interact with social media without seeing some kind of strange, AI-generated content. As AI has become an unavoidable part of our day-to-day lives, debates have sprung up in multiple circles about how and when AI should be used.

As a library and information science student, I have seen how information professionals, whether they are working with seasoned researchers, students, or the public, are watching more and more people come to rely on AI as a research tool. In many cases, this can be a detriment to critical research skills and can encourage the spread of misinformation as people grow to trust the information AI produces. Although I have been warned to expect misinformation spread by AI and have seen it firsthand in the form of fake citations and quotes, I know I am not an authority on the subject. So to further inform myself on this issue, I picked up a good ol' book and got to reading.
For this blog post, I will be engaging primarily with the first part of a new book from our collection, Truth-Seeking in an Age of (Mis)Information Overload (2024), entitled "Misinformation and Artificial Intelligence." This section is composed of two essays: "It Is Artificial, But Is It Intelligent?" by E. Bruce Pitman and "Disinformation, Power, and the Automation of Judgments: Notes on the Algorithmic Harms to Democracy" by Ewa Płonowska Ziarek.

In his essay, Pitman discusses two major types of AI systems: machine learning (ML) systems and deep neural network (DNN) systems. ML systems are a class of algorithms that "learn" from a training dataset, which they then rely on to answer questions. DNNs aim to emulate the human brain by setting up layers of "neurons" connected in a preconceived geometric pathway, which the system then uses to identify the presence (or absence) of signals in the data that would lead it to give a certain answer (Pitman 20). This is my attempt (as a non-computer-science person) to simplify the complex math and ideas that Pitman explains. To learn more about these types of systems, I would suggest reading these essays for yourself or supplementing your learning with other sources, such as this IBM article.
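If it helps to see that idea in miniature, here is a tiny, oversimplified Python sketch, my own toy illustration and not anything from the book: a single artificial "neuron" that nudges its weights based on a small training dataset and then answers new questions only by comparing them against the patterns it has already seen.

```python
# Toy illustration (not from Pitman's essay) of "learning" from a training
# dataset: a single artificial neuron that adjusts its weights from labeled
# examples and then answers based only on those examples.

# Tiny training dataset: each example is (signal 1, signal 2) -> label (0 or 1).
training_data = [
    ((0.0, 0.0), 0),
    ((0.0, 1.0), 0),
    ((1.0, 0.0), 0),
    ((1.0, 1.0), 1),  # the pattern to "learn": answer 1 only when both signals are present
]

weights = [0.0, 0.0]   # how strongly each input signal counts
bias = 0.0             # the firing threshold, learned along with the weights
learning_rate = 0.1

def predict(x):
    """Fire (return 1) if the weighted sum of the input signals clears the threshold."""
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0

# "Training" is just nudging the weights whenever the neuron answers wrong.
for _ in range(20):
    for x, label in training_data:
        error = label - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print(predict((1.0, 1.0)))  # 1 -- matches the pattern seen in training
print(predict((0.0, 1.0)))  # 0 -- also learned from the training data
```

Real DNNs stack many layers of these "neurons" and train on vastly more data, but the basic move is the same: the answers can only ever reflect the examples the system was given.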
After explaining how these systems operate, Pitman evaluates AI's ability to make unbiased and accurate decisions. He points out that trusting the answers AI provides can often be risky, as it is simply comparing whatever prompt you give it to the training datasets it has been provided. As he puts it, "AI systems are (often) biased. These deep networks require enormous amounts of data on which to train and, so, very often, these training datasets whether through inattention or a lack of care, are not comprehensive and tend to under-represent minority communities that are already disadvantaged in society" (Pitman 20). Pitman gives the examples of Black people's faces being misrecognized by AI at a much higher rate than White people's, and of Amazon's recruiting tool showing a clear bias against applicants who identify as women (20). This is troubling on multiple levels, as it shows that certain AI systems can tend to reinforce untrue viewpoints based on the limited information they have been given by their creators.
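To see how a lopsided training dataset translates directly into lopsided answers, here is another small, made-up Python sketch (again my own illustration, not Pitman's actual examples or any real system): a trivially simple "system" that learns the most common outcome in its training data and then repeats it, with the invented groups "A" and "B" standing in for an over- and under-represented community.

```python
from collections import Counter

# Hypothetical sketch of how a skewed training dataset skews what a system
# "learns". This toy system simply memorizes the most common outcome in its
# training data and repeats it for everyone.

# Imbalanced training data: group A dominates, and for A the usual outcome is
# "approve"; the few group-B examples mostly have the outcome "deny".
training_data = (
    [("A", "approve")] * 90 + [("A", "deny")] * 10 +   # 100 examples from group A
    [("B", "deny")] * 4 + [("B", "approve")] * 1        # only 5 examples from group B
)

# "Training": learn the single most common outcome across the whole dataset.
learned_answer = Counter(label for _, label in training_data).most_common(1)[0][0]
print("learned answer:", learned_answer)  # "approve", driven entirely by group A's numbers

# Evaluate the learned rule separately for each group.
for group in ("A", "B"):
    examples = [label for g, label in training_data if g == group]
    correct = sum(label == learned_answer for label in examples)
    print(f"group {group}: right {correct}/{len(examples)} of the time")
```

The rule ends up right 90 times out of 100 for group A but only 1 time out of 5 for group B: "accurate overall" while consistently failing the under-represented group, which is exactly the kind of imbalance Pitman is warning about.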
At the end of the day, AI systems are designed to recognize patterns and give answers based on those patterns, which can ultimately lead them to give answers that aren't actually accurate. But how would they know? They are only operating on a limited set of data and simply cannot be critical about the information they provide because they have been coded to produce answers in a certain format. Still, Pitman makes it clear that he is "not here to rant against AI systems and DNNs. But [he does] wish to rant against the uncritical, unsupervised, unchecked use of DNNs" (25).
Ewa Płonowska Ziarek's essay echoes this warning against fully trusting AI as an information source. Ziarek aims to prove that capitalistic AI technologies "weaken political agency and understanding by the ever-increasing automation of judgements, debates, and decisions by algorithmic procedures" (35). To start the chapter, Ziarek discusses the National Science and Technology Council's December 2022 report, "Roadmap for Researchers on Priorities Related to Information Integrity Research," which aimed to give guidance to researchers trying to minimize the amount of disinformation produced and circulated by AI (30). While the report seems at first glance to support integrating more diverse perspectives into AI training datasets, Ziarek posits that "a proposed engagement with diverse communities is not based on a participatory collaboration and understanding but rather driven by a top down, and at the time patronizing approach" (32). The creators of these systems are less concerned with the actual inclusion of different opinions than they are with capitalizing on all the information they can get their hands on.
The goal of gathering this information is not to build a system that can compare and critically analyze multiple viewpoints; it is to produce whatever information will get any sort of community to interact with it. This profit-driven priority makes the algorithms companies choose unpredictable, since we cannot be certain what they are including, excluding, or prioritizing. As if to exclude citizens' input even further, companies are extremely secretive about the algorithms they use, making it impossible to know what information you are interacting with. Ziarek also emphasizes that AI systems operate in such a way that they cannot take into account actual human understandings of the world, nor can they replicate them (38). AI cannot grasp the complexity of human interactions or the multitude of ways a single decision can ripple outward depending on how people react. Considering that, these systems are not a reliable source to use in making decisions on complex issues.

Much to her distress, however, government agencies have already taken to using AI to do who knows what. Much like the algorithms major companies use, we cannot be certain which algorithms government agencies are using or even what they are using them for. Ziarek discusses an extremely influential study commissioned by the Administrative Conference of the United States in 2020 to evaluate government use of AI. At that time, of the 142 major federal departments studied, "nearly half of them have already adopted AI, including areas of law enforcement, health, financial regulation, adjudication" and in communication with the public about their rights (38). This is a little alarming considering what we have already discussed. AI is incapable of understanding human behavior and can clearly produce biased information based on its algorithm, so why would we ever rely on it to make decisions or interact with other humans for us?
It is unlikely that I'll ever get a satisfying answer to that question as we continue to rely on AI more and more and do critical research on our own less and less. The only unsatisfying answer I can offer right now is: because it is easier. As with anything in nature, humans tend to follow the path of least resistance. It is much easier to ask ChatGPT to give you sources or answer a question for you than it is to actually dig through those sources yourself. But by delegating those kinds of tasks to AI, we lose the chance to form our own opinions based on what we have actually read. In reading this book, I had to parse Pitman and Ziarek's ideas to form my own understanding. If I had just asked AI for a summary, I wouldn't have had the chance to think critically about their findings and ideas.
And hey, this may make me seem like an anti-AI purist. I've used my fair share of AI, whether I've realized it or not. That AI summary from Google is pretty enticing sometimes when I need a quick answer to "how to get my car free from ice." I've also seen some of my STEM-major friends use AI to generate practice questions based on their field's standards for studying purposes. All in all, AI is not outright bad, but relying on it as a decision-making or thinking tool can be. These two essays show how AI is not as all-knowing and reliable as it may seem. We should all be aware of the risk of disinformation and think a little more critically about the answers ChatGPT and Google's AI Overview give us. Myself included.
Want to find more insightful and thought-provoking books like this one? Check out the History, Philosophy, and Newspaper Library's New Book sections both in person and online to find the most up-to-date publications we have to offer!