Assessing Current Platforms’ Attempts to Curb Misinformation

With disinformation at the top of many people’s minds, social media platforms like Instagram and Facebook, along with their parent company, Meta, have written new policies into their user agreements describing how they try to reduce the amount of disinformation and false information being put out on their platforms.

Instagram remains one of the world’s most popular social media sites, with over 2 billion users, so managing the information, and the disinformation, that circulates there is vital to earning users’ trust. As someone who doesn’t usually read or pay attention to traditional news outlets, I use social media to get alerts from news agencies because that news is tailored to my interests rather than to the broader viewership that primary media outlets serve. This has its drawbacks: I often have to catch up on significant or breaking stories because I sometimes see them days late, though my life stays mostly the same when that happens. I put a significant amount of trust in the credibility and reliability of the news sources I read on Instagram, so the platform’s ability to filter out and take down sources it deems incorrect is critical to me as a reader.

From the Instagram website, “We use both technology and feedback from our community to identify posts and accounts that may contain false information. We also work with third-party fact-checkers globally who review content in over 60 languages and are certified through the non-partisan international fact-checking network to help identify, review, and label false information” (Multiple Sources, 2024). During the COVID-19 pandemic in 2020, Instagram was heavily involved in fact-checking information and plainly stating that some information being put out to the general public was incorrect and should not be trusted. Instagram also has a policy in place to reduce the reach of false information: “Making false information harder to find, using technology to find the same false information, Labeling posts with false information warnings, and removing content and accounts that go against community guidelines” (Instagram, 2024). However, to avoid the perception of censoring what people can and cannot say, Instagram gave users a setting to control how much fact-checked or flagged content is filtered from their feeds. When this update rolled out, the platform faced backlash from users who were unaware of the option and did not realize it was turned on automatically when the app updated; you had to go in and change the setting manually. In 2023, NBC News reported that groups of users were upset because their posts were being falsely flagged as misinformation when, in fact, they were correct.

While this setting is helpful in certain times and places, it puts too heavy a filter on information that people deserve to see. If a story does not fit the criteria Instagram has put in place, it can be falsely tagged as misinformation, reducing the number of people who see the correct information. I keep this setting turned off because I can double-check news sources myself before spreading information about a story further.

I feel that Facebook has a much larger issue with people spreading disinformation because it is more of a “blog-based” website. People can post their thoughts and feelings without facing any real pushback from a fact-checking organization or third-party source. While people can cite their sources or point others to legitimate websites, it is rare to get information from Facebook that is not rooted in some kind of ulterior motive. However, Facebook is also owned by Meta, so it falls under the same standards and restrictions as Instagram regarding posting information. According to a post on the Facebook customer service page, the company is trying to reduce disinformation from accounts and articles considered “clickbait.” Facebook has also attempted to offer additional information alongside posts so everyone can understand the full context behind them.

Meta’s current policies to counteract disinformation work exceptionally well. When Meta implemented fact-check notices on posts about controversial topics, it made people more aware of the claims being made around them. To further discredit disinformation, Meta should bring on more third-party fact-checkers so they can cover more ground and review a larger share of posts. I think this would be the most effective way to ensure no disinformation is out there.

