Rage farming

Rage farming or rage-baiting is internet slang for a manipulative tactic of eliciting outrage with the goal of increasing internet traffic, online engagement, revenue, and support.[1][2] The 2010s neologism clickbait refers to sensationalist content on the internet that focuses on attracting click-throughs at the expense of accuracy and quality.[3] Unlike clickbait, rage bait and rage farming almost always carry a negative connotation. Rage baiting and rage farming manipulate users into responding in kind to offensive, inflammatory "headlines", memes, tropes, or comments.[4][5][6][7] Rage farming, a term cited since at least January 2022, is an offshoot of rage baiting in which the outrage of the person being provoked is farmed, or manipulated into online engagement, through rage-seeding that amplifies the message of the original content creator.[2][8][9]

Rage baiting or farming can be used as a tool to increase engagement and attract subscribers, followers, and supporters, which can be financially lucrative. It has also been used as a political tactic at the expense of one's opponents. In his 7 January 2022 tweet, John Scott-Railton, a senior researcher at Citizen Lab, described how a person was "being rage farmed" when they responded to an inflammatory post with an equally inflammatory quote tweet, since quote tweets reward the original rage tweet. Algorithms on Facebook, Twitter, TikTok, Instagram, YouTube, and other social media platforms reward increased positive and negative engagement by directing traffic to posts and amplifying them.[1] As early as 2012, research suggested that in both media and politics, eliciting outrage is a powerful tool.[10][11] In political media, both real and imagined outrage attract readers, making narratives that evoke rage very popular.[11]

Political scientist Jared Wesley of the University of Alberta said that the use of rage farming is on the rise, with right-wing politicians employing the technique by promoting conspiracy theories and misinformation. As politicians increase rage farming against their political and ideological opponents, they attract more followers online, some of whom may engage in offline violence, including verbal violence and acts of intimidation. Wesley describes how those engaged in rage farming combine half-truths with "blatant lies".[12]

Media and governmental investigations in the wake of revelations by Facebook whistleblower Frances Haugen and the 2021 Facebook leak provide insight into the role various algorithms play in farming outrage for profit by spreading divisiveness, conspiracy theories, and sectarian hatred that can allegedly contribute to real-world violence.[13] The most egregious example involves Facebook, the world's largest social media network, which has over 25 million accounts in Myanmar and neglected to police rage-inducing hate speech posts targeting the Rohingya Muslim minority there, allegedly facilitating the Rohingya genocide.[14][15][16][9][17][18] In 2021, a class-action lawsuit of roughly US$173 billion was filed against Meta Platforms Inc., formerly known as Facebook, on behalf of Rohingya refugees, claiming that Facebook's "algorithms amplified hate speech".[14]

Etymology, definitions and related terms

Rage farming is from rage + farm. Alternative forms include rage-seeding. Ragebait, rage-bait, rage baiting, and outrage baiting are similar Internet slang neologisms referring to manipulative tactics that feed on readers' anxieties and fears. They are all forms of clickbait, a term in use since c. 1999, which is "more nuanced" and not necessarily seen as a negative tactic.[19][20] The term rage bait, cited since at least 2009, is a negative form of click-baiting because it relies on manipulating users into responding in kind to offensive, inflammatory "headlines", memes, tropes, or comments.[4][5][6][7] Rage farming, cited since at least January 2022, is an offshoot of rage baiting in which the outrage of the person being provoked is farmed, or manipulated into online engagement, through rage-seeding that amplifies the message of the original content creator.[2][8]

In her January 2022 article in The Atlantic on the GOP's far-right media network, American writer Molly Jong-Fast wrote that "[r]age farming is the product of a perfect storm of f***, an unholy mélange of algorithms and anxiety".[2] She described the tactic as cynical.[2]

Political scientist Jared Wesley wrote that rage farming was often "used to describe rhetoric designed to elicit the rage of opponents".[8] Rage-baiting describes a tactic for attracting, maintaining, and increasing a base of supporters and followers.[7]

A BBC article described clickbait as a form of headline writing in online journalism "which tempts the reader to click on the link to the story."[19]

Damian Radcliffe, an honorary research fellow at Cardiff University's School of Journalism, said that clickbait is more nuanced than its usual negative connotation as a catch-all term for sensationalized headlines. Clickbait can also refer to "better and snappier" headlines that attract readers.[19]

Clickbait, in all its iterations, is a form of media and Internet manipulation. While the goal of some clickbait is to generate revenue, it can also be used as an effective tactic to influence people on social media platforms such as Facebook, Twitter, Instagram, and YouTube.[19] According to a November 2016 analysis of Facebook, clickbait is intentionally designed to appeal to a targeted interest group's pre-existing confirmation biases. Facebook's algorithms create a filter bubble by sharing specific posts with a filtered audience.[21]
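The analysis describes this mechanism only at a high level. As a rough, hypothetical illustration of filter-bubble style targeting, the Python sketch below delivers each post only to users whose recorded interests overlap the post's topic tags; the user data, tags, and matching rule are all assumptions rather than Facebook's actual system.

```python
# Hypothetical sketch of filter-bubble targeting: each post is delivered only
# to users whose recorded interests overlap the post's topic tags, so content
# tuned to a group's pre-existing beliefs circulates mostly within that group.
# User data, tags, and the matching rule are invented for illustration.

users = {
    "user_a": {"interests": {"border_policy", "gun_rights"}},
    "user_b": {"interests": {"climate_action", "public_transit"}},
}

posts = [
    {"id": 1, "tags": {"border_policy"}, "headline": "You won't believe this new border rule"},
    {"id": 2, "tags": {"public_transit"}, "headline": "Outrageous change coming to your commute"},
]

def build_feeds(users: dict, posts: list[dict]) -> dict[str, list[int]]:
    """Return, for each user, the post IDs whose tags intersect their interests."""
    feeds = {}
    for name, profile in users.items():
        feeds[name] = [p["id"] for p in posts if p["tags"] & profile["interests"]]
    return feeds

print(build_feeds(users, posts))
# {'user_a': [1], 'user_b': [2]} -- each group sees only the bait aimed at it
```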

A 25 May 2016 Westside Seattle Herald article cited the 4 June 2014 definition from the online Urban Dictionary: "a post on social media by a news organisation designed expressly to outrage as many people as possible in order to generate interaction."[5][6] The Herald article described how increased user traffic online results in more revenue for online platforms and websites from paid advertisements and sponsors.[6]

A 25 May 2016 article described ragebait as "clickbait's evil twin".[4] In her review of Nick Spencer's Marvel Comics story "Red Skull of Hydra", which went on sale on 25 May 2016, the author described the shocking revelation that Steve Rogers, a.k.a. Captain America, was a sleeper agent of the fictional terrorist organization Hydra. The review described how this unexpected twist in the Captain America narrative elicited strong reactions both from readers who were pleased and from those who were angered. Some said that Marvel had paid for these responses to increase sales; the author countered that the magic of ragebait is that it generates revenue without any need to pay for the responses. To distinguish clickbait from ragebait, the author said the former's "catchy headlines" entice "curiosity in the most shameless way possible", while ragebait's "catchy headlines" on a controversial subject incite so much outrage that readers "vent in the comments section".[4] The creators benefit from both negative and positive comments because "everyone takes the bait".[4] Ragebait is "not about freedom of expression or going against the status quo. Ragebait, like clickbait, is always written with page views and attention in mind. More often than not, you can tell the writer barely even supports or believes what they're saying. They are simply saying it to grind people's gears and rake in angry hate-views."[4] A 25 May 2016 Time article said that Hydra's rhetoric sounded similar to that of a presidential candidate running in the 2016 United States presidential election.[22]

A 2006 article in Time magazine described how Internet trolls post incendiary comments online with the sole purpose of provoking an argument, even on the most banal topics. Cited examples included posting a statement like "NASCAR is about as much a sport as cheerleading" in a car-racing forum, or voicing support for open borders to Lou Dobbs.[23]

Rage bait and outrage bait creators invent "controversial news stories out of thin air".[24] The example cited was a 15 December 2018 ad by an Irish digital media company falsely claiming that two-thirds of people wanted Santa to be either female or gender neutral.[24]

A veteran writer explained in a 2021 Medium article that ragebait is simply a long form of internet trolling. The troll laughs "all the way to the bank" while irate readers comment and complain.[25]

Background

A 2012 Journal of Politics (JOP) article found that political actors were intentionally incorporating emotional content into their messaging to evoke anxiety and elicit interest in a topic.[10] The article examined why this political tactic resulted in viewers feeling more anger than anxiety. The study found that anger increased information-seeking behaviour and often resulted in web users clicking on the political website to learn more.[10] The research also identified psychological incentives for using angry rhetoric in political communication.[10] A 2018 Media Matters for America article citing the JOP study reiterated that "anger is a powerful tool in the worlds of both politics and media".[11] The political media industry knows that real or imagined outrage attracts readers, making narratives that evoke rage very popular.[11]

A November 2018 National Review article decrying social-justice warriors was cited by Media Matters for America as an example of rage-baiting.[26][11] The Review article responded to tweets criticizing the cartoon image used by ABC's Twitter account on 21 November 2018 to advertise A Charlie Brown Thanksgiving.[26] In the image, Franklin, Charlie Brown's Black friend, sat alone on one side of the Thanksgiving dinner table.[26] Several unverified Twitter accounts, including one with zero followers, called the image racist.[11] Conservatives, frustrated by what they saw as overly sensitive, politically correct "snowflake" liberals, responded in anger. The Media Matters for America article noted the irony: the National Review article, intended to illustrate how easily liberals were provoked to anger, instead succeeded in enraging conservatives.[11]

Information technologies and digital media enable unprecedented capacities for online manipulation,[27] including click-baiting, rage baiting, and rage farming. In his 7 January 2022 tweet, John Scott-Railton described how a person was "being rage farmed" when they responded to an inflammatory post with an equally inflammatory quote tweet, since algorithms on Twitter, TikTok, YouTube, Facebook, and other social media platforms reward posts that attract engagement by amplifying them.[1]

A 2020 review of The Post Millennial, a conservative Canadian online news magazine started in 2017, said it was far-right America's most recent rage-baiting outlet.[28]

Examples of rage farming

A 7 January 2022 voting meme by the Texas GOP, "If you can wait in line for a covid test, you can wait in line to vote", trended on Twitter, drawing roughly 11,800 retweets, 10,500 quote tweets, and 53,000 likes, as it angered those opposed to the 2021 voting restrictions introduced by Texas Republicans and delighted those who supported initiatives making it harder to vote.[29] The Texas Tribune reported on how this Trump-style rhetoric captures attention on social media. Experts had observed similar memes posted in the spring of 2020 by far-right strategists comparing "waiting in line to vote to other common activities for which people have to wait in line".[30] Sam Woolley, a director at the University of Texas at Austin's Center for Media Engagement who focuses on internet misinformation and propaganda, said that the meme succeeded in its goal of dividing people. Woolley said this form of messaging is motivated by the conviction that those "who don't believe what you believe are the enemy" and are not true Americans.[30] Experts in social media are seeing more politicians use this type of strategy, with success.[30]

Gerald Butts, a former Principal Secretary to Canadian Prime Minister Trudeau, referred to rage farming on 8 January 2022 to describe the social media techniques used by the communications team of Erin O'Toole, then leader of the Conservative Party of Canada, as he campaigned to remain CPC leader.[31] Molly Jong-Fast also used the phrase in her January 2022 article in The Atlantic on the GOP's far-right media network, in which she described the tactic as cynical.[2]

On The Mehdi Hasan Show, Mehdi Hasan said that online users fall victim to rage farming when they become so angered or upset by a harmful, racist, or inflammatory post that they feel compelled to respond in kind. Hasan cited the example of the Texas GOP's viral meme, a "lazy, dangerously false equivalency" between Covid testing and voter lineups. When the meme went viral, the GOP account posted more inflammatory tweets, which drew more angry responses from the left, to the delight of the trolls.[32] Hasan and his guests, Molly Jong-Fast, editor at large of The Daily Beast, and John Scott-Railton, senior researcher at Citizen Lab, described how the Texas GOP account used this as an intentional tactic to gain exposure. Accounts like these "harness the energy of others" to get their own ideas trending. Because the goal is increased engagement, and ideally going viral, it does not matter whether the response is negative or positive.[32] They said the farmed rage translates into fundraising success; when angry liberals respond in kind, the trolls win.[32]

Rage-baiting can be used successfully as a business model.[33] InfoWars founder Alex Jones made about US$800,000 a day through the site by spreading misinformation and conspiracy theories, in what the National Observer called "rage-farming fertilizer".[34]

Wesley said that the former Premier of Alberta, Jason Kenney, Danielle Smith, who was running to replace him, and Conservative leadership candidate Pierre Poilievre used narratives that encourage people to threaten and intimidate their opponents.[12][35]

On 26 August, a "very large and visibly angry man", later identified as an auto parts salesman, and his female partner attempted to intimidate Canada's Deputy Prime Minister Chrystia Freeland and her female staffers in Grande Prairie, Alberta, and to tape the encounter.[36] The incident was described as rage farming and as a "textbook example" of stochastic terrorism, "the public demonization of a person or group resulting in the incitement of a violent act".[34] In an interview with The Tyee, the man who verbally attacked Freeland was described as a conspiracy theorist. He had cited a number of tropes and conspiracy theories circulating online that seed rage farming, including an alleged link between Prime Minister Justin Trudeau and the World Economic Forum (WEF), the use of vaccines to kill masses of people, the role of authentic patriots in contrast to the sheep-like followers of the elite, and, more recently, the theory that the federal government was "trying to starve the public by forcing fertilizer limitations on farmers".[12] In a 29 August CTV News interview responding to the attack on Minister Freeland, Public Safety Minister Marco Mendicino called the attack "unacceptable" and said it was not only a threat to Freeland but also a "threat to democracy".[37] He said the government was consulting with the RCMP and police services about increasing security details for ministers and all politicians.[37]

Social media's role in spreading divisiveness, conspiracy theories and sectarian hatred

Rage farming and rage baiting are the most recent iterations of clickbait and other forms of Internet manipulation that use conspiracy theories and misinformation to fuel anger and engage users. Facebook, the largest social network in the world, has been "blamed for fanning sectarian hatred, steering users toward extremism and conspiracy theories, and incentivizing politicians to take more divisive stands", according to a 2021 Washington Post report.[13] In spite of earlier reports of changes to its News Feed algorithms to reduce clickbait, revelations by Facebook whistleblower Frances Haugen and content from the 2021 Facebook leak, the Facebook Papers, provide evidence of the role the company's News Feed algorithm played.[13]

In response to complaints about clickbait on Facebook's News Feed and its News Feed ranking algorithm, in 2014 and again in 2016, the company introduced an anti-clickbait algorithm to remove from its News Feed sites that frequently use headlines that "withhold, exaggerate or distort information."[38]

A February 2019 article that was promoted on Facebook described how outrage bait made people angry "on purpose".[24] Digital media companies and social media actors incite outrage to increase engagement—"clicks, comments, likes and shares"—which generates "more advertising revenue".[24] If content does not increase engagement, the "timeline algorithm" limits the number of users it can reach.[24] According to the article, when Facebook geared up its war against clickbait, it changed its algorithm, making it harder for creators and sites to rely on clickbait. The article said that a new engagement strategy, rage bait or outrage bait, was introduced to replace clickbait.[24]

The 2016 algorithms were allegedly trained to filter phrases frequently used in clickbait headlines, similar to filters that remove email spam.[38] Publishers who continued to use clickbait were allegedly punished through loss of referral traffic.[38]
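Facebook has not published the details of this system. As a rough illustration of the general approach described in the reporting, the hypothetical Python sketch below scores headlines against a list of phrases associated with clickbait and flags publishers whose headlines repeatedly score high, as candidates for losing referral traffic; the phrase list, weights, and thresholds are invented for illustration.

```python
# Hypothetical sketch of a phrase-based clickbait filter, loosely analogous
# to email spam filtering. Phrase list, weights, and thresholds are invented
# for illustration only; they do not reflect Facebook's actual system.

CLICKBAIT_PHRASES = {
    "you won't believe": 2.0,
    "what happened next": 2.0,
    "will blow your mind": 1.5,
    "this one trick": 1.5,
}

def clickbait_score(headline: str) -> float:
    """Sum the weights of known clickbait phrases found in the headline."""
    text = headline.lower()
    return sum(weight for phrase, weight in CLICKBAIT_PHRASES.items() if phrase in text)

def demote_repeat_offenders(publisher_headlines: dict[str, list[str]],
                            threshold: float = 2.0,
                            max_offending_ratio: float = 0.5) -> set[str]:
    """Return publishers whose share of high-scoring headlines exceeds the ratio,
    i.e. candidates for reduced referral traffic."""
    demoted = set()
    for publisher, headlines in publisher_headlines.items():
        offending = sum(1 for h in headlines if clickbait_score(h) >= threshold)
        if headlines and offending / len(headlines) > max_offending_ratio:
            demoted.add(publisher)
    return demoted

if __name__ == "__main__":
    sample = {
        "example-tabloid.test": [
            "You won't believe what happened next",
            "This one trick will blow your mind",
        ],
        "example-broadsheet.test": [
            "City council approves new transit budget",
        ],
    }
    print(demote_repeat_offenders(sample))  # {'example-tabloid.test'}
```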

Starting in 2017, Facebook engineers changed their ranking algorithm to score emoji reactions five times higher than mere "likes" because emojis extended user engagement, according to a 26 October 2021 Washington Post article. Facebook's business model depended on keeping and increasing user engagement.[39] One of Facebook's researchers raised concerns that the algorithms rewarding "controversial" posts, including those that incited outrage, could inadvertently result in more spam, abuse, and clickbait.[39]
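The reporting describes only the relative weighting of emoji reactions versus likes, not the full ranking formula. A minimal, hypothetical sketch of such an engagement-weighted score might look like the following; the other fields, weights, and the overall formula are assumptions, included only to show why outrage-provoking posts outrank quieter ones under this kind of scheme.

```python
# Minimal, hypothetical sketch of an engagement-weighted ranking score in which
# emoji reactions count five times as much as "likes", per the weighting the
# Washington Post reported. The other fields, weights, and the overall formula
# are invented for illustration.

from dataclasses import dataclass

@dataclass
class PostEngagement:
    likes: int
    emoji_reactions: int   # love, haha, wow, sad, angry
    comments: int
    shares: int

def engagement_score(post: PostEngagement,
                     emoji_weight: float = 5.0,
                     comment_weight: float = 15.0,
                     share_weight: float = 30.0) -> float:
    """Score a post for feed ranking; higher-scoring posts are shown to more users."""
    return (post.likes
            + emoji_weight * post.emoji_reactions
            + comment_weight * post.comments
            + share_weight * post.shares)

# A post that provokes many angry reactions and replies outranks a post that
# merely collects likes, illustrating how outrage-inducing content tends to be
# amplified under such a scheme.
calm_post = PostEngagement(likes=500, emoji_reactions=20, comments=10, shares=5)
rage_post = PostEngagement(likes=100, emoji_reactions=300, comments=80, shares=40)

print(engagement_score(calm_post))  # 900.0
print(engagement_score(rage_post))  # 4000.0
```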

Since 2018, Facebook executives had been warned in a slide presentation that their algorithms promoted divisiveness, but they refused to act.[40] In a 2022 interview, Scott-Railton observed that the algorithmic amplification of inflammatory quote tweets that loop back on themselves in rage farming may have been planned and structural, or accidental.[2] Algorithms reward positive and negative engagement alike, which creates a "genuine dilemma for everyone". They also allow politicians to bypass legacy media outlets that fact-check, giving them access to a targeted, uncritical audience that is very receptive to their messaging, even when it is misinformation.[11]

By 2019, Facebook's data scientists confirmed that posts that incited the angry emoji were "disproportionately likely to include misinformation, toxicity and low-quality news."[39]

The 2020 Netflix docudrama The Social Dilemma analyzed how social media is intentionally designed for profit maximization through Internet manipulation, which can include spreading conspiracy theories and disinformation and promoting problematic social media use.[41] Topics covered in the film included social media's role in political polarization in the United States, political radicalization, including online youth radicalization, the spread of fake news, and social media's use as a propaganda tool by political parties and governmental bodies. According to a former Google design ethicist, social media networks have three main goals: to maintain and increase engagement, growth, and advertising income.[42]

A 2021 report by the Washington Post revealed that Facebook did not adequately police its service outside the United States.[16] The company invested only 16% of its budget for fighting misinformation and hate speech in countries outside the United States, such as France, Italy, and India, where English is not the native language. In contrast, the company allocated 84% to the United States, which represents only 10% of Facebook's daily users.[9] Since at least 2019, Facebook employees were aware of how vulnerable these countries, like India, were to "abuse by bad actors and authoritarian regimes", but the company did nothing to block accounts that published hate speech and incited violence.[9]

The 2019, 434-page report of the Independent International Fact-Finding Mission on Myanmar, submitted to the Office of the United Nations High Commissioner for Human Rights, investigated the role of social media in disseminating hate speech and inciting violence in the anti-Muslim riots and the Rohingya genocide. Facebook was mentioned 289 times in the report, as there are millions of Facebook accounts in that country.[17] Following the publication of an earlier version of the report in August, Facebook took the "rare step" of removing accounts implicated in the report's findings, which together represented 12 million followers.[15] In October 2021, Haugen testified to a United States Senate committee that Facebook had been inciting ethnic violence in Myanmar, which has over 25 million Facebook users, and in Ethiopia, through algorithms that promoted posts inciting or glorifying violence. False claims about Muslims stockpiling weapons were not removed.[16]

The Digital Services Act is a European legislative proposal to strengthen rules on fighting disinformation and harmful content, submitted by the European Commission to the European Parliament and the Council of the European Union partially in response to concerns raised by the Facebook Files and revelations in Haugen's testimony in the European Parliament.[18]

In 2021, a class-action lawsuit of roughly US$173 billion was lodged by the law firms Edelson PC and Fields PLLC against Meta Platforms Inc., formerly known as Facebook, in the United States District Court for the Northern District of California on behalf of Rohingya refugees, claiming that Facebook was negligent in not removing inflammatory posts that facilitated the Rohingya genocide in Myanmar. The lawsuit said that Facebook's "algorithms amplified hate speech".[14] Following its launch in Myanmar in 2011, Facebook "quickly became ubiquitous".[14] A report commissioned by Facebook led to the company's 2018 admission that it had failed to do "enough to prevent the incitement of violence and hate speech against the [...] Muslim minority in Myanmar". The independent report found that "Facebook has become a means for those seeking to spread hate and cause harm, and posts have been linked to offline violence".[14]

Sources

  • Akinwotu, Emmanuel (7 October 2021). "Facebook's role in Myanmar and Ethiopia under new scrutiny". The Guardian. ISSN 0261-3077. Retrieved 3 September 2022.
  • Hom, Kyra-lin (25 May 2015). "Rage baiting". Westside Seattle Herald. Retrieved 3 September 2022.
  • Jeans, Frank (4 June 2014). "Rage Bait". Urban Dictionary.
  • Neiwert, David (17 October 2017). Alt-America: The Rise of the Radical Right in the Age of Trump. Verso Books. ISBN 978-1-78663-423-8.
  • Nygma, E. (20 December 2009). "Rage baited". Urban Dictionary. "Me: Brittany Murphy will be forgotten just like any other celebrity in Hollywood. Move on Girl: You're heartless. You don't even know her. Me: You just got rage baited."
  • Oremus, Will; Alcantara, Chris; Merrill, Jeremy B.; Galocha, Artur (26 October 2021). "How Facebook shapes your feed". Washington Post. Retrieved 4 September 2022.
  • "What is Outrage Bait?", This Interests Me, 19 February 2019