Editor’s Note – As the technological realm becomes more pervasive, whom can we trust? Each week, Liberty Nation brings new insight into the fraudulent use of personal data, breaches of privacy, and attempts to filter our perception.
Twitter Anti-Misinformation Experiments
Fake News – the scourge of the Information Age. Misinformation, disinformation, and misleading content – it’s everywhere, apparently, but which news is it that’s fake, anyway?
Twitter is looking for ways to deal with this problem, and one option is apparently to stamp “harmfully misleading” tweets in bright orange. One can only assume the beneficially misleading ones will still be allowed.
NBC News recently received a leaked demo detailing an experiment by the social media site that would see such messages labeled and their public exposure reduced. This is just one option the company is exploring “to help each other understand what’s happening in the world, and protect each other from those who would drive us apart,” as the demo states.
How would the site determine which posts to tag? This particular trial suggests a system of “community reports” whereby users, including Twitter-verified fact-checkers and journalists, act to police the tweeting public. Twitter claims the system would echo that of Wikipedia, whose supposedly open-source content editing has gone largely unquestioned by internet users – with a few exceptions.
NBC reports, “In one iteration of the demo, Twitter users could earn ‘points’ and a ‘community badge’ if they ‘contribute in good faith and act like a good neighbor’ and ‘provide critical context to help people understand information they see.’”
Community members would be asked how “likely” or “unlikely” a particular message is to be “harmfully misleading” before estimating what percentage of respondents would agree with their assessment and why. Serial participants would be rewarded, as “the more points you earn, the more your vote counts.” Although the demo mentions that it is “not a space for personal opinion or belief,” the rise of pedantic “fact-checking” operations with ideological ties shows how supposedly objective assessments can be used to guide perception.
Twitter confirmed the leaked documents. “We’re exploring a number of ways to address misinformation and provide more context for tweets on Twitter,” a spokesperson said. “Misinformation is a critical issue and we will be testing many different ways to address it.”
Twitter Takes Action on Fakes
The revelation comes less than a month after Twitter announced a new rule designed to curb tweets that “deceptively share synthetic or manipulated media that are likely to cause harm.” This measure against fake news will be enforced beginning March 5, 2020.
To determine whether content violates this order, Twitter will test it with three questions:
- Are the media synthetic or manipulated?
- Are the media shared in a deceptive manner?
- Is the content likely to impact public safety or cause serious harm?
If a tweet checks one of these boxes, it may be labeled as “manipulated media.” If it checks two, it will likely be labeled and may be removed, and if it checks all three, it will likely be removed – with all those “likelys,” it seems there are no absolutes or hard-and-fast rules to which the company is willing to commit. Of course, users will then be directed to learn more about the manipulated content from “reputable sources.”
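For readers who prefer to see the tiered outcomes laid out explicitly, the rule can be sketched as a simple decision function. This is a hypothetical illustration of the published policy as described above, not Twitter’s actual implementation; the function name and return strings are invented for the example.

```python
# Hypothetical sketch of Twitter's three-question test for "manipulated
# media," per the rule taking effect March 5. Not Twitter's real code.

def assess_tweet(synthetic: bool, deceptive: bool, harmful: bool) -> str:
    """Return the likely enforcement outcome for a tweet, based on how
    many of the three policy questions are answered 'yes'."""
    checks = sum([synthetic, deceptive, harmful])
    if checks == 3:
        return "likely removed"
    if checks == 2:
        return "likely labeled, may be removed"
    if checks == 1:
        return "may be labeled as 'manipulated media'"
    return "no action"
```

Note how the soft outcomes (“may,” “likely”) mirror the hedged language of the policy itself: even in schematic form, nothing short of checking all three boxes commits the company to removal.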
With the timing of the rule and much of the media chatter surrounding it highlighting supposed concerns about misinformation ahead of the 2020 election, one prospective candidate slipped a video in just before the rule takes effect. Michael Bloomberg provoked controversy after tweeting a “doctored” video built from footage of the recent Las Vegas Democratic debate: after Bloomberg asked which, if any, of the other candidates had started their own business, inserted footage showed each of his rivals appearing to remain sheepishly silent – which was not the response he actually received. A soundtrack of crickets was added to the background – although witnesses deny the presence of the insects on the debate stage – before a satisfied Bloomberg simply says “OK” to settle the matter.
— Mike Bloomberg (@MikeBloomberg) February 20, 2020
Is this a light-hearted joke or an attempt to spread misleading content to win an election? Perhaps one day a mob of “community reporters” will be able to decide. In the meantime, according to Ben Collins of NBC, Twitter admitted this was indeed the kind of content that will shortly be flagged as false.
No More Internet Anonymity in India
The ability to speak anonymously on the internet produces both an explosion of fresh communication and screeds of horrendous abuse. If the internet were no longer anonymous, would it create a more civilized network or a means to shut down the free exchange of ideas?
New rules set to be published by the Indian government this month will effectively end that anonymity for users of social media sites in that country. A “traceability requirement” will reportedly force social media companies to hand over identifying information on individuals within 72 hours of a government request, with no need for a warrant or judicial order. This may include breaking encryption to trace the origin of data or providing metadata on users.
The rules were proposed in December 2018 but are only now expected to become official, with few changes, according to Saritha Rai, who covers technology in South Asia for Bloomberg. India’s Ministry of Electronics and Information Technology drew up the guidelines to make “intermediaries” – i.e., social media and messaging platforms – trace the origin of “unlawful” content online, and to require tech companies to keep user data for longer periods. The rules would also “require online platforms to become proactive arbiters of ‘unlawful’ content” via “technology-based automated tools or appropriate mechanisms,” reported The Indian Express. “The proposed change shifts the onus and duty of the state to a private party,” the paper stated.
The proposal came after the apparent spread of false information online resulted in violence, including multiple lynchings. WhatsApp declined to reveal the source of the rumors.
Apar Gupta of the Internet Freedom Foundation worried at the time of the proposal that the rules “would be a sledgehammer to free speech online.” He wrote that “legitimate speech could be suppressed by requiring online platforms to become pro-active arbiters and judges of legality” which “would result in widespread takedowns without any legal process or natural justice.”
In January, tech leaders from Mozilla, GitHub, and Cloudflare wrote to Minister of Electronics and Information Technology Ravi Shankar Prasad suggesting the laws would “promote automated censorship” and “substantially increase surveillance,” as well as “tilt the playing field in favor of large players.”
India may seem half a world away, but it is hardly alone in considering such a move. It’s no longer difficult to imagine supposedly free nations moving to control the information you can publish, or even see, online.
That’s all for this week from Tech Tyranny. Check back in next Monday to find out what’s happening in the digital realm and how it impacts you.
Read more from Laura Valkovic.