Editor’s Note: As the technological realm becomes more pervasive, whom can we trust? Each week, Liberty Nation brings new insight into the fraudulent use of personal data, breaches of privacy, and attempts to filter our perception.
Whether it’s Iranian emails, social media censorship of damaging allegations, or Google bias, the onslaught of tech-related stories leading up to the 2020 election has highlighted the expanding role of Silicon Valley in the supposed democratic process.
On Oct. 28, Facebook, Google, and Twitter officials are set to testify before the Senate Commerce Committee on the topic of Section 230. The hearing will certainly involve some grilling on the Hunter Biden scandal and how it was handled by social media. That topic will receive even more direct questioning by the Senate Judiciary Committee, which voted to subpoena CEOs Mark Zuckerberg and Jack Dorsey for a Nov. 17 hearing to “review the companies’ handling of the 2020 election” and the New York Post allegations against Biden the younger.
Is the US an “At-Risk” Nation This Election?
Facebook has been in the spotlight leading up to Election Day, introducing a host of new rules to limit the spread of unapproved information about the vote results. The United States has long criticized countries around the world whose elections do not live up to its exacting standards – even as many refuse to acknowledge potential issues of fraud or intimidation on their home soil. And yet, in this most polarizing of election years, Facebook may be lumping the United States in with such “at-risk” countries as Myanmar and Sri Lanka, people familiar with the matter reportedly told The Wall Street Journal. The social network will be using “emergency measures” previously deployed overseas for “slowing the spread of viral content and lowering the bar for suppressing potentially inflammatory posts,” according to the WSJ.
The paper reported:
“Facebook executives have said they would only deploy the tools in dire circumstances, such as election-related violence, but that the company needs to be prepared for all possibilities …
“Deployed together, the tools could alter what tens of millions of Americans see when they log onto the platform, diminishing their exposure to sensationalism, incitements to violence and misinformation, said the people familiar with the measures. But slowing down the spread of popular content could suppress some good-faith political discussion, a prospect that makes some Facebook employees uneasy, some of the people said.”
The caution comes, suggests the WSJ, from accusations that Facebook didn’t do enough to limit the spread of messages that contributed to anti-Muslim violence in Sri Lanka and Myanmar. In response, the company performed various analyses and came up with strategies to limit the likelihood of atrocities.
Is the United States so polarized that it risks violence on or after Election Day? Some have predicted a continuation of riots if Trump declares victory, but what will happen if Biden wins? In either case, Facebook will be there to control communications. “We need to be doing everything that we can to reduce the chances of violence or civil unrest in the wake of this election,” Zuckerberg recently told Axios.
Facebook has already moved to limit the spread of allegations of mail-in voter fraud, as well as curb claims to electoral victory deemed illegitimate by the press. What else does it have in mind, behind the scenes?
The Blurring Line Between Ads and Entertainment
Accusations of political bias at Silicon Valley companies are increasingly common – but that’s not the whole story on social media, where content is king and individuals are the creators of content.
The internet was originally touted as the great democratizing force of our age. Though that promise is being rolled back and reined in, individual influencers do still have some power to, well, influence – voters, in this case. Then again, you never know when individuals are being driven by forces greater than themselves. TikTok recently pulled videos from influencers who had posted political content without disclosing they had been paid to create and publish it. As people seek a megaphone to spread their political opinions, it’s easy to assume that commentaries or funny videos are genuine – and they may be, but have they also been paid for?
A BBC investigation found that Bigtent Creative had paid TikTok influencers to persuade their audiences to register to vote. While some of these messages remained non-partisan, others veered into anti-Trump messaging. TikTok doesn’t allow political ads and stipulates that paid promotions be declared, in line with U.S. Federal Trade Commission guidelines. So, when the BBC alerted the company to the story, TikTok removed the offending videos, although they had reportedly already gained hundreds of thousands of views. Is this legitimate and neutral messaging to promote the democratic process, or is it driven by political interests?
Bigtent Creative started life as a pro-Elizabeth Warren volunteer “meme team” that went pro to “create new ways for GenZ and Millennials to engage in politics, meet them where they are, and influence their peers.” While the new outfit’s website stays away from partisan politics, the fact that it spawned as a pro-Warren promotion group might give one an idea of its affiliation. The team’s “intention was always to support whomever the eventual Democrat nominee was,” Bigtent CEO Ysiad Ferreiras told Gizmodo after Warren dropped out of the Democratic primary race. According to Gizmodo, the group also runs Fellow Americans, “a non-profit aiming to sway Republicans to vote against Trump. (But no, he says, they’re not paid by a particular campaign.)” While the tech website appeared only casually concerned about the line blurring between paid advertising and authentic content – and, in the context of an election, what this could mean for campaign finance laws – the old-fashioned BBC was rather more alarmed. It fretted:
“With two weeks until the US election, the competition for the youth vote is fierce and the drive to get first-time voters to register has intensified. Potentially millions of young TikTok users will not understand that some of the fun, quirky skits they are seeing are being paid for or supported by political interests.”
Ferreiras suggested the content did not constitute advertising, while the BBC complained:
“But the lines between partisan and non-partisan in these videos are often blurred – and the company told the BBC it discourages partners from using #ad because they want the videos to appear authentic.”
Spend too much time on the internet, and you too might start wondering what is real and what is just made to look real.
That’s all for this week from Tech Tyranny. Check back next week to find out what’s happening in the digital realm and how it impacts you.
Read more from Laura Valkovic.