As the technological realm becomes more pervasive, who can we trust? Each week, Liberty Nation brings new insight into the fraudulent use of personal data, breaches of privacy, and attempts to filter our perception.
Facebook Follies – Passwords not Protected
One might have imagined that following our Facebook special last week, news on the company’s privacy problems would be exhausted – but less than a week later, yet another story has come out casting aspersions on its data security processes.
Investigative reporter Brian Krebs – of the blog KrebsOnSecurity – spoke with an anonymous senior Facebook employee who revealed that millions of user passwords had been stored insecurely for years. Passwords should be cryptographically hashed so that not even a tech company's own employees can read them, but an ongoing internal inquiry has reportedly found so far that between 200 million and 600 million user passwords were stored unhashed, in plain text readable by more than 20,000 Facebook employees.
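For readers curious what "properly stored" means in practice: instead of keeping the password itself, a service keeps only a one-way hash of it, derived with a random salt and a deliberately slow function. The sketch below is illustrative only – it is not Facebook's implementation – and uses PBKDF2 from Python's standard library; real systems may use bcrypt, scrypt, or Argon2 instead.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted, one-way hash of the password.

    Only the (salt, digest) pair is stored; the plain-text
    password never needs to be written to disk or to logs.
    """
    if salt is None:
        salt = os.urandom(16)  # fresh random salt for each password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Re-derive the hash from the login attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_digest)
```

Because the hash is one-way, even an employee with full database access sees only random-looking bytes – which is precisely the safeguard that plain-text logging defeats.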
Facebook subsequently admitted to the violation, which was discovered during a security scan in January, but claimed that no known abuse occurred as a result of the mistake. Facebook software engineer Scott Renfro told Krebs:
“We’ve not found any cases so far in our investigations where someone was looking intentionally for passwords, nor have we found signs of misuse of this data. In this situation what we’ve found is these passwords were inadvertently logged but that there was no actual risk that’s come from this. We want to make sure we’re reserving those steps and only force a password change in cases where there’s definitely been signs of abuse.”
Pedro Canahuati, VP of engineering, security and privacy, reiterated in a statement that, “these passwords were never visible to anyone outside of Facebook and we have found no evidence to date that anyone internally abused or improperly accessed them.”
According to Krebs’ Facebook insider, evidence shows that 2,000 engineers and developers accessed information that included the passwords nine million times.
Google News Initiative Celebrates First Birthday
Social media and tech companies are moving into the news business, with eliminating “fake news” as their mission. Liberty Nation has previously looked at Facebook’s attempts to infiltrate local media outlets, as well as the company’s relationships with fact-checkers, but Google is just as involved in steering the path of the media, via its Google News Initiative. The program celebrated its one-year anniversary this March by announcing a few of the new projects it plans to launch in its second year. Among them are the planned Fact Check Markup Tool and Fact Explorer, which will supplement existing programs that claim to fight misinformation on the web.
Google will also launch a GNI Digital Subs Lab, which will help 14 publishers “transform their approach to digital subscriptions” through artificial intelligence. Of course, one might be concerned that Google will have so much influence over the way news organisations earn their income, but this is not really new – the News Initiative launched by investing millions of dollars in “Subscribe with Google,” which streamlined subscription processes in partnership with legacy media outlets.
According to Google’s VP of News, Richard Gingras, the initiative has partnered with numerous fact-checking organizations and news publishers, while its NewsLab “has trained nearly 300,000 journalists in person and online around the world on digital tools for journalism, with a goal to reach 500,000 journalists by 2020.” It has also been training teenagers on “the difference between fact and fiction online” and working on a tool “which helps journalists debunk and share information across the world—they’ve already trained hundreds of journalists ahead of the EU elections.”
In January, the News Initiative announced it would partner with WordPress to invest in low-cost publishing for local newsrooms, while Google-owned YouTube committed to suppressing content it deems to promote “conspiracy theories.”
It would appear that Google is aiming to gain further control over what information can be accessed via social media, local news outlets and major publishers – what a neat little package it is wrapping for itself.
Facial Recognition Failure – Social Media Photos Used Without Permission
Have you posted pictures of yourself and your loved ones online? Photos of fun-filled times are commonly posted on social media, but could tech companies be using those images for their own purposes, without bothering to tell you? Facial recognition researchers have done just that, according to an investigation by NBC. Artificial intelligence programs designed to recognise and identify faces “learn” the skill by being fed as many different facial images as possible – a process which used to involve hiring and paying willing subjects, but which is now conducted by dredging the internet for photos, often used without the knowledge or consent of photographers or subjects. “This is the dirty little secret of AI training sets. Researchers often just grab whatever images are available in the wild,” NYU School of Law professor Jason Schultz told NBC.
In January, IBM released its dataset Diversity in Faces, which aimed “to advance the study of fairness and accuracy in facial recognition technology.” The research used one million images from photo-posting social media service Flickr, provided by Yahoo, which owned the site until 2018. NBC reports that it obtained the IBM dataset from an anonymous source and contacted some of the photographers to see if they knew their content had been used in this way. Greg Peverill-Conti, a photographer with 700 images in the set, said: “None of the people I photographed had any idea their images were being used in this way … It seems a little sketchy that IBM can use these pictures without saying anything to anybody.” Sebastian Gambolati added that it would have been “nice if they asked.”
Flickr users seemed split on whether they considered the project a good or bad use of their images. Austrian photographer Georg Holzer said “I know about the harm such a technology can cause … Of course, you can never forget about the good uses of image recognition such as finding family pictures faster, but it can also be used to restrict fundamental rights and privacy. I can never approve or accept the widespread use of such a technology.”
On the other hand, Guillaume Boppe from Switzerland said, “If the pictures of faces I shot are helping AI to improve, reducing false detection and ultimately improving global safety, I’m fine with it.”
IBM has offered to remove images from the set if photographers provide links to the individual photos in question, but since the company has not published which files were used, it would take considerable effort on the part of Flickr users to achieve this, particularly if they have hundreds or thousands of files in the set.
The most obvious course of action in light of this week’s stories is for Facebook users to change their passwords post-haste; the new ones should now be stored properly hashed rather than logged in plain text, but one wonders what blunder the company will uncover next.
Those who have used Flickr in the past may want to ask whether their photos have been used in IBM’s study, although anybody posting photographs on the internet should be aware that they may be appropriated for use in facial recognition software development – something to keep in mind next time you upload your holiday snaps.
That’s all for this week’s edition of You’re Never Alone. Check back in next Monday to find out what’s happening in the digital realm and how it impacts you.