As the technological realm becomes more pervasive, whom can we trust? Each week, Liberty Nation brings new insight into the fraudulent use of personal data, breaches of privacy, and attempts to filter our perception.
Cell Phone Radiation – Is it Safe to Carry in Your Pocket?
Do you carry your cell phone around with you at all times? Is it in regular contact with your body? Have you ever wondered whether radiation from the device could be affecting your health? On July 2, the Ninth Circuit Court of Appeals upheld a Berkeley city ordinance requiring phone retailers to inform customers of the potential hazards from cell phone radiation. Under the law, phone stores must post a notice that says:
“To assure safety, the Federal Government requires that cell phones meet radiofrequency (RF) exposure guidelines. If you carry or use your phone in a pants or shirt pocket or tucked into a bra when the phone is ON and connected to a wireless network, you may exceed the federal guidelines for exposure to RF radiation. Refer to the instructions in your phone or user manual for information about how to use your phone safely.”
Such information already appears in the small print provided by phone manufacturers, but the “right to know” ordinance, passed in 2015, requires signs to be displayed where the devices are sold. The ordinance was challenged by the CTIA, a lobbying group for the wireless communications industry, which filed a lawsuit to block the law. The organization also asked that the law not be enforced until a final decision is made in the case – a request that has traveled through the system from the district court to the Supreme Court and back via a convoluted appeals history. Berkeley’s power to enforce the ordinance was this time upheld by the Ninth Circuit, and while the CTIA is expected to appeal again, the court panel predicted it would not prevail.
According to a CTIA spokesman, “The overwhelming scientific evidence refutes Berkeley’s ill-informed and misleading mandatory warnings about cell phones, according to the FCC and other experts.”
The court ruled that the “text of the Berkeley notice was literally true,” “factual” and “uncontroversial,” adding that “Berkeley’s compelled disclosure does no more than alert consumers to the safety disclosures that the FCC requires … Far from conflicting with federal law and policy, the Berkeley ordinance complements and reinforces it.”
Despite all this, the lawsuit over the legality of the Berkeley law is still to be fought – but until a new decision is made, phone shops in the California city will be highlighting the radiation risks of their products.
Facial Recognition Lie Detector?
A British start-up has developed an artificial intelligence (AI) program that can capture “emotional leakage” on your face, no matter how hard you try to hide your feelings. The facial recognition software, touted as the best in the world on the company’s website, has the additional feature of detecting involuntary “micro-expressions” that give away what is going on inside your head.
Facesoft is a collaboration between machine learning experts and a plastic surgeon. Dr. Allan Ponniah, the co-founder, first used AI to reconstruct patients’ faces at a hospital in London. “If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” he told The Times.
In a U.S. government contest, Facesoft’s program recently performed better than rivals from the U.S. and Europe, although a team at Pittsburgh’s Carnegie Mellon University is working on similar emotion-detecting technology. Sanya Venneti, an engineer for the project, wrote:
“Micro-expressions – involuntary, fleeting facial movements that reveal true emotions – hold valuable information for scenarios ranging from security interviews and interrogations to media analysis. They occur on various regions of the face, last only a fraction of a second, and are universal across cultures.”
While the media has already labeled the technology a potential “lie detector,” Facesoft has not explicitly marketed it that way, instead recommending it for use in facial recognition, emotion identification, and behavior analysis. The company has approached U.K. authorities about incorporating it into law enforcement, including use by police psychologists. It is also in talks with police in India about using the program to track the mood of crowds.
France Bans Research on Judges Due to AI Findings
As has been discussed multiple times in this column, France is one of the world’s forerunners in technology-related censorship. But a story recently came out that shocked even this author: France has made it a crime to research its judges – with a possible five years of jail time as punishment. The move came in response to artificial intelligence technology that analyzes and tracks patterns in the rulings of individual judges. Lawyers have been able to use this information to predict outcomes for their clients and to steer cases toward judges whose records favor the desired result. As reported by Artificial Lawyer, a blog that looks at technological changes in the legal profession:
“Unlike in the US and the UK, where judges appear to have accepted the fait accompli of legal AI companies analysing their decisions in extreme detail and then creating models as to how they may behave in the future, French judges have decided to stamp it out.”
According to Article 33 of the Justice Reform Act, “The identity data of magistrates and members of the judiciary cannot be reused with the purpose or effect of evaluating, analysing, comparing or predicting their actual or alleged professional practices.”
But what prompted such a decision? According to professors Mikael Rask Madsen and Malcolm Langford, it all started back in 2016. That’s when a lawyer and artificial intelligence expert, Michaël Benesty, published an analysis of individual judges’ decisions on asylum cases. The results showed that some judges approved almost all cases, while others rejected nearly every one – indicating that individual biases played a significant role in their rulings. Benesty also established a non-profit website so the public could view his continued research. Rather than welcome the exposure of troubles in the court system, the French government chose to simply ban all analysis of judges’ behavior.
Madsen and Langford, who examine the legality of the law, conclude that “[I]ndividual judges must be protected from harassment … However, the French prohibition seems primarily motivated by a concern with the exposure of variation and bias in the system rather than protecting judges from needless attack. In our view, it represents a clear violation of the right to freedom of expression.”
“This new law is a complete shame for our democracy,” says Louis Larret-Chahine, co-founder of French legal analytics company Prédictice. He says the ban will not affect his business, however, and that his company will likely switch to analyzing courts rather than individual judges.
How will artificial intelligence change the legal system? And a broader question: Who else will be using these tools to analyze and predict targets’ behavior? As this law shows, government folk don’t like it when the lens is turned on them – but that doesn’t mean they aren’t willing to shine it on the rest of us.
That’s all for this week from You’re Not Alone. Check back in next Monday to find out what’s happening in the digital realm and how it impacts you.
At Liberty Nation, we love to hear from our readers. Comment and join the conversation!