Liberal Algocracy Newsletter #1

Hello World

In this newsletter, we will aggregate for you the most interesting online content published in the domains of artificial intelligence, algorithms and their fairness, privacy, and other topics important for the emerging world of algocracy - a world governed by algorithms.

If this is your first time here, then please read our manifesto and subscribe if you find it interesting.

We are still at a very early stage, working out the form and structure of the content.
Please send us feedback!

Ok. Here we go.


From the Ministry of Education

Camera Above the Classroom

An interesting read about the “Class Care System”, which is now being deployed in several high-school classrooms in China. The system is built on top of facial recognition technology that can identify not just individuals, but also their mood and “level of engagement” - or, better, “level of boredom”.

Imagine continuous, real-time observation resulting in an engagement score for the whole class as well as for individual students. Teachers are then supposed to use these scores to optimise activities in their class.
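The idea can be sketched in a few lines. This is a purely hypothetical illustration - the actual system's scoring is not public, and the student names and 0-to-1 engagement readings below are invented:

```python
# Hypothetical sketch: per-student engagement readings (0.0-1.0) are
# averaged into individual scores and a class-wide score, as the
# article describes. All inputs here are invented for illustration.
from statistics import mean

def engagement_scores(readings):
    """Return (per-student average engagement, class-wide average)."""
    per_student = {name: mean(values) for name, values in readings.items()}
    class_score = mean(per_student.values())
    return per_student, class_score

readings = {
    "student_a": [0.9, 0.8, 0.7],
    "student_b": [0.4, 0.3, 0.5],
}
per_student, class_score = engagement_scores(readings)
print(per_student)   # per-student averages
print(class_score)   # class-wide average
```

A real deployment would of course feed this from a continuous camera stream rather than a fixed list, but the aggregation step would look much the same.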
(link)


From the Ministry of Justice

Podcast: Turner on Rules for Robots

Podcast interview with Jacob Turner, author of the book Robot Rules: Regulating Artificial Intelligence.

Topics covered:

  • Responsibility - who is responsible for the actions of an artificially intelligent agent, distinguishing between civil and criminal responsibility.
    A viable path forward seems to be holding the proximate humans responsible. This includes the people who participated in the AI's creation (programming, data curation, training) and those operating it (for profit or other purposes).

  • Legal personhood - what legal status could AI potentially have? Putting intelligent agents on the same level as humans is just a marketing stunt, one which can nevertheless lead to the erosion of human rights.
    Jacob proposes giving AI a legal personhood similar to the one held by corporate entities, which are also virtual and are held responsible for the various decisions they make.

  • Intellectual property ownership - imagine an autonomous AI that comes up with a novel product design - say, a revolutionary drug compound. Who will legally own the IP? The answer is not straightforward, as the earlier lawsuits over monkey selfie photos have shown.

    (link)


Facebook ad system seems to discriminate by race and gender

The Economist writes about a research paper published on April 3rd which concludes that Facebook's algorithms are biased by the race and gender of their users when presenting those users with ads.
The researchers demonstrated this claim by running several ads promoting either the sale or the rental of various properties.
Ads promoting a property sale were more often displayed to white people, while ads promoting property rental were more often shown to people of color.
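To make the kind of skew the researchers measured concrete, here is a minimal, hypothetical sketch - the group labels and impression counts below are invented for illustration, not taken from the paper:

```python
# Hypothetical sketch: given impression counts per group for one ad,
# compute each group's share of delivery. A share far from the groups'
# share of the audience signals skewed delivery. Counts are invented.

def delivery_share(impressions):
    """Map each group to its fraction of total impressions."""
    total = sum(impressions.values())
    return {group: count / total for group, count in impressions.items()}

sale_ad = {"white": 720, "people_of_color": 280}
shares = delivery_share(sale_ad)
print(shares)  # e.g. a 72/28 split despite identical targeting settings
```

The striking part of the study is that such splits appeared even when the advertiser's own targeting settings were identical - the skew came from the delivery algorithm itself.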

People of color are thus discriminated against, because they don't have equal access to the same market opportunities as white people.

It is a bit similar to a pre-digital newspaper ad that said something like “Apartment for sale, white buyers only”. Except that today, this racist preference is not explicitly written anywhere - it is implicitly contained within the inner workings of the algorithm, as a result of patterns previously learned from data: the past ad clicks of individual users.
(Pay-walled article from The Economist - link)


From the Ministry of Police

The grim reality of life under Gangs Matrix, London’s controversial predictive policing tool

UK Wired ran a story about an algorithmic system used by the UK Metropolitan Police in London's low-income neighborhoods, where teenage gangs (and related negative externalities) are an issue.
The system has two key capabilities that are supposed to help optimize police work:

  1. Predictive mapping - helps to identify areas, where crime is likely to occur.

  2. Individual risk assessment - predicts how likely an individual is to commit a crime.

From the article:
“Based on a number of variables such as previous offences, patrol logs, social media activity and friendship networks, the matrix relies on a mathematical formula to calculate a “risk score” – red, amber, or green – for each person, in reference to the likelihood they will be involved in gang violence. This intelligence in theory guides an efficient use of police resources and aids court prosecutions.”
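As a purely illustrative sketch of this style of scoring - the Matrix's actual formula, inputs, and weights are not public, so everything below (factor names, weights, thresholds) is invented:

```python
# Hypothetical sketch of a weighted "risk score" bucketed into the
# red/amber/green bands the article mentions. The inputs, weights,
# and cut-offs are invented purely for illustration.

def risk_band(previous_offences, flagged_posts, flagged_contacts):
    """Combine invented weighted factors and return a colour band."""
    score = 3 * previous_offences + 1 * flagged_posts + 2 * flagged_contacts
    if score >= 10:
        return "red"
    if score >= 4:
        return "amber"
    return "green"

print(risk_band(previous_offences=0, flagged_posts=1, flagged_contacts=0))  # green
print(risk_band(previous_offences=3, flagged_posts=2, flagged_contacts=1))  # red
```

Even this toy version shows the article's core concern: factors such as social media activity and friendship networks are metadata, not evidence, yet they move the score just like a recorded offence does.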

It has several issues, though, including race and location bias and, more importantly, the concept of “pre-criminality”, where youngsters are identified as potential criminals based on metadata instead of hard, real evidence - which may then work as a self-fulfilling prophecy for them. (link)


From the Ministry of Free Speech

Your Speech, Their Rules: Meet the People Who Guard the Internet

It is not widely known that pretty much every major platform built around user-generated content has its own department responsible for various levels of content curation, both to comply with the basic required legal frameworks (human rights, copyright, etc.) and with the various “community guidelines” layered on top of those frameworks.

Medium’s head of safety has interviewed 15 leaders of such groups working for major platforms - YouTube, Facebook, Pinterest, Twitter, Reddit, and others. These people and their teams are, essentially, the gatekeepers of the internet. Who are they? How do they see their work? (link)


Other Interesting Links

TEDx Talk: The ethical dilemma we face on AI and autonomous tech

Have you ever heard of the Trolley Problem? It poses a very tough ethical dilemma with no obvious or correct solution. Look it up on Wikipedia or in many other available resources, because this is the type of question that needs to be discussed more and more as AI grows and impacts our daily lives.

Christine Fox is an American defense official who served as the Acting United States Deputy Secretary of Defense. Her TEDx talk appeals for a crucial restructuring of the corporate social responsibility agenda with regard to a brand-new type of near-future threat that technology companies are, often unwittingly, generating.


Thanks for scrolling all the way down here - I hope you've found at least some parts interesting. You will hear from us again in about a week.

And again - please send us feedback, thanks!