Algocracy Newsletter #2
"Privacy over, too far is self-driving car, will AI lead us?" - Old Haiku
In this newsletter, we aggregate for you the most interesting recently published online content on Artificial Intelligence, algorithms and their fairness, privacy and other topics that matter for the emerging world of algocracy - a world governed by algorithms.
If this is your first time seeing this, then please read our manifesto and subscribe if you find it interesting.
We are still at a very early stage, working out the form and structure of the content.
Please send us feedback!
Podcast of the week: Meredith Whittaker and Kate Crawford: How AI could change your life
Let’s start today with this accessible Recode/Decode podcast, in which two researchers from the AI Now Institute discuss, in layman’s terms, the challenges that various applications of AI (and algorithmic decision making in general) pose to today’s society.
Comments:
Forget the Singularity, superintelligence and killer robots - algocracy is here today and it brings its own unique challenges: privacy, algorithmic bias, diversity, potential abuse by governments (see China’s Citizen Score), missing regulation, and more.
New word to learn - ethics-washing (e.g. here) - basically an “ethics theatre” in which companies create various ethics boards to demonstrate their commitment to “the good cause” and to wash away negative PR (which often doesn’t work as expected).
(link)
From the Ministry of Public Privacy
“Privacy is over, get over it.” - Scott McNealy
Busted: Thousands Of Amazon Employees Listening To Alexa Conversations
(image from the internet)
Amazon employs thousands of people to listen in on what people around the world are saying to their Alexa digital assistant, according to a report by Bloomberg, which cites seven people who have worked on the program.
Why? Because they are teaching the Alexa AI - manually annotating and interpreting the phrases that Alexa didn’t understand properly, thereby continually improving its user experience.
Comments:
However creepy this may look, it is a natural part of how current AI systems are built and operated, especially in “user-generated content” scenarios, where users often generate low-quality or simply wrong data and someone has to clean that mess up.
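To make the mechanics less abstract, here is a minimal, purely hypothetical sketch of such a human-in-the-loop annotation pipeline. The function names, threshold and data structures are our own illustration, not Amazon’s actual system:

# Purely illustrative sketch of a human-in-the-loop annotation queue.
# Names and thresholds are hypothetical - not Amazon's actual pipeline.

CONFIDENCE_THRESHOLD = 0.6  # below this, the model's guess is considered unreliable

def route_utterance(utterance_audio, transcript, confidence, annotation_queue):
    """Send low-confidence recognitions to human annotators for review."""
    if confidence < CONFIDENCE_THRESHOLD:
        # A human listens, corrects the transcript and labels the intent.
        annotation_queue.append({"audio": utterance_audio, "machine_guess": transcript})

def build_training_batch(annotation_queue, annotate):
    """Turn human-corrected examples into new training data for the next model version."""
    # `annotate` stands in for the human step: it returns the corrected label.
    return [(item["audio"], annotate(item)) for item in annotation_queue]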
(link)
How does it feel to be watched at work all the time?
(Image from bbc.com)
Software, data and AI have the potential to drive efficiency in the enterprise not just through automation, but also through surveillance of employees.
From the article:
“More than half of companies with over $750m (£574m) in annual revenue used "non-traditional" monitoring techniques on staff last year
…
These include tools to analyse e-mails, conversations, computer usage, and employee movements around the office. Some firms are also monitoring heart rates and sleep patterns to see how these affect performance.”
For example, the company humanyze.com employs so-called “Organizational Network Analysis” to study the communication patterns and behaviours of employees:
“These can check how much time you spend talking, your volume and tone of voice, even if you dominate conversations. While this may sound intrusive - not to say creepy - proponents argue that it can also protect employees against bullying and sexual harassment.
Humanyze calls these badges "Fitbit for your career".”
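To give a concrete feel for what one such signal might look like, here is a purely hypothetical toy sketch that computes each participant’s share of talk time in a meeting. It is our own illustration only - not Humanyze’s actual product, data format or methodology:

# Toy sketch of one "organizational network analysis" signal: share of talk time.
# Entirely hypothetical - not Humanyze's actual product or data format.

def talk_time_shares(segments):
    """segments: list of (speaker, seconds_spoken) tuples from one meeting."""
    totals = {}
    for speaker, seconds in segments:
        totals[speaker] = totals.get(speaker, 0.0) + seconds
    meeting_length = sum(totals.values())
    return {speaker: t / meeting_length for speaker, t in totals.items()}

# A share far above 1 / number_of_participants hints that someone dominates the conversation.
print(talk_time_shares([("ana", 300), ("ben", 60), ("eva", 40)]))  # ana = 0.75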
Comments:
“The road to hell is paved with good intentions” - Old saying
Companies have been using various mechanisms to measure the efficiency of their employees since the dawn of modern management.
The most important success factor is transparency towards employees: people should simply know when, how and why they are being watched.
The question is how these various deployments of similar technologies change our sensitivity to surveillance in general.
It all started with security reasons (“cameras are ok, because you have nothing to hide”).
Then came the efficiency play (“Fitbit for your career” and “improving team dynamics”), with security as the ultimate argument in case of any doubts (“protecting against bullying and sexual harassment”).
But efficiency measures are usually only a proxy for compliance with what is “normal” or “standard”.
Standardization is probably fine when companies measure, for example, the efficiency of their call centres, but it becomes highly questionable when they start evaluating the complexity of social ties and interactions.
(link)
From the Ministry of Defense
Outsmarting the artificially intelligent enemy…
Computer virus alters cancer scan images
(Image from bbc.co.uk)
Security researchers have created experimental malware which, in laboratory tests, altered 70 images and managed to fool three radiologists into believing patients had cancer. The altered images also managed to trick automated screening systems.
Comments:
This is a novel type of security threat, targeting the image recognition algorithms that are used in a wide range of real-world applications.
Instead of falsified radiological screenings, think of hackers fooling self-driving cars into believing there are no people in front of them.
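The report concerns malware that directly edits CT scans, but a related, well-studied weakness of image classifiers is the adversarial example: a tiny, carefully crafted perturbation that flips the model’s prediction. Below is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch - an illustration of this general class of attacks, not the specific technique from the article:

# Minimal FGSM adversarial-example sketch (PyTorch). This is the classic
# textbook attack, not the CT-scan malware described in the article.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a slightly perturbed image that tends to fool the classifier."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()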
(link)
From the Ministry of Transportation
From traffic jams to self-optimizing autonomous transportation networks
Ford CEO says the company 'overestimated' self-driving cars
(image from engadget.com)
Last year, self-driving technologies seem to have entered the trough of disillusionment, which basically means they failed to deliver on the originally over-optimistic promises.
A few days ago, Ford CEO Jim Hackett scaled back hopes for the company's self-driving car plans, admitting that the first vehicles will have limits.
However, Elon Musk still claims that Tesla’s self-driving will be feature-complete this year and that drivers will be able to take a nap behind the steering wheel in 2020.
(link)
From the Ministry of Culture
How the AI and algorithms change our norms and behaviors
China's toxic livestreaming culture: the vicarious lives of angry, alienated, uneducated rural gamers
There is a new documentary, People’s Republic of Desire, analyzing the dark side of the Chinese livestreaming industry.
“China has a massive livestreaming industry, centered around the YY platform, which started out as a Twitch-style gamer live-streaming platform and now hosts a huge number of wildly popular vloggers who earn money when viewers toss them virtual tips that they can redeem for cash.”
Comments:
Livestreaming is a worldwide phenomenon that is directly tied to the single most important KPI of current online content platforms (video, gaming and others) - user engagement.
Livestreaming is basically “internet TV”: user-generated content instead of central programming, millions of channels instead of “only” hundreds, and direct feedback loops between creators and their audience - in China even money-based ones, thanks to a ubiquitous digital payments infrastructure allowing peer-to-peer microtransactions. All of this serves one single purpose - growth of the user engagement metrics.
However, there is mounting evidence that optimizing for user engagement incentivizes the production of content targeting people’s lowest instincts. This in turn directly shapes the nature of the virtual communities emerging around that content, which is exactly what the documentary is about.
(link)
From the Ministry of Regulation
Imperium regit: ergo est
EU has released guidelines for trustworthy AI
Last week was BIG in the regulation of AI and algorithms in general. The EU has released its long-awaited guidelines for trustworthy AI, which try to define the rules for AI systems to be:
Lawful - respecting all applicable laws and regulations
Ethical - respecting ethical principles and values
Robust - both from a technical perspective and with regard to the social context
However, it immediately raised concerns among various groups and organizations: Trustworthy AI is not an appropriate framework and It’s absurd to believe you can become world leader in ethical AI before becoming world leader in AI first - the latter being only partially true, as ethics should be wired into the technologies and business models of AI-driven solutions from their very beginning…
Since the topic of EU policy (and of AI regulation in general) deserves much deeper analysis, you can expect a special newsletter issue dedicated to it sometime soon.
In the meantime, definitely check out the open Global inventory of AI Ethics Guidelines, which attempts to collect all major regulations in one place…
The UK is attempting a radical redesign of the internet
(Image from theverge.com)
Although only a white paper so far, this proposal represents one of the most radical attempts to regulate online free speech by a developed democratic country to date.
The goal is to address all the “bad usual suspects” - terrorism, child pornography, sale of illegal goods, etc.
But there is also uncertainty about how other terms like “trolling” or “fake news” will be defined in the potential future law.
(link)
From the Ministry of Algocracy
What the future Algocracy could look like, once we solve all those “little” details above...
Rethink government with AI
A Nature.com article envisioning the benefits of data-driven government.
From the article:
People produce about 2.5 quintillion bytes of data each day. All those data could be used to improve and personalize services provided by government - these services could be made much cheaper and more effective.
AI could harness data about citizens’ behaviour to enable government in three ways:
Personalized public services can be developed and adapted to individual circumstances
...
Enable governments to make forecasts that are more accurate, helping them to plan. Machine-learning algorithms identify patterns in data and then use them to predict future trends or events.
...
Governments could simulate complex systems, from military operations to private sectors of entire countries. This would enable governments to experiment with different policy options and to spot unintended consequences before committing to a measure.
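As a toy illustration of the “identify patterns, then predict” idea mentioned in the article (entirely hypothetical numbers, not the authors’ method), forecasting can be as simple as fitting a model on past values of a series and asking for the next one:

# Toy forecasting sketch: learn a pattern from past values, predict the next one.
# Hypothetical data - illustrative only, not the method proposed in the article.
import numpy as np
from sklearn.linear_model import LinearRegression

demand = np.array([100, 104, 109, 115, 122, 130, 139, 149], dtype=float)  # e.g. monthly service demand

# Build (previous value -> next value) training pairs from the series.
X = demand[:-1].reshape(-1, 1)
y = demand[1:]

model = LinearRegression().fit(X, y)
next_period = model.predict(demand[-1:].reshape(1, 1))
print(f"forecast for next period: {next_period[0]:.1f}")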
Comments:
However appealing the above-mentioned possibilities may sound, the underlying dynamics of data accumulation by bureaucratic systems may lead directly to the accumulation of power (because of winner-takes-all dynamics).
And as the famous saying goes - “Power tends to corrupt, and absolute power corrupts absolutely.”
Although additional safety checks could be put in place to ensure the balance of power in a world of data-driven government, they may be nearly impossible to design and implement using current legislative mechanisms (see #brexit for more details).
We should rethink the way we govern ourselves in today’s networked society - empowering individuals and changing the mechanisms of power delegation to make them more dynamic and more reflective of the great diversity of our modern societies.
(link)
TED Talk: A bold idea to replace politicians
An inspiring TED talk by Cesar A. Hidalgo, author of the very good book Why Information Grows.
In his talk, Mr. Hidalgo goes down the rabbit hole of AI-based liquid democracy, in which each citizen is represented by his or her “AI Avatar” (basically a digital representation of one’s self), which performs constant negotiations (“continuous voting”) with similar Avatars representing other citizens.
This ultimately means no politicians in the loop, as all citizens (through their avatars) participate directly in the decision making, thus contributing to the “collective whole”.
Comments:
This idea is not completely new (e.g. here or here).
We live in a much more complex and dynamic world than the one in which democracy was invented - societal structures back then were far less complex, less diverse and more hierarchical (and thus easier to govern).
The complexity of today’s networked world brings the need to better align the preferences of individual citizens, ideally on a continuous basis rather than once per several-years-long election cycle.
If it is possible to build a global, constantly evolving and personalized social network optimized for user engagement, then it should be possible to build various governance structures optimized for finding the best common denominators amongst dynamically emerging interest groups.
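As a purely hypothetical toy (not Mr. Hidalgo’s actual proposal), such a “common denominator” mechanism could start as simply as continuously averaging the stances that citizens’ avatars hold on each issue:

# Toy sketch of continuous preference aggregation among "avatars".
# Entirely hypothetical - not Hidalgo's proposal, just an illustration.
from collections import defaultdict

# Each avatar holds its citizen's current stance (0..1) on a few issues.
avatars = {
    "alice": {"transport": 0.9, "privacy": 0.8},
    "bob":   {"transport": 0.2, "privacy": 0.9},
    "carol": {"transport": 0.6, "privacy": 0.3},
}

def aggregate(avatars):
    """Return the current 'common denominator' per issue: the mean stance."""
    totals, counts = defaultdict(float), defaultdict(int)
    for stances in avatars.values():
        for issue, value in stances.items():
            totals[issue] += value
            counts[issue] += 1
    return {issue: totals[issue] / counts[issue] for issue in totals}

# Re-run continuously as citizens update their avatars ("continuous voting").
print(aggregate(avatars))  # -> transport = 0.57, privacy = 0.67 (approximately)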
Of course there is a whole hell of devils hidden in the details…
Thanks for scrolling all the way down here - I hope you’ve found at least some parts interesting. You will hear from us again in about a week.
And again - please send us feedback, thanks!