Algocracy Special: Fakenews Algorithms and Czech Senate

Explaining automated cyber-propaganda (not just) to politicians

By Josef Holy

About two months ago I had an opportunity to participate in a discussion panel about Fakenews in the Czech Senate together with Frantisek Vrabel, CEO of semantic-visions.com.

The main topic of the whole event was the technologies and algorithms powering today’s online platforms and their role in fakenews generation and distribution.

Below is a loose rewrite of my presentation, in which I tried to explain in layman’s terms how algorithms powering those online platforms work.

Fakenews as cyber-propaganda

Fakenews is not someone simply lying on the internet. It is sophisticated, semi-automated, data-driven propaganda which leverages Information and Communication Technologies (ICTs).

That’s why we talk about “cyber-propaganda”.

Cyber-propaganda is the manipulation of public opinion in a certain direction by means of Information and Communication Technologies.

This report nicely describes how the cyber-propaganda works. 


There is always an Operator, pursuing a certain goal. 

Think, for example, of the Russian secret services aiming to polarize public discourse in the Czech Republic around the topic of immigration.

To achieve that goal, the Operator designs a specific campaign, which starts with a team of seed users, who develop and test the content - text or video - and push that content into online networks.

Think, for example, of a made-up article describing how the Czech Pirate Party wants to provide free apartments to all immigrants from Africa, posted on several dozen websites, with links shared in appropriate Facebook groups.

The ultimate goal is a snowball effect - getting the content to spread virally, with regular social media users spreading it themselves through likes, comments and shares.

The Scale of Algorithmic power

Cyber-propaganda leverages various online content and media platforms for creation and distribution of fakenews. 

We will look at the following four, which currently have the biggest impact - email, Twitter, Youtube and Facebook.

We can sort these four online platforms based on their Algorithmic power, which simply means how much the machines are in charge of content distribution.

Chain Emails: The private social network

Emails are used to spread misinformation mostly among seniors, who happen to be less technically savvy. 
They also demonstrate lower media literacy, largely because they lived big parts of their lives under a totalitarian communist regime, which kept a tight grip on the flow of information in society.

Email represents a decentralized and private social network, where the connections between people are not stored in a database of one service provider (e.g. Facebook) but exist in address books of individual recipients.

Misinformation campaigns then start by sending emails to a set of “Users Zero”, who then forward them to other contacts in their address books. Recipients do forward the content, because they trust the sender they received the email from.
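The forwarding mechanics described above can be sketched as a toy simulation. The address books, user names and numbers below are invented for illustration; real campaigns are obviously messier (people forward selectively, with delays, and so on):

```python
# Toy model of a chain-email campaign spreading through private address books.
# Every reached user forwards the email to everyone in their own address book.

def simulate_chain(address_books, seeds, generations):
    """Return the set of users reached after the given number of hops."""
    reached = set(seeds)
    frontier = set(seeds)
    for _ in range(generations):
        next_frontier = set()
        for user in frontier:
            for contact in address_books.get(user, []):
                if contact not in reached:
                    reached.add(contact)
                    next_frontier.add(contact)
        frontier = next_frontier
    return reached

# A tiny decentralized network: the connections exist only in the
# individual address books, not in any central database.
books = {
    "user_zero_1": ["alice", "bob"],
    "alice": ["carol", "dave"],
    "bob": ["dave", "erin"],
    "carol": ["frank"],
}

print(len(simulate_chain(books, ["user_zero_1"], 1)))  # 3 users after one hop
print(len(simulate_chain(books, ["user_zero_1"], 3)))  # 7 users after three hops
```

Even in this tiny example the reach more than doubles in two extra hops; at the scale of real address books, a handful of Users Zero can reach a large audience.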


Of course, simple forwarding would make it fairly easy to track down the original content source. That’s why Operators and their teams literally educate their target audience by sending them step-by-step instructions on how to send blind (invisible) copies (article in Czech).

This effectively makes tracking of the original source very difficult or nearly impossible.

For chain email campaigns, Operators leverage most of the traditional email marketing tools and practices, from acquiring user information (email addresses, demographics, geolocation, etc.) and tracking link click-throughs to evaluating the “performance” of individual Users Zero and the size of their impact and influence.

Twitter: Shout loud to be heard


While the email chains described above are mostly manual (in the sense that people themselves spread the (mis)information through the email network), Twitter is a centralized online social network which allows semi- or fully automated information distribution.

This automation is usually done through so-called Twitter bots and bot networks.

A Twitter bot is an artificial (= non-human) Twitter account which semi-automatically tweets, retweets, likes, follows and mentions other users and their tweets.

Individual Twitter bots are connected into large networks of interconnected bots (botnets), which can consist of tens or even hundreds of thousands of individual bots - all of them following, liking and retweeting each other in order to amplify certain tweets or hashtags.


Just look at the scale - a visualization of a botnet with tens of thousands of bots. Black nodes are bots, black lines show how they follow each other, and green lines show how they like and retweet each other’s tweets to amplify them. Source: duo.com

Botnets are centrally controlled and can be hired for a specific purpose on a black market.

The main purpose of botnets is the amplification of tweets and hashtags: topics and tweets with the most likes, retweets and responses rank higher in Twitter search results and can even make it into various trending lists, which further increases their overall reach and impact.
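The amplification effect can be sketched in a few lines. The ranking formula and all the numbers below are invented stand-ins, not Twitter’s actual algorithm - the point is only that any ranking driven by raw engagement counts can be gamed by fake accounts:

```python
# Toy illustration of engagement-ranked feeds being gamed by a botnet.

def rank(tweets):
    """Order tweets by total engagement (likes + retweets), highest first."""
    return sorted(tweets, key=lambda t: t["likes"] + t["retweets"], reverse=True)

tweets = [
    {"id": "organic", "likes": 120, "retweets": 40},
    {"id": "propaganda", "likes": 5, "retweets": 2},
]

print(rank(tweets)[0]["id"])  # organic content wins on real engagement

# A hired botnet of 1,000 accounts likes and retweets the propaganda tweet...
bots = 1000
tweets[1]["likes"] += bots
tweets[1]["retweets"] += bots

print(rank(tweets)[0]["id"])  # ...and pushes it to the top of the ranking
```

Once a tweet reaches the top of search results or a trending list, real users start engaging with it too, and the amplification becomes self-sustaining.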

One recent example of this automated amplification is the case of the journalist Jamal Khashoggi, who was murdered in October 2018 by Saudi agents. 
When reports about the event started to gain traction in the public space, various security researchers observed the sudden emergence of trending topics tied to pro-Saudi propaganda. 

The goal was simple - to amplify the pro-Saudi version of events at the expense of the truth. 

It is fair to say that Twitter has made progress in fighting botnets in recent years, but it is a difficult fight.

Further reading:

Youtube: Through Autoplay to radicalization.

While Twitter’s algorithms are mostly concerned with search and the identification of trending topics (hashtags), Youtube is a personalized online service. It tries to capture a bigger piece of the attention pie by suggesting to users which videos to watch next.

The thing is, though, that Youtube’s algorithms tend to suggest progressively more extreme (radicalized) content, because they have literally learned from viewers’ behavior that people are likely to spend more time watching more extreme content.

For example, when you start watching a video about space exploration, it is very likely that sooner or later you will be recommended a video questioning the Moon landing. Hillary Clinton videos lead to conspiracy theories, and so on.
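This feedback loop can be illustrated with a toy recommender. The catalog, the video titles, the “extremeness” levels and the watch-time numbers below are all invented for illustration - this is not Youtube’s actual model, just the greedy “maximize watch time” principle in miniature:

```python
# Toy recommender illustrating the drift toward extreme content.

catalog = {
    # title: (extremeness level 0-3, average minutes watched)
    "space exploration documentary": (0, 8.0),
    "unexplained space anomalies": (1, 9.5),
    "what NASA is hiding from you": (2, 11.0),
    "the Moon landing was faked": (3, 13.5),
}

def recommend_next(current):
    """Greedy stand-in for 'optimize for watch time': among videos at most
    one step more extreme than the current one, pick the longest-watched."""
    level, _ = catalog[current]
    candidates = {title: minutes for title, (lvl, minutes) in catalog.items()
                  if title != current and lvl <= level + 1}
    return max(candidates, key=candidates.get)

# Follow the autoplay chain from an innocuous starting point.
video = "space exploration documentary"
path = [video]
for _ in range(3):
    video = recommend_next(video)
    path.append(video)

print(" -> ".join(path))  # each hop drifts one step more extreme
```

Because slightly more extreme content holds attention slightly longer at every step, the greedy chain ends at the most extreme video in the catalog without anyone ever asking for it.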


An infamous recent example: the Youtube algorithm recommending a Britannica article about the 9/11 attacks under the live video feed of the Notre Dame Cathedral fire.

When you combine this algorithmic principle with state-of-the-art SEO (Search Engine Optimization) techniques and tools, it is quite simple to polarize and further radicalize individual population groups by serving them the appropriate content. This is what the Russians (and others) have been doing, and they are really good at it.

Further reading:

  • A recent NYT article describing the story of a young, deprived man turned into a right-wing radical by Youtube’s algorithms.

Facebook: Target Bubbles

The largest and most advanced platform running content distribution algorithms is Facebook. I wrote about Facebook at length in a past issue of Algocracy Special.

My main message for the Czech senators was that Facebook’s algorithms naturally enclose people in content bubbles, which get further radicalized by being served appropriately hypertargeted content.

It is OK to hypertarget when you run a marketing platform, but it becomes highly problematic when you mediate access to news and information for billions of people every day.

Further reading:

Coming Soon: Deep Fakes - The Perfect Storm of Cyberpropaganda

Algorithms described above drive distribution of content which is still created by humans. 

However, we are approaching an era in which we will see more and more content generated by machines. It is already starting with textual content (using Natural Language Generation), and soon enough it will evolve through other content modalities - images, audio - all the way to full videos indistinguishable from reality.

When this happens, the complete chain of content creation and distribution will be automated. Algorithms will dynamically create content personalized for various segments, measure its impact and constantly fine-tune its quality.

I call this the Perfect Storm of Cyberpropaganda, because the quality of fake content and precision of its targeting may cause serious disruptions in the society.

Imagine fake videos of corporate executives generated in conjunction with automated trading bots to game the stock market. Or videos of politicians making serious fake statements about critical topics and policies.

Or Mark Zuckerberg becoming a champion of transparency:

‘Imagine this...’ (2019) Mark Zuckerberg reveals the truth about Facebook and who really owns the future... see more @sheffdocfest VDR technology by @cannyai #spectreknows #privacy #democracy #surveillancecapitalism #dataism #deepfake #deepfakes #contemporaryartwork #digitalart #generativeart #newmediaart #codeart #markzuckerberg #artivism #contemporaryart
June 7, 2019

The amount and quality of such highly effective content will be so great that it will be impossible to fight with anything like manual curation. There will have to be other AIs automatically detecting such fake content, but for now they seem to be playing a catch-up game…

Further reading:

Final Remarks

After our presentations there followed a roughly 40-minute panel discussion. The Czech senators present were clearly surprised by the scale and level of automation of current cyber-propaganda.

There was also a shared opinion that there is little they (as MPs of a country with a population the size of a user segment on which Facebook tests its new features) can do to hold the platforms responsible for their impact on society.

We will see how the relationship between governments and the Big Tech companies develops, but the tension is clearly growing.

Further reading:

Algocracy Newsletter: Facebook Libra

All you need to know in about 5 minutes

On June 18th, Facebook announced its own currency called Libra. Below is a quick summary of what seems to be obvious at this point.

Let’s start with a few facts, followed by a few thoughts on what Libra means from the short, mid and long-term perspectives.

Libra: Crypto, Stable and Not Anonymous

  • Libra is a cryptocurrency. Think of money as software code, similar to Bitcoin.

  • It is an open standard controlled by the Libra Association, incorporated in Geneva.

  • The Libra Association currently includes around 30 mostly global brands.

  • Facebook is not controlling the association. They have “just” founded it, “just” contributed the initial technology and are “just” one of many association members ;-)

  • It is a pseudonymous cryptocurrency, similar to Bitcoin. That basically means it is not anonymous at all.
    With Facebook already owning large portions of the social data of billions of people worldwide, the idea of them also having access to those people’s realtime transactional data actually sounds quite scary…

  • Libra is a so-called stablecoin. This means that it should stay pretty stable, unlike e.g. Bitcoin. As people cash in and out of Libra from and to their local currencies, the corresponding amount of Libra is created or destroyed. That’s how it will be financially tied to the rest of the world, and that’s also why Mastercard, VISA and PayPal are founding members of the Libra Association.

  • Libra will be launched in 2020. Its test network for software developers to play with should be launching any day now.
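The stablecoin mint/burn mechanic from the bullets above can be sketched as follows. This is simplified to a 1:1 fiat peg with a single currency; the actual Libra design pegged the coin to a basket of currencies and assets held in reserve:

```python
# Sketch of a reserve-backed stablecoin: coins are minted when users cash in
# fiat and burned when they cash out, so supply always matches the reserve.

class StablecoinReserve:
    def __init__(self):
        self.fiat_reserve = 0.0   # fiat held by the issuing association
        self.coin_supply = 0.0    # coins in circulation

    def cash_in(self, fiat_amount):
        """User deposits fiat; an equal number of coins is created (minted)."""
        self.fiat_reserve += fiat_amount
        self.coin_supply += fiat_amount
        return fiat_amount  # coins issued to the user

    def cash_out(self, coin_amount):
        """User redeems coins; they are destroyed (burned) and fiat paid out."""
        assert coin_amount <= self.coin_supply
        self.coin_supply -= coin_amount
        self.fiat_reserve -= coin_amount
        return coin_amount  # fiat paid out to the user

reserve = StablecoinReserve()
reserve.cash_in(1000.0)
reserve.cash_out(250.0)
print(reserve.coin_supply, reserve.fiat_reserve)  # 750.0 750.0 - fully backed
```

Because every coin in circulation corresponds to fiat sitting in the reserve, the coin’s value stays pinned to the underlying currency - which is exactly what makes it “stable”, unlike free-floating coins such as Bitcoin.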

Short term: A money transfer network

The primary value proposition of Libra for ordinary people is that it makes transactions cheaper and faster. This is clearly targeted at citizens of developing countries who work abroad and need to send money to their families back home.

Traditional money transfer companies currently charge them 7% transaction fees on average, which is unsustainable. Libra is going to disrupt them with near-zero transaction fees and instant money transfers.

This seems to be the first adoption use case for Libra, which can also be judged from the style of their marketing video:

Short term: A Payment system

After the first adoption wave as a money transfer vehicle, people will start to use Libra to pay for goods and services.

Imagine that Uber, for example, offers you two ways to pay for a ride - one with your registered card, the other with Libra, which is cheaper (because of inherently lower transaction costs). Which one would you choose?

All the Big Tech companies (Apple, Amazon, Google) have implemented their own payment systems in order to decrease the transaction costs on their content and services that would otherwise go to traditional payment processors.

With Libra, Facebook is now doing the same. The main difference is that they aim to do it on a planetary scale (because their social network is planetary) and through a decentralized cryptocurrency, which is more suitable for the networked world than traditional centrally-governed solutions.
Plus, nobody would trust Facebook if they did it on their own (because of the planetary problems they have been causing).

Mid-term: A Foundation for private messaging

For Facebook, payments will be a fundamental layer in their recently announced pivot to WeChat-like private messaging.

In that future, the main source of revenue for Facebook may not be advertising (as it is now) but transactions made by its users - Libra will effectively become the payment processor for Facebook’s decentralized private messaging network, through which users will transact with each other as well as with a wide variety of service providers and other merchants.

And Facebook will be one of the (many) parties which will keep the Libra blockchain running, collecting the transaction fees and last but not least offering their own set of services on top of that.

Mid-term: A Fintech crypto-platform

Facebook has built Libra from open cryptocurrency standards and technologies. Cryptocurrencies allow transactions among untrusted parties.
This in effect allows individual members of the Libra consortium to safely participate and invest in the currency even though they may not trust each other, for example because they are competitors (think Visa vs. Mastercard).

Another important technological feature taken from the world of cryptocurrencies is that Libra has its own programming language (called Move), which makes it similar to programmable cryptocurrencies like Ethereum.
This has the potential to spark a new ecosystem of financial applications and services, and Libra may in turn become the largest fintech platform on the planet.

To achieve that vision, Facebook has designed Libra to process around 1,000 transactions per second - orders of magnitude more than Bitcoin (roughly 7) or Ethereum (roughly 15). This is a prime example of a large corporation industrializing open-source standards and technologies while not giving a damn about their original (cryptoanarchist) ethos.

This also clearly demonstrates the main problem of current open cryptocurrencies like Bitcoin and Ethereum - the size of their core development teams is simply too small.

Long-term: A new global currency?

This has been one of the first themes a lot of people immediately started talking about.
Yes, Libra has a potential to be global, because Facebook is a global social network and other consortium members backing it are some of the most influential global brands.

However, all traditional (fiat) currencies have always fulfilled the following three use cases:

  1. Medium of exchange - the ability to pay. The more universally, the better.

  2. Store of value - the ability to save value for later spending. The more stable, the better.

  3. Unit of account - the ability to use the currency in the books, for tax purposes, etc.

For example, Bitcoin is a really bad medium of exchange, because it doesn’t scale well in terms of the number of transactions due to its technical limitations. It may, however, be a good long-term store of value (“cryptogold”), given the game theory behind its deflationary nature.

On the other hand Libra is obviously optimized to be a medium of exchange for people around the world. At least for now.

In order for Libra to become a store of value, the big financial institutions would probably have to jump on board and back it up. Will they do it? We will see. Trust is a big factor here, and Facebook has lost a lot of it in the last few years.

And as for the unit of account, this would require governments around the world to accept Libra as a legitimate currency. That is not likely to happen any time soon.

The reasons above are, in my opinion, why Libra will serve as a medium of exchange - a payment platform - in the first few years. It will accumulate value, build momentum, and then we will see.

However, we can safely say now that, thanks to Facebook, cryptocurrencies are finally going mainstream - 11 years after the publication of the original Bitcoin whitepaper.

Algocracy Special: It’s time to unbundle Facebook

Algorithmic Corporations and their Value Chains

Welcome to this special issue of Algocracy News, in which you’ll find an analysis of Facebook as an Algorithmic Corporation plus a few thoughts on how it could be potentially broken up - or actually - unbundled.


By Josef Holy

More than a week ago, the NYT ran a piece by one of Facebook’s co-founders (who left the company more than a decade ago) in which he argues that it is now time to break up Facebook.

Why? Because it’s simply too BIG. Mark Zuckerberg is in control of information distribution for 1.5 billion people daily. That is a power nobody in the history of mankind has ever had.



However, I think that Facebook should be carefully unbundled, instead of bluntly broken up.

To unbundle Facebook means to decompose it from the perspective of its value chain, rather than from the perspective of its app portfolio - Facebook, Messenger, WhatsApp, Instagram - which is what the NYT article proposes for breaking it up.
The value chain perspective focuses on where and how the value is created.

Algorithmic Corporations

Facebook’s source of value is its algorithms, which collect, store and analyze user data and generate personalized content for those users. Facebook is an example of an “Algorithmic Corporation”, which can be defined as the commercial parallel to the “Algorithmic Society”, a term first used in the great three-year-old analysis of Facebook’s operations by labs.rs.


Algorithmic Corporations use algorithms to extract patterns from data. Due to the ongoing digital transformation, many traditional businesses will eventually become Algorithmic Corporations to a greater or lesser extent.
Algorithmic Corporations are built around the Algorithmic Value Chain, which we will look into next.

Step #1: Data Collection

The first part of the Algorithmic Value Chain is Data Collection, in which the data - signals - are collected through various mechanisms. Facebook collects user data, which include profile information, content created and activities performed on that content.

Facebook does not collect user data only from its own website(s) and mobile app(s), though. It is also able to collect various types of user data outside its own venues, either through widely deployed online tracking mechanisms or through partnerships with various data broker companies.

For Facebook, users are the raw material from which patterns and predictions are mined. That is not much different from natural resources like iron ore, which traditional industrial companies have mined and processed for centuries.

Contrary to the popular saying “if the digital service is free, then you are the product”, the product of the Facebook Algorithmic Corporation is not its users.
Its products are patterns and predictions, which are subsequently used for personalized targeting of content and (more importantly) ads.

Step #2: Data Storage and Algorithmic Processing

The second part of the value chain is Data Storage and Algorithmic Processing. This is where an Algorithmic Corporation stores the data in structures best suited to its business model. Facebook collects and stores data in a metastructure called “the Social Graph”.

The Social Graph is essentially a network of nodes and their connections. The nodes are users and content (photos, videos, articles, etc.), and the connections express their relationships, built from their mutual activities. The Social Graph is the primary structure from which patterns, personalizations and predictions are extracted.
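A minimal sketch of such a graph, with one toy “pattern extraction” on top - recommending content that a user’s friends liked. The node names, edge types and recommendation rule are purely illustrative, not Facebook’s actual schema:

```python
# Toy social graph: typed nodes (users, content) joined by typed edges
# (friendship, likes), plus a simple recommendation derived from the graph.

from collections import defaultdict

class SocialGraph:
    def __init__(self):
        self.edges = defaultdict(set)  # node -> set of (edge_type, node)

    def add_edge(self, a, edge_type, b):
        self.edges[a].add((edge_type, b))

    def neighbors(self, node, edge_type):
        """All nodes connected to `node` via edges of the given type."""
        return {b for t, b in self.edges[node] if t == edge_type}

    def recommend(self, user):
        """Content liked by the user's friends but not yet by the user."""
        seen = self.neighbors(user, "likes")
        suggestions = set()
        for friend in self.neighbors(user, "friend"):
            suggestions |= self.neighbors(friend, "likes")
        return suggestions - seen

g = SocialGraph()
g.add_edge("ann", "friend", "ben")
g.add_edge("ann", "likes", "photo_1")
g.add_edge("ben", "likes", "photo_1")
g.add_edge("ben", "likes", "article_2")

print(g.recommend("ann"))  # {'article_2'}
```

Even this tiny example shows the principle: the value is not in any single node but in traversals over the connections, which is why the graph as a whole is the asset.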

Step #3: The App

The third part of the Algorithmic Value Chain is “The App”, through which the value generated by the algorithms is delivered. In the case of Facebook, its main app is the News Feed, the delivery mechanism for personalized content and ads.

In this model, WhatsApp and Messenger are not apps through which the Facebook Algorithmic Corporation realizes its main value. They are actually data collection vehicles, contributing additional “messaging metadata” to the main Social Graph.

Unbundling Facebook

The Algorithmic Value Chain - Data Collection, Storage & Processing and “The App” - represents a generalized, high-level description of a value chain in which algorithms turn data into value. With this model in mind, how could Facebook be unbundled?


In my opinion, the Data Collection and Storage parts should be declared a public utility of some form. Personal data represents the most valuable asset of the Algorithmic Society, as it can be used for good or abused for control and oppression.

The Algorithmic Processing and “The App” parts should then ideally be unbundled into an open market, in which various entities (private as well as public) would compete in delivering different types of value derived from the underlying data.

The thing is that the algorithmic processing done by Facebook represents only one way of deriving value from all the data it collects (and boy, has it been data hungry).

The Main Problem: Narrow KPI

User engagement is the ultimate goal that everything Facebook does is optimized for.

User engagement is Facebook’s ultimate KPI.

Its narrowness is the fundamental cause of negative externalities like fakenews and weaponized propaganda, the fragmentation of public discourse into echo chambers, and others, because systems driven by narrow KPIs are fragile and easily corruptible.

From this point of view, Facebook is probably the worst-run totalitarian regime in the world, because even Kim Jong-un is not running North Korea against a single number.


We need new types of apps and services to emerge that optimize for a wider variety of KPIs beyond user engagement.
Just try to imagine all the services that could really help people be better connected and live better-quality lives, from the level of individual families to society as a whole.

Regardless of HOW the unbundling may eventually be done, the models of the Algorithmic Corporation and the Algorithmic Value Chain are often missing from the current discussions about the regulation of Facebook and other Algorithmic Corporations.


You have just read the very first special issue of Algocracy News. How did you like it? Feel free to send us feedback!
The next issue will be again full of links to interesting articles. Stay tuned!

Algocracy Newsletter #4

Instagram is for chimps, let’s disrupt democracy...

In this newsletter, we aggregate for you the most interesting online content recently published in the domains of Artificial Intelligence, algorithms and their fairness, privacy and other topics important for the emerging world of algocracy - a world governed by algorithms.

If this is your first time seeing this, then please, Read our manifesto and subscribe, if you find it interesting.


Video: Instagram is for chimps


The above video clearly demonstrates the level of interaction between Instagram’s algorithm and its users: it is purely visual and takes into account only a very limited user context.

Simply put - with the right content, Instagram works even for a chimpanzee.

What type of systems can be built from such low-level user signals? Only addictive ones that target users’ dopamine centers and optimize for engagement - which is exactly what Instagram (and Facebook and others) are doing.

(Link)


From the Ministry of Algocracy

To save democracy, we must disrupt it

An interesting piece by Carl Miller (whom I highly recommend following) about how Taiwan’s Sunflower revolution essentially started the integration of online platforms for crowdsourced decision-making into the country’s democratic procedures.

It describes the emergence of g0v, an open source/hacking group which aims to fork the government (in software development, “to fork” means to spin off a new, separate version of the original software, independent of the original).

This group eventually led to the official initiative called vTaiwan - a platform which allows citizens to comment and vote on policy suggestions and come to a consensus (instead of conflict), which the government then uses to shape legislation.

From the article:

“For centuries, democracy has pretty much meant one thing: elected representatives sitting in sovereign Parliaments. But vTaiwan challenged that basic vision of how democracy should work”

“Voting is a single opportunity for a citizen to give a political signal, and an incredibly weak signal at that”

“Politicians won’t see their job as making decisions at all. Instead, they will see themselves as a ‘channel for collective intelligence’”

(Link)

Some people saw that coming in 1985

If you don’t have time to read the whole article above, then watch this video from 1985 (!!!), which describes pretty much the same in about one minute.


From the Ministry of Big Brotherhood

China is exporting its surveillance tools...

In the last decade, the totalitarian government of China has been using various online and offline surveillance technologies to track its citizens, with the goal of seizing even more power.

This has created a new generation of companies developing surveillance tools and practices. These companies are now starting to sell that infrastructure to other countries, mostly in the developing world (such as Ecuador).

(Link)

… but don’t worry, you can become invisible.

If your government closes a deal with China, don’t worry - researchers have you covered.

A new paper from the Belgian university KU Leuven describes an efficient (and low-cost) way to become literally invisible to standard image recognition AI, variants of which are increasingly used in surveillance systems across different contexts.

Screenshot from the YOLO v2 demo video

To become literally invisible, you attach a patch covered with a certain type of colorful pattern to your body. It confuses the deep neural network trained to recognize objects and people in live video feeds, for example from security cameras.

Apart from being an interesting experiment in its own right, this study also demonstrates the fragility of systems which are nowadays called “artificially intelligent” but which are in fact only pre-trained pattern recognizers.

Did anyone say the self-driving cars are near?

(Link)


From the Ministry of complex challenges

How AI can enable a sustainable future

“Research by PwC UK, commissioned by Microsoft, models the economic impact of AI’s application to manage the environment, across four sectors – agriculture, water, energy and transport.”

“AI levers could reduce worldwide greenhouse gas emissions by 4% in 2030, an amount equivalent to 2.4 Gt CO2e – equivalent to the 2030 annual emissions of Australia, Canada and Japan combined.”

(link)


Forget about artificial intelligence, extended intelligence is the future

Related to the above article, here are some good thoughts on why the ideas of Artificial General Intelligence and Singularity are basically religions built on the belief, that the reality can be reduced to a finite set of narrow optimization functions in which we will be ultimately beaten by a superhuman AI.

The contrary is true - the future of AI is in the systems which will help humans in solving complex and multi-dimensional issues (like for example the climate change).

(Link)


Other Links

  • Russians are the best at Youtube SEO - the former Youtube engineer Guillaume Chaslot has shown that one week after the release of the Mueller report, the most recommended channel covering the topic among the 1000+ channels he monitors daily was Russia Today. (Link to the Twitter thread)

  • How a Google Street View image of your house predicts your risk of a car accident - researchers are constantly finding new ways to extract unforeseen value from the data exhaust. Approaches from this study could be very useful for your insurance company :-( (link)

  • Local governments and police across the U.S. have been secretly testing crime-predicting AIs on falsified data (link) , the secretive “security” company Palantir has been helping them. (link)

  • Cognitive bias cheat sheet, simplified - cognitive biases make our lives simpler, but they often skew our perception of the reality and influence our decision making abilities. Biases are natural not just for us humans, but for algorithms as well. (Link)


Thanks for scrolling down here, we hope you’ve found at least some parts interesting. You will hear from us again, once we gather enough interesting stuff for you!


Algocracy Newsletter #3

No easter eggs, just interesting stuff.

In this newsletter, we aggregate for you the most interesting online content recently published in the domains of Artificial Intelligence, algorithms and their fairness, privacy and other topics important for the emerging world of algocracy - a world governed by algorithms.

If this is your first time seeing this, then please, Read our manifesto and subscribe, if you find it interesting.


Visualization of the week: The Anatomy of AI

by Josef Holy

A breathtaking visualization (by Kate Crawford and Vladan Joler) of the COMPLETE anatomy of Amazon Alexa.

It’s staggering when you realize how much mass and energy has to be put into coordinated motion just so that people can use voice commands to ask for the weather forecast.

Natural resources flow through global, hyperconnected supply chains into the production of the physical cylinder itself; large-scale distributed server farms power the cloud where “the intelligent service” lives; continuous training is done by human labor - contractors and unsuspecting reCAPTCHA users (that is, all of us); and finally there are the databanks, continuously capturing everything about people and what they do, aggregating it into the ultimate source of value and power.

This is the Standard Oil of our time.


(link)


Fail of the Week: “People who like this fire also liked”

by Josef Holy

It is not entirely clear which similarity clusters led the video platform’s algorithm to recommend a Britannica article about the completely unrelated 9/11 attacks in the live feed of the Notre Dame Cathedral fire.

It is clear though, that the path to Singularity and AGI will be long…


Untold History of AI: Algorithmic Bias Was Born in the 1980s

by Josef Holy

Algorithmic Bias is at the center of current discussions around AI Ethics.

One of the first documented cases of algorithmic bias with significant impact dates from the 1980s, when St. George’s Hospital Medical School in London started to use an algorithm to screen student applications for admission.

The original intent was to automate the manual task of reviewing thousands of student applications and to make the admission process more transparent.

In 1986, the UK Commission for Racial Equality launched an inquiry, which found that the algorithm was biased against people of color and women.

Key learnings:

  • Gender and racial discrimination was “normal” in UK universities at that time, but it was implicit - not codified anywhere - and thus hard to prosecute. The commission could identify it in this case only because it had been made explicit in the form of a computer algorithm.

  • There was also a significant cultural aspect - faculty staff members simply trusted the algorithm too much, taking its scores for granted.

The above-mentioned cultural aspect is one of the key takeaways from this history lesson - algorithms (and AI) are just tools; their products (i.e. information) shouldn’t be accepted blindly.

(link)
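To make the “bias made explicit” point concrete, here is a minimal hypothetical sketch in Python. The feature names and weights are invented for illustration - this is NOT the actual St. George’s algorithm - but it shows how an admissions-screening rule can turn a previously implicit human bias into inspectable code:

```python
# Hypothetical sketch (not the real St. George's code): an admissions
# screening rule that makes an implicit human bias explicit. The
# feature names and weights below are invented for illustration.

def screening_score(application: dict) -> int:
    """Return a score; higher means more likely to be interviewed."""
    score = application["academic_grade"] * 10  # legitimate signal

    # Bias encoded as explicit, inspectable rules - mirroring the
    # historically biased human decisions such a model reproduces:
    if application["sex"] == "female":
        score -= 5   # penalty no policy document ever admitted to
    if application["place_of_birth"] != "UK":
        score -= 15  # crude proxy for ethnicity

    return score

# Two applicants with identical academics get different scores:
a = {"academic_grade": 8, "sex": "male", "place_of_birth": "UK"}
b = {"academic_grade": 8, "sex": "female", "place_of_birth": "UK"}
print(screening_score(a), screening_score(b))  # 80 75
```

Once the rule exists as code, an auditor can point to the exact lines where the discrimination happens - which is precisely what made the 1986 inquiry possible.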

Notes on AI Bias

by Josef Holy

Algorithmic (AI) bias is a wide and complex topic, which we will cover in more depth in future issues of this newsletter.

In the meantime, go and read this very accessible post by Benedict Evans.


The Past Decade and Future of AI’s Impact on Society 

by Milos Krissak

Great article about the current state of AI by Joanna J. Bryson, one of the leading researchers in the field of AI Ethics. It’s a chapter from the book Towards a New Enlightenment? A Transcendent Decade (OpenMind 2019; see book trailer). Academic, long, thorough and overarching. It addresses two main questions:

  1. What have been and will be the impacts of pervasive synthetic intelligence?

  2. How can society regulate the way technology alters our lives?

Here are some of the key points worth mentioning, since they provide a simple yet powerful, down-to-earth conceptual framework for thinking about AI (and ethics):

Definition of intelligence

Intelligence is the capacity to do the right thing at the right time, in a context where doing nothing (making no change in behavior) would be worse.

In essence, intelligence is a subset of computation - the transformation of information - which is a physical process requiring time, space and energy. People often confuse computation with math: math needs no space, time or energy, but it is also not real in the same sense. Given this definition of intelligence, Joanna Bryson presents AI as “any artifact that extends our own capabilities to perceive and act.”

Artificial General Intelligence

The values, motivations, even the aesthetics of an enculturated ape cannot be meaningfully shared with a device that shares nothing of our embodied physical (“phenomenological”) experience.

The very concept of Artificial General Intelligence (AGI) is incoherent. In fact, human intelligence has significant limitations of its own, namely bias and combinatorial explosion. Joanna claims that from an evolutionary perspective “biological intelligence is part of its evolutionary niche, and is unlikely to be shared by other biological species…” and hence AGI is a myth, because no amount of (natural or artificial) intelligence will be able to solve all problems. Not even an extremely powerful AI can be very human-like, because it embodies an entirely different set of motivations and reward functions.

Public Policy and Ethics

Taxing robots and extending human life via AI are ideas with populist appeal. Unfortunately, both are based on ignorance about the nature of intelligence.

Joanna is a strong critic of two well-known concepts in AI ethics: e-personhood and value alignment. E-personhood, the idea of granting AI legal personhood, would increase inequality in society by shielding companies and wealthy individuals from liability. Value alignment, on the other hand, is the idea that society should lead and approve of where science and technology go. While it sounds nice and democratic, it is rather populist. Joanna believes that we do not need a new legal framework for AI governance, but rather an optimization of the current framework, plus governmentally empowered expert groups that would shape public policy.

(link)


Other Links

  • The Chinese government uses algorithms (facial recognition and beyond) to track (and effectively geo-fence) members of the Muslim minority (link).

  • The PR disaster of Mark Zuckerberg and Facebook is getting worse every week. It’s hard to keep track of it all, but it now seems clear that they were actually considering selling user data for profit and for competitive advantage (link).

  • Is the United States finally jumping on the regulation train? The U.S. Congress wants to protect citizens from bad AI (link).

  • Will AI end or enhance human art? This article claims the latter, although it looks like proper tools (UIs and metaphors for interacting with AI) are still missing, as current art-tinkerers have to resort to programming and spreadsheets to leverage the power of AI (link).


Thanks for scrolling down here, we hope you’ve found at least some parts interesting. You will hear from us again in about a week. 
