Algorithmic Justice for All.

Krishna Gade
10 min read · Dec 26, 2020


We’re blind to how algorithms are making decisions, and unless we foster transparency, fairness, and accountability, we cannot ensure Algorithmic Justice for all. (source: istock/Simpson33)

2020 has been a crazy year for many reasons, but one set of issues that has come to the fore is the Bias, Ethics, and Censorship of Algorithms.

Never before in human existence have Algorithms had such an impact on our day-to-day lives. I wake up every day to the latest news, personalized for me on Twitter and Facebook, while I check Gmail, where the most important emails are kept at the top. At work, I rely on tools like Slack, Zoom, Hangouts, Gmail, and Google Docs. At lunchtime, I remember that it is the holiday season and I am late sending a gift to a friend, and Amazon starts recommending what I should buy based on a simple search. As I retire to bed, I catch up on my favorite web series on Amazon or Netflix. We are truly living under the rule of Algorithms, whether in buying a new house, getting hired for a new job, or getting approved for that mortgage rate.

Today, big tech companies take advantage of user-generated data to build businesses worth billions of dollars. They have done this by making sense of our data at scale, connecting us with relevant content or services, and creating a virtuous loop that keeps refining their products to meet our evolving tastes. Over time, these companies have created for themselves an unassailable moat. For instance, it is almost impossible today to beat Google at search, Facebook at news recommendations, or Amazon at online shopping, because of the treasure troves of data these companies have accumulated over decades and how much they know about each individual user.

Therefore, questions arise: Should we continue to trust large corporations with our data? Should we not ask for transparency in their algorithms? Should governments and civil society not hold them accountable?

How did it all start?

I have been working in tech for over two decades. After grad school, my first job was at Microsoft, working on Bing (called MSN Search at the time), where we were building a Google-killer product. The feeling at the beginning was that Microsoft had all the resources in the world (an army of engineers, a world-class research team in Microsoft Research, not to mention a ton of cash) to compete head-on with Google. What our team learned very quickly was that Google had one big competitive advantage over us: its search logs, which it had been collecting for years and which MSN Search and Internet Explorer did not bother to keep during their early years. Google used its search log data to great effect, generating better spell corrections, better query suggestions, and overall better search results (especially for long-tail queries) than Bing. Even though Bing and Microsoft Research invented cutting-edge algorithms such as neural networks for search ranking, we could not compete with Google. The more people used Google, the more data it collected, and the more difficult it became for Bing to catch up on search quality. This was the first lesson I learned about the value of collecting and processing big data for algorithms to work effectively.
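To make that lesson concrete, here is a minimal sketch of how query-log frequencies can drive spell correction, in the spirit of the noisy-channel correctors that search engines built on top of their logs. The tiny log and the helper functions are hypothetical illustrations, not Google's or Bing's actual implementation.

```python
from collections import Counter

# Hypothetical query log; a real one would contain billions of entries.
query_log = ["machine learning", "machine learning", "machine learning",
             "neural networks", "neural networks", "deep learning"]

# Query frequencies act as a crude language model over what users mean.
counts = Counter(query_log)

def edits1(q):
    """All strings one edit (delete, swap, replace, insert) away from q."""
    letters = "abcdefghijklmnopqrstuvwxyz "
    splits = [(q[:i], q[i:]) for i in range(len(q) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def correct(query):
    """Suggest the most frequently logged query within one edit."""
    candidates = [q for q in edits1(query) if q in counts] or [query]
    return max(candidates, key=lambda q: counts[q])

print(correct("machne learning"))  # -> "machine learning"
```

The algorithm itself is trivial; the moat is the log. With billions of logged queries, even this crude frequency model starts producing remarkably good corrections, which is exactly the advantage Bing could not replicate.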

MSN Search Team with Bill Gates in 2005 (source: Microsoft)

Later on, I spent a decade working at social networking companies like Twitter, Pinterest, and Facebook. The one common theme that stayed true across these jobs was the importance of collecting, analyzing, and processing large volumes of data to make our products better. At Twitter, we built platforms specifically to process streaming data at scale, powering product features like trending topics, search, and ad recommendations. Pinterest invented cutting-edge computer vision technology that let users visually search for items to shop for by taking pictures of real-world things. Facebook used big data and AI/ML algorithms everywhere, from recommending news content and ads to videos. When I was there, we built some of the cutting-edge machine learning algorithms for news recommendations in Feed.

2016 Elections

Things started taking a turn after the 2016 elections, which I consider a historic moment in this journey. People started noticing the macro side effects of all this on society, and we saw the emergence of issues like fake news, misinformation, data-privacy violations, and algorithmic bias.

I was at Facebook when we bore the brunt of these issues, and the feeling was that we had been caught off guard. We started putting guardrails around our AI/ML algorithms, checking for data integrity, and building debugging and diagnostic tooling to understand and explain how they work. One such tool my team worked on was called “Why am I seeing this?”, which shows human-readable explanations for news recommendations.

Why am I seeing this? (source: Facebook)

Algorithmic integrity and transparency became so critical at Facebook that they were tracked as a project by Chief Product Officer Chris Cox himself. Large teams worked on tools to detect and flag misinformation in news stories and to diagnose why something was going viral or making it to the top of News Feed. Features like “Why am I seeing this?” started to bring much-needed algorithmic transparency, and thereby accountability, to News Feed for both internal and external users.
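Facebook's internal implementation is not public, but the general shape of such a feature is easy to sketch: take the signals that contributed most to a story's ranking score and map them to templated, plain-language reasons. The signal names, weights, and templates below are hypothetical.

```python
# Hypothetical ranking signals for one story, with the score each
# contributed to its final Feed ranking. Real systems use far richer signals.
signal_contributions = {
    "friend_interaction": 0.42,
    "group_membership": 0.31,
    "content_type_video": 0.08,
    "recency": 0.05,
}

# Templates that translate each machine signal into plain language.
REASON_TEMPLATES = {
    "friend_interaction": "You often interact with posts from this person.",
    "group_membership": "This was posted in a group you belong to.",
    "content_type_video": "You tend to watch videos like this one.",
    "recency": "This post was published recently.",
}

def why_am_i_seeing_this(contributions, top_k=2):
    """Return human-readable reasons for the top-contributing signals."""
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return [REASON_TEMPLATES[name] for name, _ in ranked[:top_k]]

for reason in why_am_i_seeing_this(signal_contributions):
    print("-", reason)
```

The hard part in practice is not the template lookup but faithfully attributing a complex model's score to individual signals, which is a research problem in its own right.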

Algorithmic Bias

In the last four years, several complaints have cropped up about bias in algorithms. One needs to watch the chilling documentary Coded Bias to fully grok the impact of these issues on society. While not knowing how an algorithm works is bad, suffering an algorithm's detrimental decisions because of bias is worse.

The first question that comes up when you follow this topic is how bias actually gets into products in the first place, and what we even mean when we talk about bias. The reality is that humans are at the center of technology design, and humans have a history of making product design decisions that are not in line with everyone's needs. For example, until 2011, female drivers were 47 percent more likely to be severely injured in an automobile accident, because automobile manufacturers were not required to use crash-test dummies that represented female body types and, as a result, did not really understand the impact of seat belts or airbags on women in a collision. In another example, starting in the 1950s, Kodak used a model named Shirley (see below) to help calibrate color for its printing cards. It wasn't until the 1990s, when wood manufacturers and chocolatiers complained that their colors weren't looking right in their ads, that Kodak realized it needed to account for a wider spectrum of colors and skin tones when calibrating these cards.

For decades, Kodak’s Shirley cards, like this one, featured only white models. (source: Kodak)

Neither of these examples involves algorithms, and neither reflects malicious intent, ill will, or a desire to discriminate. But they show that when we design technologies, the goal should not be simply to launch something quickly, because the decisions we make in the moment can let subconscious or unconscious biases and stereotypes permeate our products.

Users complaining on Twitter about Apple Card’s gender bias (source: Twitter)

Last year, Apple and Goldman Sachs faced allegations of bias in a credit card. What started as a tweet thread with multiple reports of alleged bias (including from Apple's very own co-founder, Steve Wozniak, and his spouse) eventually led to a regulatory probe into Goldman Sachs and its algorithmic credit-decision practices. The primary issue was the black-box nature of the algorithm behind Apple Card's lending decisions. As laid out in the tweet thread, Apple Card's customer service reps were rendered powerless before the algorithm: not only did they have no insight into why certain decisions were made, they were also unable to override them.
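What would the missing transparency have looked like? For a simple linear credit model, each feature's contribution to a decision can be read off directly, so a support rep could at least cite the top reasons; for the nonlinear models used in practice, attribution methods such as SHAP play the same role. The features, weights, threshold, and applicant below are invented for illustration.

```python
# Invented linear credit model: score = bias + sum(weight * feature).
FEATURES = ["income", "credit_history_years", "debt_to_income", "utilization"]
WEIGHTS = {"income": 0.4, "credit_history_years": 0.3,
           "debt_to_income": -0.5, "utilization": -0.3}
BIAS = 0.1
THRESHOLD = 0.5  # approve at or above this score

def explain_decision(applicant):
    """Score an applicant and report each feature's signed contribution."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in FEATURES}
    score = BIAS + sum(contributions.values())
    decision = "approve" if score >= THRESHOLD else "decline"
    # Sort by absolute impact so the top reasons can be read off.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]),
                    reverse=True)
    return decision, score, ranked

# Feature values normalized to [0, 1] for this toy example.
applicant = {"income": 0.8, "credit_history_years": 0.6,
             "debt_to_income": 0.7, "utilization": 0.9}
decision, score, reasons = explain_decision(applicant)
print(decision, round(score, 2))        # decline -0.02
for feature, impact in reasons:
    print(f"{feature}: {impact:+.2f}")  # e.g. debt_to_income: -0.35
```

With even this much visibility, a customer service rep could have told an applicant which factors drove the decision, instead of being powerless before the model.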

Algorithmic Ethics

To prevent these issues, big tech companies have started Ethical AI and Responsible AI initiatives. While that is commendable, it is not clear that all is well there, or that incentives are fundamentally aligned.

What happened with Timnit Gebru recently was just appalling! If it can happen to a celebrated AI ethics researcher like her for taking a stance, imagine what can happen to an average employee uncovering potential AI ethics issues inside a large corporation. It shows a fundamental conflict between AI ethics and company policy in action at big corporations.

Former Google AI researcher Timnit Gebru speaks in San Francisco, California, Sept. 7, 2018 (Photo by Kimberly White/Getty Images for TechCrunch).

A series of tweets, leaked emails, and media articles showed that Gebru’s exit was the culmination of a conflict over a paper she co-authored. Jeff Dean, the head of Google AI, told colleagues in an internal email (which he has since put online) that the paper “didn’t meet our bar for publication” and that Gebru had said she would resign unless Google met several conditions, which it was unwilling to meet. Gebru tweeted that she had asked to negotiate “a last date” for her employment after returning from vacation. She was cut off from her corporate email account before her return.

I believe that a large corporation like Google, which has built a trillion-dollar business selling primarily AI-based products like Search, AdWords, and YouTube, will continue to have these issues with its own employees working on AI ethics, because there is no clear separation of church and state. If Google is selling AI-based products while telling the world that it is also fixing AI ethics, it is hard to believe it. And currently, no one holds a company like Google accountable.

Could we depend on the moral branding strategy of a large corporation to course-correct itself?

We need accountability, and we need independent third parties, external regulators, and stricter laws and regulations.

Algorithmic Censorship

Algorithmic censorship extends the power of social platforms to determine, more actively and preemptively, which speech should be permitted and which should be suppressed, often according to their own criteria, which are likely influenced by commercial considerations. Some algorithmic moderation is important: it keeps these platforms from freely showing abusive, violent, and pornographic content and creates a safe environment for all kinds of users. However, the fact that they can censor content automatically using an algorithm raises questions. With human moderators, content moderation is typically passive, relying on user reporting rather than actively seeking out prohibited communications. With algorithmic censorship, social platforms can, in theory, intervene to suppress any content their algorithms deem prohibited according to the platform's criteria.
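Here is a minimal sketch of that proactive pipeline, assuming a hypothetical classifier that scores each post for prohibited content: suppress at high confidence, route mid-confidence cases to human reviewers, and publish the rest. The thresholds and the toy scoring function are invented for illustration.

```python
# Hypothetical moderation pipeline: classify, then act before any user report.
REMOVE_THRESHOLD = 0.95   # auto-suppress only at very high confidence
REVIEW_THRESHOLD = 0.60   # otherwise route to human moderators

def prohibited_score(post: str) -> float:
    """Stand-in for a learned classifier returning P(prohibited)."""
    banned = {"abuse", "violence"}  # toy signal for illustration only
    hits = sum(word in post.lower() for word in banned)
    return min(1.0, 0.5 * hits)

def moderate(post: str) -> str:
    score = prohibited_score(post)
    if score >= REMOVE_THRESHOLD:
        return "suppressed"              # acted on preemptively
    if score >= REVIEW_THRESHOLD:
        return "queued_for_human_review"
    return "published"

print(moderate("a post about gardening"))         # published
print(moderate("abuse and violence everywhere"))  # suppressed
```

Everything hinges on who sets the thresholds and what data trained the classifier, and that is exactly where the transparency questions arise.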

If a social platform wants to silence an entire community or a political discourse, they can do that with the click of a button today.

Social media platforms are applying AI to censor their users' posts (source: censorship.home)

Therefore, it becomes important to provide transparency into the kind of censorship these platforms are attempting. TikTok made waves this summer when its former CEO Kevin Mayer announced on the company's blog that it would release its algorithms to regulators, and called on other companies to do the same. Mayer described the decision as a way to provide “peace of mind through greater transparency and accountability,” adding that TikTok “believe[s] it is essential to show users, advertisers, creators, and regulators that [they] are responsible and committed members of the American community that follows US laws.”

TikTok's news broke the same week that Facebook, Google, Apple, and Amazon were set to testify in front of the House Judiciary antitrust panel. TikTok has quickly risen as fierce competition to these U.S.-based players, who acknowledge the competitive threat it poses and have also cited TikTok's Chinese origin as a distinct threat to the security of its users and to American national interests. It will be interesting to see what happens next, as TikTok signals an intent to pressure these companies to increase their transparency while they push back on TikTok's ability to continue operating in the U.S.

Where do we go from here?

It is becoming obvious that humans don't want a future ruled by unregulated, capricious, and potentially biased algorithms. Algorithms permeate all aspects of our lives today; if we continue to let them operate as they do now, as black boxes and without human oversight, we will end up in a dystopian world where unfair decisions are made by unseen algorithms operating in the unknown.

Joy Buolamwini, who started ajl.org (source: AJL)

While credit is due to companies like TikTok for opening up their algorithms, their hand was largely forced here. Civil society needs to wake up and hold large corporations accountable. Great work in raising awareness is being done by people like Joy Buolamwini, who started the Algorithmic Justice League in 2016. To ensure every company follows TikTok's path and discloses its algorithms to regulators, we need strict laws. Congress has been sitting on the Algorithmic Accountability Act since 2019. It is time to act quickly and pass the bill in 2021.

Finally, we need impartial, independent third parties, because they provide all-important independent opinions on algorithm-generated outcomes. This is the central reason why I started Fiddler with old colleagues and friends in 2018, and we are working toward our mission of “Building Trust in AI.”

At the end of the day, we’re blind to how these algorithms are making decisions. Unless we foster transparency, fairness, and accountability, we cannot ensure Algorithmic Justice for all.


Written by Krishna Gade

Founder, CEO Fiddler.AI — Building Trust into AI. Prior: @facebook, @pinterest, @twitter, @microsoft
