
Bayesian Thinking and Racist Algorithms

Our logic is flawed. I mean dead wrong. If you feel that computers got cleverer in the last decade, it's because they abandoned logic! Machine Learning and Artificial Intelligence don't use true and false. Computers leave us humans to think in those loser terms while they advance to... Bayesian Thinking!

Let me explain with an example. When you apply for a loan, the AI starts by giving you an initial chance of getting the loan. Let's say 70%. That's called the "prior probability". Then it looks through your application for "bad smells". Whenever it finds something similar to an application from someone who missed a payment, it gets anxious feelings. Those feelings aren't "true" or "false". They're numbers called conditional probabilities. And they stack up, layer after layer, forming what is called a deep neural net.

Your zip code might smell bad. Many people defaulted there... boom, your odds drop 5%. At the end of all this "smelling", the computer either feels you're ok or it's damn scared, in which case your loan gets rejected.
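To make the "smelling" concrete, here is a tiny sketch of that kind of Bayesian updating. Every number and every piece of "evidence" below is made up for illustration; real lending models are far more complicated, but the mechanism of multiplying odds by evidence is the same idea.

```python
def update(prob, likelihood_ratio):
    """Apply one piece of evidence to a belief via Bayes' rule on the odds."""
    odds = prob / (1 - prob)          # convert probability to odds
    odds *= likelihood_ratio          # evidence shifts the odds up or down
    return odds / (1 + odds)          # convert back to a probability

p = 0.70  # prior probability of repayment ("70% chance of granting the loan")

# Each "bad smell" is a likelihood ratio below 1 (evidence against repayment).
# Hypothetical smells: zip code, thin credit history, new email address.
for lr in [0.8, 0.9, 0.6]:
    p = update(p, lr)

print(round(p, 2))  # the stacked smells drag the 70% prior down to about 0.50
```

Each factor alone looks harmless, but because the odds multiply, a few weak "smells" together can push an application below the approval threshold.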

Needless to say, this reinforces biases. If green people failed to pay in the past, then the data suggests you have to reject green people. And guess what? They will never get a second chance to prove it was bad luck and they were innocent.

You might think of masking out data. Remove color from the application. Remove ethnicity. Yes! We should do that! Unfortunately, color can often be guessed from where you live, which can be guessed from your phone. It can be guessed from your surname, which might be part of your email address. It can be guessed from your income, or your university degree. It's easy for data to turn racist or sexist or biased in countless ways.
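Here is a toy sketch of why masking fails. The zip codes, group names, and counts are all invented; the point is only that a model can recover an attribute you deleted, as long as a proxy for it remains in the data.

```python
from collections import Counter

# Invented historical data: (zip_code, group) pairs.
# The "group" column was removed from new applications, but it still
# correlates strongly with zip code in the training history.
history = (
    [("10001", "green")] * 90 + [("10001", "blue")] * 10 +
    [("20002", "blue")] * 85 + [("20002", "green")] * 15
)

def guess_group(zip_code):
    """Predict the majority group for a zip code.
    The protected attribute never appears on the application,
    yet the proxy lets us recover it most of the time."""
    counts = Counter(group for z, group in history if z == zip_code)
    return counts.most_common(1)[0][0]

print(guess_group("10001"))  # -> green
print(guess_group("20002"))  # -> blue
```

Any feature that correlates with the masked attribute, be it zip code, surname, or university, can play the role of the proxy, so removing the column does not remove the bias.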

In a paper from October 2020, researchers gave an image of Alexandria Ocasio-Cortez to an AI that completes images. It had been trained on ImageNet, one of the most widely used image datasets. The AI gave AOC bikinis and low-cut tops. The researchers concluded that the AI replicated human-like biases about skin tone and weight, learned from the ways people are stereotypically portrayed on the web.

Let's stop calling them "algorithms". The Facebook algorithm and the Google algorithm. We wish they were algorithms! Then we could fix them! There is very little code to review. There was no unethical hacker or entrepreneur who coded a racist AI. Geeks played with cool technology. It proved useful. And from a crazy prototype it turned into a big company a bit too quickly.

Your loan applications and your Facebook posts. They're both judged by computers running up and down terabytes of data, getting excited or scared. No logic whatsoever!

Thanks for watching

Thanks for sharing


