Artificial Intelligence vs. Programmed Stupidity

Unreal Education

Popular culture is awash with images of a dystopian future brought about by artificial intelligence. Dozens of TV series, movies and books dwell on the dangers of AI: how robots will take our jobs and rule our lives. The more promising AI becomes, the darker the perceived threat. On the one hand we embrace the good things AI can do for us, while on the other we fear losing our humanity, our position at the top of the dominance hierarchy.

I don’t really buy into the paranoid fear of the robotic overlords. It is a ludicrous idea that should be explored some other time in another post. The news that prompted this one was an announcement from IBM.

“To Build Trust In Artificial Intelligence, IBM Wants Developers
To Prove Their Algorithms Are Fair”

“We trust artificial intelligence algorithms with a lot of really important tasks. But they betray us all the time. Algorithmic bias can lead to over-policing in predominately black areas; the automated filters on social media flag activists while allowing hate groups to keep posting unchecked.”

Just think about it! IBM wants to build cognitive dissonance into Artificial Intelligence. Now THAT IS a scary thought. Didn’t they see 2001: A Space Odyssey? (“I’m sorry, Dave. I’m afraid I can’t do that.”)
Do they really, honestly expect to build a system that can sniff out the smell of the ever-shifting politically correct bullshit? The ever-changing demands for intersectional equity and social justice?

What is scary is not just the arrogance of the attempt, or the picture of its outcome, but the tremendous harm the attempt itself may cause.

AI is excellent at finding the best solutions to multivariate problems: the best next chess (or Go) move, the most efficient structural design for an engineering problem, face recognition, data mining, analyzing medical records or the risk level of an investment, to name just a few.
The common elements of the problems AI is good at solving are:

  • A large number of data points (all possible chess moves)
  • Well defined and formulated parameters (the rules of the game)
  • A simple goal (Find the best move that will bring us closer to winning the game)

Now try to apply this to IBM’s stated goals from the article:

“… These range from subjecting AI developers to third party audits, in which an expert would evaluate their code and source data to make sure the resulting system doesn’t perpetuate society’s biases and prejudices, to developing tests to make sure that an AI algorithm doesn’t treat people differently based on things like race, gender, or socioeconomic class.”

The goal seems to be to skew the data points, mess up the parameters and complicate the goals.
What are society’s biases and prejudices? How do we define race? What is gender?
Can a set of unbiased data create biased results? Just picture the following:

We have two jobs with ten open positions and one hundred applicants for each.
Applying is absolutely open to anybody.
Job #1 will attract 80 male and 20 female applicants.
Job #2 will be skewed exactly the other way.

Where is the bias? Can the aggregate of our individual choices be called a bias?
If I tell you that one of the jobs is engineering and the other is nursing, can you guess which is which? Does your ability to guess correctly make you prejudiced?

How can the bias be corrected? By hiring five males and five females for each job? Would it be an intelligent choice to completely disregard both clearly displayed individual preferences and the social benefit derived from hiring the most competent people for the job?

The example above can lead to three outcomes, depending on which criterion is applied:

  • The neo-communist drive for equality will result in 5-5 for each job
  • An applicant focused approach will result in 8-2 reflecting the application ratio
  • A most qualified approach (ranking the applicants by ability and experience) will most likely result in a 9-1 ratio skewed toward the majority group in each case.
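The three criteria can be made concrete with a small simulation. This is a minimal sketch, not anything from the article: it assumes a hypothetical applicant pool of 80 men and 20 women whose test scores are drawn from the same distribution (i.e., the data itself contains no group difference), and then applies each of the three selection rules to the same pool.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

OPENINGS = 10

# Hypothetical pool for Job #1: 80 male, 20 female applicants,
# identical score distribution for both groups.
applicants = [("M", random.random()) for _ in range(80)] + \
             [("F", random.random()) for _ in range(20)]

def by_score(group):
    """Sort a group of (gender, score) pairs from best to worst score."""
    return sorted(group, key=lambda a: a[1], reverse=True)

def hire_equal_outcome(pool):
    """Equality-of-outcome rule: top 5 of each group, regardless of pool ratio."""
    males = by_score([a for a in pool if a[0] == "M"])
    females = by_score([a for a in pool if a[0] == "F"])
    return males[:OPENINGS // 2] + females[:OPENINGS // 2]

def hire_proportional(pool):
    """Applicant-focused rule: split the openings in the 80/20 application ratio."""
    males = by_score([a for a in pool if a[0] == "M"])
    females = by_score([a for a in pool if a[0] == "F"])
    n_males = round(OPENINGS * len(males) / len(pool))  # 8 of 10
    return males[:n_males] + females[:OPENINGS - n_males]

def hire_most_qualified(pool):
    """Merit rule: top 10 by score, blind to group membership."""
    return by_score(pool)[:OPENINGS]

for rule in (hire_equal_outcome, hire_proportional, hire_most_qualified):
    hired = rule(applicants)
    males_hired = sum(1 for gender, _ in hired if gender == "M")
    print(f"{rule.__name__}: {males_hired} male, {OPENINGS - males_hired} female")
```

Even with identical score distributions, the merit rule tends to favor the larger group simply because a pool of 80 has more extreme top scores than a pool of 20 — which is the point: an identical ratio of outcomes is not what an unbiased process produces.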

Which approach is ‘fair’? Which one is intelligent? Which one is the most objective? Which one is the most unbiased? Which one would a truly unbiased AI algorithm choose?

Now let’s suppose that each applicant has to take a test. The test is graded without any personal information about the applicant, so gender or race should be entirely irrelevant to the results.
Can we still allege the presence of bias if the results do not conform to our neo-communist egalitarian expectations?

These questions are not theoretical. As Thomas Sowell likes to point out, affirmative action is even more harmful to its intended beneficiaries than to its ‘victims’ (the people who did not get the job or University acceptance). The ‘noble’ fight against the fictional bias in policing created more crime and more suffering. There are some good statistics to illustrate the problem in this video.

Postmodernist and neo-communist ideas about the nature of reality are not only stupid and immoral, but exceptionally dangerous as well. Programming delusional expectations into Artificial Intelligence can lead to no good. While it is obvious that AI is no match for natural stupidity, combining the two would only amplify the power of stupidity, not of intelligence.

The problems this can cause should be obvious to anybody and I didn’t even get to the scary part yet: There is a White House task force to promote AI research and there is a slew of progressive ideas on how to do it:

“Promote transparency and prevent bias in AI algorithms.
An AI can only work with the information fed to it. And given that people have all sorts of biases, conscious and unconscious, that means AI can be prejudiced as well. This gets especially problematic when it involves algorithms used to automate hiring decisions and police activity, which can reflect a society’s own bigotry.”

But the ‘best’ of the ideas is to politicize the whole business:

“To keep AI honest, the government could create a regulatory body to keep an eye on the algorithms under development. Like the FDA has to approve new pharmaceuticals and can post warnings about side effects, the government could create an administration that audits algorithms for bias and publishes consumer warnings for companies that use untested or potentially unfair AI.” (emphasis mine)

…because government bureaucrats are clearly the best people to make intelligent decisions about artificial intelligence…

I can already see a dystopian world on the horizon but artificial intelligence is NOT the thing that will bring it to us. To paraphrase Marx:
A spectre is haunting the world – the spectre of artificial stupidity.

….and it is just as deadly dangerous as the Marxist ideas inspiring it.
