Artificial intelligence (AI) is everywhere. Or should I say the “AI hype” is? Aside from all the fuss, surprisingly little has actually happened.

This is not to say that learning algorithms haven’t accomplished a lot in a short time. Rather, much of the talk has been unfounded: big problems have been swept under the rug, out of the way of disproportionate expectations.

Nearly everyone has heard how IBM’s Watson diagnosed a rare type of leukemia more accurately than human physicians did. By the time a human physician is just getting started, Watson has already studied nearly two million scientific articles and arrived at the correct diagnosis.

You have also probably heard how JP Morgan dismissed thousands of contract lawyers because a text-mining algorithm can review thousands of contracts in a matter of seconds – a task for which a full team of lawyers used to need almost 360,000 working hours.

Both of these achievements rest on the efficient examination of a massive database, once the key parameters have been chosen correctly and the system has been trained on smart data.

However, what this hype, built on a few dramatic examples, conceals is that something important is still missing. This becomes apparent when you look at the expectations AI has raised for personnel assessments, and personality assessments in particular.

The last playing field of humans?

Assessing personalities is something we humans are naturally good at. Evolution has made us exceptionally talented at judging the traits that matter for social behavior.

The reliability of the pack’s members, the motivation to lead, intelligence and a number of other factors can decide whether or not the pack survives. We form strong intuitions about all of these in only a few seconds.

However, it seems that machines are also about to replace humans in this field. AI has found significant and unexpected correlations between personalities and social media behavior.

An impressive study was published a few years back: given more than 100 Facebook likes to work with, an algorithm’s judgments correlated more strongly with self-assessed personality than the assessments made by acquaintances did.

Here’s another dramatic example: when AI analyzed freely written social media texts, it reached as high a correlation with self-assessed personality as more traditional assessment methods did.

Pretty incredible findings – at least on the surface.

What’s the problem?

Correlation (or, correspondingly, regression analysis), the figure most commonly reported in these studies, reveals nothing about intervening variables. Interpreted carelessly, it can be downright misleading.

Facebook likes offer indications of personality, but do they actually measure personality traits? Or do they merely measure factors related to personality, such as external motivation or areas of interest?

Theoretically speaking, there is a fairly straightforward relationship between liking cheerleading pages and extroversion. But what about a statistically significant link between neuroticism and the length of one’s last name? Nothing but coincidence.
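Why does big data throw up links like that? With enough unrelated features, conventional significance thresholds guarantee false positives. A minimal simulation (every variable here is pure random noise by construction, so any “finding” is a coincidence) makes the point:

```python
import math
import random

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rng = random.Random(42)
n_people, n_features = 200, 500

# A self-assessed trait score: pure noise, unrelated to anything below.
trait = [rng.gauss(0, 1) for _ in range(n_people)]

# 500 equally random "behavioral" features (likes, word counts, ...).
features = [[rng.gauss(0, 1) for _ in range(n_people)]
            for _ in range(n_features)]

# Rough large-sample cutoff for p < 0.05: |r| > 2 / sqrt(n).
cutoff = 2 / math.sqrt(n_people)
spurious = sum(1 for f in features if abs(pearson_r(trait, f)) > cutoff)

print(f"'significant' correlations found in pure noise: {spurious} of {n_features}")
```

With 500 features and a 5 % threshold you should expect roughly 25 “discoveries” by chance alone, and that is before anyone starts mining millions of social media variables.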

Many studies have also ignored discriminant validity. There are connections, plenty of them, but are there also connections that shouldn’t be there?

For example, there can be a highly significant connection between self-assessed extroversion and AI-assessed extroversion, just as there should be. But what should we make of an even more significant connection between self-assessed extroversion and AI-assessed conscientiousness?
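One way to make this concern concrete is a simple discriminant-validity check in the spirit of a multitrait-multimethod matrix: the convergent correlation (self- and AI-assessment of the same trait) should exceed every discriminant correlation (self- and AI-assessment of different traits). The sketch below uses invented correlation values that mirror the extroversion/conscientiousness anomaly just described:

```python
# Convergent vs. discriminant correlations between self-assessed and
# AI-assessed traits. All numbers below are invented for illustration.

def discriminant_violations(corr, traits):
    """Flag every case where a cross-trait (discriminant) correlation
    is at least as strong as the same-trait (convergent) one."""
    violations = []
    for self_trait in traits:
        convergent = corr[(self_trait, self_trait)]
        for ai_trait in traits:
            if ai_trait != self_trait and corr[(self_trait, ai_trait)] >= convergent:
                violations.append((self_trait, ai_trait))
    return violations

traits = ["extroversion", "conscientiousness"]

# corr[(self_assessed, ai_assessed)]: hypothetical values in which the
# "wrong" correlation is the strongest one in the table.
corr = {
    ("extroversion", "extroversion"): 0.45,
    ("extroversion", "conscientiousness"): 0.52,   # should be near zero
    ("conscientiousness", "conscientiousness"): 0.40,
    ("conscientiousness", "extroversion"): 0.10,
}

print(discriminant_violations(corr, traits))
```

A study that reports only the diagonal of such a table can look impressive while the off-diagonal entries quietly give the game away.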

This means that some connections are mere coincidences, big data notwithstanding. Other findings will certainly open new, even radical, perspectives on personality assessment. How do we tell the two apart?

Bias in algorithms

Equal recruitment decisions, protected diversity and many other fine-sounding phrases have been used to argue in favor of AI. What the politically correct marketing jargon tends to forget is that AI can only be as unbiased as its teacher.

If a machine is trained on data that is already distorted, the distortion will inevitably show in the algorithm’s decisions. Yet believers in AI maintain a childlike faith that decisions made by machines are blindly equal and fully objective.
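That mechanism is easy to demonstrate. In this toy simulation (the data, the bias term and the groups are all fabricated for illustration), historical hiring decisions favor one group at equal skill, and even the simplest possible learner reproduces the gap:

```python
import random
from collections import defaultdict

rng = random.Random(7)

# Fabricated historical hiring records: (skill, group, hired).
# By construction, past decision-makers favored group "A" at equal skill.
records = []
for _ in range(5000):
    skill = rng.random()
    group = rng.choice(["A", "B"])
    bias = 0.2 if group == "A" else 0.0        # human bias baked into history
    hired = rng.random() < min(1.0, skill + bias)
    records.append((skill, group, hired))

# The simplest possible "model": memorize hire rates per (skill bucket, group).
counts = defaultdict(lambda: [0, 0])           # key -> [hired, total]
for skill, group, hired in records:
    key = (int(skill * 5), group)
    counts[key][0] += hired
    counts[key][1] += 1

def predicted_hire_rate(skill, group):
    hired, total = counts[(int(skill * 5), group)]
    return hired / total

# Identical mid-level skill, different group: the learned rates differ,
# because the model faithfully reproduces the bias in its teaching data.
print(predicted_hire_rate(0.5, "A"), predicted_hire_rate(0.5, "B"))
```

No malice is needed anywhere in the pipeline: the model is a neutral mirror, and what it mirrors is the distorted history it was taught on.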

We may soon be in a worse situation than before, because no one thinks to question the objectivity of an algorithm. Much like people who preach that they are “fully unbiased and unprejudiced” – sometimes they are, but more often they are not.

We should remain cautious about the AI hype. My gut feeling is that something big will eventually happen. But the gurus who claim to see the future are wrong – as they always have been.
 

Further reading: https://www.ncbi.nlm.nih.gov/pubmed/29792115