
This AI has a “gaydar”, and it should be stopped

By Oliver Smith, 8 September 2017
Summary

The real-world consequences are terrifying.

A new computer algorithm, developed by researchers at Stanford University, can determine someone’s sexuality with up to 91% accuracy just by analysing photos of their face.

It’s the robotic equivalent of a “gaydar”, and it’s sure to stir up fierce debate – but first, a few caveats.

The study used 35,326 photos from a US dating site, and the AI was only ever asked to judge pairs of photos (always one gay, one straight) of people of the same gender – the study did not include transgender or bisexual people.

The photos used were portraits of faces, so the AI’s judgments are based on a “faceprint” built from features like eyebrows, cheeks, hairline, neckline and nose, rather than clothing or hairstyle.
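
To make that setup concrete, here is a minimal sketch of how such a pairwise classifier could work. Everything in it is an illustrative assumption rather than the researchers’ actual pipeline: the synthetic “faceprint” vectors stand in for features an off-the-shelf face-recognition model would extract from each portrait, and the pick_from_pair helper is invented for this example.

```python
# Illustrative sketch only -- not the study's code. Synthetic "faceprint"
# vectors stand in for features an off-the-shelf face-recognition model
# would extract from each portrait.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, dim = 2000, 128

# Two classes with slightly shifted means, mimicking a subtle, noisy
# facial signal; labels are 1 = gay, 0 = straight in this toy setup.
X = rng.normal(size=(n, dim))
y = rng.integers(0, 2, size=n)
X[y == 1] += 0.05  # a weak signal buried in the features

clf = LogisticRegression(max_iter=1000).fit(X, y)

def pick_from_pair(face_a, face_b):
    """The study's pairwise task: shown one gay and one straight face
    of the same gender, guess which is the gay one."""
    p_a = clf.predict_proba([face_a])[0, 1]
    p_b = clf.predict_proba([face_b])[0, 1]
    return "A" if p_a > p_b else "B"
```

Because the task is a forced choice between two faces, even a weak signal in the features lifts accuracy above the 50% a coin flip would achieve – which is why the headline percentages need careful reading.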

Among men the system was accurate 81% of the time, and among women 71% of the time, but when given multiple photos of the same man (and so more data) its accuracy rose to 91%.
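
Why do extra photos help? If each photo yields an independent, better-than-chance guess, a simple majority vote compounds that edge. The toy calculation below assumes the guesses are fully independent – unrealistic for several portraits of one person, so it overstates the gain – but it shows the mechanism:

```python
# Toy calculation: if each photo of the same man yields an independent
# 81%-accurate guess, what accuracy does a majority vote over five
# photos achieve? (Photos of one person are correlated in reality, so
# this overstates the gain; it only illustrates the direction of travel.)
from math import comb

p, n_photos = 0.81, 5
majority = sum(comb(n_photos, k) * p**k * (1 - p)**(n_photos - k)
               for k in range(n_photos // 2 + 1, n_photos + 1))
print(f"{majority:.1%}")  # ~94.9% under full independence
```

The number of photos (five) and the majority vote are assumptions for illustration; the study’s actual aggregation method isn’t described in this article.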

The implications

From a scientific point of view, the researchers say their findings offer “strong support” for the theory that our sexual orientation is based on our exposure to certain hormones before birth – that sexuality is not a choice.

The discovery is clearly fascinating, and hopefully a much-needed wake-up call for the one in three Brits who still hold the view that homosexuality is a choice.

But from a societal point of view, the implications of this AI are potentially terrifying.

In the wrong hands an algorithm or artificial intelligence that claims to determine sexuality could be disastrous.

The risks

Iran, Mauritania, Saudi Arabia, Sudan and Yemen, as well as parts of Nigeria and Somalia, enforce the death penalty for homosexuality.

Even countries like India and Egypt still have laws under which sexual activities deemed “against the order of nature” are a crime, leading to the imprisonment of same-sex couples.

If people in these countries got their hands on a technology that, regardless of its accuracy, claims to determine someone’s sexuality, it could have deadly consequences.

That’s not to mention more liberal countries like the US, where homosexuality retains a stigma in many conservative communities and where many gay and lesbian people choose to keep their sexuality private.

This technology could forcibly out them, putting friendships, families and careers at risk.

False positives could also see people discriminated against, or at the very least sow confusion about their sexuality.

A dangerous precedent

All of this leads to the question: why create such an algorithm?

“This is exactly the type of technology that can be harnessed to harm an already marginalised community,” Josh Rivers, the co-founder of the Series Q network for LGBTQ entrepreneurs, told The Memo.

“If governments and police forces around the world are taking advantage of dating apps like Grindr to entrap men in countries where homosexuality is illegal (and in some cases punishable by death), then this can only be another weapon in their arsenal.”

“It is pointless and it is reckless.”

Dr Michal Kosinski, who led the research, says he conducted it to demonstrate the power of artificial intelligence to policymakers and to highlight its risks. Kosinski says he invented no new technology, merely used software and data already available to anyone with an internet connection, but he won’t say more, to deter copycats.

So if its scientific ‘discoveries’ are minor, even inconsequential, why risk such an experiment when the resulting societal impact could be deadly?

Elon Musk has warned that artificial intelligence runs the risk of doing more harm than good, and this seems to be a worrying case study.

In the 1960s and 70s, when psychology was the hot new science, Stanford researchers crossed the line with the Stanford Prison Experiment, abandoning ethics in the pursuit of discovery at any cost.

Today we need to rein in AI research that once again risks taking society into dangerous waters, lest history repeat itself.