
INTRODUCING THE RACIST ROBOTS

INTELLECT DESK

Has technology really proven to be greater than its creators? Do creations take after their makers and their traits? Does this mean even artificial intelligence can have preferences, tastes, or even be racist? The very foundation of a learning machine is its creator and the data it is given. So, does that mean machines can be as biased as their creators?

Beauty.ai, an online beauty contest run in partnership with companies such as the Model Alliance, Fabrelic and Youth Laboratories, has generated some uproar. The platform took around 600,000 entries to be judged by an artificial intelligence (AI). The machine looked at each person’s facial symmetry, wrinkles, pimples, blemishes, race, age and other human “imperfections” and attributes. Oddly enough, the AI turned out to be racist: of the 44 winners, 36 were white.

The functions used to judge the participants were patterns learned from massive amounts of data. In this case, the AI must have been fed an image of the “perfect” human being in order to deduce what human beauty is. But if the AI was initially educated with pictures of white people, it is only natural that the machine would not recognize or relate to darker skin as “beautiful”. Since different races tend to share groups of common physical traits, such a narrow-minded machine would not be able to differentiate between them, and by following its algorithmic pattern, it would eventually choose the white-skinned participants.

According to Motherboard, the machine cannot think for itself and therefore operated on its stored data. Its notion of beauty is only a set of instructions, which it compares against each participant’s own features. To it, the participants are not individual humans but a task of “compare and contrast” to derive the best possible match. So, as a result, the very method of its deduction was biased to begin with.
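
To make the “compare and contrast” idea concrete, here is a minimal, hypothetical sketch (in Python) of how such a bias can arise when a machine’s learned standard of beauty is simply the average of a skewed training set. Every feature name and number below is invented for illustration; this is not Beauty.ai’s actual data or algorithm.

```python
# A hypothetical sketch of the bias described above: a judge whose
# "standard of beauty" is just the average of its training examples.
# All feature names and numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)

def fake_faces(n, mean_tone):
    """Generate fake face features: [symmetry, skin smoothness, skin tone]."""
    return np.column_stack([
        rng.normal(0.75, 0.05, n),       # facial symmetry
        rng.normal(0.65, 0.05, n),       # skin smoothness (wrinkles, blemishes)
        rng.normal(mean_tone, 0.05, n),  # skin tone (0 = darkest, 1 = lightest)
    ])

# A skewed training set: 950 light-skinned faces, only 50 dark-skinned ones.
training_set = np.vstack([fake_faces(950, mean_tone=0.85),
                          fake_faces(50, mean_tone=0.25)])

# The "ideal face" the machine learns is just the average training face,
# so it inevitably inherits the majority group's skin tone.
ideal_face = training_set.mean(axis=0)

def beauty_score(face):
    """Higher (closer to zero) means closer to the learned 'ideal'."""
    return -np.linalg.norm(face - ideal_face)

# Two contestants who are identical except for skin tone.
light_contestant = np.array([0.75, 0.65, 0.85])
dark_contestant = np.array([0.75, 0.65, 0.25])

print(beauty_score(light_contestant))  # small penalty: near the learned ideal
print(beauty_score(dark_contestant))   # large penalty: for skin tone alone
```

Even though the two contestants have identical symmetry and smoothness, the one whose skin tone sits far from the learned average gets a much worse score.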

“It happens to be that color does matter in machine vision. And for some population groups the data sets are lacking an adequate number of samples to be able to train the deep neural networks,” said the chief science officer of Beauty.ai, Alex Zhavoronkov, in an interview with Motherboard.

The answer to this problem is better filtering and more diverse data. If the AI had been exposed to people of a variety of races, it would have been better equipped to evaluate a wider range of physical features.
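
Continuing the same hypothetical sketch, here is what diverse data changes: with a balanced training set, the learned “ideal” no longer sits beside one group, so skin tone stops dominating the score. Again, every feature and number is invented for illustration.

```python
# The same toy scoring scheme as above, but trained on balanced data.
import numpy as np

rng = np.random.default_rng(42)

def fake_faces(n, mean_tone):
    return np.column_stack([
        rng.normal(0.75, 0.05, n),       # facial symmetry
        rng.normal(0.65, 0.05, n),       # skin smoothness
        rng.normal(mean_tone, 0.05, n),  # skin tone
    ])

# A balanced training set: 500 light-skinned and 500 dark-skinned faces.
balanced_set = np.vstack([fake_faces(500, mean_tone=0.85),
                          fake_faces(500, mean_tone=0.25)])
ideal_face = balanced_set.mean(axis=0)

def beauty_score(face):
    return -np.linalg.norm(face - ideal_face)

print(beauty_score(np.array([0.75, 0.65, 0.85])))  # light-skinned contestant
print(beauty_score(np.array([0.75, 0.65, 0.25])))  # dark-skinned contestant
# The two scores are now roughly equal; an even better filter would leave
# skin tone out of the comparison entirely.
```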

“If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing non-white faces,” writes Kate Crawford, principal researcher at Microsoft Research New York City, in a New York Times op-ed. She adds, “So inclusivity matters—from who designs it to who sits on the company boards and which ethical perspectives are included. Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes.”

Beauty.ai is rumored to be hosting another beauty contest with an AI judge in October, but whether it will learn from its mistakes and use a wider range of data is still unclear. Maybe racism is one of the many biases we all carry, and we simply happen to conceal it well.

September 10, 2016