For those who were behind the door when God was giving out the faces, this is not a pretty story. In all of its horrifying detail it has been revealed that a beauty contest judged not by humans but by robots was negative about contestants who were other than ethnic European. The first international beauty contest to be inhumanly judged was intended to use objective factors such as facial symmetry and wrinkles to separate the winners from the also-rans. Not surprisingly, the contest’s result ignited sanctimonious debate and ribald reaction from ‘the deplorables’.
The Guardian’s Social Justice Editor, Everard Suckensee, described such comments as ‘deeply offensive and distasteful’.
The Beauty.A1 contest, judged by algorithm, has prompted outrage. Ms Shoshana Tel-Aviv said the contest was ‘grotesque’.
"Regretfully, it isn’t possible to arraign the offending device before a court of law. But I see no reason why the humans who initiated the contest should not feel the full weight of the law. Someone has to pay."
Beauty.A1, launched earlier in the year, analysed approximately 6,000 facial images submitted by contestants in over 100 countries. Each contestant’s ambition was to discover where they figured on a scale that would determine who best epitomised human beauty. When the results were published, ‘the creators were dismayed to see that there was a glaring factor linking the winners’: the robots were revolted by people with dark skin.
The Beauty.A1 contest might well be described as more than skin deep. Instead of just three contestants selected to strut their stuff on the Podium of Narcissus, there were a total of 44 clear winners. One would have thought that the liberal set would be overwhelmed with joy; clearly, everyone was a winner; no one need suffer the humiliation of rejection.
However, the contest’s finalists were not equally represented. Such outcome would have resulted in the 44 finalists being made up of 15 ethnic-Europeans, 15 Asians and 14 whose noses were such that it might be thought they were eating bananas. However, those with ethnic-European complexions led by a handsome margin. There were a handful of lighter-skinned Asians but only one of these runners up had dark skin.
Many were the lewd remarks emanating from the far-right caucus: One was heard to call out, “she eats bananas with her feet.” The police are investigating to see if an offence has been committed.
The outcome of the Beauty.A1 contest prompted debate about the ways in which robots can perpetuate biases by “yielding unintended but offensive results.” Is it possible to program a robot so it ignores those whose skin is ethnic-European? This raised the question of the racist robot. It was then suggested that if the robot were programmed to prefer non-Whites to Whites, then it could be listed under the term ‘affirmative action’.
The question remains: how do you categorise unfortunates burdened by faces only a mother could love? When Microsoft released the millennial chatbot dubbed Tay in March 2016, the robot immediately began to use politically incorrect language. This machine too was described as ‘racist’. The politically correct caucus thought it outrageous that Tay was airing ‘neo-Nazi views’ on Twitter.
Facebook recently laid off its human editors. Facebook’s censors had achieved notoriety by deleting posts and blocking users for ‘off-message’ content. However, the replacement algorithm went into reverse mode and actually favoured ‘fake and vulgar stories’ on news feeds.
Social Justice Advocates are trying to put a brave face on it but complain that this is no laughing matter. They say, “In some cases, it can have devastating consequences for people of colour.”
Malkia Cyril says; “It’s a polluted data producing polluted results.” The Executive Director of the Centre for Media Justice bitterly remembers being stung by the jibe that he epitomised the expression, ‘dark and handsome’ as being handsome only in the dark.
Is there a role for AI algorithms in the future? Alex Zhavoronkov, Beauty.A1’s science officer, says:
"When Beauty.A1 launches another contest this fall, I expect the algorithm will have a number of changes designed to weed out discriminatory results. We will try to correct it."