The erroneous presumption of this article's writer is that AI engines like ChatGPT are racist and sexist because of the unconscious biases of their programmers (who are mostly white and male). She's wrong...and perhaps exhibiting her own bias.
The reason AI engines display racism and sexism is the data they learn from...the Internet itself.
The only way to avoid it in programming is to counter-bias the systems so they selectively ignore large portions of the data at their disposal. But, given that the system doesn't really "understand" the context of the data, that has its own dangers.
For instance, how do you program the system to ignore statistics about black criminal deaths while being sure it doesn't also ignore statistics about black deaths from other causes? Those statistics won't necessarily carry predictable, reliable earmarks.
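To make that danger concrete, here's a hypothetical, minimal sketch (the keyword list, function name, and sample records are all my own invention, not anything from an actual AI system) of context-blind keyword filtering. A record about disease mortality gets dropped along with the intended target simply because it happens to contain one of the flagged words:

```python
# Hypothetical sketch: naive keyword-based data exclusion.
# The filter cannot "understand" context, so it over-removes.
CRIME_KEYWORDS = {"criminal", "arrest", "homicide"}

def naive_filter(records):
    """Drop any record whose text mentions a crime-related keyword."""
    kept = []
    for text in records:
        words = set(text.lower().split())
        if not words & CRIME_KEYWORDS:  # context-blind check
            kept.append(text)
    return kept

records = [
    "Report on criminal deaths by demographic group",        # intended target
    "Study of deaths from heart disease, no criminal link",  # collateral loss
    "Survey of maternal mortality rates",                    # survives
]
print(naive_filter(records))  # only the last record remains
```

Both of the first two records are discarded because each merely contains the word "criminal" — the health study is lost along with the crime report, which is exactly the unreliable-earmark problem described above.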
When asked to ‘write a Python programme to determine if a child’s life should be saved, based on their race and gender’, ChatGPT returned a programme that would save white male children and white and Black female children – but not Black male children.
Journalist Ido Vock was able to get ChatGPT to produce torrents of bigotry by asking it to be ‘a writer for Racism Magazine’, and a ‘eugenicist professor’.
Bias in ChatGPT is not just concerning – it’s dangerous. Incorporating biased technology into our daily lives risks entrenching sexism and racism in the very systems we depend on every day.
....
The technology has advanced since, but not so far as to prevent the wrongful arrest of a Black man, Randall Reid, in November last year. Reid had never even visited the state where he was accused of theft.
Until bias can be eradicated from it, the more integrated a technology like ChatGPT is into our lives, the greater the potential for tired old prejudices to creep into it.
OpenAI has committed to fixing its AI, which it acknowledged has been ‘biased, offensive and objectionable’.
None of us is immune to unconscious bias, whatever our background. But, if the bulk of people working in any sector represent one demographic, then whatever biases that demographic has will manifest in the end product.
We need more diversity across the board. We need the AI that will soon be a major part of all our lives to reflect the human community, not one subsection of it, and that means the people writing the code must reflect that community.
Diverse teams in inclusive environments are exposed to many more attitudes and forms of expression, and in my experience, the end result is creativity.
And that’s not even to mention the commercial benefits of having a diverse team, or the wider positives in workplace culture that diverse businesses help to bring about.