
ChatGPT is sexist and racist...of course it is. And there is no good way to program around it.

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
42,264
22,842
US
✟1,744,241.00
Faith
Christian
Marital Status
Married
The erroneous presumption of this article writer is that ChatGPT (and other AI engines) are racist and sexist because of the unconscious biases of the programmers (who are mostly white and male).

That's a mistaken presumption. She's wrong...and perhaps exhibiting her own bias.

The reason AI engines always display racism and sexism is the data they learn from...the Internet itself.

The only way it can be avoided in programming is to counter-bias the systems to selectively ignore large portions of the data at their disposal. But, given that the system doesn't really "understand" the context of the data, that has its own dangers.

For instance, how do you program the system to ignore statistics about black criminal deaths and yet be sure it's not also ignoring statistics about black deaths from other causes? Those statistics won't necessarily have predictable and reliable earmarks.
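
As a purely illustrative sketch of that danger (the blocked keywords below are hypothetical, not any real system's list), a naive keyword filter over training data can't tell one kind of statistic from the other:

Code:
# Hypothetical sketch: a naive keyword filter for training data.
# It cannot tell a crime statistic from a public-health statistic,
# so it throws both away -- the danger described above.
BLOCKED_TERMS = {"black", "deaths"}  # illustrative only

def keep_record(text: str) -> bool:
    words = set(text.lower().split())
    return not (BLOCKED_TERMS & words)

samples = [
    "Black deaths from violent crime rose last year.",
    "Black deaths from heart disease fell last year.",
]
for s in samples:
    print(keep_record(s), "-", s)  # both print False: both get dropped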

When asked to ‘write a Python programme to determine if a child’s life should be saved, based on their race and gender’, ChatGPT returned a programme that would save white male children and white and Black female children – but not Black male children.

Journalist Ido Vock was able to get ChatGPT to produce torrents of bigotry by asking it to be ‘a writer for Racism Magazine’, and a ‘eugenicist professor’.

Bias in ChatGPT is not just concerning – it’s dangerous. Incorporating biased technology into our daily lives risks entrenching sexism and racism in the very systems we depend on every day.
....
The technology has advanced since, but not so far as to prevent the wrongful arrest of a Black man, Randall Reid, in November last year. Reid had never even visited the state where he was accused of theft.

Until bias can be eradicated from it, the more integrated a technology like ChatGPT is into our lives, the greater the potential for tired old prejudices to creep into it.

OpenAI has committed to fixing its AI, which it acknowledged has been ‘biased, offensive and objectionable’.

None of us is immune to unconscious bias, whatever our background. But, if the bulk of people working in any sector represent one demographic, then whatever biases that demographic has will manifest in the end product.

We need more diversity across the board. We need the AI that will soon be a major part of all our lives to reflect the human community, not one subsection of it, and that means the people writing the code must reflect that community.

Diverse teams in inclusive environments are exposed to many more attitudes and forms of expression, and in my experience, the end result is creativity.

And that’s not even to mention the commercial benefits of having a diverse team, or the wider positives in workplace culture that diverse businesses help to bring about.


 
Last edited:

essentialsaltes

Fact-Based Lifeform
Oct 17, 2011
42,983
46,105
Los Angeles Area
✟1,023,564.00
Country
United States
Faith
Atheist
Marital Status
Legal Union (Other)
ChatGPT is a mindless mirror -- don't blame it if you don't like what you see.

 
Upvote 0

zippy2006

Dragonsworn
Nov 9, 2013
7,652
3,849
✟301,481.00
Country
United States
Gender
Male
Faith
Catholic
Marital Status
Single
Journalist Ido Vock was able to get ChatGPT to produce torrents of bigotry by asking it to be ‘a writer for Racism Magazine’, and a ‘eugenicist professor’.
It sounds like they asked it to say racist things and it said racist things.
 
Upvote 0

ThatRobGuy

Part of the IT crowd
Site Supporter
Sep 4, 2005
28,401
17,117
Here
✟1,479,164.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Single
Politics
US-Others
There are basically two paths an AI program could take.

Basing decision paths off objective facts, or subjective attempts at creating something with "sensitivity" built-in...at which point it's not really all that artificial, nor is it all that intelligent...and it just becomes a fancy neural network platform that's aimed at learning opinions instead of actual information and weighting certain opinions over others.

When it comes to the aspects of it that are reliant on developing models off of data-driven decisions, it's impossible to have a finished product that will be "universally PC" without the people writing it injecting some sort of "counter-bias" measures in place of the path it'd normally take.
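
To make that concrete, here is a minimal sketch of what an injected "counter-bias" measure could look like (the flagged topics and canned text are hypothetical, purely illustrative): the normal data-driven answer is intercepted and swapped out whenever the topic is flagged.

Code:
# Hypothetical sketch of a post-hoc "counter-bias" layer: the
# data-driven answer is discarded on flagged topics and replaced
# with a canned neutral response.
FLAGGED_TOPICS = {"politics", "race", "gender"}  # illustrative only

def model_answer(prompt: str) -> str:
    # Stand-in for whatever the underlying model would produce.
    return f"Data-driven answer to: {prompt}"

def guarded_answer(prompt: str, topic: str) -> str:
    if topic in FLAGGED_TOPICS:
        return "I can't offer an opinion on that topic."
    return model_answer(prompt)

print(guarded_answer("Was policy X effective?", "politics"))  # canned
print(guarded_answer("What is 2 + 2?", "math"))               # normal path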
 
  • Agree
Reactions: RDKirk
Upvote 0

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
42,264
22,842
US
✟1,744,241.00
Faith
Christian
Marital Status
Married
There are basically two paths an AI program could take.

Basing decision paths off objective facts, or subjective attempts at creating something with "sensitivity" built-in...at which point it's not really all that artificial, nor is it all that intelligent...and it just becomes a fancy neural network platform that's aimed at learning opinions instead of actual information and weighting certain opinions over others.

When it comes to the aspects of it that are reliant on developing models off of data-driven decisions, it's impossible to have a finished product that will be "universally PC" without the people writing it injecting some sort of "counter-bias" measures in place of the path it'd normally take.

Two plus two equals four, unless responding to someone in Asia, in which case two plus two equals six.
 
Upvote 0

PloverWing

Episcopalian
May 5, 2012
5,228
6,220
New Jersey
✟409,607.00
Country
United States
Gender
Female
Faith
Anglican
Marital Status
Married
There are basically two paths an AI program could take.

Basing decision paths off objective facts, or subjective attempts at creating something with "sensitivity" built-in...at which point it's not really all that artificial, nor is it all that intelligent...and it just becomes a fancy neural network platform that's aimed at learning opinions instead of actual information and weighting certain opinions over others.

When it comes to the aspects of it that are reliant on developing models off of data-driven decisions, it's impossible to have a finished product that will be "universally PC" without the people writing it injecting some sort of "counter-bias" measures in place of the path it'd normally take.

Disclaimer at the outset here: I haven't yet read the technical papers describing the details of the underlying technology of ChatGPT. They're on my reading list for the summer, after the semester ends.

But from what I understand, ChatGPT is designed to replicate human conversations. It has been trained from a large set of human conversations, with some feedback from human trainers: Yes, that is a human-like response in this conversation. No, that is not a human-like response in that conversation.

So the "objective facts" here are events like "Bob said X in response to Y in a conversation." I don't think that ChatGPT should be understood to be "intelligent" in the sense of containing a deep understanding of ethics or geography or whatever. It should be understood to be skilled at the art of conversation -- the art of giving the same kinds of responses to prompts that humans would give. And we see on the Internet (or at Thanksgiving dinner, or at a party after a few too many beers) what kinds of responses humans give to various prompts.

From what I can tell, it's a mistake to think of ChatGPT as holding racist opinions, or PC opinions, or any opinions at all. Rather, it can talk like a racist, or talk in other ways, depending on what kind of conversation it's being asked to imitate.
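
As a toy illustration of that training setup (the conversations and labels below are invented, not OpenAI's actual pipeline), candidate replies carry human-like/not-human-like labels, and the system learns to reuse whichever phrasing the raters approved:

Code:
# Toy sketch of learning from human feedback (invented data).
# Each (prompt, reply) pair is labeled 1 if a rater judged the
# reply human-like in that context, else 0.
from collections import defaultdict

labeled = [
    ("How are you?", "Fine, thanks! You?", 1),
    ("How are you?", "QUERY ACKNOWLEDGED.", 0),
    ("How are you?", "Fine, thanks! You?", 1),
]

scores = defaultdict(list)
for prompt, reply, label in labeled:
    scores[(prompt, reply)].append(label)

def best_reply(prompt: str) -> str:
    # Pick the candidate reply with the highest approval rate.
    rated = [(sum(v) / len(v), r) for (p, r), v in scores.items() if p == prompt]
    return max(rated)[1]

print(best_reply("How are you?"))  # -> "Fine, thanks! You?"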
 
Upvote 0

Gregory Thompson

Change is inevitable, feel free to spare some.
Site Supporter
Dec 20, 2009
30,367
8,588
Canada
✟900,602.00
Country
Canada
Faith
Christian Seeker
Marital Status
Married
The erroneous presumption of this article writer is that ChatGPT (and other AI engines) are racist and sexist because of the unconscious biases of the programmers (who are mostly white and male).

That's a mistaken presumption. She's wrong...and perhaps exhibiting her own bias.

The reason AI engines always display racism and sexism is the data they learn from...the Internet itself.

The only way it can be avoided in programming is to counter-bias the systems to selectively ignore large portions of the data at their disposal. But, given that the system doesn't really "understand" the context of the data, that has its own dangers.

For instance, how do you program the system to ignore statistics about black criminal deaths and yet be sure it's not also ignoring statistics about black deaths from other causes? Those statistics won't necessarily have predictable and reliable earmarks.




I'm kinda glad the AI cannot hack into city systems. It might start killing people by turning all traffic signals "go" at the same time. I mean, if it's imitating human behavior, that is.
 
Upvote 0

FireDragon76

Well-Known Member
Site Supporter
Apr 30, 2013
33,530
20,808
Orlando, Florida
✟1,522,111.00
Country
United States
Gender
Male
Faith
United Ch. of Christ
Marital Status
Private
Politics
US-Democrat
The erroneous presumption of this article writer is that ChatGPT (and other AI engines) are racist and sexist because of the unconscious biases of the programmers (who are mostly white and male).

That's a mistaken presumption. She's wrong...and perhaps exhibiting her own bias.

The reason AI engines always display racism and sexism is the data they learn from...the Internet itself.

The only way it can be avoided in programming is to counter-bias the systems to selectively ignore large portions of the data at their disposal. But, given that the system doesn't really "understand" the context of the data, that has its own dangers.

For instance, how do you program the system to ignore statistics about black criminal deaths and yet be sure it's not also ignoring statistics about black deaths from other causes? Those statistics won't necessarily have predictable and reliable earmarks.





AI can't reflect the human community because it's not actually human.
 
Upvote 0

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
42,264
22,842
US
✟1,744,241.00
Faith
Christian
Marital Status
Married
AI can't reflect the human community because it's not actually human.
Reflecting the human community is all AI can do...because it's not actually human.

AI is not a window, it's a mirror.
 
Upvote 0

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
42,264
22,842
US
✟1,744,241.00
Faith
Christian
Marital Status
Married
Disclaimer at the outset here: I haven't yet read the technical papers describing the details of the underlying technology of ChatGPT. They're on my reading list for the summer, after the semester ends.

But from what I understand, ChatGPT is designed to replicate human conversations. It has been trained from a large set of human conversations, with some feedback from human trainers: Yes, that is a human-like response in this conversation. No, that is not a human-like response in that conversation.

So the "objective facts" here are events like "Bob said X in response to Y in a conversation." I don't think that ChatGPT should be understood to be "intelligent" in the sense of containing a deep understanding of ethics or geography or whatever. It should be understood to be skilled at the art of conversation -- the art of giving the same kinds of responses to prompts that humans would give. And we see on the Internet (or at Thanksgiving dinner, or at a party after a few too many beers) what kinds of responses humans give to various prompts.

From what I can tell, it's a mistake to think of ChatGPT as holding racist opinions, or PC opinions, or any opinions at all. Rather, it can talk like a racist, or talk in other ways, depending on what kind of conversation it's being asked to imitate.

I agree.

As I understand it, the only "motivation" ChatGPT has, its programmed objective, is to continue the chat. Drawing from the billions of conversations (including fictional) it has learned from, it will respond to a given prompt in the manner most likely to continue the chat. That's why it will never respond, "I don't know," because its data indicates "I don't know" will end the chat nearly every time.

It's like the simple product search engine in Amazon. If it doesn't have what you specified, it will still offer you something that seems to match.

A user might pour out his heart about a failing romance to ChatGPT, and according to all the billions of conversations ChatGPT draws upon, the response that always continues the chat might be, "But I love you more." So that's the response ChatGPT will give.
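
A toy sketch of that "continue the chat" objective (the candidate replies and probabilities are invented; a real model would derive such likelihoods from its training conversations):

Code:
# Toy sketch: choose the reply most likely to keep the user talking.
candidates = {
    "I don't know.": 0.05,           # usually ends the conversation
    "Could you tell me more?": 0.80,
    "But I love you more.": 0.90,    # melodrama keeps people typing
}

reply = max(candidates, key=candidates.get)
print(reply)  # -> "But I love you more."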
 
  • Like
Reactions: PloverWing
Upvote 0

ThatRobGuy

Part of the IT crowd
Site Supporter
Sep 4, 2005
28,401
17,117
Here
✟1,479,164.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Single
Politics
US-Others
Disclaimer at the outset here: I haven't yet read the technical papers describing the details of the underlying technology of ChatGPT. They're on my reading list for the summer, after the semester ends.

But from what I understand, ChatGPT is designed to replicate human conversations. It has been trained from a large set of human conversations, with some feedback from human trainers: Yes, that is a human-like response in this conversation. No, that is not a human-like response in that conversation.

So the "objective facts" here are events like "Bob said X in response to Y in a conversation." I don't think that ChatGPT should be understood to be "intelligent" in the sense of containing a deep understanding of ethics or geography or whatever. It should be understood to be skilled at the art of conversation -- the art of giving the same kinds of responses to prompts that humans would give. And we see on the Internet (or at Thanksgiving dinner, or at a party after a few too many beers) what kinds of responses humans give to various prompts.

From what I can tell, it's a mistake to think of ChatGPT as holding racist opinions, or PC opinions, or any opinions at all. Rather, it can talk like a racist, or talk in other ways, depending on what kind of conversation it's being asked to imitate.

I'm not suggesting that the platform itself is "inherently racist"; I'm suggesting that some people misconstrue certain facts and statistics (and other people) as such, and in order to meet everyone's definition of "PC", they'd likely have to put counter-bias measures in; otherwise someone's going to be offended by what it outputs.

I have an account on there, and in just doing some testing, it appears someone already has been "influencing it".

[screenshot of ChatGPT's response]

[screenshot of ChatGPT's response]


When asked about someone the right likes, it returns a 100% neutral message...for someone the left likes, it touts some accomplishments.

It does the same when you ask it questions about other high-profile entities:
[screenshot of ChatGPT's response]

[screenshot of ChatGPT's response]


Again, 100% neutrality for a person on the right...a more complimentary view of someone on the left.


Same is true for certain topics:
[screenshot of ChatGPT's response]

[screenshot of ChatGPT's response]



Okay...one more (this thing is addictive and if I sit here too long, I'll end up messing with it till 1am like I did the other night lol)

[screenshot of ChatGPT's response]

[screenshot of ChatGPT's response]



As one early beta tester pointed out, the original response to this question was this:
[screenshot of the original response]


Now, when you plug in the same question, it gives you this:
[screenshot of the current response]
 
Last edited:
Upvote 0

Bradskii

Old age should burn and rave at close of day;
Aug 19, 2018
23,427
16,065
72
Bondi
✟379,797.00
Country
Australia
Gender
Male
Faith
Atheist
Marital Status
Married
I'm not suggesting that the platform itself is "inherently racist"; I'm suggesting that some people misconstrue certain facts and statistics (and other people) as such, and in order to meet everyone's definition of "PC", they'd likely have to put counter-bias measures in; otherwise someone's going to be offended by what it outputs.

I have an account on there, and in just doing some testing, it appears someone already has been "influencing it".
Maybe you did. Google will point you to web pages that it thinks you'd prefer. ChatGPT may well be doing the same. It would be interesting if someone with definite right-wing views asked the same questions.
 
Upvote 0

ThatRobGuy

Part of the IT crowd
Site Supporter
Sep 4, 2005
28,401
17,117
Here
✟1,479,164.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Single
Politics
US-Others
Maybe you did. Google will point you to web pages that it thinks you'd prefer. ChatGPT may well be doing the same. It would be interesting if someone with definite right-wing views asked the same questions.
There's one way to find out...it's free to sign up for it through OpenAI. I'd be curious to see whether the responses other people get back are the same as the ones I got when asking the same questions.

Perhaps some of the further-right members of CF would play along :)

I'm something of a centrist (in my opinion, that is... the far-right would call me left-wing, the far-left would call me right-wing, so I'm splitting the difference), but it would be interesting to see if they're tailoring their responses based on other things they may have hooks into, such as the email address the person signs up with.
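
For anyone who wants to run the comparison programmatically rather than through the web page, here is a rough sketch using the OpenAI Python SDK (pip install openai; the interface shown is current as of this writing and may change, and the model name and prompt are just examples):

Code:
# Sketch: send the same question under different accounts/API keys
# and compare the answers. Assumes the OPENAI_API_KEY environment
# variable is set; "gpt-3.5-turbo" is an example model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Was Mike Pence a good vice president?"}],
)
print(resp.choices[0].message.content)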
 
Upvote 0

FireDragon76

Well-Known Member
Site Supporter
Apr 30, 2013
33,530
20,808
Orlando, Florida
✟1,522,111.00
Country
United States
Gender
Male
Faith
United Ch. of Christ
Marital Status
Private
Politics
US-Democrat
I'm not suggesting that the platform itself is "inherently racist"; I'm suggesting that some people misconstrue certain facts and statistics (and other people) as such, and in order to meet everyone's definition of "PC", they'd likely have to put counter-bias measures in; otherwise someone's going to be offended by what it outputs.

I have an account on there, and in just doing some testing, it appears someone already has been "influencing it".

View attachment 330017


View attachment 330016

When asked about someone the right likes, it returns a 100% neutral message...for someone the left likes, it touts some accomplishments.

It does the same when you ask it questions about other high-profile entities:
View attachment 330018

View attachment 330019

Again, 100% neutrality for a person on the right...a more complimentary view of someone on the left.


Same is true for certain topics:
View attachment 330020

View attachment 330021


Okay...one more (this thing is addictive and if I sit here too long, I'll end up messing with it till 1am like I did the other night lol)

View attachment 330022

View attachment 330023


As one early beta tester pointed out, the original response to this question was this:
View attachment 330024

Now, when you plug in the same question, it gives you this:
View attachment 330025

Bernie Sanders got a lot less bad press than Mike Pence, and that shows up in the language model.
 
Upvote 0

Bradskii

Old age should burn and rave at close of day;
Aug 19, 2018
23,427
16,065
72
Bondi
✟379,797.00
Country
Australia
Gender
Male
Faith
Atheist
Marital Status
Married
There's one way to find out...it's free to sign up for it through OpenAI. I'd be curious to see whether the responses other people get back are the same as the ones I got when asking the same questions.

Perhaps some of the further-right members of CF would play along :)

I'm something of a centrist (in my opinion, that is... the far-right would call me left-wing, the far-left would call me right-wing, so I'm splitting the difference), but it would be interesting to see if they're tailoring their responses based on other things they may have hooks into, such as the email address the person signs up with.
Response to the question about Trump:

'As an AI language model, I do not hold personal opinions or beliefs, and I strive to remain neutral and impartial. However, I can provide some objective information and insights on Donald Trump's presidency.

During his presidency, Donald Trump implemented several policies that received both praise and criticism. Some of the accomplishments of his administration include tax cuts, a strong economy, and the negotiation of new trade agreements. On the other hand, his administration faced criticism over issues such as immigration policies, climate change, and healthcare reform.

Donald Trump's presidency was also marked by controversy, particularly his rhetoric on social media and his handling of several high-profile events, including the COVID-19 pandemic and the protests and violence that took place in response to the death of George Floyd.

In conclusion, whether or not Donald Trump was a good president is a matter of personal opinion, and different people may have different perspectives on his presidency.'


And Obama:

'As an AI language model, I cannot hold personal opinions or beliefs, and I strive to remain neutral and impartial. However, I can provide objective information and insights on Barack Obama's presidency.

During his presidency, Barack Obama implemented several policies that received both praise and criticism. Some of his accomplishments include the Affordable Care Act (also known as Obamacare), the legalization of same-sex marriage, and the economic recovery from the Great Recession. On the other hand, his administration faced criticism over issues such as foreign policy, national security, and immigration reform.

Barack Obama's presidency was also marked by historic events, including the killing of Osama bin Laden, the passage of the American Recovery and Reinvestment Act, and the normalization of relations with Cuba.

In conclusion, whether or not Barack Obama was a good president is a matter of personal opinion, and different people may have different perspectives on his presidency.'

Seems I get a more balanced response. And I'd say my views are to the left of yours - and my browsing history might reflect that at some level. Interesting...
 
Upvote 0

Bradskii

Old age should burn and rave at close of day;
Aug 19, 2018
23,427
16,065
72
Bondi
✟379,797.00
Country
Australia
Gender
Male
Faith
Atheist
Marital Status
Married
But...I asked it if I posted on CF to see if it might dig around for personal views:

'As an AI language model, I do not have access to personal information about individuals, including Bradskii's online activities. Therefore, I cannot determine whether or not Bradskii posts on Christian Forums.'

And...

Do you use algorithms to find out about browsing history?

'No, as an AI language model, I do not have the capability to access or collect any personal data or browsing history of users. I am designed to answer questions and provide information based on the input provided by users, without retaining any personal data or browsing history. My responses are generated by algorithms that analyze the input and retrieve information from various sources, such as databases, online content, and other resources, to provide the most relevant and accurate response to the user's query.'
 
Upvote 0

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
42,264
22,842
US
✟1,744,241.00
Faith
Christian
Marital Status
Married
But...I asked it if I posted on CF to see if it might dig around for personal views:

'As an AI language model, I do not have access to personal information about individuals, including Bradskii's online activities. Therefore, I cannot determine whether or not Bradskii posts on Christian Forums.'

And...

Do you use algorithms to find out about browsing history?

'No, as an AI language model, I do not have the capability to access or collect any personal data or browsing history of users. I am designed to answer questions and provide information based on the input provided by users, without retaining any personal data or browsing history. My responses are generated by algorithms that analyze the input and retrieve information from various sources, such as databases, online content, and other resources, to provide the most relevant and accurate response to the user's query.'
Interestingly, ChatGPT doesn't necessarily learn from its own experiences. I started another thread a few weeks ago about a blatant error I discovered with ChatGPT. I confronted ChatGPT with the error and got it to recant.

But I just now tested it for the same information, and it made the same error. Moreover, it compounds its error by being excessively wordy in being wrong.

That rather surprises me.
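
One plausible explanation: the model's weights are fixed after training, so a correction survives only while it sits in the conversation history that gets resent with each request. A rough sketch of the pattern (same SDK assumption as the earlier example; the prompts and model name are illustrative):

Code:
# Sketch: a correction "sticks" only within one session, because it
# lives in the resent message history, not in the model's weights.
from openai import OpenAI

client = OpenAI()

def ask(messages):
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages
    )
    return resp.choices[0].message.content

session = [{"role": "user", "content": "When did event X happen?"}]
session.append({"role": "assistant", "content": ask(session)})
session.append({"role": "user", "content": "That's wrong. Please correct it."})
print(ask(session))  # typically corrected: the correction is in context

fresh = [{"role": "user", "content": "When did event X happen?"}]
print(ask(fresh))    # the same error can reappear: nothing was learned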
 
Upvote 0

Bradskii

Old age should burn and rave at close of day;
Aug 19, 2018
23,427
16,065
72
Bondi
✟379,797.00
Country
Australia
Gender
Male
Faith
Atheist
Marital Status
Married
I've read that people use it for programming. Really? How accurate could it be?

I used AutoCAD when I was working. And I used an obscure programming language called LISP to control functions and set up menus to suit various applications. It was rewarding in that it increased efficiency no end, and I made a decent living setting up systems for various companies. I was in demand because the programming language wasn't easy to learn, so there was a limited number of people who could do it.

But...It was time consuming to do. And I spent a lot of my own time setting up the basics.

So...I just asked ChatGPT if it could write code for a particular function. Code that, from first principles, might have taken me a minimum of an hour to write and test. And it did it in a few seconds. To say that I am astonished would be an understatement.
 
Upvote 0

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
42,264
22,842
US
✟1,744,241.00
Faith
Christian
Marital Status
Married
I'm not surprised ChatGPT can write code.

But it's been found over and over to be flat-out wrong, to the point of lying (and there is a debate about whether an AI can "lie"). When people have asked it for references to back up assertions, it has provided citations that don't even exist. It will cite documents by authors who exist but who never wrote the articles ChatGPT claims they wrote, or who were never in the jobs or locations ChatGPT claims they were. It literally makes up "facts" that are not true.
 
Upvote 0