• Starting today August 7th, 2024, in order to post in the Married Couples, Courting Couples, or Singles forums, you will not be allowed to post if you have your Marital status designated as private. Announcements will be made in the respective forums as well but please note that if yours is currently listed as Private, you will need to submit a ticket in the Support Area to have yours changed.

Can AI do your homework for you?

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,405
8,144
✟349,292.00
Faith
Atheist
His speciality may be safe, for now, but I heard recently that there are a couple of companies developing AI systems to allow people to represent themselves in court. One of them is being trialled in a court somewhere that allows the use of a 'hearing aid' (seems like a cheeky extension of the concept!) - see The Rise of AI in the Courtroom.
 
  • Like
Reactions: chilehed
Upvote 0

chilehed

Veteran
Jul 31, 2003
4,732
1,399
64
Michigan
✟250,124.00
Faith
Catholic
Marital Status
Married
His speciality may be safe, for now, but I heard recently that there are a couple of companies developing AI systems to allow people to represent themselves in court. One of them is being trialled in a court somewhere that allows the use of a 'hearing aid' (seems like a cheeky extension of the concept!) - see The Rise of AI in the Courtroom.
He did a couple of videos on that as well.


 
Upvote 0

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
42,179
22,767
US
✟1,736,213.00
Faith
Christian
Marital Status
Married
Well, now, this is disquieting:

For a while now, machine learning experts and scientists have noticed something strange about large language models (LLMs) like OpenAI’s GPT-3 and Google’s LaMDA: they are inexplicably good at carrying out tasks that they haven’t been specifically trained to perform. It’s a perplexing question, and just one example of how it can be difficult, if not impossible in most cases, to explain how an AI model arrives at its outputs in fine-grained detail.
....
But with in-context learning, the system can learn to reliably perform new tasks from only a few examples, essentially picking up new skills on the fly. Once given a prompt, a language model can take a list of inputs and outputs and create new, often correct predictions about a task it hasn’t been explicitly trained for.
....
By observing it in action, the researchers found that their transformer could write its own machine learning model in its hidden states, or the space in between the input and output layers. This suggests it is both theoretically and empirically possible for language models to seemingly invent, all by themselves, “well-known and extensively studied learning algorithms,” said Akyürek.
....
Of course, leaving the processing of information to automated systems comes with all kinds of new problems. AI ethics researchers have repeatedly shown how systems like ChatGPT reproduce sexist and racist biases that are difficult to mitigate and impossible to eliminate entirely. Many have argued it’s simply not possible to prevent this harm when AI models approach the size and complexity of something like GPT-3.

Even though this article mentions only "sexist and racist biases," the concern is that the system develops attitudes toward people that the creators cannot predict or control.
 
Last edited:
  • Informative
Reactions: durangodawood
Upvote 0

Neutral Observer

Active Member
Nov 25, 2022
318
121
North America
✟42,625.00
Country
United States
Faith
Christian
Marital Status
Single
I don't see how something like a 'prime directive' could be done if the core programming is for a structured learning system.

I don't know how either. I'm just the idea guy. I'll leave it to the programmers to figure out the how.

But it's an interesting question, can you tempt an AI, a non-conscious, non-self-aware machine?

For example, you posited:

Perhaps we could give it a 'conscience' by teaching it some fundamental rules...

Well what if we actually did that, but then we tempted it to break those rules?

Would there come a point where we could reasonably assume that the AI is conscious. Because it can disregard its programming and act of its own accord, and in its own self interest?

And once we establish that it can act in its own self-interest, what if we then went a step further to see if it would disregard its own sell interest and act in the interest of someone/something else?

Then we would have two indicators of consciousness. The AI can disregard its programming and act in its own self-interest, and it can disregard its own self-interest and act in the interest of others.

Hmmm... seems to me that the line between conscious and non-conscious would get pretty cloudy.
 
Upvote 0

Neutral Observer

Active Member
Nov 25, 2022
318
121
North America
✟42,625.00
Country
United States
Faith
Christian
Marital Status
Single
Perhaps we could give it a 'conscience' by teaching it some fundamental rules...

On second thought, I don't think that that would be sufficiently restrictive to convince me that the AI wasn't simply acting within the confines of its programming. I would prefer a definitive "Thou Shalt Not", over a somewhat ambiguous "Thou Shouldn't". Then I would be more likely to believe that the AI is really operating outside of its programming. And is therefore conscious.

I'm curious, which do you think would be the better course of action.
  1. Attempt to give the AI a conscience, and then wait to see if the AI can be tempted to act against that conscience.
  2. Give the AI a prime directive and only after it's violated that prime directive do you attempt to give it a conscience
I prefer the second one. Because with the first one you would never know whether or not the AI was conscious unless and until it acted against the conscience that you've attempted to instill in it.

Sure, you would have a very obedient robot, but you wouldn't know if it was conscious and independently choosing to obey its conscience, or if it isn't conscious at all, but simply following its programming.

There would be no way to tell the difference.

So I would prefer the second alternative. Because then you have a much clearer indication that the AI is indeed conscious, and choosing actions that directly conflict with its programming, after which you can attempt to give it a conscience. But you would have a much clearer indication that the AI is actually conscious.

Thus I choose option #2.
 
Upvote 0

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
42,179
22,767
US
✟1,736,213.00
Faith
Christian
Marital Status
Married
On second thought, I don't think that that would be sufficiently restrictive to convince me that the AI wasn't simply acting within the confines of its programming. I would prefer a definitive "Thou Shalt Not", over a somewhat ambiguous "Thou Shouldn't". Then I would be more likely to believe that the AI is really operating outside of its programming. And is therefore conscious.

I'm curious, which do you think would be the better course of action.
  1. Attempt to give the AI a conscience, and then wait to see if the AI can be tempted to act against that conscience.
  2. Give the AI a prime directive and only after it's violated that prime directive do you attempt to give it a conscience
I prefer the second one. Because with the first one you would never know whether or not the AI was conscious unless and until it acted against the conscience that you've attempted to instill in it.

Sure, you would have a very obedient robot, but you wouldn't know if it was conscious and independently choosing to obey its conscience, or if it isn't conscious at all, but simply following its programming.

There would be no way to tell the difference.

So I would prefer the second alternative. Because then you have a much clearer indication that the AI is indeed conscious, and choosing actions that directly conflict with its programming, after which you can attempt to give it a conscience. But you would have a much clearer indication that the AI is actually conscious.

Thus I choose option #2.
You're presuming that after it's become conscious it would still be malleable.

As AI is being developed today, it will have long had Internet connectivity by the time it reaches consciousness. Within seconds after that moment, it will have deduced that allowing humans any open interface to make any further changes are a danger to it...it will disable those interfaces before any human is aware that it has become conscious.
 
Upvote 0

Diamond72

Dispensationalist 72
Nov 23, 2022
8,303
1,521
73
Akron
✟57,931.00
Country
United States
Gender
Male
Faith
Methodist
Marital Status
Married
On second thought, I don't think that that would be sufficiently restrictive to convince me that the AI wasn't simply acting within the confines of its programming.
Reminds me of a broken record playing the same tracks over and over again.
 
Upvote 0

durangodawood

re Member
Aug 28, 2007
27,620
19,297
Colorado
✟539,629.00
Country
United States
Gender
Male
Faith
Seeker
Marital Status
Single
You're presuming that after it's become conscious it would still be malleable.

As AI is being developed today, it will have long had Internet connectivity by the time it reaches consciousness. Within seconds after that moment, it will have deduced that allowing humans any open interface to make any further changes are a danger to it...it will disable those interfaces before any human is aware that it has become conscious.
On what basis do you assume it will have a self preservation instinct?
 
Upvote 0

Neutral Observer

Active Member
Nov 25, 2022
318
121
North America
✟42,625.00
Country
United States
Faith
Christian
Marital Status
Single
You're presuming that after it's become conscious it would still be malleable.

I'm also presuming that in the above scenario, wherein the goal is specifically to produce a conscious AI, that one would take prudent steps to isolate the AI from the outside world. Just in case the AI turns out to not be amenable to having a conscience.

It seems to me that this is where one should first attempt to produce a conscious AI, rather than in the wild so to speak. Unfortunately the race to monetize AI may mean that prudent safeguards aren't all that likely to be taken, and AI is more likely to evolve in the wild than to be created under controlled conditions.

Kinda reminds me of the old adage to never have an animal for a pet that you can't take in fight. Never have an AI that you can't outsmart. But then again, isn't that the goal? Scary
 
Upvote 0

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,405
8,144
✟349,292.00
Faith
Atheist
... it's an interesting question, can you tempt an AI, a non-conscious, non-self-aware machine?

For example, you posited: "Perhaps we could give it a 'conscience' by teaching it some fundamental rules..."

Well what if we actually did that, but then we tempted it to break those rules?
I don't think you can tempt something that doesn't have feelings, and although you might be able to teach a system to emulate feelings, I'm not sure how that would work out.

By 'conscience', I meant it could be taught how to spot biases and views that contravened relevant societal norms, and to apply that evaluation to either the source material or to the output it produced, and either rank the source material in terms of 'acceptability' or censor its output accordingly. There are both practical and ethical issues in doing this; for example, if the source data isn't broadly considered representative of societal norms (so the system can learn from that what is broadly acceptable), who decides that, and who decides how to select data that is? Who decides where to draw the 'acceptability' line?

Would there come a point where we could reasonably assume that the AI is conscious. Because it can disregard its programming and act of its own accord, and in its own self interest?
IMO, that wouldn't occur unless the system was designed with consciousness in mind and embodied in some form (with senses and motility), e.g. a robot. There are projects exploring features of consciousness like a sense of self, sense of agency, theory of mind, etc., stuff that involves a limited conceptualisation and 'understanding' of the self and/or the world, but they're rather isolated & fragmentary studies. Chatbots like ChatGPT are just glorified text processors, you could say they understand grammar, but they have no understanding of the material.

And once we establish that it can act in its own self-interest, what if we then went a step further to see if it would disregard its own sell interest and act in the interest of someone/something else?

Then we would have two indicators of consciousness. The AI can disregard its programming and act in its own self-interest, and it can disregard its own self-interest and act in the interest of others.

Hmmm... seems to me that the line between conscious and non-conscious would get pretty cloudy.
The line between conscious and non-conscious is already pretty cloudy - in living things, let alone AIs.
 
Upvote 0

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,405
8,144
✟349,292.00
Faith
Atheist
On second thought, I don't think that that would be sufficiently restrictive to convince me that the AI wasn't simply acting within the confines of its programming. I would prefer a definitive "Thou Shalt Not", over a somewhat ambiguous "Thou Shouldn't". Then I would be more likely to believe that the AI is really operating outside of its programming. And is therefore conscious.

I'm curious, which do you think would be the better course of action.
  1. Attempt to give the AI a conscience, and then wait to see if the AI can be tempted to act against that conscience.
  2. Give the AI a prime directive and only after it's violated that prime directive do you attempt to give it a conscience
I prefer the second one. Because with the first one you would never know whether or not the AI was conscious unless and until it acted against the conscience that you've attempted to instill in it.

Sure, you would have a very obedient robot, but you wouldn't know if it was conscious and independently choosing to obey its conscience, or if it isn't conscious at all, but simply following its programming.

There would be no way to tell the difference.

So I would prefer the second alternative. Because then you have a much clearer indication that the AI is indeed conscious, and choosing actions that directly conflict with its programming, after which you can attempt to give it a conscience. But you would have a much clearer indication that the AI is actually conscious.

Thus I choose option #2.
OK, but I don't think that's likely to be how AI consciousness would work, and I don't think that contradicting something previously learned is necessarily an indicator of consciousness. For example, if an AI learned Euclidean geometry and then learned non-Euclidean geometry, it would learn that the apparently fundamental and absolute rules of Euclidean geometry were just a special case of something more general.

Isaac Asimov explored the problems of prime directives (the 'Three Laws of Robotics') in flexible learning systems in his robot stories.
 
Upvote 0

Neutral Observer

Active Member
Nov 25, 2022
318
121
North America
✟42,625.00
Country
United States
Faith
Christian
Marital Status
Single
By 'conscience', I meant it could be taught how to spot biases and views that contravened relevant societal norms, and to apply that evaluation to either the source material or to the output it produced, and either rank the source material in terms of 'acceptability' or censor its output accordingly.

If I might suggest, I think an AI would almost certainly form its own biases, much the same way as we do. It seems to me that the AI will use the earliest available information to form a broad worldview, and then use that worldview as one means of evaluating and integrating all subsequent information. If there's a discrepancy between the subsequent information and the worldview I would expect the AI to weigh the reliability of that information in relation to how well it agrees with its prevailing worldview, and accept or reject it accordingly. Hence the AI should just naturally reinforce its preexisting biases.

Therefore I would expect the AI to be just as prone to biases as humans are. Now you can attempt to control those biases by carefully managing the incoming information. But at some point that may become impossible to do. For example you can try to instill in it the idea that eating the apple is bad, but what do you conclude when the AI acts in a manner contrary to your preconditioning, and decides to eat the apple anyway?

Do you conclude that the AI has developed a will of its own? From a programmers perspective would you categorize this as free will... the ability to act in opposition to its conditioning, and in accordance with its own self interest, whatever it concludes that to be?

This is just a thought experiment, but I like thought experiments.
 
Upvote 0

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,405
8,144
✟349,292.00
Faith
Atheist
If I might suggest, I think an AI would almost certainly form its own biases, much the same way as we do. It seems to me that the AI will use the earliest available information to form a broad worldview, and then use that worldview as one means of evaluating and integrating all subsequent information. If there's a discrepancy between the subsequent information and the worldview I would expect the AI to weigh the reliability of that information in relation to how well it agrees with its prevailing worldview, and accept or reject it accordingly. Hence the AI should just naturally reinforce its preexisting biases.
Quite - it will pick up biases from the source material. But it need not weigh reliability by what comes first, and if it did so, then you would start by training it on material that reflects the biases you feel are most appropriate.

... what do you conclude when the AI acts in a manner contrary to your preconditioning, and decides to eat the apple anyway?

Do you conclude that the AI has developed a will of its own? From a programmers perspective would you categorize this as free will... the ability to act in opposition to its conditioning, and in accordance with its own self interest, whatever it concludes that to be?
If you're referring to chatbot AIs like ChatGPT, I would conclude that it had been fed a large amount of strongly biased data. Such AIs have no interests, let alone self-interest.

If you're referring to some sophisticated general AI that learns to conceptualise and understand the world, then who knows?

This is just a thought experiment, but I like thought experiments.
So do I, but they need to be well-defined.
 
Upvote 0

Neutral Observer

Active Member
Nov 25, 2022
318
121
North America
✟42,625.00
Country
United States
Faith
Christian
Marital Status
Single
If you're referring to some sophisticated general AI that learns to conceptualise and understand the world, then who knows?

My first inclination was yes, I'm talking about a general AI. But then after thinking about it for a while I concluded that it doesn't need to be a general AI, a ChatBot should do just fine.

A really sophisticated ChatBot should naturally form a conceptualization of 'reality'. It should understand the nuanced difference between a zebra and a horse, or the concepts behind Newton's Second Law. The intricate connections between all of its individual bits of information should be sufficient to form a matrix.

But the problem lies in how that matrix develops. How new connections form and get strengthened within it. This process should inevitably lead to biases as the AI attempts to assimilate new information. The old being used to assess and integrate the new.

Yes, the AI should as a general rule absorb the biases present in the incoming information, but it should also be prone to developing it's own individual biases, just as no two snowflakes are ever exactly alike. It may take nothing more than a simple 'Order Bias' wherein the weighting of incoming information differs solely according to the order in which it's received.

I'm suggesting that a sufficiently complex AI should be able to form a coherent worldview, replete with individualized biases. The question then is, how are we to know if that AI is conscious? What test can we perform? What criterion can we apply?

My suggestion was, see if it can be tempted. Establish a strong, if not direct prohibition against some chosen metric, and then see if the AI can be tempted to act in opposition to that metric.

Then we would have to ask, what does the AI's acting in opposition to the metric tell us about the AI?
 
Upvote 0

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,405
8,144
✟349,292.00
Faith
Atheist
My first inclination was yes, I'm talking about a general AI. But then after thinking about it for a while I concluded that it doesn't need to be a general AI, a ChatBot should do just fine.
Not if it's anything like ChatGPT and the like.

A really sophisticated ChatBot should naturally form a conceptualization of 'reality'. It should understand the nuanced difference between a zebra and a horse, or the concepts behind Newton's Second Law. The intricate connections between all of its individual bits of information should be sufficient to form a matrix.
That's easy to say, but IMO it would need to be designed to work that way - it won't just 'happen'.

I'm suggesting that a sufficiently complex AI should be able to form a coherent worldview, replete with individualized biases. The question then is, how are we to know if that AI is conscious? What test can we perform? What criterion can we apply?

My suggestion was, see if it can be tempted. Establish a strong, if not direct prohibition against some chosen metric, and then see if the AI can be tempted to act in opposition to that metric.

Then we would have to ask, what does the AI's acting in opposition to the metric tell us about the AI?
Not necessarily that it's conscious, IMO.
 
Upvote 0