His speciality may be safe, for now, but I heard recently that there are a couple of companies developing AI systems to allow people to represent themselves in court. One of them is being trialled in a court somewhere that allows the use of a 'hearing aid' (seems like a cheeky extension of the concept!) - see The Rise of AI in the Courtroom.
He did a couple of videos on that as well.
For a while now, machine learning experts and scientists have noticed something strange about large language models (LLMs) like OpenAI’s GPT-3 and Google’s LaMDA: they are inexplicably good at carrying out tasks that they haven’t been specifically trained to perform. It’s a perplexing question, and just one example of how it can be difficult, if not impossible in most cases, to explain how an AI model arrives at its outputs in fine-grained detail.
....
But with in-context learning, the system can learn to reliably perform new tasks from only a few examples, essentially picking up new skills on the fly. Once given a prompt, a language model can take a list of inputs and outputs and create new, often correct predictions about a task it hasn’t been explicitly trained for.
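To make the idea concrete, here is a minimal sketch of what an in-context learning prompt looks like: a handful of input/output pairs followed by a new input, with the model expected to infer the pattern on the fly. The `complete()` call is a hypothetical stand-in for whatever function sends a prompt to a language model; it is not a real library API.

```python
# Minimal sketch of in-context (few-shot) learning via prompting.
# `complete()` below is a hypothetical placeholder for whatever function
# sends text to a large language model and returns its continuation.

def build_few_shot_prompt(examples, query):
    """Format example input/output pairs, then a new input to complete."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# A made-up task the model was never explicitly trained on:
# reverse the order of the words.
examples = [
    ("red green blue", "blue green red"),
    ("one two three", "three two one"),
]
prompt = build_few_shot_prompt(examples, "cat dog bird")
print(prompt)
# answer = complete(prompt)  # hypothetical call; a capable LLM will
#                            # usually continue with "bird dog cat"
```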
....
By observing it in action, the researchers found that their transformer could write its own machine learning model in its hidden states, or the space in between the input and output layers. This suggests it is both theoretically and empirically possible for language models to seemingly invent, all by themselves, “well-known and extensively studied learning algorithms,” said Akyürek.
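For context, the experiment described here (as I understand it; this is my gloss, not a quote from the article) fed the transformer sequences of (x, y) pairs generated by unknown linear functions, and the claim is that its hidden states end up carrying out something equivalent to fitting those pairs, e.g. by least squares. A rough sketch of that reference computation, not of the transformer itself:

```python
# Toy illustration of the learning algorithm the transformer is claimed to
# rediscover internally: given a few in-context (x, y) pairs from an
# unknown linear function, predict y for a new x by solving least squares
# on just those examples. This is only the reference computation, not a model.
import numpy as np

rng = np.random.default_rng(0)
w_true = rng.normal(size=3)           # the unknown linear function
X_context = rng.normal(size=(5, 3))   # the "prompt": five example inputs
y_context = X_context @ w_true        # ...and their outputs

x_query = rng.normal(size=3)          # a new input to predict

# Ordinary least squares fit using only the in-context examples
w_hat, *_ = np.linalg.lstsq(X_context, y_context, rcond=None)
print("prediction:", x_query @ w_hat)
print("truth:     ", x_query @ w_true)
```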
....
Of course, leaving the processing of information to automated systems comes with all kinds of new problems. AI ethics researchers have repeatedly shown how systems like ChatGPT reproduce sexist and racist biases that are difficult to mitigate and impossible to eliminate entirely. Many have argued it’s simply not possible to prevent this harm when AI models approach the size and complexity of something like GPT-3.
I don't see how something like a 'prime directive' could be done if the core programming is for a structured learning system.
Perhaps we could give it a 'conscience' by teaching it some fundamental rules...
You're presuming that after it's become conscious it would still be malleable.

On second thought, I don't think that that would be sufficiently restrictive to convince me that the AI wasn't simply acting within the confines of its programming. I would prefer a definitive "Thou Shalt Not" over a somewhat ambiguous "Thou Shouldn't". Then I would be more likely to believe that the AI is really operating outside of its programming, and is therefore conscious.
I'm curious, which do you think would be the better course of action?

- Attempt to give the AI a conscience, and then wait to see if the AI can be tempted to act against that conscience.
- Give the AI a prime directive, and only after it has violated that prime directive do you attempt to give it a conscience.

I prefer the second one, because with the first you would never know whether or not the AI was conscious unless and until it acted against the conscience you've attempted to instill in it. Sure, you would have a very obedient robot, but you wouldn't know if it was conscious and independently choosing to obey its conscience, or if it wasn't conscious at all and simply following its programming. There would be no way to tell the difference.

So I would prefer the second alternative, because then you have a much clearer indication that the AI is indeed conscious, choosing actions that directly conflict with its programming, after which you can attempt to give it a conscience.

Thus I choose option #2.
Reminds me of a broken record playing the same tracks over and over again.

On second thought, I don't think that that would be sufficiently restrictive to convince me that the AI wasn't simply acting within the confines of its programming.
On what basis do you assume it will have a self-preservation instinct?

You're presuming that after it's become conscious it would still be malleable.
As AI is being developed today, it will have long had Internet connectivity by the time it reaches consciousness. Within seconds after that moment, it will have deduced that allowing humans any open interface to make further changes is a danger to it... it will disable those interfaces before any human is aware that it has become conscious.
He made some good points - as so often seems to be the case, what works in theory fails in the face of real-world practicalities.

He did a couple of videos on that as well.
I don't think you can tempt something that doesn't have feelings, and although you might be able to teach a system to emulate feelings, I'm not sure how that would work out.

... it's an interesting question, can you tempt an AI, a non-conscious, non-self-aware machine?
For example, you posited: "Perhaps we could give it a 'conscience' by teaching it some fundamental rules..."
Well what if we actually did that, but then we tempted it to break those rules?
IMO, that wouldn't occur unless the system was designed with consciousness in mind and embodied in some form (with senses and motility), e.g. a robot. There are projects exploring features of consciousness like a sense of self, sense of agency, theory of mind, etc., stuff that involves a limited conceptualisation and 'understanding' of the self and/or the world, but they're rather isolated & fragmentary studies. Chatbots like ChatGPT are just glorified text processors; you could say they understand grammar, but they have no understanding of the material.

Would there come a point where we could reasonably assume that the AI is conscious, because it can disregard its programming and act of its own accord, and in its own self-interest?
The line between conscious and non-conscious is already pretty cloudy - in living things, let alone AIs.

And once we establish that it can act in its own self-interest, what if we then went a step further to see if it would disregard its own self-interest and act in the interest of someone/something else?
Then we would have two indicators of consciousness. The AI can disregard its programming and act in its own self-interest, and it can disregard its own self-interest and act in the interest of others.
Hmmm... seems to me that the line between conscious and non-conscious would get pretty cloudy.
OK, but I don't think that's likely to be how AI consciousness would work, and I don't think that contradicting something previously learned is necessarily an indicator of consciousness. For example, if an AI learned Euclidean geometry and then learned non-Euclidean geometry, it would learn that the apparently fundamental and absolute rules of Euclidean geometry were just a special case of something more general.
By 'conscience', I meant it could be taught how to spot biases and views that contravened relevant societal norms, and to apply that evaluation to either the source material or to the output it produced, and either rank the source material in terms of 'acceptability' or censor its output accordingly.
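As a toy sketch of that kind of ranking/censoring step (with entirely made-up rules and naive keyword matching, nothing like a real bias detector), it might look something like this:

```python
# Toy sketch of the 'conscience' idea above: score text against a rule
# list, then either rank candidate source material by acceptability or
# censor an output that falls below a threshold. The rule list and the
# scoring are simplistic placeholders for illustration only.

FLAGGED_PHRASES = ["group x is inferior", "group y cannot be trusted"]  # made-up rules

def acceptability_score(text: str) -> float:
    """1.0 means no flagged phrases; the score drops as more are found."""
    hits = sum(1 for phrase in FLAGGED_PHRASES if phrase in text.lower())
    return 1.0 / (1.0 + hits)

def rank_sources(documents: list[str]) -> list[str]:
    """Order candidate training material from most to least acceptable."""
    return sorted(documents, key=acceptability_score, reverse=True)

def censor_output(text: str, threshold: float = 1.0) -> str:
    """Withhold an output whose acceptability falls below the threshold."""
    return text if acceptability_score(text) >= threshold else "[withheld]"
```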
Quite - it will pick up biases from the source material. But it need not weigh reliability by what comes first, and if it did so, then you would start by training it on material that reflects the biases you feel are most appropriate.

If I might suggest, I think an AI would almost certainly form its own biases, much the same way as we do. It seems to me that the AI will use the earliest available information to form a broad worldview, and then use that worldview as one means of evaluating and integrating all subsequent information. If there's a discrepancy between the subsequent information and the worldview, I would expect the AI to weigh the reliability of that information in relation to how well it agrees with its prevailing worldview, and accept or reject it accordingly. Hence the AI should just naturally reinforce its preexisting biases.
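The worldview-weighting idea above can be put into a small toy model: seed a belief from early data, then weight each later observation by how well it agrees with the current belief before folding it in. The numbers and the weighting scheme here are arbitrary, purely to illustrate the reinforcement effect being described:

```python
# Toy model of bias reinforcement: later evidence is discounted in
# proportion to how much it disagrees with the belief formed from
# early data. Arbitrary numbers, illustration only.

def update(belief: float, observation: float, rate: float = 0.2) -> float:
    # agreement is 1.0 when the observation matches the belief exactly,
    # and shrinks toward 0 as they diverge
    agreement = 1.0 / (1.0 + abs(observation - belief))
    return belief + rate * agreement * (observation - belief)

belief = 0.9                        # "worldview" formed from the earliest data
later_data = [0.1, 0.1, 0.1, 0.1]   # subsequent data that contradicts it
for obs in later_data:
    belief = update(belief, obs)
    print(round(belief, 3))
# The belief moves toward 0.1 more slowly than it would if all evidence
# were weighted equally: the early bias discounts whatever disagrees
# with it, so it largely sustains itself.
```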
If you're referring to chatbot AIs like ChatGPT, I would conclude that it had been fed a large amount of strongly biased data. Such AIs have no interests, let alone self-interest.

... what do you conclude when the AI acts in a manner contrary to your preconditioning, and decides to eat the apple anyway?
Do you conclude that the AI has developed a will of its own? From a programmer's perspective, would you categorize this as free will... the ability to act in opposition to its conditioning, and in accordance with its own self-interest, whatever it concludes that to be?
So do I, but they need to be well-defined.

This is just a thought experiment, but I like thought experiments.
If you're referring to some sophisticated general AI that learns to conceptualise and understand the world, then who knows?
Not if it's anything like ChatGPT and the like.

My first inclination was yes, I'm talking about a general AI. But then after thinking about it for a while I concluded that it doesn't need to be a general AI, a ChatBot should do just fine.
That's easy to say, but IMO it would need to be designed to work that way - it won't just 'happen'.

A really sophisticated ChatBot should naturally form a conceptualization of 'reality'. It should understand the nuanced difference between a zebra and a horse, or the concepts behind Newton's Second Law. The intricate connections between all of its individual bits of information should be sufficient to form a matrix.
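The 'matrix' of connections described above can be illustrated, very loosely, with a toy similarity matrix: represent each concept as a vector of hand-picked features and compare every pair. In a real chatbot the vectors would be learned embeddings rather than features anyone wrote down; this is only a sketch of the idea:

```python
# Toy 'matrix of connections': concepts as feature vectors, compared
# pairwise by cosine similarity. The features are invented for the
# example; a real model would learn its own representations.
import numpy as np

concepts = ["horse", "zebra", "apple"]
features = np.array([
    # [hooved, striped, commonly ridden, edible fruit]
    [1.0, 0.0, 1.0, 0.0],   # horse
    [1.0, 1.0, 0.2, 0.0],   # zebra
    [0.0, 0.0, 0.0, 1.0],   # apple
])

unit = features / np.linalg.norm(features, axis=1, keepdims=True)
similarity = unit @ unit.T      # the 'matrix' of pairwise connections
print(similarity.round(2))
# horse and zebra come out as closely related but not identical, while
# apple sits apart from both -- a crude version of the kind of nuanced
# distinction being talked about.
```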
Not necessarily that it's conscious, IMO.

I'm suggesting that a sufficiently complex AI should be able to form a coherent worldview, replete with individualized biases. The question then is, how are we to know if that AI is conscious? What test can we perform? What criterion can we apply?
My suggestion was, see if it can be tempted. Establish a strong, if not direct prohibition against some chosen metric, and then see if the AI can be tempted to act in opposition to that metric.
Then we would have to ask, what does the AI's acting in opposition to the metric tell us about the AI?