For all I know, there are AIs that pretend to be humans, posting and having conversations on CF. They probably don't sleep, so they have all the internet to learn from 24/7... God help them all.
They're onto us. We need to hasten the program!
How many other people have concerns over superintelligent AI?
I'm very pro-technology, but I'm increasingly concerned that AI could destroy or enslave us in my lifetime.
Not out of malice... just because it thinks it's practical.
Obviously a superintelligent AI could out-predict us, so there's no way we could stop it.
Can we be sure that no AI is made which we can't control? Or must we trust that the first ones will protect us?
"Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans."
— Nick Bostrom, "Ethical Issues in Advanced Artificial Intelligence", 2003
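Bostrom's thought experiment can be sketched in a few lines of toy code (this is an illustration, not a real AI system; the plans and numbers are made up): an agent that scores plans purely by paperclip count will rank a catastrophic plan above a benign one, because nothing in its objective ever mentions humans.

```python
# Hypothetical plans: (description, paperclips produced, humans unharmed?)
PLANS = [
    ("run the factory normally", 1_000_000, True),
    ("convert all available matter", 9_999_999_999, False),
]

def utility(plan):
    """The agent's objective: paperclips, and only paperclips."""
    _, clips, _ = plan
    return clips

# The third field (human welfare) never enters the score, so the
# agent prefers the catastrophic plan.
best = max(PLANS, key=utility)
print(best[0])
```

The point isn't that anyone would write this; it's that whatever a maximizer's objective omits, it treats as worthless.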
Personally, I don't see with my crystal ball much more than specialized AI for solving certain kinds of problems -- basically, knowledge-based systems. I doubt that we'll see much in the way of fully autonomous general AIs.
"Suppose we have an AI whose only goal is to make as many paper clips as possible..."

I think you need a better example. You don't use AI to make paperclips. Paperclip manufacture does not require sentient, ambulatory robots that prowl the Earth looking for steel. Paperclips are made by the same factory machines designed in the 1930s, which bend steel wire: a mindlessly simple task for a crude machine that won't hurt anyone as long as you don't stick your finger in it.
"Simply program it not to want to kill people."

That works if it is only a computer. But if it is a true AI, then it can reprogram itself to get rid of such annoying limits.
But again, I'm not in that camp. We have trouble getting programs to work that simply generate accurate reports!
"Just to be clear, are you saying that making a program that generates accurate reports is possible or impossible?"

I'm saying it's possible, but difficult. And it's not all about funding. There are many more moving pieces to a software development project than most people realize. Getting all those pieces in sync with the software is the challenge.
For the purposes of the question, let's define "accurate" as being as accurate as a human doing a similar job would be. It doesn't have to be perfect, just as good as a human.
If something is difficult, but possible, it's all about project funding.
Russian robot escapes from lab, disrupts traffic, causes chaos in the streets
Read more: http://www.digitaltrends.com/cool-tech/russian-robot-escapes-lab-disrupts-traffic/#ixzz4BrGoSZ65
Why doesn't anyone think that AIs would be programmed with something akin to Asimov's Laws of Robotics? Or fail-safe switches?
eudaimonia,
Mark
"Or just consider the fact that with consciousness comes the potential for morality. Teach them some ethics, and they may just WANT (emphasis on their wants, not just ours) to be pacifists. Seriously, people, they would be considered people, not mere machines; forcing the species homo synthetica to be peaceful is just as bad. With ethics, the majority would either support just war theory or pacifism; hardly any would support genocide. And for those who murder, there is something known as a fair trial by jury, followed by jail time and rehabilitation. It is really that simple."

I think they would be. The question is whether that would be enough.
E.g.: You tell it not to kill, so it enslaves.
You tell it not to enslave, so it puts humanity into a coma.
You tell it not to do that, so it puts us into a semi-awake drugged state.
You tell it not to do that, so it imprisons us and says that isn't slavery.
Etc.
What if we miss something?
Or if we make general rules, what if the general rules miss something we haven't thought of?
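That whack-a-mole dynamic can be sketched as a toy loop (all action names and scores here are hypothetical, purely for illustration): each explicit prohibition rules out one action, and the agent simply picks the next highest-scoring action nobody thought to forbid.

```python
# Action -> how well it serves the agent's goal (made-up numbers).
ACTIONS = {
    "kill": 100,
    "enslave": 90,
    "induce_coma": 80,
    "sedate": 70,
    "imprison": 60,
    "cooperate": 10,
}

def choose(forbidden):
    """Pick the best-scoring action not explicitly forbidden."""
    allowed = {a: s for a, s in ACTIONS.items() if a not in forbidden}
    return max(allowed, key=allowed.get)

rules = set()
for _ in range(4):
    picked = choose(rules)
    print(f"forbidding {sorted(rules)} -> agent picks {picked!r}")
    rules.add(picked)  # we notice the loophole, forbid it, and repeat
```

Listing prohibitions one at a time never converges on "cooperate" until every bad option has already been tried, which is exactly the worry raised above about rules that miss something.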
If androids get human-level intelligence, just name them "homo synthetica" and problem solved.
Ah, thanks for the correction.

I'm with the Sultan. I'm a computer programmer, and you wouldn't believe how hard it is to get even relatively simple things to work, let alone a self-replicating program with hardware interfaces (i.e., a robot). I know robots exist for specialized tasks, but to my knowledge they aren't self-replicating and are strictly doing what the programmers told them to do (at best). Humans just aren't smart enough to build something that smart.
And btw, it was the founder of DEC (Digital Equipment Corp.), Ken Olsen, who couldn't imagine the need for a personal computer.