
The Terminator could become REAL: Intelligent AI robots capable of DESTROYING mankind

digitalgoth

Junior Member
Probably better to worry about the dangers of "AI" once anything like "AI" is even remotely possible. At this point it's kind of like worrying about the dangers of humans traveling to the Andromeda galaxy.

My apologies for the wall of text. Long story short: AI isn't a threat unless you apply it wrongly, and automation won't destroy the world.

I have to agree with this, and I say that as someone who works with and builds AI systems every day.

There's this big futurist "fetish" with the technological singularity, where computing and technology become recursively self-improving, plus a sci-fi belief that it will then rise up, become the Terminator, start making its own decisions, and kill us all.

Personally, I'm not trying to sell books about yet another coming apocalypse or trying to be social-media famous, so I have a hard time supporting this nonsense. They're right about one thing, though, which I'll get to in a bit.

First off, our current AI concepts are modeled after how the brain works: a bunch of connected "neurons" that apply statistical generalizations about things. This can be useful for things like converting handwriting to actual text, or analyzing speech (like Siri does) to make something happen.
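The "connected neurons" idea can be sketched in a few lines of Python. This is a toy illustration, not a real trained network; the weights below are hand-picked, not learned:

```python
# Toy "neuron": a weighted sum of inputs pushed through a threshold.
# Real networks learn these weights from data; here they're made up.
def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# With these hand-picked weights it behaves like a logical AND gate:
print(neuron([1, 1], [0.6, 0.6], -1.0))  # 1 (fires)
print(neuron([1, 0], [0.6, 0.6], -1.0))  # 0 (doesn't fire)
```

Wire enough of these together in layers and you get the statistical generalization described above; there's nothing mystical going on.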

Much of what AI does today is recognize patterns and make decisions based on them. That might be facial recognition, identifying features in an MRI, text or speech recognition, or anything else that can be framed as a pattern.

Creating an AI to be moral/ethical/conscious is, to me, nonsense. You can make it pattern-recognize a situation, say an autonomous car having to decide whether to crash into a bus full of children or kill the driver by driving over a cliff, but that's not it making a moral or ethical decision; that's it recognizing a situation and being told what to do as the result of programming. A nice thing about AI is that it can recognize a possible outcome far in advance and, like a good driver, slow down or brake long before a problem becomes a problem. No program can make moral or ethical decisions, since those are cultural taboo/mores/rules decisions, and an AI isn't part of that culture. It can fake it, but that's all it is, much like Siri isn't a real person, but you can pretend she is if you want to.
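To make that concrete, here's a toy sketch of what such a "decision" actually looks like in code (the situation labels and responses are entirely hypothetical). The point is that the response was fixed by a human long before the car saw anything:

```python
# The "moral decision" is just a lookup the programmer wrote in advance.
# Labels and responses here are hypothetical, for illustration only.
RULES = {
    "obstacle_is_school_bus": "brake_and_swerve_away",
    "obstacle_is_debris": "brake_only",
}

def decide(recognized_situation):
    # No ethics happening here: whatever the pattern recognizer outputs,
    # the response was chosen by a human beforehand. Unknown situations
    # fall back to a human-chosen default.
    return RULES.get(recognized_situation, "brake_only")

print(decide("obstacle_is_school_bus"))  # brake_and_swerve_away
```

Swap the dictionary for a trained model and the structure is the same: recognition in, pre-decided response out.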

A bigger issue is not AI, but applying it correctly. Say you design an automated drone to bomb possible terrorist targets, and it targets a children's school or hospital, either by accident or because terrorists were hiding in there. The issue isn't the AI, which is just identifying targets; it's that it was deployed to an area where a school or children might be in the first place, or that those sites weren't on an exclusion list.
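An exclusion list like that is ordinary deployment logic sitting outside the recognition model entirely. A minimal sketch, with made-up coordinates:

```python
# Hypothetical protected sites with made-up coordinates. The safeguard
# lives in plain deployment code, not inside the recognition model.
EXCLUSION_ZONES = [
    ("school", (34.05, -118.25)),
    ("hospital", (34.07, -118.20)),
]

def near(a, b, radius=0.02):
    # Crude proximity check on (lat, lon) pairs, for illustration.
    return abs(a[0] - b[0]) < radius and abs(a[1] - b[1]) < radius

def cleared_to_engage(target_coords):
    # Veto anything the model flags if it falls inside a protected zone.
    return not any(near(target_coords, coords) for _, coords in EXCLUSION_ZONES)

print(cleared_to_engage((34.05, -118.25)))  # False: inside an excluded site
```

Whether that check exists is a human deployment decision, which is exactly the point: the failure mode is in how the tool is applied.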

It's like a gun. A gun is a tool. Some people use it correctly, and some people don't and cause problems we morally disagree with. That doesn't mean the tool is the problem; it means the person used it incorrectly. If you made the gun able to always hit the target, or to fire at a person automatically, that still doesn't mean the gun is to blame; it means the person using it didn't use it "correctly".

One thing the article gets right is that such an AI would be beyond our understanding. I agree with this: bacteria on our skin or ants on the ground have no clue about our level of intelligence, and we would be in the same position. It's like trying to understand how all the trees on earth communicate with each other, or considering that the internet is already millions of computers connected together, forming something like a neural network. Perhaps the internet is conscious, but we'd never recognize it as such, any more than we can talk to ants or bacteria, because it maps to nothing we understand.

The issue of a self-evolving AI isn't much of an issue (to me), because evolutionary programming that self-improves an AI already exists and is used all the time. I've programmed such systems, and they have yet to try to kill me with the hard drive. Maybe I've just been nice, and it let me live.
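For anyone curious what "evolutionary programming" looks like, here's the general shape of such a loop as a toy version; the problem (evolve a bit string toward all ones) and the parameters are arbitrary, nothing like a production system:

```python
import random

# Toy evolutionary loop: evolve a bit string toward all ones.
# Population size, mutation rate, and generations are arbitrary choices.
TARGET_LEN = 12

def fitness(genome):
    return sum(genome)  # more ones = fitter

def mutate(genome, rate=0.1):
    # Flip each bit with a small probability.
    return [b ^ 1 if random.random() < rate else b for b in genome]

random.seed(0)  # deterministic for illustration
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]  # keep the fittest (elitism)
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=fitness)
print(fitness(best))  # best fitness found; the maximum possible is 12
```

It "self-improves" in exactly the sense described: blind variation plus selection against a fitness function a human wrote. There's no will anywhere in the loop.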

Another big claim that gets brought up is that automation will render 80% of humanity jobless, which is equally silly. It only works if you assume all humans are static and cannot do anything other than what they're doing now. Not every business can be automated. Industries that do assembly-line production automate fairly well, like building a car or an iPhone, but not everyone wants a robotic bartender, waiter, lawyer, software engineer, or doctor.

The U.S. went from an industrial economy to a service-sector economy, and somehow we aren't at 80% unemployment; people found other types of jobs, and other types of businesses were created. The assumption that human beings are idiots with zero adaptability (we're one of the most adaptable species on the planet) is an insult.
 

Michael

Contributor
Site Supporter
Creating an AI to be moral/ethical/conscious is, to me, nonsense. You can make it pattern-recognize a situation, say an autonomous car having to decide whether to crash into a bus full of children or kill the driver by driving over a cliff, but that's not it making a moral or ethical decision; that's it recognizing a situation and being told what to do as the result of programming. A nice thing about AI is that it can recognize a possible outcome far in advance and, like a good driver, slow down or brake long before a problem becomes a problem. No program can make moral or ethical decisions, since those are cultural taboo/mores/rules decisions, and an AI isn't part of that culture. It can fake it, but that's all it is, much like Siri isn't a real person, but you can pretend she is if you want to.

If Siri represents the state of the art of AI, computers have a *long* way to go to even be considered the equivalent of a "dumb blonde".
 

digitalgoth

Junior Member
If Siri represents the state of the art of AI, computers have a *long* way to go to even be considered the equivalent of a "dumb blonde".

I set Siri to an Australian accent, so it can indeed come closer.

Or perhaps that's a confirmation bias on my part.
 