Exactly.
While I get the concerns over AI, I think they miss the point.
The point of application software is to accomplish a specific task.
Before one even thinks about building an application, there is a business need.
You kinda have to know what you're going to make and why, before you can start. So indeed, for all practical purposes, software is a tool.
AI is about having those applications improve themselves through machine learning.
Once the applications stop doing what they are meant to do, they become useless.
So there would be exactly zero reason for developing an AI engine that can become "aware" and be capable of emotional decision making and stuff.
The only reason I can imagine something like that might be built is for some kind of science experiment or similar, to better understand how brains and psychological processes work, etc.
I just don't see it happening.
I'm going to go off on a rant, not at you, because there have been a lot of trigger words about AI flying around lately and it angers me.
I spend all day writing AI for financial services and predictive marketing, and, as a hobby on the side, biological modeling and sports prediction, and you're absolutely correct. I don't think people quite understand how difficult it can be to make an AI system do what you want in the first place. You aren't going to design something with all this extra nonsense tacked on where it starts "thinking" about things; it just pattern-recognizes the inputs and generates the desired output.
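To make that concrete, here's a bare-bones sketch of what I mean by "pattern recognize the inputs and generate the desired output". It's just a toy: the data and the flag-for-review labels are made up, and nothing here comes from a real system, but it's the whole shape of these things — fit a mapping, then score new inputs against it.

```python
# Toy sketch: a classifier is just a fitted mapping from inputs to labels.
# The data and the "flag for review" labels below are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fake "transactions": two features, label 1 = flag for review, 0 = ignore.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

# At run time it does exactly one thing: score new inputs against the
# patterns it was fit on. There is no channel for it to "think" about
# anything outside this input/output contract.
new_inputs = rng.normal(size=(5, 2))
print(model.predict(new_inputs))
```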
All of this nonsensical fear people have about AI is ridiculous. The only way this could possibly happen is if you strapped a crappily made, outsourced AI system to an anti-aircraft gun and trained it to recognize certain types of enemy planes and fire on them when it sees them. Assuming you didn't screw up by having it target commercial jetliners or by skipping any sort of error checking, all it will do is identify appropriate targets and shoot at them, which is exactly what you want it to do. It isn't going to suddenly classify some child as a MiG-17 and shoot at it. Those kinds of bugs are found in the lab; the finished product doesn't just update itself. That's just stupid.
I keep hearing this silliness about "AI escaping the lab and killing us all". Really? I deal with evolving self-learning systems (they tend to do better than back-propagation for complex interactions), and I don't recall taking the output from the network execution and wiring it up to the network card or a 9mm Sig Sauer, building a bunch of robots that are ten times stronger than humans, and somehow letting it "decide" that the most efficient move is to kill us all.
As you were saying, when I build an AI system for <purpose>, once it hits the threshold of an acceptable solution to the problem domain, the evolutionary training stops, because it's done. It is now a tool or "product" and we move on with our lives. There's no reason for it to keep updating itself for conditions that serve no purpose, nor to update itself to gain capabilities it didn't have before. If I really wanted it to do that, I'd make version 2.0 and release that as a patch/reinstall/MS Update, or whatever.
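For the curious, a stripped-down sketch of that "evolve until it's acceptable, then stop" loop looks something like this. The fitness function, threshold, and population numbers are stand-ins, not anything from a real system:

```python
# Bare-bones sketch of "evolve until the solution is acceptable, then stop".
# The fitness function and threshold are placeholders; a real system would
# score candidates against the actual problem domain.
import numpy as np

rng = np.random.default_rng(42)

def fitness(weights: np.ndarray) -> float:
    # Placeholder objective: higher is better, peaks at 0.0 when every weight hits 3.0.
    return -float(np.sum((weights - 3.0) ** 2))

ACCEPTABLE = -0.01            # "good enough" threshold for this toy problem
POP_SIZE, N_PARAMS = 50, 4

population = rng.normal(size=(POP_SIZE, N_PARAMS))

for generation in range(5000):
    scores = np.array([fitness(ind) for ind in population])
    best = population[scores.argmax()]

    if scores.max() >= ACCEPTABLE:
        # Done: the training loop stops here and the weights are frozen.
        print(f"converged at generation {generation}, fitness {scores.max():.4f}")
        break

    # Keep the top fifth, refill the rest with mutated copies of the survivors.
    elite = population[scores.argsort()[-POP_SIZE // 5:]]
    parents = elite[rng.integers(0, len(elite), POP_SIZE - len(elite))]
    population = np.vstack([elite, parents + rng.normal(scale=0.1, size=parents.shape)])

np.save("frozen_weights.npy", best)   # the "product": it never updates itself again
```

The whole point is the break: once the threshold is met, the weights get shipped as a static artifact, and nothing in that loop runs again unless a human deliberately fires it up for version 2.0.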
People are worried about weaponized AI, which could maybe be a problem. But you aren't weaponizing AI; you're taking a weapon and adding AI to it, most likely for targeting purposes. A targeting system is a basic classification system, and you're not going to bake decision making into it. You're going to have a remote on/off switch that makes the weapon hot or not, and it targets whatever you designed it to target. It's more about automating the weapon so that, in theory, it requires fewer manual soldier operations, keeps costs down, targets more accurately, causes fewer casualties, etc.
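If that sounds abstract, here's a toy sketch of the split I mean: the classifier only classifies, and whether anything is ever done with that classification lives behind a separate, operator-controlled switch. Every name here is made up for illustration.

```python
# Toy sketch: the model only classifies; acting on the classification is gated
# by a separate, operator-controlled switch. All names are made up.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def classify(sensor_frame) -> Detection:
    # Stand-in for a trained classification model: input in, label out.
    # It has no notion of whether anything should be done with the result.
    return Detection(label="class_A", confidence=0.97)

class Operator:
    """The remote on/off switch lives outside the model entirely."""
    def __init__(self):
        self.system_enabled = False   # default: not hot

    def enable(self):
        self.system_enabled = True

    def disable(self):
        self.system_enabled = False

def control_loop(sensor_frame, operator: Operator) -> str:
    detection = classify(sensor_frame)
    if not operator.system_enabled:
        return f"report only: {detection.label} ({detection.confidence:.2f})"
    # Even when enabled, whatever happens next is whatever was designed in;
    # the model contributed nothing but a classification.
    return f"act on {detection.label}"

op = Operator()
print(control_loop(sensor_frame=None, operator=op))   # report only
op.enable()
print(control_loop(sensor_frame=None, operator=op))   # now hot
```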
I think people watch too much science fiction and too many Terminator movies and just assume that the military, which famously ignored nuclear launch codes for years so that they'd never lose control over the stockpile in case of a first strike, will be replaced by Skynet.
People seem to think AI can do things in and of itself. It can't. It doesn't make decisions outside of what you set it up to do in the first place. There's only so much input and output information going on, and only so many actions it can take. An AI automating a production line for GM and checking car quality is not going to have the ability to start making killer robots. No one would build that, because it would be a waste of time and money. And it wouldn't work. You don't just go down to the mall, buy a $19.95 AI machine, plug it into an assembly line, and say "Make me stuff that generates a lot of profit". You can't even get smart human beings to accomplish that successfully 90% of the time, so why would people think an AI would be able to?