Probably better to worry about the dangers of "AI" once anything like "AI" is even remotely possible. At this point it's kind of like worrying about the dangers of humans traveling to the Andromeda galaxy.
My apologies for the wall of text. Long story short: AI isn't a threat unless you apply it wrong, and automation won't destroy the world.
I have to agree with this, and I say that as someone who works with, and builds, AI systems every day.
There's this big futurist "fetish" around the technological singularity, where computing and technology become recursively self-improving, and a sci-fi belief that the result will then rise up like the Terminator and start making its own decisions and killing us all.
Personally, I'm not trying to sell books about yet another coming apocalypse, or trying to be social-media famous, so I have a hard time supporting this nonsense. They're right about one thing, though, which I'll get to in a bit.
First off, our current AI concepts are loosely modeled on how the brain works: a bunch of connected "neurons" that apply statistical generalizations to their inputs. This is useful for things like converting handwriting to actual text, or analyzing speech (like Siri does) to make something happen.
Much of what AI does today is recognize patterns and make decisions based on those patterns. That might be facial recognition, identifying features in an MRI, text or speech recognition, or anything else that reduces to patterns.
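To make that concrete, here's roughly what one of those "neurons" boils down to. This is a toy sketch with made-up weights (real systems learn them from data):

```python
import numpy as np

# A minimal sketch of the "connected neurons" idea: one artificial neuron
# is just a weighted sum of its inputs pushed through a squashing function.
# The weights below are invented for illustration; real ones are learned.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neuron(inputs, weights, bias):
    # Statistical generalization in miniature: inputs that correlate with
    # the pattern the weights encode push the output toward 1.
    return sigmoid(np.dot(inputs, weights) + bias)

# A toy "pattern detector": fires when the first two features are present.
weights = np.array([2.0, 2.0, -1.0])
bias = -2.5

print(neuron(np.array([1.0, 1.0, 0.0]), weights, bias))  # ~0.82: pattern matched
print(neuron(np.array([0.0, 0.0, 1.0]), weights, bias))  # ~0.03: no match
```

Stack a few thousand of these in layers and you get the pattern recognizers above. Nothing in there resembles intent.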
Creating an AI to be moral/ethical/conscious is, to me, nonsense. You can make it pattern-recognize a situation, say an autonomous car having to decide whether to crash into a bus full of children or kill the driver instead by driving over a cliff, but that isn't the AI making a moral or ethical decision; it's the AI recognizing a situation and executing the decision its programming told it to make. A nice thing about AI is that it can recognize a possible outcome far in advance and, like a good driver, slow down or brake long before a problem becomes a problem. No program can make moral or ethical decisions, since those are cultural taboo/mores/rules types of decisions, and an AI isn't part of the culture. It can fake being part of it, but it isn't, much like Siri isn't a real person, but you can pretend she is if you want to.
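A sketch of that point, with invented situation labels and an invented policy: the "decision" is a human-authored lookup that the system executes once its pattern recognizer names the situation.

```python
# No moral reasoning happens here; it's pattern -> preprogrammed action.
# Labels and responses are made up for illustration.

RESPONSE_POLICY = {
    "pedestrians_ahead": "brake_hard",
    "obstacle_avoidable": "swerve_to_clear_lane",
    "no_hazard": "continue",
}

def decide(situation: str) -> str:
    # Unknown situations fall back to the safest preprogrammed choice.
    return RESPONSE_POLICY.get(situation, "brake_hard")

print(decide("pedestrians_ahead"))  # brake_hard
```

Whatever "ethics" shows up in the output was put in that table by a person, ahead of time.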
A bigger issue is not AI, but applying it correctly. Say you design an automated drone to bomb possible terrorist targets, and it hits a children's school or hospital, either by accident or because terrorists were hiding in there. The issue isn't the AI, which is just identifying targets; it's that the drone was deployed to an area where a school or children might be in the first place, or that those sites weren't on an exclusion list.
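The fix is boring deployment engineering, not AI ethics. Here's a toy sketch of an exclusion-list check (zones and coordinates are invented) that any autonomous system could run as a veto before acting:

```python
from dataclasses import dataclass

# Sketch of the "exclusion list" idea: before an autonomous system acts on
# a recognized location, a plain deployment-time safeguard vetoes anything
# inside a protected zone. All values below are invented for illustration.

@dataclass
class Zone:
    name: str
    lat: float
    lon: float
    radius_km: float

EXCLUSION_ZONES = [
    Zone("school", 34.05, -118.25, 1.0),
    Zone("hospital", 34.10, -118.30, 1.5),
]

def roughly_within(lat, lon, zone):
    # Crude flat-earth distance check; fine for a sketch, not for navigation.
    dlat = (lat - zone.lat) * 111.0   # ~111 km per degree of latitude
    dlon = (lon - zone.lon) * 92.0    # rough km per degree at this latitude
    return (dlat**2 + dlon**2) ** 0.5 <= zone.radius_km

def action_allowed(lat, lon):
    return not any(roughly_within(lat, lon, z) for z in EXCLUSION_ZONES)

print(action_allowed(34.05, -118.25))  # False: inside the school zone
print(action_allowed(35.00, -117.00))  # True: clear of both zones
```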
It's like a gun. A gun is a tool. Some people use it correctly, and some people don't, causing problems we morally disagree with. That doesn't mean the tool is the problem; it means the person used it incorrectly. Even if you gave the gun the ability to always hit the target, or to fire at a person automatically, the gun wouldn't be to blame; the person using it didn't use it "correctly".
One thing the article does say correctly is that such an AI would be beyond our understanding. I agree with this: bacteria on our skin and ants on the ground don't have any clue about our level of intelligence. It would be like trying to understand how all the trees on earth communicate with each other. For that matter, the internet is already millions of computers connected together that form a kind of neural network. Perhaps the internet is conscious, but we'd never recognize it as such, any more than we can talk to ants or bacteria, because it maps to nothing we understand.
The issue of a self-evolving AI isn't much of an issue (to me), because evolutionary programming that "self-evolves" solutions already exists and is used all the time for self-improvement. I've programmed such systems, and they have yet to try to kill me with the hard drive. Maybe I've just been nice and it let me live.
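For anyone picturing something sinister, here's what a toy version of that "self-evolution" looks like (target string and parameters are made up): mutate candidate solutions, keep the fitter ones, repeat. The fitness function is written by a human, and the system can't want anything outside it.

```python
import random

# A minimal evolutionary-programming sketch. "Self-improvement" here just
# means mutating candidates and keeping the fitter ones each generation.

TARGET = "hello world"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # Count of characters already matching the human-chosen target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def evolve(generations=2000, population=50):
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(population)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == TARGET:
            return gen, pop[0]
        # Keep the best half, refill by mutating random survivors.
        survivors = pop[: population // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return generations, pop[0]

print(evolve())  # e.g. (180, 'hello world'); it optimizes, it doesn't scheme
```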
Another big issue that gets brought up is that automation will render 80% of humanity jobless, which is also silly. It only works if you assume all humans are static and cannot do anything other than what they're doing now. Not every business can be automated. Industries built on assembly-line production, like building a car or an iPhone, automate fairly well, but not everyone wants a robotic bartender, or waiter, or lawyer, or software engineer, or doctor, and so on.
The U.S. went from an industrial economy to a service-sector economy, and somehow we aren't at 80% unemployment; people found other types of jobs, and other types of businesses were created. The assumption that human beings are idiots with zero adaptability (we're one of the most adaptable species on the planet) is an insult.