Just to keep my comment in context, I want to give an example of the argument being made in some quarters where people are supporting the rise of AI.
Bear in mind that the technocracy needs to process VAST amounts of information. For example, they want every house to have a wifi-powered 'SMART' meter, which will basically spy on all of your energy usage; through the wattage it knows what devices you are using, when, and for how long. It sends all of that data by wifi to a booster meter, which then sends the information back to the energy companies.
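To illustrate the kind of inference being described (a toy sketch only, assuming a simple lookup table of steady-state wattages; this is not any vendor's actual method), here is how appliance usage could in principle be guessed from jumps in a meter's power reading. All appliance names and numbers below are hypothetical:

```python
# Toy illustration of guessing appliances from wattage readings.
# All signatures and readings are invented for this sketch; real
# load-disaggregation systems use far richer signals than one number.

# Hypothetical steady-state power draw per appliance, in watts
SIGNATURES = {
    "kettle": 2200,
    "fridge compressor": 150,
    "LED TV": 90,
    "electric toothbrush charger": 2,
}

def guess_appliance(delta_watts, tolerance=0.15):
    """Match a jump in household power draw to the closest known signature."""
    best, best_err = None, tolerance
    for name, watts in SIGNATURES.items():
        err = abs(delta_watts - watts) / watts  # relative error
        if err < best_err:
            best, best_err = name, err
    return best

# Simulated meter deltas (watts) of the kind sent back to the supplier
for reading in [2150, 148, 95, 600]:
    print(reading, "->", guess_appliance(reading) or "unknown device")
```

The point of the sketch is just that a bare wattage stream, matched against known appliance signatures (a technique usually called non-intrusive load monitoring), is enough to reveal what is running in a home and when.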
No one knows who has the booster meter, but that meter uses even more powerful microwaves than the standard meters, which can affect human health. So one family on the street is getting an even bigger dose of microwaves than everyone else.
But anyway, no human could process all of that data from millions of homes, so they use algorithms and AI. But they want to gather data from EVERYTHING you use, and they intend to make everything you use 'SMART' so that even your toothbrush will send back data about how you brush your teeth to the technocrats.
So they want AI to run the SMART grid
So we will hear more and more shills of the technocracy make the case for AI because they need it to run their technological gulag. See for example this article:
Advanced Artificial Intelligence Could Run The World Better Than Humans Ever Could
Humans are pretty terrible at making choices that are good for us in the long term. AI could do better.
Dan Robitzski, August 29th 2018
There are fears that tend to come up when people talk about futuristic artificial intelligence — say, one that could teach itself to learn and become more advanced than anything we humans might be able to comprehend. In the wrong hands, perhaps even on its own, such an advanced algorithm might dominate the world’s governments and militaries, impart Orwellian levels of surveillance, manipulation, and social control over societies, and perhaps even control entire battlefields of autonomous lethal weapons such as military drones.
But some artificial intelligence experts don’t think those fears are well-founded. In fact, highly advanced artificial intelligence could be better at managing the world than humans have been. These fears themselves are the real danger, because they may hold us back from making that potential a reality.
“Maybe not achieving AI is the danger for humanity,” Tomas Mikolov, a research scientist for Facebook AI, said at The Joint Multi-Conference on Human-Level Artificial Intelligence, organized by GoodAI, in Prague on Saturday.
As a species, Mikolov explained, humans are pretty terrible at making choices that are good for us in the long term. People have carved away rainforests and other ecosystems to harvest raw materials, unaware of (or uninterested in) how they were contributing to the slow, maybe-irreversible degradation of the planet overall.
But a sophisticated artificial intelligence system might be able to protect humanity from its own shortsightedness.
“We as humans are very bad at making predictions of what will happen in some distant timeline, maybe 20 to 30 years from now,” Mikolov added. “Maybe making AI that is much smarter than our own, in some sort of symbiotic relationship, can help us avoid some future disasters.”
Granted, Mikolov may be in the minority in thinking a superior AI entity would be benevolent. Throughout the conference, many other speakers expressed these common fears, mostly about AI used for dangerous purposes or misused by malicious human actors. And we shouldn’t laugh off or downplay those concerns.
We don’t know for sure whether it will ever be possible to create artificial general intelligence, often considered the holy grail of sophisticated AI that’s capable of doing pretty much any cognitive task humans can, maybe even doing it better.
The future of advanced artificial intelligence is promising, but it comes with a lot of ethical questions. We probably don’t know all the questions we’ll have to answer yet.
But most of the panelists at the HLAI conference agreed that we still need to decide on the rules before we need them. The time to create international agreements, ethics boards, and regulatory bodies across governments, private companies, and academia? It’s now. Putting these institutions and protocols in place would reduce the odds that a hostile government, unwitting researcher, or even a cackling mad scientist would unleash a malicious AI system or otherwise weaponize advanced algorithms. And if something nasty did get out there, then these systems would ensure we’d have ways to handle it.
With these rules and safeguards in place, we will be much more likely to usher in a future in which advanced AI systems live harmoniously with us, or perhaps even save us from ourselves.
https://futurism.com/advanced-artificial-intelligence-better-humans