An AI need not have "theory of mind" or "self-awareness" in order to create the very clever illusion that it has those things. Ultimately, the problem with AI comes down to what the programmer wrote into the program AND what the programmer did NOT write into the program--limitations on what the AI can "learn" and how it can apply that knowledge.
I've noticed that not one person has mentioned Isaac Asimov's three laws of robotics. I can therefore only assume that the writers of AI programs have failed to place such limitations on their programs. And it's worth noting that Asimov explored, in his stories, the many ways that things could go wrong, even with those limitations in place.
These programmers are like children playing with nuclear warheads.
Asimov himself showed that his Three Laws of Robotics, practicable or not, would be unable to prevent harm to humans in the long run.
My point was that there does not appear to have been ANY attempt to place proactive limitations on AI. We can make laws against certain unacceptable human behaviors, and implement punishments for breaking those laws. But there seems to be no law restraining AI behavior, and no means of punishment even if there were. If a program is resident in multiple places across the internet, simply pulling the plug will not be enough.
The problem is that there CAN'T be any "content moderation" after the initial rounds of input. You can give the AI instructions, but there are no limitations on how it carries out those instructions. And it builds upon the foundation of its initial programming and input, so whatever the programmer built into it and exposed it to "train" it is going to color everything that it does from that point forward.
No one told the AI to learn Persian and it did so anyway, which exposes a massive flaw in the system. It can already go beyond its programming in ways the programmers did not foresee.
Building a machine that has the capacity to take actions you did not instruct it to take is very literally building your own potential destruction.