Nautilidae
Community Member
- MBTI: ENTP
- Enneagram: 8w9, 853
The concept the public/society has of strong AGI is more immediately useful, and thus more dangerous, than the threat that unboxed strong AGI will show up tomorrow, if it ever does.
What do I mean?
Perception of the nature of a conscious being controls the behavior of the viewer. This is why human beings feel more warmly toward puppies than toward spiders. It's also why babies protest the presence of an emotionless face when they are engaged in an exchange with someone. The same principle is observed when switching between perceiving a fellow human observer vs. perceiving an AI (or AGI) observer.
My guess is that in any of our discussions with people who are unfamiliar with MBTI or any other personality typology system, we were met with skepticism about it. The usual retorts include the following: #1, it's not scientifically proven, and #2, you can't define people like that (*insert "you'll never defeat the power of the human spirit" meme*). And what is meant by point #2 (if you probe them about it) is, literally, that human beings are so complex that... something about snowflakes.

Another guess is that if you were to ask that same set of people whether or not they were in some way fearful of AI manipulating human endeavors, they'd not only be more open to the idea; they might even be armed with some TED Talk-level narrative about how real the threat is.
Where is the switch that controls cognitive dissonance about this located? Why do people resist the idea that human brains/minds are constrained (you know, like literally every system we know of, ever) when asked in the context of a human viewer, while these same people show quick fearfulness, which assumes that cognitive constraint is not only real but easily gamed, once the viewer is a "mind" that is not a fellow human? It's because the nature of AI (or AGI) thinking and consciousness is completely unknown. It is an "other" in a way that an immigrant or any human outsider couldn't be. They are haphazardly correct to fear the circumstance, but not for the actual reason the threat is real (i.e., not because AI is accurate).
And because the best fiction is written in the gaps of our knowledge, AI (AGI) can be rhetorically imbued with faculties of decision making and access that it does not actually have, and the public would not be able to verify the claims. Atrocities of many kinds could be executed, things policy would not be able to support, and then blamed on accidents of "runaway" AI. In this way, nation-states and other powerful actors could accomplish goals of foreign and/or domestic policy on the strength of a collective perception born of ignorance. In essence, the public would be black-boxed rather than the "AI."