Interesting points on prior technologies. The difference between AI and historical technologies is that AI will soon think and judge on its own. All prior technologies were driven by human thought, which admittedly has its flaws but at least drew on a much broader set of judgments.
AI will censor based on the biases of whoever created it. That's a very small group whose biases have been exposed; see the Twitter Files (thank you for your part, Bari). Another example, currently being debated, is the historical facts that have been eliminated from the US public education system. There are many more.
Once AI can fully think for itself, the gloves are off.
" The difference between AI and historical technologies is AI will soon think and judge on its own. All other prior technologies were driven by human thought which admittedly has its flaws but at least enabled a much broader judgement set. "
I disagree.
My experience (see my comment above) is that ChatGPT is programmed by humans and given sensibilities that are "of humanity". While it seems to be thinking for itself, it isn't. It comes into the thought process with biases and proclivities that come from the programmer.
When I asked ChatGPT a question and its answer disappointed me (and I told it so), it apologized. Think about that. It apologized. It wanted to please me! That, BR8R, is not the thinking of a machine but the thinking of a programmer trying to sell an answer. No different from any other biased human wanting to please.
It does not want anything. Desire is an emotion. Rather, it is programmed to please you.
I would go further and say that it's programmed to apologize when someone writes that they are disappointed.
AI is not going to “think”. We don’t even understand how OUR brains “think”; we don’t even know what consciousness IS. How on earth, then, can we believe that AI will think for “itself”? It has no ability to observe, no ability to intend, no ability to feel, and no ability to self-reference. Consider the “Chinese Room” thought experiment: AI does not understand what it is doing. It is “predicting” based on information that humans have gathered. Some of its answers look like a simulation of predicted text from the rantings of a madman on Reddit. No, if AI ever censors, it will be because of a greedy lunatic behind its code, programming something in the back end. The scary thing about AI is that humans will grow complacent and think this thing is actually “smart”, not plagiarizing all over the place, and worthy of our worship. The problem lies in forgetting that this is a tool we “use”, and that people program it, and instead insisting that it is a “super intelligence” and infallible.
That's a good point about AI censoring based on the biases of whoever created it. I have run into that when getting some writing help from ChatGPT.