Last Christmas my son turned me on to ChatGPT. I tried it out by asking: How many wind turbines would need to be deployed in the US to replace all fossil-fuel electrical power generation?
It came back with a completely unsatisfactory weasel answer. "It is complicated." "There are many factors..." No numbers at all. I wrote back and said that I was disappointed that it had treated the question lightly, and that it had data at its disposal to make an estimate.
It wrote back and said: "I apologize." Then it tried to answer my question, using numbers that were highly favorable to the effort to replace fossil fuel. That is, it used a high capacity factor (0.4), and it sized the answer on the average energy generated over a year... not on the peak power needed to replace fossil-fuel plants.
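For what it's worth, the sanity check is easy to run yourself. Below is a minimal back-of-envelope sketch; every input is my own round-number assumption for illustration (roughly 2,500 TWh/year of US fossil-fuel generation, a 2.5 MW nameplate turbine), not ChatGPT's data or an official figure. It shows how much the count swings between the generous average-energy framing and a crude peak-power framing.

```python
# Back-of-envelope: wind turbines needed to replace US fossil-fuel electricity.
# Every input below is a round-number assumption for illustration only.

HOURS_PER_YEAR = 8760

fossil_twh_per_year = 2500   # assumed US fossil-fuel generation, TWh/year
nameplate_mw = 2.5           # assumed nameplate rating of one turbine, MW

def turbines_needed(capacity_factor: float) -> float:
    """Turbine count if each delivers nameplate * capacity_factor on average."""
    mwh_per_turbine_per_year = nameplate_mw * capacity_factor * HOURS_PER_YEAR
    return fossil_twh_per_year * 1e6 / mwh_per_turbine_per_year

# Average-energy basis with a generous capacity factor (the framing ChatGPT used),
# then the same calculation with a less flattering capacity factor.
print(f"CF 0.40, average-energy basis: {turbines_needed(0.40):,.0f} turbines")
print(f"CF 0.30, average-energy basis: {turbines_needed(0.30):,.0f} turbines")

# Crude peak-power basis: assume peak demand is ~2x the average load and the
# wind fleet happens to be producing only ~10% of nameplate at that moment.
avg_fossil_gw = fossil_twh_per_year * 1000 / HOURS_PER_YEAR
peak_gw = avg_fossil_gw * 2.0
turbines_at_peak = peak_gw * 1000 / (nameplate_mw * 0.10)
print(f"Peak basis (~{peak_gw:,.0f} GW at 10% output): {turbines_at_peak:,.0f} turbines")
```

Under these assumptions the average-energy framing lands in the few-hundred-thousand range, the same ballpark as ChatGPT's answer; even a crude peak-power framing comes out several times larger. That gap is exactly the assumption I objected to.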
Now, I decided to address its use of the apology instead of pursuing its poor effort on the number of wind turbines (its answer was 595,000 turbines). I noted that an apology is akin to an emotion. I asked it if it had emotions, and it replied that it was programmed with a sensitivity to human emotions. It said that it didn't have emotions itself, but assumed that I did, and that it tailored its answers to satisfy the human questioner's emotions... Wow!
So, my experience with ChatGPT tells me that it is political in its answers. It favored wind turbines by using unrealistic scenarios, and it tried to move me by manipulating my emotions. I reached for my bullshit repellent. In short, I was not impressed. I have not been back in four months and probably won't bother to use it again.
You approached it correctly... most people won't. They will take what it says at face value... especially when it agrees with their existing beliefs.
Which is instructive about the human mind overall: the average person accepts things at face value. Well, that isn't entirely true. Interesting international research on trust and compartmentalization shows that Americans are the worst at it (compartmentalization, that is), while Asians and Middle Easterners are the best, probably because if they took everything at face value they'd end up broken and naked in a ditch.
I first dealt with the digital world in the 1960s, as an amateur radio operator. I found digital design boring, so I went into radars and antennas when I went to college. But I have worked with computers ever since, and I have never... never used a computer that met my expectations. Office programs, technical programs, programs for fun and for work... all of them have disappointed. So my skepticism is born of 50 years of what I call "oversold." Computers are always oversold. They always promise more than they deliver. The computer marketers rely on the user to adapt and accept the deficiencies... every time.
Take a look at your cell phone. As an engineer, I consider smartphones to be trash. But the general population has adapted to the lousy screens, the flaky touch controls, and the poor battery life. Oversold. AI is not going to be any different. People will adapt to AI the same way they adapted to smartphones. They will know that the AI solution is full of bugs... but they are sophisticated bugs, common bugs that, well, everybody deals with. Well, I see the bugs immediately, and I am warning everybody not to go along with the program. No, the emperor really isn't wearing any clothes.