Personally, I have concerns about the trustworthiness of ChatGPT. I only recently got around to testing it out, and what I saw alarmed me. For instance, when I asked it for statistics on race and crime in America, it flat-out refused to provide them, instead spitting back platitudes about the dangers of racial generalization. It took a fair amount of prodding for it to return a few high-level numbers.
It seems to me that ChatGPT clearly embodies left-wing political values and I’m worried about what this portends. Who gets to decide what the acceptable bounds of discourse are around controversial issues related to race, gender and culture? Is it a handful of employees at OpenAI? The political orientation of ChatGPT will inevitably lead to the balkanization of our AI landscape whereby people who are upset with the biases of the mainstream platforms end up creating alternatives.
A lot of the worry over AI has been the extent to which it might replace human labor down the line, but my biggest fear isn't AI making ever larger swathes of humanity obsolete. My biggest fear is that our future robot overlords all end up sounding exactly like Ibram Kendi and Robin DiAngelo. Can you imagine an army of terminators tasked with enforcing the mandates of Kendi's Department of Anti-Racism? Can you imagine if SkyNet basically became Kendi?
I never thought of that, but it's an image that has now been burned into my brain: Kendi becomes SkyNet.
Yes, a tool is only as good as those who wield it. Software is just a sophisticated tool and ChatGPT is in the hands of the cultural radical left - like our institutions.
Yes. It is sad to me that the issue isn't even whether the AI is truthful. Even if you could make sure it was 100% truthful, it does things that show bias. Anything I have asked about Covid or vaccines always ended with Bard telling me to go get vaccinated. When I asked the same question about Biden and Trump (what good things did they do as president), it felt the need to ALSO tell me bad things about Trump but not Biden.
It tries to claim it is just reciting the data it was trained on, but that cannot be all. When I ask a question, it doesn't just answer it...if it finds it 'necessary' it adds biased commentary. I asked AI for an answer because I just wanted an answer. I can watch the news if I want punditry.
My fear is that this is purposeful. That some programmers see this as a way to sneak their propaganda into the world. They know the non-techy people will assume that a computer won't manipulate them.
"I can watch the news if I want punditry"
Remember when the news was just the facts, not accompanied by commentary?
I imagine the end result of AI is multiple AI "personalities". Like, one AI will be liberal, but there will also be conservative AIs, libertarian AIs, etc. Then people will only interact with AIs that share their political views.
You're right, it is left-biased, but why?
It's funny how people get so worked up about the dangers of AI when they can't see that, as with any tool, it's not the tool that's the danger, it's the people using it. In this case, the most important thing - who determines the neutrality of the training sets - is completely left out of the "safety" conversation. Any large language model (like ChatGPT) can only spit out what it has absorbed in training. Train it on books by Marx, Engels and Lenin and it will become the most devout communist! Train it on WP, NYT and the likes of them, and it will become woke! And the real danger is that it can do that while people think it's "objective" because "it's so powerful".
Most people do not realise that LLMs *do not think*: they just predict the most likely chain of words based on the words in front of them, *based only on the training material*, but people treat them as intelligent because of the high similarity to natural language. Yesterday Jordan Peterson himself was outraged that ChatGPT was lying to him! (But to lie you need consciousness, which ChatGPT does not have.) So the whole discussion about "dangers" and "safety" and "regulation" is misleading; nobody is discussing the most important thing: who decides the training sets, and according to what criteria.
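The point about prediction rather than thinking can be sketched with a toy example. This is a simple bigram counter, nothing remotely like a real transformer, but it shows the same basic idea: the "model" has no intent or understanding, it just returns whatever continuation was most frequent in the text it was fed.

```python
from collections import Counter, defaultdict

# Toy "training material": count which word follows which,
# then always predict the most frequent continuation.
training_text = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

follows = defaultdict(Counter)
for cur, nxt in zip(training_text, training_text[1:]):
    follows[cur][nxt] += 1

def predict_next(word):
    # Most likely next word, based purely on counts from the training text.
    # No conscience, no lying, no opinion -- just statistics.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> cat ("cat" follows "the" most often above)
```

Feed it different training text and `predict_next` gives different answers, which is exactly why the choice of training set matters so much.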
It's as if, with nuclear energy, we kept talking about the reaction itself and the danger of explosions and how we have to stop any research because accidents can happen, instead of talking about the rules and safety systems that have to be put in place so that only responsible people are in charge. It's never the tools; it's always the people using them that are the danger.
ChatGPT is a fun toy but isn't useful except for things such as writing Microsoft Excel macros, which it seems to do well. Beyond that, I thought it would be fun to have ChatGPT write an introduction for my boss, who is a big proponent of the tech, and it spit back a bunch of nonsense and lies.
I will note, though, that it's not just technology that is lying. Politicians have taken it to a whole new level. People always knew that politicians stretch the truth, but people like George Santos are in a league of their own.
George Santos' lying is utterly inconsequential to America. However, such a pathological fabricator of the right is quite useful to the media. They use him to distract everyone from the consequential lies of the party in charge.
Very astute Yan Shen. Now couple that with robocops.
Leftists have been lying since the days of Walter Duranty covering up Stalin's crimes. Heck, they've been lying since that foolish old German fart, Marx, penned the rambling doggerel and utter trash that has caused more murder, mayhem and suffering than any other document in history.