People misunderstand the use of the term “artificial intelligence.”
To the astonishment of many, there is as yet no such thing as AI. It’s all human-generated code run very quickly. Not a single action performed by the algorithms is the least bit spontaneous. If you ask the algorithm to depart from its parameters, it stops, because it has no brain and no way to step beyond the code to produce something other than what it was programmed to do.
Google co-opted the work of millions of humans over a decade to generate maps with semi-accurate placement of stop signs, crosswalks, etc., and likely everyone reading this site has participated.
The CAPTCHAs that ask you to click on the stop signs, traffic lights, etc. are used by Google to generate those maps.
The same applies to DALL-E. If you ask the program to, say, solve a vector equation, it will freeze.
Which is one of many reasons I’ll never sit in a “self-driving” car. I know how code works, and I know there’s not a snowball’s chance in Hades the code can handle even the most mundane of traffic hazards.
I refuse to own a vehicle with “anti-collision” sensors for the same reason.
DALL-E users may think they’re creating “art” but all they’re doing is deluding themselves.
Self-driving cars don't have to NEVER have accidents; they just have to have significantly fewer accidents than human drivers.
The interesting argument will come, though, when a self-driving car has to "decide" whether to injure its occupants or random pedestrians. In our car-centric society, I'm pretty sure courts would side with the cars.
There are already legal arguments about whether or not the provider of a self-driving car can be sued for damages if that car gets into an accident. I’m sure the EULAs for these things will be insane, and like all the other ones, we’ll just click Accept without reading them.
Thank you, Ma'am.
“To the astonishment of many, there is as yet no such thing as AI. It’s all human-generated code run very quickly. Not a single action performed by the algorithms is the least bit spontaneous.”
This statement seems to show a fundamental misunderstanding of machine learning.
The reason ML is so exciting to a lot of people, and why it seems so miraculous, is that the output is not the result of people sitting down and programming the algorithm to generate it. It is the result of feeding millions and millions of examples of training data to a neural network “substrate”, which can then be run on new inputs to approximate a useful output.
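To make that concrete, here is a minimal sketch of the train-then-run idea, assuming scikit-learn and a made-up toy dataset (nothing like a production system): nobody writes the decision rule into the program; the network fits it from labeled examples and is then run on inputs it has never seen.

```python
# Minimal sketch: "learning from examples" rather than hand-coded rules.
# Toy stand-in for the real thing; assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake "training data": 10,000 examples with 20 features each, labeled by a
# rule that never appears anywhere in the model's code.
X = rng.normal(size=(10_000, 20))
y = (X[:, 0] * 1.5 - X[:, 3] + np.sin(X[:, 7])) > 0

X_train, X_new, y_train, y_new = train_test_split(X, y, test_size=0.2, random_state=0)

# The "substrate": a small neural network. Its weights start out random and
# are adjusted to fit the examples; no one programs the decision rule directly.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Run it on inputs it has never seen before.
print("accuracy on new inputs:", model.score(X_new, y_new))
```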
The interesting thing is that such algorithms can predict which movies you will like, or drive a car, or generate a painting, or recognize a face, but internally they are typically a black box. An algorithm can tell you that you might like a movie, but it doesn’t explain why it thinks that.
And such ML algorithms are all too spontaneous, which is a bad thing. You never know which input is going to produce wildly divergent output. This is part of what makes things like self-driving cars so difficult. 99.9% of the time, a Tesla on autopilot drives down the highway, steering and braking with no problem. Then, there’s a truck in an unexpected place, painted the wrong color, with the sun shining from the wrong direction, and that same Tesla plows into it. In this case, spontaneity is very bad. There is an entire branch of ML devoted to minimizing the possibility of bad outputs from these “adversarial inputs”.
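For anyone curious what an “adversarial input” means in miniature, here is a toy sketch in plain NumPy, with a simple linear scoring function standing in for a real perception network (an assumption purely for illustration): a deliberately chosen nudge, tiny per pixel, flips the model’s answer even though the input barely changes.

```python
# Toy adversarial example: a tiny, targeted nudge to the input flips the output.
# A linear "classifier" stands in for a real perception network; same principle.
import numpy as np

rng = np.random.default_rng(1)
n_pixels = 10_000

# Pretend this is a trained model: score = w . x, positive score => "obstacle".
w = rng.normal(size=n_pixels)
x = rng.normal(size=n_pixels)                 # an input the model handles correctly
score = float(w @ x)
print("original:", "obstacle" if score > 0 else "clear", f"(score {score:.1f})")

# Worst-case nudge: move every pixel a tiny amount in the direction that hurts
# the score the most (for a linear model, that direction is just the sign of w).
eps = 1.1 * abs(score) / np.abs(w).sum()      # just enough to cross the boundary
x_adv = x - np.sign(score) * eps * np.sign(w)

adv_score = float(w @ x_adv)
print(f"per-pixel change: {eps:.4f} (pixel values are roughly 1.0 in size)")
print("after nudge:", "obstacle" if adv_score > 0 else "clear", f"(score {adv_score:.1f})")
```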
This stuff is happening, and who knows what the results will be, but I’m already loving how well things like Google Translate and speech-to-text work. Someday, self-driving cars are going to be better than human drivers, and when that day comes, it will be considered irresponsible not to use them.
Self-driving cars are safer than human-driven ones when you compare miles to miles. Yes, they make stupid mistakes that humans don't, but humans make stupid mistakes that machines don't.
Self-driving cars don't have the miles compared to humans, and they're tested in specific areas and conditions, not in the random day-to-day traffic we experience, such as a pop-up road condition that suddenly closes one lane of traffic. They also can't choose to go around a stopped car.
I've read that some of the adverse events with self-driving cars aren't publicized because they don't want to 'unduly alarm' us, which reminds me a bit of the COVID vax. Keep us in the dark for our own good, i.e., the mushroom treatment.
A couple of decades ago, the tech industry claimed cars would run on a Microsoft platform.
The joke among us was “Reboot at 70 MPH sounds exciting.”
Test them with Bill Gates in the 'driver's' seat, lol!
A few years ago, self-driving cars were 7x more likely to get into accidents than human-driven ones. Things must have progressed since then if they are now safer.
I just finished an estimate for implementation of “machine learning” for a relatively basic process. I know all too well what is required for a computer to figure out what is a hexagon versus a distorted circle.
Six months of a human being working 40 hours per week doing nothing but reviewing and correcting the computer’s results, and even then the accuracy is less than 90% once the human is sent to do something else and the computer is left to “learn.”
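For what it’s worth, the review-and-correct grind looks roughly like this; a toy sketch using synthetic “hexagon vs distorted circle” outlines and scikit-learn, not the actual project: the model guesses, the human fixes the wrong guesses, the corrections go back into the training set, and you repeat.

```python
# Toy sketch of the human-in-the-loop grind: model guesses, a human corrects
# the wrong guesses, the corrections go back into the training data, repeat.
# Synthetic "hexagon vs distorted circle" outlines; assumes numpy + scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)

def hexagon(noise=0.03):
    # Radial profile of a regular hexagon (the flat sides show up as dips).
    a = (angles % (np.pi / 3)) - np.pi / 6
    r = np.cos(np.pi / 6) / np.cos(a)
    return r + rng.normal(0, noise, angles.size)

def distorted_circle(noise=0.03):
    # A circle with a slow random wobble, superficially hexagon-ish.
    wobble = 0.08 * np.sin(angles * rng.integers(2, 5) + rng.uniform(0, 2 * np.pi))
    return 1.0 + wobble + rng.normal(0, noise, angles.size)

def batch(n):
    y = rng.integers(0, 2, n)  # 1 = hexagon, 0 = distorted circle
    X = np.array([hexagon() if label else distorted_circle() for label in y])
    return X, y

# Start with a tiny labeled seed set; the rest arrives in weekly batches.
X_train, y_train = batch(20)

for week in range(1, 7):
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    X_new, y_true = batch(200)            # this week's unreviewed shapes
    y_pred = model.predict(X_new)

    # The human reviewer: checks every prediction and corrects the wrong ones.
    wrong = y_pred != y_true
    print(f"week {week}: reviewer corrected {wrong.sum()} of {len(y_true)}")

    # Corrected labels go back into the training set for the next round.
    X_train = np.vstack([X_train, X_new])
    y_train = np.concatenate([y_train, y_true])
```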
All code has unexpected results when new variables are introduced or the code is expanded. Nothing new there. The most common result: an abend or crash.
The only people I know IRL who are enamored of ML are two who actually work in it and those who haven’t a clue but think Star Trek is around the corner.