AI’s promises may kill you
Rogue AIs are fun in the movies, but not in real life
I once wrote a simple next-word prediction app, similar to the ones now on your phone when you’re texting. You can try it out here: https://kbrenchley.shinyapps.io/PlusOne/. It’s basic data science: it predicts what the next word in your sentence will be. When I wrote it, the results could be fairly specific because of the data I trained it on, but by now the responses are kind of weak.
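The core idea is simple enough to sketch. Below is a minimal, hypothetical version in Python (the actual app runs on shinyapps.io, so presumably R/Shiny; the corpus and function names here are illustrative, not taken from it). It counts which word follows which in a training corpus, then suggests the most frequent followers:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> defaultdict:
    """Count, for each word, how often each other word follows it."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows: defaultdict, word: str, k: int = 3) -> list:
    """Return the k words most often seen after `word`."""
    return [w for w, _ in follows[word.lower()].most_common(k)]

# Toy corpus; a real app would train on a much larger dataset.
corpus = "the cat sat on the mat and the cat saw the dog on the porch"
model = train_bigrams(corpus)
print(predict_next(model, "the"))   # ['cat', 'mat', 'dog']
```

Notice what this model knows: word frequencies, nothing else. Whether the suggestion is *true* never enters into it.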
Just now, texting on my phone and accepting the first word offered each time, I ended up with the suggestion “This is the first time I’ve seen a cat.” It’s a complete sentence that makes logical sense within itself, but it’s factually wrong. I’ve seen a lot of cats, including the one wailing for my attention right now.
Which brings me to ChatGPT.
OpenAI made quite a stir when it released its chatbot ChatGPT on November 30, 2022. On the AI spectrum it falls under generative AI: it takes your question, request, or other input and, rather than looking information up, generates a response by predicting what plausible text would look like, which means it can create outputs all its own.
That means it makes things up.
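You can watch that happen even in the toy bigram model sketched above. Chain the predictions together and you get text that is locally fluent but accountable to nothing. The snippet below (hypothetical, reusing the `model` from the earlier sketch) does exactly that, sampling each next word in proportion to how often it followed. Real large language models are vastly more sophisticated, but the failure mode, fluent text with no grounding in fact, is the same in kind:

```python
import random

def generate(follows, start: str, length: int = 8) -> str:
    """Random-walk the bigram counts to produce a 'plausible' sentence."""
    word, out = start, [start]
    for _ in range(length):
        counts = follows.get(word.lower())
        if not counts:  # dead end: no observed follower
            break
        # Sample the next word, weighted by how often it followed this one.
        words, weights = zip(*counts.items())
        word = random.choices(words, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate(model, "the"))
# Something like: "the cat sat on the porch and the dog"
# Grammatical-ish, but the model has no idea whether any of it is true.
```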
From https://chat.openai.com/chat, ChatGPT:
- May occasionally generate incorrect information
- May occasionally produce harmful instructions or biased content
- Limited knowledge of world and events after 2021
I’ve tried it out. I asked it to tell me about my husband, Chaz Brenchley. Since he has been a fiction writer his entire life, there should be enough data about him on the internet. I showed him the results, and he said, “Not horrifically wrong, but significantly incorrect.”
What we should be worried about is when ChatGPT is wrong.
Which is often enough. And when it is, it can sound so… so… plausible.
Plenty of universities and companies are aware of what this means, and high school students are savvy enough with computers that their teachers will be tearing their hair out for years trying to decide whether a student actually wrote that term paper.
But it’s so tempting to believe it’s right.
Which brings me to “autonomous” vehicles.
I live in Silicon Valley, so I wasn’t surprised a couple of years ago to see one of the new autonomous cars driving down the road toward me. We were approaching an intersection, and the car pulled into the left-turn lane and waited. As I passed it, I didn’t see anyone in the driver’s seat, though there was someone in the front passenger seat. Curious, I glanced back a couple of times in my rearview mirror and watched the cars on that side of the road all move past the autonomous car.
Then I saw the car turn right from the left-turn lane. I hope it was the human turning the wheel.
I’ve met people who proudly believe that their Tesla really is autonomous, that it really can safely drive itself. Tesla has created some genuinely useful driver-assistance features, but at this point the driver still needs to be awake and aware of what’s going on around them. Teslas have been known to crash into firetrucks, and multiple Teslas have crashed into stopped police cars while using Autopilot. Tesla’s self-driving features are only as good as the programmers creating them, and the programmers are still discovering new obstacles.
Not to pick on Tesla: Waymo cars have been responsible for 62 of the 130 reported accidents involving cars operating in assisted or self-driving modes.
Don’t get me wrong. I want autonomous vehicles to live up to their hype. I want to live in the future where I let my car deal with whatever’s on the road in front of me. I just don’t want people to die because they believe that future is now.