[Image: Dolphin and a surfer]

Apple Intelligence, AI that tells you what to say?

Apple Intelligence was included with Sequoia 15.3, so I let Siri run for a week of probation.

A while ago on LinkedIn I expressed some concerns about AI as "false" headlines were appearing in news feeds. There is a "feature" that AI in general has yet to embrace, one that every child learns by the time they are ten. Somehow the lesson is lost again as they enter the world.

[Image: BGENT, the AI bot at BGENT.AI]

As children, we hear all kinds of stories, metaphors and white lies from our parents. The Easter Bunny, Santa Claus, and in my family, Irish gold at the end of the rainbow. At some point we figure out that none of these are real. Ok, I can't prove the gold isn't there, but I can't prove God doesn't exist either, so I'll stick with Santa and the Easter Bunny.

Once we figure out that Santa and the Easter Bunny are stories our parents tell us as a bribe to be good, we change. We begin to understand the "principle of authority". When I was a kid, TV news wasn't allowed to have an opinion; they just reported the news as boring facts. I learned to trust the news. Walter Cronkite ranked up there with a Catholic bishop in our household. If he said it, it must be true.

President Reagan used the phrase "Trust but verify". CSI Miami used the phrase quite well also. This is the approach I took to AI when I let Siri start organizing my emails and drafting responses. My surf tribe knew immediately that I had turned it on, because my responses were not the way I talk or text. Some of the email responses the AI wanted to send were even funnier. A salesperson sent me an invoice for something I didn't buy, and the drafted response was "Paid, thank you".

As humans we use the principle of authority to do the "trust but verify" analysis. We trust our parents, but… AI is trained on a data set, and that data set is limited to what the trainers load it with. The problem is that once it is free to scan your emails and the internet, it doesn't know truth from fiction from parody. Better yet, if you watch carefully how AI crafts articles or papers, you find that what it does is scour the internet, take a sentence from one article and another sentence from a different article, and piece them together.

The critical difference is that AI can't yet say, "Hey, that doesn't make sense". When I let AI try to write articles for me here at Beach Street News, it would often contradict itself. For every person who likes electric cars and wrote an article, there was someone else who wrote an anti-electric-car article. I even let AI try to write me a script for a CoolToys® TV episode. People would think I was crazy if I used it, but the AI was quite ok with it. AI doesn't know which articles are true and which are false.

The bigger issue is that AI has only one tool to determine the level of authority. As far as I can tell, that is the popularity of an article or a writer. The "truth" has nothing to do with it, because AI has no way to understand truth. With deep fake videos becoming so good, it's hard for people to know what is real sometimes.

The final part of probation for AI was letting it respond to emails. My brother and I used Siri and ChatGPT to let a couple of emails bounce back and forth. We started with a real email and had the AI respond. The summary on my end of my brother's email, as provided by Siri (Apple Intelligence), was that "Dwayne Johnson does not like Yoga".

The AI response was nowhere close to what the email said. Next I let Siri respond, and my brother let ChatGPT respond to Siri's response. The conversation quickly melted down to nonsense, but Dwayne Johnson came back. Siri must really like "The Rock", or she is getting paid to promote him and Red One. This gets scarier, since Amazon Studios owns Red One. Why is Siri promoting the competition?

That brought us to the final review of the responses crafted by Apple Intelligence and ChatGPT. We found that both changed our "voice" just slightly, in the direction that is clearly the "accepted" thought at Apple or OpenAI. Each was slightly different, both oddly politically correct in some ways and totally off base in others. More importantly, it was changing the perspective the other person would have of my responses. This is the scary part. Even if it is "fake" or "false", if it is repeated enough it becomes part of the social fabric and we become numb to it. The real question is who decides what we think, the AI programmers or the AI?

Sorry, I am not ready to conform to the Matrix. Siri has failed probation, been turned off, and been asked to apply for the job again next year.
