Americans have a genuine concern about identifying AI in the media they consume. In a conversation I had with a radio colleague this week, she succinctly defined the users’ quandary: we want to know not only what we can trust, but also which messages we can emotionally invest in.
Or, in other words, “Don’t try and fool us.”
In an era when trust in mass media has dropped to as low as 26% (Gallup), being truthful and authentic with our audiences is more important than ever.
That’s why several recently announced AI features are so important:
- Google announced that all photos generated or edited with AI in Google Photos will be labeled as such.
- Meta’s scarily accurate facial recognition technology will be used to help identify celebrity deepfakes in its advertising.
- The German Marshall Fund, a U.S.-based nonprofit, launched a global map that tracks the use of deepfakes and AI in election messaging.
None of these completely solves the problems with AI and intellectual property, but each is a small step in the right direction: away from a place where users are unsure of the veracity of the media they consume, and toward one where AI is properly identified.
Apple Still Catching Up with Others in AI
Apple Intelligence will be available as a software update a few days after the new iPad mini hits stores, requiring customers to install it themselves. However, the initial AI features are limited, with the most advanced capabilities delayed until early next year. Apple’s AI technology is still developing, lagging behind industry leaders like OpenAI and Google. Internal Apple research indicates that ChatGPT is 25% more accurate than Apple’s Siri and can respond to 30% more queries. Some within Apple believe its generative AI technology currently trails leading competitors by over two years.
Originally published by Jacobs Media