Supply Chain Market Research - SCMR LLC

Not So Smart

1/17/2025


“AI will be the most transformative technology since electricity.” (Eric Schmidt); “AI will not replace jobs, but it will change the nature of work.” (Kai-Fu Lee); “AI will not replace humans, but those who use AI will replace those who don’t.” (various); “AI will be the most powerful technology ever created, and it will fundamentally alter the way we live, work, and interact.” (Andrew Ng).  These are quotes about how AI will change the world from some very smart and successful people, although a group that is heavily ‘invested’ in AI technology, which gives them a bit of bias.  We certainly don’t dispute that AI has helped unravel the complexities of the human genetic code, improved weather forecasting, helped develop new materials, and can comb through vast amounts of data to find patterns we humans might have missed.  But while the AI community might want consumers to believe that AI is the Mighty Mouse (“Here I come to save the day…”)[1] of the 21st century, it’s not that easy.
In order for AI to fulfill all the hopes and dreams of its supporters, it not only has to be fast (it is), but it has to be able to work 24/7 (it can), to learn from its mistakes (sometimes), and to be correct 99.9% of the time (it’s not).  But the business end of AI does not have the patience to wait until AI can meet those specifications, and has ushered us into the world of AI as a tool for getting a leg up on the competition.  CE companies are among the most aggressive in promoting AI, and the hype continues to escalate, but the reality, at least for the general public, is a bit less enthusiastic, despite initially high expectations.  In a 2024 survey, 23% of businesses indicated that AI had underperformed their expectations, 59% said it met their expectations, and 18% said it exceeded them,[2] with only 37% stating that they believe their business is fully prepared to implement its AI strategy (86% said it will take three years), a little less enthusiastic than the hype might indicate.
From a business standpoint, the potential issues that rank highest are data privacy, the potential for cyber-security problems, and regulatory issues, while consumers seem to be a bit more wary, with only 27% saying they would trust AI to execute financial transactions and 25% saying they would trust AI accuracy when it comes to medical diagnosis or treatment recommendations.  To be fair, consumers (55%) do trust AI to perform simple tasks, such as collating product information before making a purchase, and 50% would trust product recommendations, but that drops to 44% when it comes to the use of AI support in written communications.[3]  Why is there a lack of trust in AI at the consumer level?  There is certainly a generational issue that has to be taken into consideration, and an existential (‘end of the world’) fear among a small group, but there seems to be a big difference between the attitude toward AI among business leaders and among consumers, and a recent YouGov survey[4] points to why.
US citizens were asked a number of questions about their feelings toward AI in three specific situations: making ethical decisions, making unbiased decisions, and providing accurate information.  Here are the results:


[1] Fair use, https://en.wikipedia.org/w/index.php?curid=76753763

[2] https://www.riverbed.com/riverbed-wp-content/uploads/2024/11/global-ai-digital-experience-survey.pdf

[3] https://www.statista.com/statistics/1475638/consumer-trust-in-ai-activities-globally/

[4] https://today.yougov.com/technology/articles/51368-do-americans-think-ai-will-have-positive-or-negative-impact-society-artificial-intelligence-poll
Figure 2 – YouGov AI Survey – Ethical Decisions – Source: SCMR LLC, YouGov
Figure 3 – YouGov AI Survey – Unbiased Decisions – Source: SCMR LLC, YouGov
Figure 4 – YouGov AI Survey – Accurate Information – Source: SCMR LLC, YouGov
It is not surprising that many Americans do not trust AI to make ethical decisions for them, but over 50% of the US population also does not think AI systems make unbiased decisions, and we expect that is without the more detailed understanding of AI that might lead one to even greater distrust.  That said, we were surprised that 49% of Americans believe AIs provide accurate information, against 39% who disagreed.  We believe that the push to include AI in almost every CE product as a selling point, this early in the development of AI systems that interface with users, will do little to convince users that the information they receive from AI systems is accurate, and could well reduce that level of comfort.
LLMs and AI chatbots have become so important from a marketing standpoint that few in the CE space can resist using them, even if the underlying technology is not fully developed.  Even Apple (AAPL), which tends to be among the last major CE brands to adopt new technology, was pushed into providing ‘Apple Intelligence’, a branded product that was obviously not fully developed or tested.  While Apple uses AI for facial and object recognition, to assist Siri’s understanding of user questions, and to suggest words as you type, there was no official name for Apple’s AI features until iOS 18.1, when ‘Apple Intelligence’ became the broad title for Apple’s AI.  The two main AI functions that appeared in iOS 18.1 were notification summaries and the use of AI to better understand context in Apple’s ‘focus mode’.  iOS 18.2 added AI to improve recognition in photo selection, gave Siri a better understanding of questions to improve its suggestions, and allowed users to use natural language when creating ‘shortcuts’ (essentially a sequence of actions that automates a task), while also enhancing the system’s ability to suggest actions as a shortcut is being formulated.
None of these functions is unusual, particularly the notification summaries, which are similar to the Google (GOOG) search summaries found in Chrome, but there was a hitch.  It turns out that Apple’s AI was producing summaries of news stories that were inaccurate, with the problem becoming most obvious when the system suggested that the suspect in the killing of UnitedHealthcare’s CEO had shot himself, prompting complaints from the BBC.  Apple has now released a beta of iOS 18.3 that disables the news and entertainment summaries and allows users to turn off summary functions on an application-by-application basis.  It also renders all AI summaries in italics, to make sure that users can tell whether a notification comes from a news source or is an Apple Intelligence-generated summary.
While this is an embarrassment for Apple, it makes two points.  First, AI systems are ‘best match’ systems.  They match queries against their training data and try to choose the letter or word most similar to what they have seen there.  This is a bit of an oversimplification, as during training the AI builds up far more nuanced detail than a simple letter- or word-matching system (think “What would be the best match in this instance, based on the letters, words, and sentences that have come before this letter or word, including those in the previous sentence or sentences?”), but even with massive training datasets, AIs don’t ‘understand’ more esoteric functions, such as implications or the effect of a conclusion, so they make mistakes, especially when dealing with narrow topics.
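To make the ‘best match’ idea concrete, here is a minimal sketch in Python.  The contexts, candidate words, and scores below are all invented for illustration; a real LLM computes such scores with a neural network over a vocabulary of tens of thousands of tokens, but the selection step is the same in spirit: pick whatever best resembles the training data.

    import math

    # Hypothetical context -> candidate-word scores (all numbers invented).
    # A higher score means a closer match to patterns seen in training data.
    SCORES = {
        "the forecast calls for": {"rain": 2.3, "snow": 1.1, "frogs": -1.0},
        "the ceo announced a": {"merger": 1.8, "resignation": 1.2, "nap": -2.0},
    }

    def softmax(scores):
        """Turn raw match scores into a probability distribution."""
        m = max(scores.values())
        exps = {w: math.exp(s - m) for w, s in scores.items()}
        total = sum(exps.values())
        return {w: e / total for w, e in exps.items()}

    def next_word(context):
        """Greedy decoding: return the single most probable continuation.
        Nothing here checks whether the answer is *true*, only whether
        it resembles the training data."""
        probs = softmax(SCORES[context])
        return max(probs, key=probs.get)

    for ctx in SCORES:
        print(f"{ctx!r} -> {next_word(ctx)!r}")

Note what is missing from the sketch: any check on whether the chosen word is correct.  The highest-scoring match wins either way, which is exactly how a confident, fluent, wrong summary gets produced.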
Mistakes, sometimes known as hallucinations, can be answers that are factually incorrect or unusual reactions to questions.  In some cases the AI will invent information to fill a data gap, or even create a fictionalized source to justify an answer, even an incorrect one.  In other cases the AI will slant information toward a particular end, or sound confident that the information is correct until it is questioned.  More subtle (and more dangerous) hallucinations appear in answers that sound correct on the surface but are false, making them hard to detect unless one has specialized knowledge of the topic.  While there are many reasons why AI systems hallucinate, AIs struggle to understand the real world, physical laws, and the implications surrounding factual information.  Without this knowledge of how things work in the real world, AIs will sometimes mold a response to their own level of understanding, coming up with an answer that is close to correct but missing a key point.  (Think of a story about a forest written without knowing about gravity: “Some trees in the forest are able to float their leaves through the air to other trees.”  Could it be true?  Possibly, unless there is gravity.)
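The forest example can even be reproduced mechanically.  The sketch below (again with a corpus invented for illustration) builds a toy bigram model that always picks the word that most often followed the current word in its training text.  It stitches together fragments that each genuinely occurred into a fluent sentence that no source ever contained, which is the basic shape of a hallucination; real models fail the same way, just with vastly more context.

    from collections import Counter, defaultdict

    # Tiny invented training corpus (illustration only).
    CORPUS = (
        "leaves float on the pond . "
        "leaves float through the window . "
        "seeds float through the air . "
        "birds fly through the air ."
    ).split()

    # word -> Counter of the words that followed it in training
    follows = defaultdict(Counter)
    for a, b in zip(CORPUS, CORPUS[1:]):
        follows[a][b] += 1

    def generate(start, max_words=6):
        """Greedy 'best match': always take the most frequent next word."""
        out = [start]
        while len(out) < max_words and follows[out[-1]]:
            out.append(follows[out[-1]].most_common(1)[0][0])
        return " ".join(out)

    # Prints "leaves float through the air ." -- every word pair occurred
    # in training, but the full sentence never appeared in any source.
    # The model asserts it anyway, because each step only asks
    # 'what usually follows?', never 'is this true?'.
    print(generate("leaves"))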
Second, it erodes confidence in AI, and can shift consumer sentiment from ‘world changing’ to ‘maybe correct’, and that is hard to recover from.  Consumers are forgiving folks, and while they get hot under the collar when they are shown that they are being ignored, lied to, or overcharged, brands know enough to lie low for a while and then jump back on whatever bandwagon is current at the time; still, ‘fooled once, fooled twice’ can take a while to dissipate.  AI will get better, especially non-user-facing AI, but if consumers begin to feel that they might not be able to trust AI’s answers, the industry will have to rely on the enthusiasm of the corporate world to support it, and given the cost of training and running large models, we expect it will need all the paying users it can find.  Don’t overpromise.