Supply Chain Market Research - SCMR LLC

The 6th Sense?

1/21/2025

As we have noted previously, we find comparisons between humans and AI meaningless, as human intelligence is based on sensory input and AI is not.  Yes, AI systems can sort through vast fields of data far faster than our eyes and brains allow us, but that same AI system cannot tell whether those numbers mean something other than what it has been told to look for.  It might be able to improve its ability to sort through those numbers, but it has no context other than what it was taught.  Humans have limitations, but because they have context, they are able to feel the emotion in a musical piece that an AI would not.  They are able to see the emotion in Edvard Munch’s “The Scream” without knowing the color value of every pixel, and humans know not to sit next to someone on the subway whose nose is running.
AIs need senses if they are ever going to challenge human intelligence and creativity, and all the tokenization of literature, images, videos, and Instagram posts cannot help.  Yes, they might be able to answer a set of test questions better than a high-school student or even a college professor, but AI art is not art; it is copying without emotion.  That said, what would happen if AI systems were given senses?  What if they could ‘see’ the outside world in real time?  What if they could hear the sound of a real laughing child and at the same time see what made the child laugh?  Could they begin to develop an understanding of context?  It is a difficult question to answer and would likely require all of the human senses working together to truly allow the AI to understand context, but we are certainly not there yet, and finding ways to grant AI systems the senses of touch and smell is challenging at best.
There are plans to give AI systems a better sense of sight by collecting data from moving vehicles.  The data becomes part of a real-world model that strives to identify ‘how things work’, which can then be used to train other models.  However, for a model to be ‘real-world’, it has to be huge.  Humans take in massive amounts of sensory ‘noise’ that has only a minute influence on decisions but is essential to understanding how the world works.  Much of that ‘noise’ is incomplete, ambiguous, or distracting, but it is part of the context we need to handle the uncertainty that our complex environment brings.  Of course, efficiency is also important, and humans have the sometimes dubious ability to filter out noise while retaining the essence of a situation, something AI systems would have to be programmed to do, and with all the noise being fed to a real-world model, the storage and processing needed would be astronomical.
Ethics are a hard concept to explain, and building algorithms that contain ethics is prone to bias.  Humans are also prone to bias, but they are typically taught lessons in ethics by others around them.  Whether they respond with their own interpretation of those ‘lessons’ or just mimic what they see is a human issue.  AI systems form biases based only on their training data and their algorithms, so while a real-world model might tell the AI that driving a vehicle into a concrete wall triggers a rule of physics, it does not tell them that they should feel regret for doing so when they have borrowed that vehicle from their parents.  Humans also continue to learn, at least most do, so AI real-world models must be ever expanding to be effective, and that requires more power, more processing speed, and more storage.
So the idea of a general real-world model has a number of missing parts.  That said, real-world ‘mini-models’ are a bit more feasible.  Rather than trying to model the unbelievable complexity of the real world, building smaller models that contain sensory data relevant to a particular application is, at least, more realistic.  We can use visual (camera) data to control stoplights, but those systems react poorly to anomalies, and that is where additional sensory data is needed.  Someone crossing against the light might be looking at the street to avoid traffic, but at the same time can hear (no earbuds) the faint sound of a speeding car that has yet to hit their peripheral vision, and that information, as insignificant as it might be to someone walking on the sidewalk, becomes very important to the person crossing against traffic.
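To make the crosswalk example concrete, here is a minimal sketch of the kind of sensor fusion a ‘mini-model’ would need.  Everything here is hypothetical: the `SensorReading` fields, the `SOUND_ALERT_DB` threshold, and the `hold_walk_signal` logic are our own illustrative names, not any real traffic-control system, but the point is the same as in the paragraph above: the microphone catches the speeding car the camera has not yet seen.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """One time-slice of sensory input for a hypothetical crossing controller."""
    vehicle_in_view: bool        # camera: a vehicle is detected approaching
    approach_sound_db: float     # microphone: loudest approach-band sound level
    pedestrian_in_crosswalk: bool

# Illustrative threshold -- a real deployment would calibrate this per site.
SOUND_ALERT_DB = 70.0

def hold_walk_signal(reading: SensorReading) -> bool:
    """Return True if the crossing should stay protected.

    The camera alone misses vehicles outside its field of view; the
    microphone fills that gap, much like the pedestrian who hears a
    speeding car before it enters their peripheral vision.
    """
    if not reading.pedestrian_in_crosswalk:
        return False
    camera_risk = reading.vehicle_in_view
    audio_risk = reading.approach_sound_db >= SOUND_ALERT_DB
    return camera_risk or audio_risk

# A car audible at 82 dB but not yet visible still triggers protection.
print(hold_walk_signal(SensorReading(False, 82.0, True)))   # True
print(hold_walk_signal(SensorReading(False, 40.0, True)))   # False
```

The design choice mirrors the argument: neither sensor is sufficient alone, so the controller treats any one modality raising an alarm as grounds to act, which is exactly the redundancy a vision-only stoplight lacks.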
Real-world models that try to mimic real-world situations must have sensory information and the ability to filter that information in a ‘human’ way, so the development of real-world models without more complete sensory information will not produce the human-like ability to react to the endless number of potential scenarios that every second of our lives provides.  Networks could help AIs gather data, but until the AI is able to feel the stroke of camel hair on canvas, smell the paint, and see the yellow of a sunflower, they cannot understand context in the truest sense, something humans begin to learn before their first birthday.  We expect mini real-world models to be effective for many applications, but without sensory input, real-world context is a dream.
