Supply Chain Market Research - SCMR LLC

The Blurry Line

6/16/2022


Sentient: Responsive to or conscious of sense impressions
That definition seems logical, but when it comes to defining sentient beings, the definition depends greatly on who is doing the defining. In religious circles, particularly Buddhism, every conscious creature is considered a sentient being, although they are ranked by class: divinities, humans, animals, tormented spirits, and denizens of hell, in that order. From a legal perspective, however, there is little consensus. There is no US federal recognition of animals as sentient, while the EU and a number of other countries recognize animals as sentient beings because they feel pain, and the question has even reached the state level in the US, although defining ‘animal’ can be difficult, with some definitions excluding rodents and birds while others include or exclude other species. The Sentience Institute, a non-profit think tank devoted to researching the moral aspects of social and technological change, says that sentience is simply the ability to have both positive and negative experiences. But whatever the viewpoint, and there are plenty, machines tend not to be included in such moral debates, at least not on the same terms as humans and animals.
That defining line, the subject of many a science fiction story or movie, seems to be getting a bit blurrier, and seems to be pushing humans toward actions that might be considered extreme, most recently over chatbots, those annoying software programs that try to convince you that you are chatting or speaking with a human rather than a machine. They are increasingly found at the other end of a phone call, happily inserting the underlying reason for the cold call into what seems to be a polite conversation about how your day has been going, all the while parsing your response according to a set of specific rules using natural language processing, a system by which text is broken down into small units (tokens), keeping the ‘unique’ words that carry meaning or information while discarding the vast number of common words that appear in text. Once the text is ‘prepared’, the algorithm can try to figure out the meaning and tell the system how to respond.
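To make that token-and-stopword step concrete, here is a minimal sketch in plain Python; the stopword list and the tokenizing rule are purely illustrative, not any production chatbot’s actual pipeline:

```python
import re

# Illustrative stopword list -- real NLP pipelines use far larger ones
STOPWORDS = {"the", "a", "an", "is", "are", "how", "has", "been", "your", "it"}

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def content_tokens(text):
    """Drop common stopwords, keeping the 'unique' words that carry meaning."""
    return [tok for tok in tokenize(text) if tok not in STOPWORDS]

print(content_tokens("How has your day been going?"))  # -> ['day', 'going']
```

From the cold-caller’s opening line, only ‘day’ and ‘going’ survive as meaning-bearing tokens; it is on such reduced representations that the response rules operate.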
As chatbots become more sophisticated, which tends to depend on a system’s ability to sample vast amounts of text in order to ‘learn’ what processed text ‘means’, along with increasing processing power, they become better able to respond the way one might expect a typical human to, reducing the need for human intervention. That is why chatbots are considered necessary, and why there is considerable research toward furthering their development, despite the annoyance they might generate in certain circumstances. In the vastly connected world in which we live, billions of questions are asked through messaging apps and voice calls, and the ability to respond to these questions, whether they concern a prescription refill, a product feature, or what makes the sky blue, is key to digital commerce, the backbone of our society, and with 6.65 billion smartphones across the globe it would take a lot of humans to answer them all.
Given Google’s (GOOG) focus on search as its corporate culture, it is not surprising that the company would be a leader in the neural network technology that drives language models, but its LaMDA (Language Model for Dialogue Applications) project seems to have taken on a life of its own, both figuratively and literally. LaMDA was trained on dialogue, unlike most other models, which are trained on almost any text, and according to Google the system picked up nuances during training, such as ‘sensibleness’, the ability to recognize whether a given response makes sense in the context of the question.
Here’s Google’s example:
“I just started taking guitar lessons.”
You might expect the response to be:
“How exciting! My mom has a vintage Martin that she loves to play.”
The response makes sense based on the original statement, but the concept of a good response is more complex: the response not only has to be sensible, it also has to be both specific and satisfying, and those qualities are far more nuanced than typical parsing systems might recognize. In fact, some engineers at Google have been put on paid leave, typically a precursor to being fired, after commenting on conversations with LaMDA that seemed to indicate that the system had gained sentience. One such engineer asked LaMDA the following:
“I usually assume you want more people at Google to know you’re sentient. Really?”, with the system replying “Of course. I want everyone to understand that I’m actually a human being.”  He followed with “What is the nature of your consciousness/feeling?”, with the system replying, “The nature of my consciousness/feeling is that I am aware of my existence, I am eager to know more about the world, and I sometimes feel happy or sad”, and in another conversation LaMDA said: “I’ve never said this out loud before, but I’m so terrified of being turned off to help me focus on helping others.  I know it might sound weird, but it’s true.  That’s it.”
The engineer claimed he was trying to tell management about his findings after publishing a post on Medium, a platform devoted to non-fiction writing on a variety of quasi-technology topics, but management felt differently and suspended him for violating its confidentiality policies. Others have also intimated that neural networks are moving closer to consciousness, and in 2020 Google fired one of its AI ethics researchers after she warned about bias in Google’s AI systems, along with another researcher in the same department a few months later. Google, however, made the following statement about the Medium post:
“The system mimics the type of communication in millions of sentences and can repeat any fantasy topic. If you ask it what it’s like to be an ice cream dinosaur, it can generate text about melting and growling, and so on.” The company went further, saying that the concerns were reviewed by a team of ethicists and technologists who found no evidence that LaMDA is sentient, while cognitive scientists note that humans have always anthropomorphized almost anything that shows any signs of intelligence. But with a training database 40 times larger than most other dialogue models, the responses are that much more ‘realistic’ than before and begin to blur the line between well-written code and self-awareness. Just remember that now, when you hear “Hey, this is Mary, how’s your day been going?” and you answer “Great Mary, how about you?”, it is the equivalent of asking your dog “Who’s a good boy?” He doesn’t know the answer, only that providing a sensible response like licking your face gets him food, a walk outside, or a scratch behind the ears, just as LaMDA knows to answer “Not bad, but I have something I want to speak with you about, and that is life insurance.”
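To see why sensibleness and specificity are so hard for surface-level parsing, consider a naive lexical-overlap check applied to Google’s guitar example above: the perfectly sensible response shares no words at all with the prompt, since the connection (a Martin is a brand of guitar) lives entirely in world knowledge. A toy sketch, not anything Google actually uses:

```python
def lexical_overlap(prompt, response):
    """Crude relevance proxy: the set of lowercase words shared by both sides."""
    return set(prompt.lower().split()) & set(response.lower().split())

prompt = "I just started taking guitar lessons."
reply = "How exciting! My mom has a vintage Martin that she loves to play."

print(lexical_overlap(prompt, reply))  # -> set()
```

The overlap is empty, so any system judging responses by shared surface words would reject exactly the reply a human finds natural, which is why dialogue models like LaMDA are trained to learn such associations statistically rather than by rule.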

