Supply Chain Market Research - SCMR LLC

Can You Tell?

1/10/2025

Deepfakes are, and will continue to be, a constant reminder of the potential downsides of AI.  The most recent example is a set of pictures of a small child trapped under debris from a collapsed building during the magnitude 6.8 earthquake in Shigatse City, Tibet.  At least 10 social media accounts posted these pictures, linking them to the earthquake, and collected tens of thousands of reposts.  Tencent’s (700.HK) Jinzhen news platform eventually confirmed that the photos were AI-generated.
Figure 1 - Deepfake 1 - Source: 36KR.com
AI still has issues with human hands and fingers, and a closer look at one of the photos points to that issue as a tell.  Unless that child is polydactyl (roughly a 1-in-1,000 chance), the AI incorrectly created an extra finger, indicating the creator was not using the latest AI image technology.  In March of 2023, Midjourney (pvt) largely solved this issue with its V5 release by training the model and then fine-tuning it on a massive set of annotated finger data, although even with that time-consuming fine-tuning the Midjourney AI still had issues with the muscle placement and texture of fingers, which it further refined in Version 6.
Figure 2 - Deepfake 2 - Source: 36KR.com
Figure 3 - Midjourney Model 'Finger Updates' - Source: Midjourney
Figure 4 - Midjourney Muscle Issues - Source: Midjourney
Figure 5 - Midjourney V6 Finger Refinement - Source: Midjourney
As it turns out, there are guides to discerning which images are real and which are fake.  A manual posted by Northwestern University points to five keys to detecting fakes:
 
  • Anatomical unreasonableness – unnatural hands, strange teeth, or unusual bone structure.
  • Stylization – is the image too clean or too cinematic?
  • Functional irrationality – because an AI’s understanding of products and their use is limited, incorrectly placed objects are a key.
  • Physics violations – incorrect, or missing, shadows and reflections are a tell.
  • Cultural or common-sense violations – these are harder to spot, as they are highly subjective.
If you believe you are better than average at picking out fakes, take the test at the link below, but remember that the difference in deepfake-recognition accuracy between those who are familiar with AI and those who are not is only 0.8%.  While we do not believe that AIs have anything close to human creative ability, they are quite good at fooling us when it comes to images, and they keep getting better, while our visual perception stays the same…
https://detectfakes.kellogg.northwestern.edu/
 

Avatars Can Be Fun

2/16/2022


Avatars can be fun.  Wandering through games or social media as a cartoon character, or as an incarnation of what you would like to be (or be seen as), is as easy as spending a few minutes with one of the hundreds of free cartoon avatar creation tools, or spending hours developing a quasi-realistic avatar based on a photo of yourself or others.  Many avatars are specific to a game or platform, although the more sophisticated avatar development platforms work toward making avatars that are platform-agnostic.  In most cases, however, avatars are still a bit cartoonish and react with relatively limited physical mechanics and facial expressions.
Back in 1972 the game Maze War was developed by three high school students participating in a NASA work/study program to help visualize fluid dynamics for spacecraft.  The project morphed into what is considered the world’s first first-person shooter, which could be played over ARPANET, the precursor to the internet, and contained the world’s first avatar, as seen in Figure 2.  Game avatars continued to progress, and avatar GIFs began to surface in the 1990s when internet chat became a reality, with typical 100x100-pixel avatars used to represent the ‘chatter’.  Game avatars moved toward more customization, while other titles took a more cartoonish track, such as Nintendo’s Mario series.
That said, avatars have progressed considerably and, as noted above, can take on some decidedly human characteristics as more sophisticated shading and lighting become available to the average user.  When it comes down to it, though, avatars are obviously not people; they are limited by the number of variable characteristics available and by the ability of software to manipulate those characteristics in a way that emulates a human form.  Given that the face has over 24 individual muscles on each side, creating human facial expressions is a daunting task, and while computing power continues to increase, much of human emotion resides in facial expressions, which can involve any number, if not all, of those muscles.  Not only must those muscles move realistically, they must also mimic speech, including gestures and movements that are both subtle and realistic, or they will contribute to the feeling that the avatar is not real.
We have noted that animators have been able to define human expressions in terms of muscle movements and translate them into animated characters (https://youtu.be/sCCRBg-byGM), but mapping facial expressions against the complexities of speech is a far more complex task.  In our note we showed ultra-realistic animations being used either to mimic existing newscasters or to create ‘new’ TV reporters that are astoundingly real and ‘read’ copy in real time, inclusive of facial expressions and body movements that make them hard to tell apart from their human counterparts.
Of course, there is the ‘deepfake’ crowd, which uses this technology to foment distrust by depicting social or political figures while overlaying speech that was never uttered, a very disturbing trend during a period when we can easily see the effects of misinformation across the internet.  Even more disturbing was a recent article in Scientific American, based on a study recently published in The Proceedings of the National Academy of Sciences, which concluded that AI-synthesized faces are indistinguishable from real faces and are considered more trustworthy than real faces.  The article describes the use of GANs (Generative Adversarial Networks), which pair two neural networks, a generator and a discriminator, to synthesize an image of a fictitious person.
The generator starts the process with a random group of pixels, and with each iteration the discriminator takes that image and compares it to its database of real faces.  If the generator’s face differs from the database images, which it does early on, the discriminator penalizes the generator, which keeps trying until the discriminator says the ‘new’ face looks like the ones in its database.  This takes many iterations, but the result is a face that has the characteristics of those in the database, though not exactly the same as any one in particular.  Such systems are used to fill in parts of photographs or art where damage has caused deterioration, to create virtual fashion models that require no photographer or the bevy of service people needed in real life, to develop realistic avatars for games, and, in broadcast, to read copy.
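The adversarial loop described above can be sketched in miniature.  This is not the study's model or any production GAN; it is a toy where the "faces" are just numbers drawn from a normal distribution, the generator and discriminator are one-parameter-pair linear models, and all hyperparameters are illustrative choices.  It shows the same dynamic, though: the discriminator's penalty pushes the generator until its output resembles the real data.

```python
# Toy 1-D GAN: generator g(z) = a*z + b, discriminator d(x) = sigmoid(w*x + c).
# Real "faces" are samples from N(3.0, 0.5); the generator starts at N(0, 1).
import numpy as np

rng = np.random.default_rng(0)
w, c = 0.1, 0.0          # discriminator parameters
a, b = 1.0, 0.0          # generator parameters
lr, batch = 0.05, 64
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

for step in range(3000):
    real = rng.normal(3.0, 0.5, batch)   # the "database of real faces"
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b                     # the generator's current attempt

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0
    d_r, d_f = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * np.mean(-(1 - d_r) * real + d_f * fake)
    c -= lr * np.mean(-(1 - d_r) + d_f)

    # Generator step: reduce the penalty by pushing d(fake) toward 1
    d_f = sigmoid(w * fake + c)
    dgrad = -(1 - d_f) * w               # gradient of generator loss w.r.t. g(z)
    a -= lr * np.mean(dgrad * z)
    b -= lr * np.mean(dgrad)

# After many iterations the generator's mean (b) drifts toward the real mean of 3.0
print(round(b, 2))
```

Just as in the article's description, neither network ever "sees" the answer directly: the generator only learns from the discriminator's penalty, and the discriminator only learns by comparing the generator's output against real samples.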
However, the study goes further, pitting 315 participants against a roster of 128 faces to judge which were real and which fake.  The average accuracy was 48.2%, close to the 50% that would represent pure chance, with the images in Figure 5 showing those classified most and least accurately.  A second test, after participants were made aware of rendering artifacts (visual hints) and given general feedback, saw an improvement in accuracy, but still close to the 50% chance level.  Taking the experiment further, the study had 223 participants rate the ‘trustworthiness’ of the faces on a scale of 1 to 7, with the synthetic faces scoring 7.7% higher than the real faces and women’s faces rated 13.3% more trustworthy than men’s.  The reasoning: the synthetic faces were closer to the ‘norm’ than the real faces and therefore seemed more trustworthy.
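To see why 48.2% counts as "chance" performance, an exact binomial test is a reasonable back-of-the-envelope check (this is our illustration, not the paper's analysis; the 128-face roster size is from the study, but treating one participant's 128 judgments as independent coin flips is a simplifying assumption):

```python
# Exact two-sided binomial test: how surprising is k correct out of n
# guesses if each guess were a 50/50 coin flip?
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """P-value: total probability of outcomes no more likely than k."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(x for x in pmf if x <= pmf[k] + 1e-12)

# 48.2% of 128 judgments correct is about 62 hits -- a large p-value,
# i.e., indistinguishable from guessing
print(round(binom_two_sided_p(62, 128), 2))
```

By contrast, a participant scoring well away from 64/128 (say, 40 correct) would yield a vanishingly small p-value, which is what genuine discrimination ability, in either direction, would look like.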
While the study was oriented toward generating consistent data, it certainly points out the potential for synthetic images to cause confusion and distrust.  Avatars are fun and in most cases are obviously exaggerated representations of characters or humans, but as systems develop that can create realistic images and speech that are almost impossible for the average person to identify as fake, the potential for misuse increases.  There are systems that can identify deepfakes, but relative to the amount of content uploaded to the internet they can only make a small dent, and the eventual expansion to the Metaverse will add to that potential content on a global scale.
We have no problem with the technology being used for creating realistic imagery and even ‘human-like’ figures, but those synthetics must be identified to the public as such, or the eventual end will be the public’s complete distrust of anything it sees other than in person.  If you think fake Facebook accounts that spread misinformation are a scourge now, just imagine images of political figures spouting words never spoken during a political campaign, or global leaders pushing aggressive agendas that have nothing to do with politics or détente.  In 1971, when the Dramatics released “Whatcha See is Whatcha Get”, it was true, but that ship has sailed.  If you don’t think so, go to this website and watch some of the demo videos, which were made with software on a smartphone.
https://avatarify.ai/
The Dashavatara - The Ten Primary Avatars of Vishnu - Among the earliest avatars - Source: By Raja Ravi Varma - http://www.barodaart.com/oleographs-mythological.html, Public Domain, https://commons.wikimedia.org/w/index.php?curid=15947962
Maze War - Eye Ball Avatar - Source: MacGui.com
Gordon Freeman - Protagonist in Half-Life - Source: By Steam marketing, Fair use, https://en.wikipedia.org/w/index.php?curid=25816061
Muscles of the face & neck - Source: ncbi.nlm.nih.gov
The Most & Least Accurately Classified Real (R) and Synthetic (S) Images - Source: pnas.org/content/pnas/119/8/e2120481119.full.pdf

    Author

    We publish daily notes to clients.  We archive selected notes here, please contact us at: ​[email protected] for detail or subscription information.
