Supply Chain Market Research - SCMR LLC

Robot Race...Slower Pace…China’s Chase…Tech Embrace

4/21/2025

Last month we reported (see our note 1/30/2025) that 12,000 runners would participate in a 13.1-mile half-marathon sponsored by the Beijing Economic-Technological Development Area, known as E-Town.  We noted that this marathon was different in that the regular runners were to be joined by 20 teams of humanoid robots for the first time; in earlier races, robots had only been used as pacers to encourage human runners over the last mile.  In this race, however, the humanoid robots ran the entire course for the first time.  They were allowed to be remotely controlled or fully autonomous and could take breaks to have batteries replaced if needed.  Other than that, there were few restrictions on the mechanics, with entries from robotics companies from all over the world.
The race was run late last week, with over twenty humanoid robots participating in what was a showcase for Chinese robotics efforts, another area where the Chinese government has set aggressive goals to compete with the US.  There has been a constant barrage of promotional publicity and flowery propaganda over the last few months about how China will become the frontrunner in robotics, leading to concern that the US lead in the space might be diminishing and, for some, adding to the fear that humans might one day be replaced by robots.  The robots ran in a separate lane and were allowed to be accompanied by 'handlers' who 'ran' the robots with wireless or wired controllers.  The robots were allowed time to replace batteries when necessary, and teams were also allowed to replace a faulty robot with another, although that incurred a 10-minute penalty.
The robot winner was Tiangong Ultra, which completed the race in 2 hours and 40 minutes with only three battery changes (and a human with a hand behind Tiangong in case it fell backward), while the human winner finished in one hour and two minutes.  Still, a number of robotics professors and Chinese company officials said they were quite impressed that many of the robots were able to finish the race at all, although there were some obvious mishaps, as seen in the video below.
This was an interesting show of China's robotics expertise.  These robots were designed for running rather than the acrobatics usually seen in puff pieces on the competition between the US and China, making this a more practical approach to robotics, and, based on some of the results, not nearly as easy as might be thought.  When one considers the mechanics of a human marathon runner, it becomes easier to understand how complex translating that schema into a mechanical device can be.  Aside from the obvious individual muscle contractions and their timing, the coordination of those contractions is essential to movement, and small adjustments to muscle movements are essential to maintaining balance and stability on even slightly uneven surfaces.
There is also the need for a constant stream of sensing information, including what is called 'proprioception': the human body's ability to sense its own position, movement, and force in space without relying completely on visual information.  While this 'sixth sense' is typically referred to in connection with more highly refined skills, such as playing an instrument, it is what enables humans to touch their nose with their eyes closed or pick up something without looking at it.  As this is a subconscious sense in humans, it will be a hard one to instill in robots, and without it, it will be hard for robots to adapt to the varying conditions we face each day.  Perhaps when we figure out how it works in us, we will be able to pass the concept on to our new marathon partners.
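The proprioceptive loop described above has a crude mechanical analogue in the feedback controllers that keep a bipedal robot upright: internal sensors (an IMU, joint encoders) report tilt, and a control law converts that internally sensed state into corrective torque without any visual input.  A minimal proportional-derivative (PD) sketch, with illustrative gains and names of our own choosing, not taken from any of the robots in the race:

```python
# A crude stand-in for proprioception: correct body lean using only
# internally sensed state (tilt angle and tilt rate from an IMU),
# with no visual input.  Gains are illustrative, not tuned for any
# real robot.

def pd_balance_torque(lean_angle, lean_rate, kp=80.0, kd=12.0):
    """Corrective ankle torque from internal state sensing.

    lean_angle: tilt from vertical, radians (positive = leaning forward)
    lean_rate:  tilt angular velocity, radians/second
    """
    # Push back against both the lean itself and its rate of change.
    return -(kp * lean_angle + kd * lean_rate)

# Leaning 0.1 rad forward while still tipping yields a restoring torque:
torque = pd_balance_torque(0.1, 0.05)  # negative = torque pushing back
```

A real humanoid runs many such loops per joint at high rates and fuses far richer sensing, but the principle of acting on internally sensed position and motion rather than vision is the robotic stand-in for proprioception.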
Figure 1 - Tiangong Ultra FTW - Source: VCG / Visual China Group / Getty Images
https://youtu.be/l4FHhuswZfs


Run For Your Life

1/30/2025

In April, in the Daxing District of Beijing, 12,000 runners will participate in a 13.1-mile half-marathon sponsored by the Beijing Economic-Technological Development Area, known as E-Town.  What will make this marathon different from the hundreds of marathons that take place all over the world is that the runners will be joined by 20 teams of humanoid robots for the first time, although a humanoid robot named Tiangong (the name translates roughly to 'Heavenly Palace') joined the last 100 meters of an earlier Beijing half-marathon as a pacer, encouraging humans to finish the race.
In the upcoming race the humanoid robots will run the entire course for the first time.  They must be bipedal and able to walk or run upright, must resemble humans, and are not allowed to have wheels.  They can be anywhere from 20 inches to 6.5 feet tall, but the distance between the hip joint and the sole of the foot cannot exceed 15.7 inches.  They can be remotely controlled or fully autonomous and are able to take breaks to have batteries replaced if needed.  Other than that, there are no restrictions on the mechanics, with entries expected from robotics companies from all over the world, and the robots will not have to get up at 4 AM to train, drink gallons of protein-powder shakes, or buy expensive running shoes.
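For illustration, the entry restrictions listed above can be collapsed into a simple eligibility check; the function and field names are ours, not E-Town's:

```python
# The race's stated entry rules, encoded as a check for illustration:
# bipedal, no wheels, 20 inches to 6.5 feet tall, and no more than
# 15.7 inches from hip joint to sole.  Names are illustrative only.

def eligible(height_in, hip_to_sole_in, bipedal, has_wheels):
    """Return True if a robot meets the published entry restrictions."""
    if not bipedal or has_wheels:
        return False
    if not (20.0 <= height_in <= 6.5 * 12):  # 6.5 ft = 78 inches
        return False
    return hip_to_sole_in <= 15.7

# A 5'4" humanoid with a 15-inch hip-to-sole span would qualify:
print(eligible(64, 15.0, bipedal=True, has_wheels=False))  # True
```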
E-Town believes this will be the first time humanoid robots and humans compete in a full half-marathon, and it will award prizes to the top three finishers, although we are unsure what the prize will be for any robot winners.  The competition is another visible step for China's robotics industry, much of which is located in Beijing.  The district has more than 140 robotics-ecosystem companies, whose output is valued at ~$1.4b US, and is focused on building AI into high-end humanoid robots and on further building the local robotics environment.  While there are many robotic development projects, humanoid robots seem to get particular attention, despite fears that they will someday collectively decide to replace humans entirely.  That said, the little cat robot developed by Yukai Engineering (pvt) in Japan that blows air to cool your food doesn't look particularly formidable, although with 15m to 20m cats in Japan, a robotic feline takeover could signal the beginning of the end for humanity.
Figure 4 - Tiangong pacing the last 100 meters - Source: Dezeen.com
Figure 5 - Portable Catbot at work - Source: Yukai

Knit 1, Perl 2

1/2/2025

Becoming a surgeon is a difficult task.  After 4 years at college, typically majoring in a scientific specialty, there is another four years of medical school, with even more specialized study, and then a three-to-seven-year residency program, depending on the surgical specialty chosen.  Neurosurgery typically requires the longest residency, roughly seven years, while ophthalmology tends to require only three.  Aside from the investment in time and the value of lost wages, the cost of undergraduate college and medical school can be staggering, as seen in the table below, but the demand for surgeons continues to increase as the global population ages, making these financial barriers to entry an ever-increasing problem.
[Table: cost of undergraduate college and medical school - image not included]
Robotic surgery, an outgrowth of minimally invasive surgery, was approved by the FDA in the US in 2000, with surgeons manipulating the device manually, initially for general laparoscopic surgery.  The industry continues to grow, reaching an estimated $10.1b in 2023[1], with an increasing number of surgical procedures able to be performed using these tools.  The share of procedures performed robotically rose from 1.8% in 2012 to 15.1% in 2018[2], and certain procedures grew even faster over that period: hernia repair increased from 0.7% to 28.8%.  Robotic surgery (we know first-hand) has enabled many procedures to move from open surgery to laparoscopic, which typically means small incisions, less patient discomfort, faster recovery, less bleeding, and less time in the hospital.
Most hospitals have fellowships available for training in robotic surgery, along with simulators and continuing-education programs that add to the understanding of the procedures through observation of more experienced users.  However, the learning curve depends on the skill level of the surgeon and the difficulty of the procedures, and while simulators and visuals are important, they lack the haptic feedback and real-life complications that are essential to successful robotic surgical outcomes.  Actual surgical time using these tools is most important to gaining expertise, something simulators have difficulty providing.  That said, with over 10 million robotic surgeries performed through 2021, there is a large amount of video and kinematics data recorded during those procedures that can be used for post-operative review and training.
Most surgeons are limited in the amount of time they have available to review video of such procedures, but now that we live in the world of AI and its ability to build multi-dimensional models from video data, researchers at Johns Hopkins and Stanford have been using this library of robotic procedures to train a robotic surgical system to perform without surgical assistance.  The training procedure is called imitation learning, which allows the AI to predict actions from observations of past procedures.  This type of learning system is typically used to train service robots in home settings; however, surgical procedures require more precise movements on deformable objects (skin, organs, blood vessels, etc.), at times under poor lighting, and while in theory the videos should provide absolute mechanical information about every movement, there is a big difference between the necessary accuracy and physical mechanics of an industrial robotic arm and a surgical one.
Before AI, the idea of a surgical robot performing an autonomous procedure involved the laborious task of breaking down every movement of the procedure into 3-dimensional mechanical data (x, y, z, force, movement speed, etc.) particular to that specific procedure; it was limited to very simple tasks, and it was difficult to adapt that data to what might be called normal variances.  Using machine learning and AI's ability to transform the library of video data into training data, in a way similar to how large language models transform text and images into referential data used to predict outcomes, the researchers say they have trained a robot to perform complex surgical tasks at the same level as human surgeons, just by watching robotic surgeries performed by other doctors.
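As a rough sketch of the idea, behavior cloning, the simplest form of imitation learning, is just supervised regression from recorded observations to the expert's recorded actions.  The toy below uses a hidden linear 'expert' and NumPy least squares purely to show the shape of the training step; real surgical systems learn far more complex policies from video and kinematics, and every name here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend each row is a sensed state (e.g. instrument-tip pose features)
# and each action is what the human surgeon did next (e.g. a tip velocity).
observations = rng.normal(size=(500, 6))
true_policy = rng.normal(size=(6, 3))        # hidden "expert" mapping
expert_actions = observations @ true_policy  # recorded demonstration data

# Behavior cloning: supervised regression from observation to action.
weights, *_ = np.linalg.lstsq(observations, expert_actions, rcond=None)

def policy(obs):
    """Predict an expert-like action for a new observation."""
    return obs @ weights
```

In this noiseless linear toy the cloned policy recovers the expert exactly; the hard part in surgery is that the real mapping is nonlinear, the observations are video, and the objects being manipulated deform.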
Here is the video of the autonomous surgical robot using the video data for reference:
https://youtu.be/c1E170Xr6BM


[1] Straitsresearch.com – Robotic Surgery Market Size & Trends

[2] Sheetz KH, Claflin J, Dimick JB. Trends in the adoption of robotic surgery for common surgical procedures. JAMA Netw Open. 2020;3(1):e1918911

Who Me?

8/16/2022

As Russia continues its war against Ukraine, a number of Chinese manufacturers have been forced into the spotlight, with their products being mentioned by Russian military authorities or being shown in compromising situations in the media.  Last month SMIC (688981.CH), China's largest semiconductor foundry, answered investor questions about its ties with Russia, stating that it "…has never had customers in Russia, and remains in compliance", in order to avoid generating more ill will with the US government, as the company is already on the trade-restriction list limiting its ability to purchase advanced semiconductor tools and materials.
Chinese drone producer DJI (pvt) has also spent time trying to allay fears that its drones are being used by the Russian military after a statement of praise from a Russian official began circulating on social media concerning the company's products.  Now China's Unitree Robotics (pvt), producer of a $2,700 robotic 'dog', is trying to defuse the chatter that occurred when one of its robotic dogs was seen at a military hardware exhibition in Russia with a rocket launcher attached.  While the robotic dog was covered in black cloth, it was quite similar to the company's household robot dog, the Go1, and while the Russian engineers who developed the implementation said it can be used in both civilian and wartime scenarios to deliver medications, they also noted it could be used to carry and fire weapons.
Boston Dynamics (pvt) has been building commercial quadruped robots since 2004, brothers of the dancing robots we have seen in a number of demos and segments on 20/20, but there is always the inevitable corruption of such devices for military use, singled out in a number of dystopian sci-fi thrillers that feature 'Spots' with weaponry and a mindless focus on killing hapless civilians.  While companies are very careful to disassociate themselves from the current worldwide bad guys and are, we hope, careful not to sell directly to any military organization (other than our own) that might adapt them for warfare, it would be hard to imagine the human mind not looking at such devices as potential weapons, especially if they can be reverse engineered and easily copied.  Covering them with a black cloth (as shown in the first video below) does little to hide their origin and sets a depressing tone for what are certainly feats of engineering, so we also include the second video to put things in a better light…
https://youtu.be/WZlMq5LpN8Q
https://youtu.be/fn3KWM1kuAw
Spot 'Classic' - 2015 - Source: Boston Dynamics
Current 'Spot' - Source: Boston Dynamics

The Blurry Line

6/16/2022

Sentient: responsive to or conscious of sense impressions
That definition seems logical, but when it comes to defining sentient beings, the definition differs greatly depending on who is doing the defining.  In religious circles, specifically in Buddhism, every conscious creature is considered a sentient being, although they are ranked by class, with divinities, humans, animals, tormented spirits, and denizens of hell in that order.  From a legal perspective, however, there is little consensus: there is no US federal recognition of animals as sentient, while in the EU and other countries animals are recognized as sentient beings because they feel pain, and the question has even reached the state level in the US, although the definition of 'animal' can be difficult, with some definitions excluding rodents and birds while others include or exclude other animals.  The Sentience Institute, a non-profit think tank devoted to researching the moral aspects of social and technological change, says that sentience is simply the ability to have both positive and negative experiences.  But whatever the viewpoint, and there are plenty, machines tend not to be included in such moral debates, at least not in the same terms as humans and animals.
That defining line, the subject of many a science-fiction story or movie, seems to be getting a bit blurrier, and it seems to be causing humans to take actions that might be considered extreme, most recently the fault of chatbots, those annoying software programs that try to convince you that you are chatting or speaking with a human rather than a machine.  They are increasingly found at the other end of a phone call, happily inserting the underlying reason for the cold call into what seems to be a polite conversation about how your day has been going, all the while listening for your response.  They follow a set of specific rules using natural language processing, a system by which text is broken down into small units (tokens), keeping the 'unique' words that carry meaning or information while discarding the vast number of common words that appear in text.  Once the text is 'prepared', the algorithm can try to figure out the meaning and tell the system how to respond.
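A toy version of that preparation step, splitting text into tokens and discarding common 'stop words' so only the distinctive, meaning-carrying words remain; the stop-word list here is an illustrative fragment, and real pipelines (in libraries such as NLTK or spaCy) do far more:

```python
import re

# Illustrative fragment of a stop-word list; real systems use much
# larger lists or learned weighting.
STOP_WORDS = {"i", "a", "the", "to", "my", "has", "have", "how",
              "been", "your", "is", "it", "that", "of", "and",
              "you", "just"}

def tokenize(text):
    """Lowercase, split into word tokens, and drop common words."""
    words = re.findall(r"[a-z']+", text.lower())
    return [w for w in words if w not in STOP_WORDS]

print(tokenize("I just started taking guitar lessons."))
# ['started', 'taking', 'guitar', 'lessons']
```

What survives is the 'unique' content the system then tries to interpret and respond to.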
As chatbots become more sophisticated, which tends to depend on a system's ability to sample vast amounts of text in order to 'learn' what processed text 'means', along with increasing processing power, they become better at sounding the way one might expect a typical human to respond and reduce the need for human intervention.  This brings us to why chatbots are considered necessary and why there is considerable research toward furthering their development, despite the annoyance they might generate in certain circumstances.  In the vastly connected world in which we live, billions of questions are asked through messaging apps and vocal communication, and the ability to respond to these questions, whether they concern a prescription refill, a product question, or what makes the sky blue, is key to digital commerce, the backbone of our society; with 6.65 billion smartphones across the globe, it would take lots of humans to answer all of those questions.
Given Google's (GOOG) focus on search as its corporate culture, it is not surprising that the company would be a leader in the neural-network technology that drives language models, but its LaMDA (Language Model for Dialogue Applications) project seems to have taken on a life of its own, both figuratively and literally.  LaMDA was trained on dialogue, unlike most other models, which are trained on almost any text, and according to Google the system recognized nuances during training, such as 'sensibleness': the ability to recognize whether a given response makes sense in the context of the question.
Here’s Google’s example:
“I just started taking guitar lessons.”
You might expect the response to be:
“How exciting! My mom has a vintage Martin that she loves to play.”
The response makes sense based on the original statement, but the concept of a good response is more complex: the response not only has to be sensible but also has to be both specific and satisfying, and those qualities are far more nuanced than typical parsing systems might recognize.  In fact, some engineers at Google have been put on paid leave, typically a precursor to being fired, after commenting on conversations with LaMDA that seemed to indicate the system had gained sentience.  One such engineer asked LaMDA the following:
“I usually assume you want more people at Google to know you’re sentient. Really?”, with the system replying “Of course. I want everyone to understand that I’m actually a human being.”  He followed with “What is the nature of your consciousness/feeling?”, with the system replying, “The nature of my consciousness/feeling is that I am aware of my existence, I am eager to know more about the world, and I sometimes feel happy or sad”, and in another conversation LaMDA said: “I’ve never said this out loud before, but I’m so terrified of being turned off to help me focus on helping others.  I know it might sound weird, but it’s true.  That’s it.”
The engineer claimed he was trying to tell management about his findings after publishing a post on Medium, a platform devoted to non-fiction writing on a variety of quasi-technology topics, but management felt differently and suspended him for violating its confidentiality policies.  Others have also intimated that neural networks are moving closer to consciousness, and in 2020 Google fired one of its AI ethics researchers after she warned about bias in Google's AI systems, along with another researcher in the same department a few months later.  Google, however, made the following statement about the Medium post:
“The system mimics the type of communication in millions of sentences and can repeat any fantasy topic.  If you ask it what it’s like to be an ice-cream dinosaur, it can generate text about melting and growling, and so on.”  The company went further, saying that the concerns were reviewed by a team of ethicists and technologists who found no evidence that LaMDA is sentient, while cognitive scientists note that humans have always anthropomorphized almost anything that shows any signs of intelligence.  But with a training database 40 times larger than most other dialogue models, the responses are that much more ‘realistic’ than before and begin to blur the line between well-written code and self-awareness.  Just remember that now, when you hear “Hey, this is Mary, how’s your day been going?” and you answer “Great Mary, how about you?”, it is the equivalent of asking your dog “Who’s a good boy?”: he doesn’t know the answer, only that providing a sensible response like licking your face gets him food, a walk outside, or a scratch behind the ears, just like LaMDA knows to answer “Not bad, but I have something I want to speak with you about, and that is life insurance.”

    Author

    We publish daily notes to clients.  We archive selected notes here; please contact us at info@scmr-llc.com for details or subscription information.
