Supply Chain Market Research - SCMR LLC

Run For Your Life

1/30/2025


In April, in the Daxing District of Beijing, 12,000 runners will participate in a 13-mile half-marathon sponsored by the Beijing Economic-Technological Development Area, known as E-Town.  What will make this race different from the hundreds of marathons that take place all over the world is that, for the first time, the runners will be joined by 20 teams of humanoid robots, although a humanoid robot named Tiangong (translates to ‘God’) joined the last 100 meters of an earlier half-marathon in Beijing as a pacer, encouraging humans to finish the race.
In the upcoming race the humanoid robots will run the entire course for the first time.  They must be bipedal, able to walk or run upright, and resemble humans, and they are not allowed to have wheels.  They can be anywhere from 20 inches to 6.5 feet tall, but the distance between the hip joint and the sole of the foot cannot exceed 15.7”.  They can be remotely controlled or fully autonomous, and they are allowed to take a break to have batteries replaced if needed.  Other than that, there are no restrictions on the mechanics, with entries expected from robotics companies all over the world, and the robots will not have to get up at 4 AM to train, drink gallons of protein-powder shakes, or buy expensive running shoes.
E-Town believes that this will be the first time humanoid robots and humans compete in a full half-marathon, and it will award prizes to the top three finishers, although we are unsure what the prize will be for any robot winners.  The competition is another visible step for China’s robotics industry, much of which is located in Beijing.  The district has more than 140 robotics-ecosystem companies, whose output is valued at ~$1.4b (US), and it is focused on building AI into high-end humanoid robots and further building out the local robotics environment.  While there are many robotics development projects, humanoid robots seem to get particular attention, despite fears that they will someday collectively decide to replace humans entirely.  That said, the little cat robot developed by Yukai Engineering (pvt) in Japan, which blows air to cool your food, doesn’t look particularly formidable, although with 15m to 20m cats in Japan, a robotic feline takeover could signal the beginning of the end for humanity.
Figure 4 - Tiangong pacing the last 100 meters - Source: Dezeen.com
Figure 5 - Portable Catbot at work - Source: Yukai

Buyer Basics

1/30/2025


Yesterday we spent some time examining the effect of Chinese TV set consumer subsidies, and as part of that note mentioned the substantial growth seen in China for large and ultra-large TV sets.  While we appreciate the desire among Chinese consumers to fill their living rooms with the largest TV set possible, that desire is not always shared in the US and other countries.  CNET ran a survey last December, polling almost 1,200 respondents with two questions:
  • If money were not an issue, what is the largest TV you’d put in your house?
  • How much are you willing to spend on your next TV?
The responses to the first question were weighted and calculated against four age groups, Gen Z (13 – 28 yrs.), Millennials (29 – 44), Gen X (45 – 60), and Boomers (61 – 79), along with an ‘all’ category (red).  Surprisingly, the TV size with the highest response rate overall was 65”, smaller than we might have expected, followed by 75”, both of which were popular with older consumers (45 – 79), while Gen X’ers seemed to favor 85” sets considerably more than other age groups.  Millennials stood out in the 100”+ category, but more telling was the “No TV” category, where younger consumers were the most likely to say they would have no TV at all.
The second question, the more valuable of the two in our view, reflects the current spending ‘potential’ of US consumers.  40% of the consumers surveyed indicated that they were only willing to spend under $500 for a new TV, which, in most cases, rules out the ‘premium’ TV category that includes most OLED and Mini-LED/QD TVs, but does include a number of 75” LCD TVs.  While the share willing to spend between $500 and $1,000 drops to 33%, the combined share willing to spend up to $1,000 is 73%, leaving only 19% willing to spend above $1,000 and 8% not willing to spend anything on a new TV.  Millennials stood out as those most willing to spend over $1,000 (26%), although that was not significantly above the average.
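Tallying the spending-question shares quoted above as a quick check (figures exactly as reported; this adds nothing beyond the arithmetic):

```python
# Spending-question shares from the CNET survey, as quoted in the text.
shares = {"under $500": 40, "$500-$1,000": 33, "over $1,000": 19, "would not spend": 8}
print(sum(shares.values()))                          # 100
print(shares["under $500"] + shares["$500-$1,000"])  # 73 -> willing to spend up to $1,000
```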
Figure 1 - CNET Survey “If money were not an issue…” question – Source: SCMR LLC, CNET
Based on the data from the survey, we were curious to see if retailers matched their TV set offerings with consumer spending expectations.  Our quick survey of TV set models at Best Buy (BBY), Walmart (WMT), Amazon (AMZN), and Samsung’s (005930.KS) online site makes it quite easy to see which customer price point each retailer is trying to appeal to.  Amazon, the volume leader, is close to being as price agnostic as possible, offering roughly the same number of models in each price category, essentially an all-things-to-all-people retail approach.  Best Buy is focused on high-end consumers, with, by far, the largest number of model offerings in the over-$2,000 category, while Samsung’s offerings are a bit more balanced, although certainly more oriented to the middle and upper price tiers.  Walmart’s approach is most closely aligned with the survey results, with almost 70% of its offerings priced at $1,000 or less, similar to the 73% of survey respondents in that same category, so it seems Walmart not only knows its customer base but would also be the logical destination for the largest number of TV set buyers.
TV Set Model Offerings Share by Price Range - Source: SCMR LLC
Figure 3 - Sitting Around the TV (1950 - 1960) - Source: The Guardian

Buying Growth

1/29/2025


TV set unit volume has been on the decline in China, and, as we have noted, the Chinese government has been subsidizing TV set purchases since August of last year as part of a broader program to stimulate the replacement of older CE products.  The “Swap Old for New” program has a stated purpose of stimulating consumer spending, promoting energy conservation, and improving the quality of life of consumers by upgrading appliances to newer and more technically advanced models, although we believe stimulating consumer spending is the overriding goal.
There is little empirical data as to why TV set volumes have been on the decline in China for the last few years, but if the Chinese population is moving closer to the attitudes of Westernized countries, consumers have less time to sit in front of a TV as their lives become more complex.  However, an even bigger influence is the availability of other display types, particularly inexpensive smartphones.  US consumers spend 3 hrs. 33 mins. on their phones each day, just a bit below the global average of 3 hrs. 50 mins., and Chinese citizens are getting close at 3 hrs. 19 mins.[1]  With less overall time available and a more convenient alternative to TV, it is not surprising that TV set volumes have declined as China’s social development matures.


[1] https://explodingtopics.com/blog/smartphone-usage-stats
Figure 1 - China - TV Set Unit Volume - Source: SCMR LLC, RUNTO
TV set sales in China have, in fact, declined, but demand has also shifted to larger sets, which serve as an entertainment source rather than an information source, with the average TV set size increasing 3.3” in 2024 after a 3.0” increase the year before.  For the first time ever, the unit volume share of 75” TVs was greater than that of 65” TVs, which became the leader only two years ago, and 85” TVs, which made up 10.9% of unit volume, saw 56.7% y/y unit growth last year.
 
Figure 2 - China - TV Set Size Share - Source: SCMR LLC, RUNTO
The effectiveness of Chinese consumer subsidies, which are financed through both government budget allocations and the issuance of bonds, can be measured to a degree, as they are tied to specific product criteria.  The Chinese government attaches one of five energy-efficiency levels to consumer products, with level 1 being the most efficient and level 5 the least.  The subsidies apply only to levels 1 and 2 on a sliding scale, and Chinese TV brands found relatively inexpensive ways to move lower-efficiency-rated TV models up to levels 1 and 2 to qualify, without the expense and time of a full redesign.  By the time the TV set subsidies kicked in, many lower-efficiency models had already been upgraded, giving consumers a wide range of choices under the plan.
Since TV set energy efficiency was only a small factor in Chinese consumers’ minds before the subsidy became effective, comparing 1H energy-efficient TV set volumes (pre-subsidy) against 2H volumes (post-subsidy) gives clues as to how influential the subsidies were.  As shown below, in 1H energy-efficient TVs (levels 1 & 2) represented 27.1% of unit volume and 38.1% of sales; in 2H, after the subsidies had been put in place (August), energy-efficient TVs represented 72.3% of unit volume and 80.8% of sales.  To compensate for possible seasonality, we note that the y/y increase in units was 66.3% in 1H but 232.2% in 2H, and similarly, sales were up 44.1% y/y in 1H but up 184.5% y/y in 2H.
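Laying the pre- and post-subsidy figures quoted above side by side (a simple tally of the numbers in the text, nothing more):

```python
# Energy-efficient (level 1 & 2) TV shares and y/y growth, 1H vs. 2H 2024, from the text.
pre  = {"unit share": 27.1, "sales share": 38.1, "unit y/y": 66.3,  "sales y/y": 44.1}   # 1H (pre-subsidy)
post = {"unit share": 72.3, "sales share": 80.8, "unit y/y": 232.2, "sales y/y": 184.5}  # 2H (post-subsidy)

for k in pre:
    print(f"{k}: {pre[k]}% -> {post[k]}%  ({post[k] / pre[k]:.1f}x)")
```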
The Chinese swap subsidy plan remains in effect in 1Q, and although we expect its effect to diminish over time, Chinese prognosticators expect it to have enough of an impact to grow Chinese TV set shipments by 2.1% this year.  From a macro view there is certainly a case to be made for TV set shipment growth on the mainland, but our concern is that the 2024 TV set subsidies have ‘stolen’ sales and potential upgraders from 2025, resulting in weaker-than-expected full-year 2025 unit volume.  Sales could still increase as the share of larger TVs continues to grow, offset by competitive brand pricing, but growth in unit volume might be harder to find.  Of course, there is always the potential for larger subsidies, but buying growth through subsidies is an expensive way for the Chinese government to show the world that China is back on a growth track.

Two Bites this Week

1/29/2025


After the Chinese gut punch the AI industry received this week, there was another impactful event that pointed toward China’s relentless push toward becoming a world-class player in the semiconductor space, despite the US government’s steps to keep China from competing.  ChangXin Memory Technologies (pvt), China’s leading DRAM producer, has released its first 16Gb DDR5 chip (it already produces DDR3L, DDR4, LPDDR4X, and LPDDR5).  Earlier versions (G1 – G3) were produced on 23.8nm and 18nm nodes, or the equivalent of D2y and D1x generations.  As DDR5 has surpassed DDR4 in terms of volume, this gives China its own DDR5 product that will compete against Samsung Electronics (005930.KS), SK Hynix (000660.KS), and Micron (MU).
The three non-Chinese DDR5 producers began mass production of DDR5 in 2021, and current products have a feature size between 12nm and 14nm, which gives them a three-year lead over CXMT.  However, CXMT was able to skip the 17nm node to reduce development time, and its product has a higher bit density than both Samsung’s and SK Hynix’s products (Micron’s is slightly higher).  Higher bit density allows for greater memory capacity without increasing the chip size.  The CXMT chips were found in a teardown of a Chinese high-performance solid-state drive that is the first SSD able to store 20Gb/mm2 by stacking 16 CXMT DDR5 chips.
We do not make the mistake of taking this event to mean that China’s home-grown memory industry has caught up with current supplier technology, but it does indicate that China continues to shorten the lead its competitors hold, even considering the roadblocks that have been put in its way.  While the semiconductor space represents a higher level of complexity than the display space, comparisons can be made, and, together with this week’s affront to US dominance of the AI world, it lends a bit more credibility to China’s overall efforts in technology.  Sometimes when you poke the bear, it bites back.

Finally…

1/28/2025


​As we have noted a number of times, Royole (pvt), once the company that beat display industry giants to the development of a commercial flexible OLED display, was forced into bankruptcy by its unpaid employees, who are owed over $14m in salary and unfunded equity.  The company’s remaining assets, which include almost 100,000 m2 of land, 235,000 m2 of building space, and $62.5m in equipment, went to a court-sponsored auction in December, with an asking price of $172m.
Unfortunately, there were lots of lookers but no bids, and subsequent attempts led to the same result; however, the latest auction received a bid from a single petitioner for $70.5m, which seems to have been accepted.  The identity of the bidder has yet to be revealed, but there is considerable speculation that it is HKC, one of China’s four large panel LCD producers.  HKC has little or no OLED R&D, so there is the possibility that it is looking to enter the OLED space, although one would expect that more modern equipment would be desired.  They could sell off or scrap the more specialized OLED equipment and build out an OLED fab themselves, or keep what can be used for LCD and create another large or small panel fab in the Royole space.  In either case, it puts an end to what was once a highly speculative company that raised over $1.3b in debt and equity capital and was valued at $6b at its peak.  At least the employees can get paid.
 

Saved by the State

1/28/2025


Last week we noted that panel pricing for 2024 was up 0.2% y/y for the year, with relatively stable pricing for notebook and monitor panels and a bit more volatility for TV, tablet, and mobile panel pricing.  Figure 2 shows that monthly panel sales were both strongest and most ahead of 5-year sales averages in 2Q, with 6 of the 12 months above the averages and 5 below (August was flat against the average) for the year.  This tracks against 50.5% of yearly panel sales in 1H and 49.2% in 2H, the opposite of the norm of 48.7% in 1H and 51.3% in 2H (5-yr. avg.).
Figure 1 - Panel Pricing 2024 - Variance from 5-Year Norm by Type - Source: SCMR LLC
As to individual company sales performance in 2024, all large panel producers saw panel sales increase in 2024 except Innolux (3482.TT), which saw sales decline by 1.2%.  On a y/y basis Samsung Display (pvt) was the leader, expanding shipments and sales of its QD/OLED panels, and while SDC gained share in the large panel space in 2024, its share of the large panel market remains low (4.4%).  Chinastar (pvt) saw the biggest sales gain y/y at 31.5% and will see its sales and share expand again in 2025 when it closes on the purchase of LG Display’s (LPL) Guangzhou, China large panel fab.  While all four Chinese large panel producers saw sales increases, only BOE (200725.CH) and Chinastar saw share gains in the large panel space.  2025 will also see the end of Sharp’s (6753.JP) participation in the large panel space as it leases its Gen 10 LCD fab in Sakai, Japan to Softbank (9434.JP) for use as a data center, after years of disappointing sales and operational losses.
All in, while the first half of 2024 was stronger than 2H, the Chinese government was responsible for keeping large panel producers from ending the year on a more sour note.  Its “Swap Old for New” program, which added TV sets at the end of August, helped to stimulate enough Chinese consumer demand to put a halt to sliding TV panel prices, which had peaked in July and begun a rapid descent.  The program offered consumers a subsidy of between 15% and 20%, depending on the energy efficiency of the TV, and TV brands found ways of meeting the energy requirements without redesigning those sets that did not qualify.
The program continues into 2025; however, it is unclear whether TV set demand in China for 2025 has been pulled into 2024 because of the subsidy program.  If that is the case (we believe so), it has the possibility of causing large panel producers to overproduce heading into 1Q, under the belief that the ongoing subsidies will continue to stimulate set sales.  In 4Q, Chinese TV set brands Hisense (600060.CH) and TCL (000100.CH) increased sales targets and panel purchases as a result of the subsidy program.  With Chinese New Year coming at the end of this month, it will be March before we can get a read on actual TV set shipments in China.  This issue, along with the potential for additional tariffs on Chinese goods, will be the key driver for large panel producers in the early part of the year, making it considerably more difficult to plan production, although we expect Chinese TV set brands will continue to target higher TV set sales and panel purchases until they receive some indication that the subsidies are no longer stimulating incremental demand.  That would typically lead to a more conservative production stance in 2Q, but with the tariff wildcard and the possibility of a bit of overproduction in 1Q, it could go in almost any direction.
Figure 2 - Large Panel Sales Seasonality - 5-Year Average Monthly Sales v. 2024 Sales - Source: SCMR LLC

DeepSeek

1/27/2025


The definition of panic is “sudden uncontrollable fear or anxiety, often causing wildly unthinking behavior,” but that does little to shine light on what is causing the panic or the circumstances leading up to it.  Today’s ‘panic’ was caused by a Hangzhou, China AI research lab, less than 2 years old, that was spun off of a high-profile quant hedge fund.  Its most recent model, DeepSeek (pvt) V3, has been able to outperform many of the most popular models and is open source, giving ‘pay-for’ models a new competitor that can be used to develop AI applications without paying a monthly or yearly fee.  By itself, this should be added to the list of worries that AI model developers already consider, but there are a number of existing AI models that are open source, and they have not put OpenAI (pvt), Google (GOOG), Anthropic (pvt), or Meta (FB) out of business.  It is inevitable that as soon as new models are released, another one comes along that performs a bit better.  But that is not why panic has set in today.
We believe that valuation for AI companies is much simpler than one might think, as any valuation, no matter how high, is valid only as long as someone else is willing to find a reason to justify a higher valuation.  Models that help with valuation in the AI space tend to extrapolate sales and profitability based on parameters that don’t really exist yet or are so speculative as to mean little.  There are some parameters that are calculable, such as the cost of power or the cost of GPU hardware today, but trying to estimate revenue based on the number of paying users and the contracted price for AI compute time 5 or 10 years out is like trying to herd cats.  It’s not going to go the way you think it is.
One variable in such long-term valuation models is the cost of computing time and the time it takes to train the increasingly large models that are currently being developed.  In May of 2017 the AlphaGo Zero model, the leading model at the time, cost $600,000 to train.  That model, for reference, had ~20m parameters and two ‘heads’ (think of a tree with two main branches), one of which predicted the probability of playing each possible move, and the other of which estimated the likelihood of winning the game from a given position.  While this is a simple model compared to those available today, it was able to surpass the earlier AlphaGo versions that beat the world’s top human Go players, based on reinforcement learning (the ‘good dog’ training approach), without any human instruction in its training data.  The model initially made random moves and examined the result of each move, improving its ability each time, without any pre-training.
In 2022, GPT-4, a pre-trained transformer model with ~1.75 trillion[1] parameters, cost ~$40m to train, and a 2024 training-cost study estimated that the training cost for such models has been growing at 2.4x per year since 2016 (“If the trend of growing development costs continues, the largest training runs will cost more than a billion dollars by 2027, meaning that only the most well-funded organizations will be able to finance frontier AI models.”[2]).  There are two aspects to those costs.  The first is the hardware acquisition cost, of which ~44% goes to computing chips, primarily GPUs (graphics processing units), here used to process data rather than graphics, ~29% to server hardware, ~17% to interconnects, and ~10% to power systems.  The second is the cost amortized over the life of the hardware, which includes between 47% and 65% for R&D staff and runs between 0.5x and 1x of the acquisition cost.
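As a back-of-the-envelope illustration of what that growth rate implies, using only the two figures above (the study's own methodology is more involved):

```python
# Rough extrapolation: ~$40m to train GPT-4 (trained in 2022) and ~2.4x
# training-cost growth per year (Cottier et al.).
GPT4_COST_2022 = 40e6      # US$, estimated
GROWTH_PER_YEAR = 2.4

for year in range(2022, 2028):
    cost = GPT4_COST_2022 * GROWTH_PER_YEAR ** (year - 2022)
    print(f"{year}: ~${cost / 1e9:.2f}b")

# 2027: ~$3.19b -- consistent with the study's "more than a billion dollars by 2027"
```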
All in, as models get larger, training gets more expensive, and with many AI companies still experimenting with fee structures, model training costs are a critical part of the profitability equation and, based on the above, will keep climbing, making profitability more difficult to predict.  That doesn’t seem to have stopped AI funding or valuation increases, but that is where DeepSeek V3 creates a unique situation.
The DeepSeek model is still a transformer model, similar to most of the current large models, but it was developed with the idea of reducing the massive amount of training time required for a model of its size (671 billion parameters), without compromising results.  Here’s how it works:
  • Training data is tokenized.  For example, a simple sentence might be broken down into individual words, punctuation, spaces, etc., or letter groups, such as ‘sh’, ‘er’, or ‘ing’, depending on the algorithm.  But the finer the token, the more data processed, so tradeoffs are made between detail and cost.
  • The tokens are passed to a gating network, which decides which of the expert networks is best suited to process a particular token.  The gating network, acting as a route director, chooses the expert(s) that have done a good job with similar tokens previously.  While one might think of the ‘expert networks’ as doctors, lawyers, or engineers with specialized skills, each of the 257 experts in the DeepSeek model can change its specialty.  This is called dynamic specialization, and while the experts are not initially trained for specific tasks, the gating network notices that, for example, Expert 17 seems to be the best at handling tokens that represent ‘ing’, and assigns ‘ing’ tokens to that expert more often.
Here is where DeepSeek differs…
  • The data that the experts pass to the next level is extremely complex, multi-dimensional information about the token, how it fits into the sequence, and many other factors.  While the numbers vary considerably for each token, the data being passed between an expert network and its ‘Attention Heads’ can run as high as ~65,000 data points (a very rough estimate).
  • The expert networks each have 128 ‘Attention Heads’, each of which looks for a particular relationship within that mass of multi-dimensional data the expert networks pass to them.  Those relationships could be structural (grammatical), semantic, or other dependencies, but DeepSeek has found a way to compress the data being transferred from the experts to the attention heads, which reduces the computational demand on the Attention Heads.  With 257 expert networks, each with 128 Attention Heads, and the large amount of data contained in each transfer, compute time is the big cost driver for training.
  • DeepSeek has found a process (actually two processes) that compresses the multi-dimensional data each expert network passes to its Attention Heads.  Typically compression would hinder the Attention Heads’ ability to capture the subtle nuances contained in the data, but DeepSeek seems to have found compression techniques that do not affect the Attention Heads’ sensitivity to those subtleties.  A rough sketch of the idea follows below.
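To make the routing-and-compression idea concrete, here is a minimal, purely illustrative sketch.  The dimensions, the 8 experts, the top-2 routing, and the single shared down-projection are simplifications of our own; this is not DeepSeek’s architecture or code.

```python
# Illustrative sketch only: (1) a gating network routes each token to a few
# "expert" networks, and (2) the expert output is compressed into a smaller
# latent vector before it is handed to the attention stage.
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS, TOP_K = 8, 2          # DeepSeek uses 257 experts; 8 keeps the sketch readable
D_MODEL, D_LATENT = 64, 16       # compression: 64-dim expert output -> 16-dim latent

# Toy parameters: one weight matrix per expert, a gating matrix, and a shared
# down-projection used for the compression step.
experts = [rng.normal(size=(D_MODEL, D_MODEL)) / np.sqrt(D_MODEL) for _ in range(N_EXPERTS)]
W_gate = rng.normal(size=(D_MODEL, N_EXPERTS)) / np.sqrt(D_MODEL)
W_down = rng.normal(size=(D_MODEL, D_LATENT)) / np.sqrt(D_MODEL)

def route_and_compress(token_vec):
    # 1. Gating ("route director"): score every expert, keep only the top-k.
    scores = token_vec @ W_gate
    top = np.argsort(scores)[-TOP_K:]
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()   # softmax over chosen experts

    # 2. Expert computation: a weighted mix of the chosen experts' outputs.
    expert_out = sum(w * np.tanh(token_vec @ experts[i]) for w, i in zip(weights, top))

    # 3. Compression: project the expert output into a much smaller latent vector
    #    before it reaches the attention heads, cutting the data (and compute)
    #    they have to handle.
    return expert_out @ W_down

token = rng.normal(size=D_MODEL)          # one already-embedded token
print(route_and_compress(token).shape)    # (16,) -- a quarter the size of the 64-dim state
```

The point of the last step is simply that whatever the attention stage receives is a fraction of the size of what the experts produce, which is where the compute savings in training would come from.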


[1] estimated

[2] Cottier, Ben, et al. “The Rising Costs of Training Frontier AI Models.” arXiv, arxiv.org/. Accessed 31 May 2024.
 
Looking back at the training costs for large models mentioned above, one would think that a model the size of DeepSeek V3 (671 billion parameters and 14.8 trillion training tokens) would take a massive amount of GPU time and cost $20m to $30m to train, yet the cost to train DeepSeek was just a bit over $5.5m, based on 2.789 million hours of H800 time at $2.00 per hour, closer to the cost of much smaller models and outside of the expected range.  This means that someone has found a way to reduce the cost of training a large model, potentially making it easier for model developers to produce competitive models.  To make matters worse, in the case of DeepSeek, the model is open source, which allows anyone to use it for application development.  This undercuts the concept behind fee-based models, whose developers expect to charge more for each increasingly large model and justify those fees by the increasing cost of training.  Of course, the fact that such an advanced model is free makes the long-term fee-structure models that encourage high valuations less valid.
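The arithmetic behind that figure, and how far it sits below the range we would have expected, is simple to check:

```python
# Checking the training-cost figures cited above.
H800_HOURS = 2.789e6           # GPU-hours reported for DeepSeek V3
RATE_PER_HOUR = 2.00           # assumed H800 rental rate, US$/hour

cost = H800_HOURS * RATE_PER_HOUR
print(f"~${cost / 1e6:.2f}m")                      # ~$5.58m
print(f"{20e6 / cost:.1f}x - {30e6 / cost:.1f}x")  # 3.6x - 5.4x below the $20m-$30m expectation
```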
We note that the DeepSeek model benchmarks shown below are impressive, but some of that improvement might come from the fact that the DeepSeek V3 training data was more oriented toward mathematics and code.  Also, we always remind investors that it is easy to cherry-pick benchmarks that present the best aspects of a model.  That said, not every developer requires the most sophisticated general model for their project, so even if DeepSeek did cherry-pick benchmarks (we are not saying they did), a free model of this size and quality is a gift to developers, and the lower training costs are a gift to those that have to pay for processing time or hardware.  It’s not the end of the AI era, but it might affect valuations and long-term expectations if DeepSeek’s compression methodology proves to be as successful in the wild as the benchmarks make it seem.  The fact that this step forward in AI came from a Chinese company will likely cause ulcers and migraines across the US political spectrum and could lead to even more stringent clampdowns on the importation of GPUs and HBM to China, despite the fact that those restrictions don’t seem to be having much of an effect.
Figure 1 - DeepSeek V3 Benchmarks - Source: DeepSeek

Panel Pricing Paralysis

1/24/2025


December saw relatively little change in panel prices.  TV panel pricing (↑0.2%) and mobile panel pricing (↓0.6%) were the only categories that saw any movement, and we expect little change in January as much of the CE space is waiting to see whether President Trump makes substantial changes to US import tariff levels.  Additionally, as Chinese New Year comes relatively early this year (Jan. 29), production demand for holiday inventory is mostly complete.
Looking at our final 2024 pricing statistics, while there were some large monthly price swings[1], primarily in TV panel prices (see Figure 1), in total panel prices were relatively stable, particularly toward the end of the year.  Panel producers, for the most part, acted rationally, although less so in the TV panel segment, where the desire for higher prices relatively early in the year, without strong consumer demand, caused an inventory issue that required panel producers to cut utilization rates in 3Q.  Given that Chinese panel producers control a majority of the large panel LCD market, the onus falls on them for the large positive swing in March and the negative one in August, and given that the upward push in large panel prices was not driven by demand, it was inevitable that it would prove unsustainable.  By the end of the year TV panel prices were up only 2.0% y/y.
It is difficult to predict how panel prices will move in the 1st and possibly the 2nd quarter of this year, as the potential for tariff changes that would affect US demand for CE products is, at this point, a bigger factor than the outlook for general CE product demand in the US.  With only two large panel LCD producers outside of China, onerous tariffs on TV sets will determine whether Chinese panel producers are profitable this year, and we expect they will have little choice but to raise set prices to offset additional tariffs.  Consumers will see those landed price increases as inflationary, making it difficult to justify replacement-cycle purchases, but the scale of the potential tariff changes is really the key, as margins on LCD panels, particularly from Chinese producers, are quite thin.  It’s just a waiting game until the big man makes up his mind.


[1] Figure 1 zeros all panel prices at the end of 2023 and shows the relative m/m movements in each type of panel
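For reference, one way to implement the rebasing the footnote describes (the prices below are hypothetical, for illustration only):

```python
# Zero each panel-price series at its December 2023 value and show cumulative
# percentage movement, as in Figure 1.
def rebase(prices):
    base = prices[0]                            # December 2023 value
    return [100 * (p / base - 1) for p in prices]

tv_panel = [100.0, 101.5, 104.0, 103.2]         # hypothetical monthly prices
print([round(x, 1) for x in rebase(tv_panel)])  # [0.0, 1.5, 4.0, 3.2]
```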
Figure 1 - 2024 Panel Price Relative ROC by Type - Source: SCMR LLC, OMDIA, Trendforce, RUNTO
Figure 2 - 2024 Aggregate Panel Pricing by Type - Source: SCMR LLC, OMDIA, Trendforce, RUNTO

Meta Mistake

1/24/2025



Meeting Moohan?

1/24/2025

Yesterday we spent some time on how Samsung (005930.KS) has integrated AI into its just-announced flagship Galaxy S series smartphone line.  Aside from a lean toward on-device AI processing, similar to Apple (AAPL) Intelligence, there seems to be a concerted effort to move typical physical user tasks to the AI, particularly through voice control.  By this we mean asking the AI to perform a task (“Summarize the transcript of the meeting I had with the marketing staff yesterday, write a cover letter, let me check it, and then send the cover letter and meeting summary to my Tier 3 e-mail list.  Send me a confirmation when it’s done”).  In terms of workflow, without the AI and the tight integration with Samsung applications, this would have entailed:
  • Opening the app that recorded the meeting.
  • Opening an app that could create a transcript of the meeting and creating and saving the transcript.
  • Opening a word processor and loading the transcript.
  • Editing the transcript into a summary and saving.
  • Opening an e-mail app.
  • Creating a cover letter.
  • Defining the send list on the cover letter.
  • Attaching the summary to the cover letter and sending
  • Closing all apps.
In theory, if all of the applications were either Samsung applications or 3rd party applications that were able to access the AI API, the new workflow would be:
  • Tell the AI what to do.
  • Check the summary and cover letter and edit it if necessary.
  • Do other stuff…
The AI is able to break the task down into its component parts and create ‘agents’ to perform each part.  Because the AI is so closely integrated into the Samsung applications, the agents are able to open the necessary apps and perform their functions, with the AI directing the process.  The user does not have to open or close any apps or do any work other than review, unless the summary or cover letter needs editing.  Note that we start the description of the ‘new’ workflow with ‘in theory’, as it is difficult to determine the actual level of AI/application integration until the phones are available for testing under Samsung’s One UI 7 user interface.  We expect the level of integration might not be quite at this level yet, but simplifying workflow, even a little, is what makes consumers think about upgrading.
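For illustration, here is a hypothetical sketch of that decomposition.  The planner is hard-coded and the app names are placeholders of our own; this is not Samsung’s actual agent framework or API.

```python
# Hypothetical sketch: an AI breaks a spoken request into sub-tasks and
# dispatches each to an app it can control.  All names are placeholders.
from dataclasses import dataclass

@dataclass
class SubTask:
    app: str        # which application the agent must drive
    action: str     # what it should do there

def plan(request):
    # A real planner would be the on-device model; here the decomposition is
    # hard-coded to mirror the meeting-summary example above.
    return [
        SubTask("Recorder", "load yesterday's marketing meeting recording"),
        SubTask("Transcriber", "create and save a transcript"),
        SubTask("Notes", "summarize the transcript"),
        SubTask("Mail", "draft a cover letter and attach the summary"),
        SubTask("Mail", "hold for user review, then send to the Tier 3 list"),
    ]

for step in plan("Summarize yesterday's meeting and send it to my Tier 3 list"):
    print(f"[{step.app}] {step.action}")
```

In a real implementation, the planning step would itself be handled by the AI, and each sub-task would be dispatched to an application through whatever integration interface Samsung exposes.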
The reason we continue to focus on Samsung’s level of AI integration is that Samsung’s upcoming XR headset (aka ‘Project Moohan’ – 프로젝트 무한 – which translates to ‘Project Infinity’) seems to follow a similar path toward deep AI/application integration, particularly through its use of Android XR, Google’s (GOOG) new XR OS, which will be used for the first time in the upcoming Samsung Moohan release.  While the details of Android XR are still sparse, the overall objective is to create a platform based on open standards for XR devices and smart glasses (AR) that uses existing Android frameworks, development tools, and existing Android elements (buttons, menus, fields, etc.), making it able to access all existing Android applications in a spatial environment.
In a practical sense, Android XR would allow existing 2D Android applications to be opened as 2D windows in a 3D environment, and, with modification, they can become 3D (Google is redesigning YouTube, Google TV, Google Photos, Google Maps, and other major 2D applications for a 3D setting).  Android XR will also support a variety of input modes, including voice and gestures through head, eye, and hand tracking, and will support some form of spatial audio.
One feature of the Samsung XR headset that we believe will be well received is the visual integration with the AI.  Siri can hear what you say and can respond, but while it can ‘see’ what the user sees, it doesn’t have the capability to analyze that information on the fly and use it.  Meta’s headsets can hear what the user hears and perform a level of analysis for context, but that function is primarily for parsing voice commands.  Typically the Meta system does not access the camera information unless requested by the user and then takes a snapshot.  It is able to perform limited scene analysis (object recognition, lighting, depth, etc.) to allow for virtual object placement, but works specifically on the snapshot and only ‘sees’ what is in the real world, excluding virtual world objects. 
If the recent demo versions of the Samsung XR headset are carried through to the final product, the headset will hear and see both real-world and virtual objects and analyze that information on a continuous basis.  This allows the user to say, “What kind of dog is that?” to the AI at any time and have the AI respond based on a continuous analysis of the user’s visual focus.  The user can also ‘Circle to Search’ an object within view with a gesture, as the AI recognizes virtual objects (the circle) as well as real-world data.  According to Google, the embedded AI in the Samsung headset also has a rolling ~10-minute memory that enables it to remember key details of the user’s visuals, which means you can also ask, “What kind of dog was that in the window of the store we passed 5 minutes ago?” without having to go back to the store’s location.
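As a purely illustrative sketch of how such a rolling memory might work, assuming the headset keeps timestamped scene descriptions rather than raw video (this is not Google’s or Samsung’s implementation):

```python
# Rolling ~10-minute visual memory, kept as timestamped scene descriptions.
import time
from collections import deque

WINDOW_SECONDS = 10 * 60          # keep roughly the last 10 minutes

memory = deque()                  # (timestamp, scene_description) pairs

def remember(description, now=None):
    now = time.time() if now is None else now
    memory.append((now, description))
    # Drop anything older than the window.
    while memory and now - memory[0][0] > WINDOW_SECONDS:
        memory.popleft()

def recall(keyword):
    # e.g. "What kind of dog was that in the store window 5 minutes ago?"
    return [d for _, d in memory if keyword.lower() in d.lower()]

remember("store window with a small brown dog", now=0)
remember("crosswalk, bus approaching", now=300)
print(recall("dog"))              # ['store window with a small brown dog']
```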
We know there will be limitations to all of the features we expect on both the Samsung Galaxy S series smartphones and the Samsung XR headset, but, as we noted yesterday, Samsung seems to understand that both the AI functionality and the user’s AI experience depend on how tightly the AI is integrated into the device OS and the applications themselves.  That understanding has led them to work closely with Google, which allows users to use familiar Android apps along with those specifically designed or remodeled for the spatial environment.  Hopefully they will price it right at the onset, learning from the poor Vision Pro results, but we will have to wait a few more weeks to find out.