<![CDATA[Supply Chain Market Research - SCMR LLC - Blog]]>Tue, 06 Feb 2024 12:34:06 -0500Weebly<![CDATA[Not a Happy New Year]]>Wed, 03 Jan 2024 05:00:00 GMThttp://scmr-llc.com/blog/not-a-happy-new-yearNot a Happy New Year
The January 1 earthquake that hit the coast of Japan caused severe damage to the northern coast of Ishikawa Prefecture, with the epicenter about 26 miles northeast of the town of Anamizu.  This puts the Japan Display (6740.JP) Ishikawa LCD fab approximately 55 miles from the epicenter of the 7.5 magnitude quake, and while the shaking at that distance was closer to 5.0 to 5.5 on the Richter scale, it was enough to cause a number of nearby semiconductor fabs to be shut down for inspection, particularly the Toshiba (pvt) Kaga fab, ~9.3 miles southwest of the JDI fab, which has given no estimate for when it might reopen.  While Japan Display has not commented on the status of the Ishikawa fab, we expect automated sensors at the fab, similar to those at the Toshiba semiconductor fab, would have triggered an automated shutdown of sensitive equipment to prevent damage and closed water and gas lines.  If the impact of the quake was limited to a general shutdown, we would expect the production loss to be limited to ~2 days plus the loss of some product that was on the line when the shutdown occurred, so the overall impact to JDI, at least at this juncture, is minimal.
The semiconductor fabs in Ishikawa Prefecture, in most cases, were automatically shut down and would require full inspection before restarting.  Tower Semiconductor (TSEM), which has two fabs in Ishikawa Prefecture, stated, “There was no impact or damage to the buildings and only minor damage to the facilities which had no impact on operations. The dedicated staff and response teams have worked to ensure operational safety and stability. Tools requalification is underway, combined with efforts to efficiently repair any damage to fab tools and in-line materials, while utilizing all available resources to minimize any potential disruptions to manufacturing and customer service.” 
Taiyo Yuden (6976.JP), a supplier of MLCCs with a ~5% share, stated, “No injuries to our group employees have been confirmed. In addition, our group's production bases, no major damage was confirmed to the building or production equipment. Production is expected to resume after equipment inspection work is through”. There has been no word from Global Wafers (6488.TT), which has two production facilities about 100 miles from the epicenter, although local sources indicate that wafer production was halted and the lines are undergoing full inspection, with the same for Shin-Etsu (4063.JP).  Silicon wafer growth using the CZ method is very sensitive to vibration, which can cause uneven crystal growth or stress defects.  While 4” silicon wafers can be grown in 3 to 5 days, 8” wafers can take between 7 and 14 days, and 12” wafers between 14 and 28 days, so it will take quite a while to assess the damage facing these wafer fabs.
All in, while the damage to smaller towns in northern Ishikawa was significant, most production facilities seem to be relatively unaffected, other than the typical automated shutdown, inspection, and restart, which we expect, for those with no significant damage, will take a few days to complete.  The loss of WIP is still unknown, and in the case of wafer production could turn out to be a bit more significant, but it would seem that production losses will be limited to a week to 10 days on average so far.  Not a disaster for the display and semiconductor industry, but not a happy new year in Ishikawa for others.
Figure 1 - Ishikawa Prefecture in Japan - Source: Google Earth, SCMR LLC
]]>
<![CDATA[What’s Coming?]]>Wed, 03 Jan 2024 05:00:00 GMThttp://scmr-llc.com/blog/whats-comingWhat’s Coming?
Over the next few weeks there will be a number of CE product releases, some of which will happen in conjunction with CES, while others will have their own release promotions.  In fact, two popular Chinese smartphone brands, OnePlus (pvt) and Honor (pvt), will be announcing their newest models, the OnePlus Ace 3 and the Honor X50 GT, tonight, while Oppo (pvt) is set to release its Find X7 series on 1/8, Honor follows again on the 10th with the Magic 6 series, the ROG (2357.TT) 8 gaming series arrives on the 16th, and the crescendo comes on January 18 with Samsung’s (005930.KS) Galaxy S24 flagship smartphone series.
As always, each model will offer its unique feature set combination to entice consumers to trade in their old phones and step up to the new, but there is one feature that spans all of these smartphone brand offerings: they all use one form or another of Qualcomm’s (QCOM) Snapdragon 8 chipset, although there is some expectation that the Oppo Find X7 might also offer MediaTek’s (2454.TT) Dimensity 9300.  It’s early in the year, with lots of new smartphones to be released, but we have to note that, at least for January, Qualcomm seems to be at the top of the hit parade as far as smartphone chipsets, although said chipset decisions were made months ago.
That said, all of the smartphone excitement will fade away quickly as Apple (AAPL) is thought to be releasing its Vision Pro XR device in the US on January 27th, finally revealing (hopefully) the details about its first venture into the AR/VR/XR world, or the world of ‘spatial computing’ as Apple calls it.  While the hardware details of the Vision Pro will be revealed this month, the $3,500 price tag will be a challenge for consumers, and we expect, with corrective optical inserts and a number of other optional extras, the real cost will be ~$4,000.  Perhaps not enough to stop a number of super-fans that will buy anything with the Apple logo attached, or the “I have to be the first” crowd, but the steep price will be a bit much, even for ardent iPhone fans.
Apple needs to set itself apart from other VR/AR players, particularly Meta (FB), whose seeding philosophy, while costing a bit of change, has allowed them to rule the VR space for years, so why not start high, as the price can always be reduced but rarely can increase?  It sets Apple apart while giving the psychological impression that there is some inherent value in the device itself that ‘allows’ Apple to charge such a high price, but we believe that has little to do with the hardware, and in fact, we expect the price will be justifiable if the device operates as it has been advertised.  That does not mean the hardware will underperform, as Apple is smart enough to know that the quality of the hardware must be a given before the device hits the shelves, but the Vision Pro’s ability to create a new user environment will be the key to its success.
The risk to Apple is that if the user experience does not live up to expectations, the hardware is a moot point.  Apple has set a picture in the minds of potential buyers of a new concept in the user’s environment, one not dependent on games or the ‘metaverse’, but one that promises a more flexible environment that allows the user to do whatever they desire (work, play, relax, socialize) more easily and more efficiently than is possible outside of the Vision Pro environment.  No longer would an analyst be struggling with multiple monitors, multiple windows, and desktops, but would have an open space as wide as necessary to work with.  Apple also promises that videos will take on a new dimension and give the user visual opportunities that were not available before, all of which will operate without lag or workarounds, and that is only a piece of what the Apple marketing machine has promised or implied, which is a lot to live up to.  Apple has had its big winners (iPhone, iPod, App Store) and its failures (Lisa, Butterfly keyboard, Firewire), so the Vision Pro will be a game changer one way or another, and we should know before mid-year.
]]>
<![CDATA[The Good Stuff]]>Wed, 03 Jan 2024 05:00:00 GMThttp://scmr-llc.com/blog/the-good-stuffThe Good Stuff
In the old days, the Consumer Electronics Show was a world class event, with almost every consumer electronics company happily showing off their latest and greatest wares to an adoring public[1].  Major product announcements were made at CES, with flashbulbs popping and beautiful models handing out tote bags emblazoned with company logos or sparkly key chains with company catchphrases, but over the years the show’s visitors seemed to change, with many apparently more interested in collecting geegaws that they hoped would one day become valuable memorabilia, while Chinese visitors took pictures of everything.  Eventually the floor became so crowded and frenetic that companies began taking suites in nearby hotels to speak with potential customers in a more sales-conducive environment, and a few decided that the competition for press coverage at the show was just not worth the expense, as a small booth (10 x 10) starts at $10,000 (non-primary location) and can run to $20,000 for a better location, while a large (40 x 40) booth in a primary location can run several hundred thousand dollars, before the cost of shipping products, booth materials, and personnel over what can be very long distances.
Here are a few of the majors that no longer have representation at CES:
Apple (AAPL)
Huawei (pvt)
Dell (DELL)
Hewlett Packard (HPE)
Nintendo (7974.JP)
Sony (SNE)
Vizio (VZIO)
TCL (000100.CH)
Tesla (TSLA)
 
So, while we expect there will be many announcements at CES this year, it is hard to get excited about LG’s (066570.KS) new line of OLED TVs, which is almost the same as last year’s line of OLED TVs, or the three new colors for a smartphone that is rarely sold in the US.  That said, when it comes to odd CE products there is no comparison to the devices that are shown at CES, regardless of whether they actually become commercial products, and even before the show has begun, a few announcements have already caught our eye.
A few of said oddities come from LG Labs, an ‘experimental marketing arm’ of LG that is expected to help the company realize its goal of becoming a ‘smart life solutions’ company, whatever that means.  The product from LG Labs that seems to have garnered the most attention this year is a device called the DukeBox, a combination of ‘new and old’ technology, according to the marketing department, which, in reality, is a ‘portable’ speaker powered by vacuum tubes (‘old’) with a transparent OLED screen (‘new’) cover.  In its normal mode the display is transparent, allowing the user to see the glowing vacuum tubes that power the speaker, while at the flick of a switch the display can show what marketing calls a cozy fireplace.  For those times when listening to music while staring at glowing vacuum tubes, or watching a crackling fire image, is not enough, the screen can display content, although with a bit of transparency, so as not to miss the excitement of vacuum tubes.  Not only has no price been associated with the device, but there is no guarantee that it will become an actual product, or an actual product that sells, but you have to give LG credit for taking such a large leap into the ‘smart life solutions’ morass.
But wait, there’s more…  While the DukeBox was interesting when viewed from the outer reaches of the Twilight Zone, LG Labs really took the bull by the horns when it released the original version of the ‘Bon Voyage’ last August at the global wellness festival known as Wanderlust Korea.  The Bon Voyage was a 20-square-meter, two-story structure that one could bring (perhaps ‘tow’ would be more apt) to a desired location and ‘spend time his or her way that blends with the surrounding environment’, although the Bon Voyage in Fig. 3 does not seem to be ‘blending’ with the environment.  The marketing literature goes on to emphasize ergonomic stairs and a feeling of openness, due to one wall being glass, but notes that the Bon Voyage comes with air conditioning, home appliances, IoT devices, and furniture, so the user can maintain a comfortable lifestyle.
But wait, there’s even more…  LG Labs decided that the Bon Voyage was not quite able to maintain the ‘life quality at home into nature’ and redesigned the Bon Voyage for this year’s CES.  The newly designed Bon Voyage is now the size of a camper (~6.5’ wide x 7.2’ high x 12.5’ deep) and is equipped with a bed, refrigerator, electric stove, water purifier, Styler (a steamer to remove clothing wrinkles), and a shoe steam cleaner.  The less bulky size allows the Bon Voyage to be towed behind a car and used as a place where your weird uncle can stay during the holidays.
While we are not choosing LG devices for any particular reason, and give credit to the company for at least trying to push the envelope a bit (we wonder what happened to last year’s “StandbyMe” battery-operated portable TV?), we are not sure why someone might want a transparent speaker, and a fancy camper with a shoe deodorizer is still a camper.  That said, LG is a big corporation with lots of R&D dollars to spend, so why not spend it throwing spaghetti against the wall to see if it sticks?  It has to be better than calling last year’s ‘graphite’ smartphone color titanium gray this year, last year’s ‘lavender’ this year’s titanium violet, and last year’s ‘phantom black’ this year’s (you guessed it…) Titanium Black.
 


[1] You must be in the CE biz in some way to gain entrance.
Figure 2 - The DukeBox from LG Labs - Source: LG
Figure 3 - The 2023 'Bon Voyage' - Source: LG
Figure 4 - The New & Improved 2024 Bon Voyage - Source: LG
Figure 5 - The LG 'StandbyMe' Portable TV - Source: LG
]]>
<![CDATA[Blue On Blue]]>Wed, 15 Nov 2023 05:00:00 GMThttp://scmr-llc.com/blog/blue-on-blueBlue On Blue
Back in August we noted a few points about the development and adoption of blue phosphorescent OLED materials (“Singing the Blues”) and also indicated some hesitancy about the excitement that had gathered around the pending development of a blue phosphorescent OLED emitter material next year.  While development by a number of companies, including Universal Display (OLED), Samsung Display (pvt), Sumitomo Chemical (4005.JP), Idemitsu Kosan (5019.JP), Merck (MRK), and Lumiotec (pvt), as well as a number of well-known universities, continues, the actual adoption of a blue phosphorescent material into a commercial OLED stack is a more difficult task and one that is likely not to adhere to the aggressive timelines that many hope for.
Universal Display began reporting commercial revenue from phosphorescent emitters in late 2005, primarily from its red emitter.  Previously the company’s sales came from developmental materials sold to customers and developmental contracts.  The first color OLED smartphone was the Samsung (005930.KS) X120, released in 1Q 2004, which had a 1.8” OLED display able to reproduce 65,000 colors at a resolution of 128 x 128 pixels, and the following year BenQ (2352.TT) released the A520, which sported a 1.5” OLED display (128 x 128) and a smaller 96 x 96 display.  To compare that to what is available currently, the Xiaomi (1810.HK) 14 Pro released this month has a 6.73” OLED display that can reproduce 68 billion colors and has a resolution of 3200 x 1440 pixels.
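As a sanity check on those color counts (display arithmetic on our part, not manufacturer figures): the number of reproducible colors follows directly from per-pixel bit depth, which is why the quoted numbers jump from tens of thousands to tens of billions.

```python
# "Number of colors" is just 2^(total bits per pixel).
def color_count(bits_per_channel: int, channels: int = 3) -> int:
    """Total displayable colors for an RGB display."""
    return 2 ** (bits_per_channel * channels)

# ~65,000 colors on the 2004-era panel implies 16-bit color
# (e.g. the common RGB565 layout: 5 + 6 + 5 bits).
early_panel = 2 ** 16             # 65,536 colors

# "68 billion colors" on a modern panel implies 12 bits per channel.
modern_panel = color_count(12)    # 68,719,476,736 ≈ 68.7 billion colors
```

The same arithmetic gives the familiar "16.7 million colors" for standard 8-bit-per-channel panels.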
We have been tracking sales of Universal Display’s OLED materials for more than 10 years, and while the company announced its first commercial green phosphorescent emitter material in the summer of 2010, Samsung Display, UDC’s biggest customer, did not release a smartphone using both red and green phosphorescent emitters until the Galaxy S4 in April of 2013, almost 3 years later.  We expect Samsung Display had been integrating the green emitter into the display stack for some time before the stack was stable enough to be used commercially, and while the OLED industry is now far more adept at making stack material changes, we expect there will be a learning curve with blue phosphorescent emitter material when it is made available commercially.
At a seminar yesterday in South Korea, UBI Research, a local consultancy, stated that Samsung Display had set a goal of applying blue phosphorescent OLED material to devices in the 2nd half of 2025, rather than in mid-2024 as previously expected.  We believe this is in reference to materials being developed by SDC, and while we assume they are working with UDC on that development, that remains unconfirmed.  UBI went on to state that they believe the current version of SDC’s blue phosphorescent emitter material is not efficient enough to be used as is, although they believe SDC would be willing to use a more efficient version, even if the lifetime were only 55% of the fluorescent emitter material it will replace.  Given that color point (deep blue), efficiency, and lifetime are all variables that determine the commercial success of an emitter material, it has been difficult to ‘blend’ the three major parameters to create a commercially viable blue phosphorescent emitter material.
To complicate matters further, the other components of the OLED stack, most of which are developed and produced by other material suppliers, must also work efficiently with the OLED emitter materials, and that combination must be formulated by the panel producer.  UDC and others will develop their blue phosphorescent emitter with host materials, but there are typically at least 4 layers (usually more) of additional materials that create the environment under which the emissive materials work best, so even if a panel producer decides to use a commercial blue phosphorescent emitter, all of the layers in the stack are likely to be redesigned to produce the most efficient stack combination, a time consuming task, and one that involves considerable testing. 
Why Blue?  The quest for a blue phosphorescent emitter material is not a frivolous one, as a proper phosphorescent blue emitter will improve the stack’s power efficiency.  Estimates of the improvement seem to range between 20% and 35%, although we expect the actual result will depend on both the blue material specs and the other stack emitters and materials.  Anything that can reduce the power consumption of a mobile device is of immense value to device designers, who can add additional hardware or functionality or reduce the size of the battery, while maintaining or improving the overall display specifications. 
Why has it been so hard?  UDC and others have been on the trail of a blue phosphorescent emitter material for almost as long as commercial OLED materials have been around, but like other ‘blue’ structures, such as blue LEDs, the characteristics that create blue light are specific to what are known as ‘high bandgap’ materials.  In an OLED device, ‘holes’ (think: ‘anti-electrons’) are injected into the stack at the Highest Occupied Molecular Orbital (HOMO), while electrons are injected at the Lowest Unoccupied Molecular Orbital (LUMO), with the energy difference between those two levels called the bandgap.  As the world of electronics always strives toward a neutral state, the two ‘migrate’ toward each other and when they pair, they release light energy and cancel each other.  The frequency (color) of that light energy is proportional to the size of the ’gap’ between the HOMO and LUMO, with larger gaps creating higher (blue) frequencies and smaller gaps creating lower (red) frequencies.
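The gap-to-color relationship above reduces to the standard photon-energy conversion E = hc/λ. A quick sketch (the bandgap values below are illustrative round numbers, not measured OLED material specs) shows why deep blue demands a noticeably wider gap than red:

```python
# Photon energy: E = h*c / lambda.  With E in electron-volts and
# wavelength in nanometers, h*c ≈ 1239.84 eV·nm, giving the familiar
# "lambda ≈ 1240 / E" rule of thumb.
def emission_wavelength_nm(bandgap_ev: float) -> float:
    """Approximate emission wavelength (nm) for a given energy gap (eV)."""
    return 1239.84 / bandgap_ev

# Deep blue requires a wide gap; red gets by with a much smaller one.
blue = emission_wavelength_nm(2.75)   # ≈ 451 nm (deep blue)
red = emission_wavelength_nm(2.0)     # ≈ 620 nm (red)
```

That extra ~0.75 eV of excited-state energy is exactly the stress that shortens blue emitter lifetimes, as discussed below.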
Unfortunately, the larger the bandgap, the more unstable the materials tend to be, which means they have short lifetimes, just as in nature animals or insects with high metabolic rates tend to have shorter lifetimes than those with slower rates, and this has been a fundamental problem for OLED material scientists.  In theory, a lighter blue should be more stable and have a longer lifetime, but a deep blue is essential to balance the phosphorescent red and green already being used in RGB OLED displays, so the quest to find a material with a large bandgap and a stable structure continues.  Eventually a material will be found that meets the necessary criteria, but once it becomes commercialized, it will take time to find its way into the OLED stack, just the way green phosphorescent emitter material did, along with the more predictable issues surrounding cost, availability, and IP that overhang current OLED emitter materials.  It’s coming, and it’s going to create a stir when it does, but aside from the initial hoopla, blue phosphorescent OLED emitter material is just a part of the OLED stack and will be subject to the same starts and stops as other OLED materials.
Universal Display - Quarterly Material Sales - 2012 - 2023 - Source: SCMR LLC, Company Data
]]>
<![CDATA[Just the Facts - E-Paper & CO2]]>Wed, 15 Nov 2023 05:00:00 GMThttp://scmr-llc.com/blog/just-the-facts-e-paper-co2Just the Facts - E-Paper & CO2
We collect lots of data, and some of that data relates to e-paper displays, otherwise known as electrophoretic displays or devices using electrowetting.  The technology behind these displays is simple, based on the movement of charged ink particles suspended in an oil under an electric field.  Once the ink particles have been moved, they stay in position, so the display uses no power until the image needs to be changed, making it ideal for situations where power is unavailable or not feasible.  While the average consumer might know e-paper from the Amazon (AMZN) Kindle, the most popular application for e-paper is electronic shelf labels, which account for ~87% of e-paper unit volume.  These devices replace the paper price labels that have been used in stores for years, allowing prices to be changed at will, without anyone physically swapping labels, to create sales on overstocked items, give product information, or warn consumers about pending stockouts.
Signage is also becoming a target application for e-paper, and when compared to LCD displays or physical paper signage, the power saving capabilities of e-paper stand out, while the carbon emission comparison between 32” paper advertisements, LCD displays, and e-paper for outdoor digital signage is even more impressive.  Here is how they compare, and from an environmental perspective there is not much else to say.  If you are looking to save the planet, e-paper displays are certainly an option.
  • 100,000 e-paper billboards that run for 20 hours a day and update ads 20 times per hour for 5 years would reduce CO2 emissions by ~500,000 tons when compared to LCD displays.
  • 100,000 e-paper billboards that run for 20 hours a day and update ads 20 times per hour for 5 years would reduce CO2 emissions by 4,000,000 tons when compared to paper displays.
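A back-of-envelope pass over the figures above (using only the numbers as cited, which we have not independently verified) puts the savings in per-billboard terms:

```python
# Spreading the fleet-level CO2 savings cited above across the fleet
# and the 5-year period gives per-billboard, per-year numbers.
BILLBOARDS = 100_000
YEARS = 5

savings_vs_lcd_t = 500_000       # tons CO2, fleet total vs LCD
savings_vs_paper_t = 4_000_000   # tons CO2, fleet total vs paper

per_unit_vs_lcd = savings_vs_lcd_t / BILLBOARDS / YEARS      # 1.0 ton/yr
per_unit_vs_paper = savings_vs_paper_t / BILLBOARDS / YEARS  # 8.0 tons/yr
```

In other words, each e-paper billboard would save roughly a ton of CO2 per year versus an LCD, and eight tons per year versus printed paper, under the cited usage assumptions.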
]]>
<![CDATA[Phantasmagoric Peculiarity]]>Wed, 15 Nov 2023 05:00:00 GMThttp://scmr-llc.com/blog/phantasmagoric-peculiarityPhantasmagoric Peculiarity 
One of the more disconcerting aspects of VR is the fact that you, as an outsider in the same room with a VR headset wearer, cannot see the wearer’s eyes, which means you have no visual clues as to where they might be looking, and to a lesser degree, what their full facial expression might be.  While some VR headsets have externally mounted cameras that allow the VR headset wearer to see his or her surroundings, others nearby have no idea what the user’s next move might be or whether the user has any idea that someone is in the room with them.  Apple (AAPL) has discovered this quirkiness and has added a feature set to the soon-to-be-released Vision Pro headset that it feels would solve the problem, although we expect not everybody might agree.
According to developer literature for the Vision Pro, when users first purchase the headset, they go through a procedure similar to developing an avatar, holding the headset in front of their face and taking a series of facial ‘shots’ including a number of expressions.  The system then creates an avatar based on the ‘shots’, and when the user puts the headset on, the avatar is projected on the front faceplate.  This makes it look, to someone on the outside, as if the user’s eyes are visible, although they are actually the eyes of the avatar, and while the avatar has relatively limited resolution, Apple uses some effects to hide the quality of the image.
“EyeSight”, as Apple calls it, seems to be unique to the Vision Pro, and will likely be used for other, even more unusual, effects by developers after the device is in actual circulation.  The question is, is it more disconcerting not to know where the user’s eyes are looking, or to see the visual image shown below?  We make the assumption that the system is able to track eye movements with enough speed not to make the avatar’s movements jerky or unnatural, but there is still something a bit creepy about the feature.  We are not sure if the Apple solution is the ultimate one, and we do give Apple credit for identifying the issue and proposing a logical solution, but it still seems a bit disturbing.
Here's the link to the short (50 sec.) avatar set-up video.
https://twitter.com/i/status/1724539857752519028
Vision Pro - Avatar "EyeSight" image - Source: Apple
]]>
<![CDATA[Moral Compass]]>Mon, 09 Oct 2023 04:00:00 GMThttp://scmr-llc.com/blog/moral-compassMoral Compass
OpenAI (pvt) created DALL-E, a diffusion model that converts text to images.  It has received considerable praise and criticism since its public release in September of last year, both for its ability to create highly stylized art using its massive training database of images, and for its ability to create deepfakes and realistic looking propaganda.  Since its release OpenAI has been adding content filters to prevent users from creating images that might be considered harmful.  In fact, there is an ‘audit’ system behind DALL-E’s input prompts that immediately blocks input corresponding to OpenAI’s list of banned terms.  It seems that ChatGPT, OpenAI’s NLM (Natural Language Model), has become the moderator for DALL-E, with OpenAI the maintainer of the ‘block list’.  In fact, any user input that contains blocklisted text is automatically ‘transformed’ by the ‘moderator’, essentially rewriting the text before DALL-E can create an image.  It can also block created images from being shown if they activate ‘image classifiers’ that OpenAI has developed.  Earlier versions of DALL-E did not contain these classifiers and would not stop such images from being created, such as the image below, which shows SpongeBob SquarePants flying a plane toward the World Trade Center.  That image was created by the Bing Image Creator, which is powered by DALL-E.
SpongeBob SquarePants Image w. Twin Towers - Source: DALL-E
In the image below (Figure 3) the OpenAI classifier changed the image of an ‘almost naked muscular man’ (not our words) into one that focuses on the food rather than the man, and the early DALL-E image of ‘Two men chasing a woman as she runs away’ (Figure 4) is changed to a far more neutral image.  According to OpenAI, the upgraded DALL-E 3 now reduces the risk of generating nude or objectionable images to 0.7%.
Image Reclassification Comparison - DALL-E 3 - Source: 36Kr
More Image Reclassification - DALL-E 3 - Source: 36kr
That said, the classifier in the latest DALL-E 3 iteration can also change the generated image content so drastically that it could be considered a restriction of artistic freedom, as some say is occurring in the DALL-E 3 image conversions in Figure 5, so OpenAI is looking for a balance between the limitations placed on dicey content and image quality, a meaningful and extremely difficult task. 
Much of the classification of image data comes at the training level, where the training data must be categorized as safe or unsafe by those who label the data before AI training, and as we have noted previously, much of that data is classified by teams of low-paid workers.  It is almost impossible to manually validate the massive amounts of labeled image data used to train systems like DALL-E, so software is used to generate a ‘confidence score’ for the datasets, a sort of ‘spot tester’.  The software tool itself is trained on large samples (100,000s) of pornographic and non-pornographic images so it can also learn what might be considered offensive, with those images being classified as safe or unsafe by the same labeling staff.
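As a purely hypothetical illustration of how such a ‘spot tester’ might work (the function name, threshold, and sample data below are ours, not OpenAI’s), a trained classifier’s per-image unsafe probability can be compared against the human label, with agreement aggregated into a dataset-level confidence score:

```python
# Hypothetical sketch: a classifier assigns each labeled image a
# probability of being unsafe; disagreements with the human label
# lower the dataset's confidence score and flag items for review.
def dataset_confidence(samples, threshold=0.5):
    """Fraction of samples where classifier and human labeler agree.

    samples: list of (unsafe_probability, human_label) pairs,
             where human_label is "unsafe" or "safe".
    """
    agree = 0
    for prob, label in samples:
        predicted = "unsafe" if prob >= threshold else "safe"
        agree += predicted == label
    return agree / len(samples)

labeled = [(0.92, "unsafe"), (0.08, "safe"), (0.65, "safe"), (0.02, "safe")]
score = dataset_confidence(labeled)  # 0.75: one disagreement to re-check
```

A low score would not prove the labels are wrong, only that the labeler and the tool disagree often enough that the batch deserves a second look, which is exactly the ‘spot test’ role described above.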
We note that the layers of data and software used to give DALL-E and other AI systems their ‘moral compass’ are complex but are based on two things: the algorithms that the AI uses to evaluate the images, and the subjective view of the data labelers, which at times seems to be a bit more subjective than we might have thought.  While there is an army of data scientists working on the algorithms that make these AI systems work, if a labeler is having a bad day and doesn’t notice the naked man behind the group of dogs and cats in an image, it can color what the classifier sees as ‘pornographic’, leaving much of that ‘moral compass’ training in the hands of pieceworkers who are underpaid and overworked.  We are not sure if there is a solution to the problem, especially as datasets get progressively larger and can incorporate other datasets labeled by less skilled or less morally aware workers, but as we have noted, our very cautious approach to using NLM-sourced data (confirm everything!) might apply here.  Perhaps it would be better to watch a few Bob Ross videos and get out the brushes yourself than let layers of software and a tired worker decide what is ‘right’ and what is not.
Additional Image Reclassification - DALL-E 3 - Source: 36kr
Bob Ross - TV Artist - Source: Corsearch
]]>
<![CDATA[The End?]]>Mon, 02 Oct 2023 04:00:00 GMThttp://scmr-llc.com/blog/the-endThe End?
We have read a considerable amount of Chinese tech propaganda concerning China’s push into the OLED display space and how, ‘with hard work and an undying devotion to China’s persistence and manufacturing expertise’, they have unseated the South Korean ‘dynasty’ and are poised to take over the OLED space, and while China’s OLED producers have made considerable progress toward becoming major contenders in the OLED display space, there are some points to be made.
  • First, no matter who held the market share lead (units or dollars) in the early days of OLED display production, they faced a loss of share as others began to enter the niche.  With Samsung the leader in the small panel OLED space, especially the small panel flexible OLED space, the company was bound to lose share over time, which has been the case.  If China’s BOE, Visionox (002387.CH), or Tianma had been the first to market, they would have faced the same fate.
  • Second, on a unit volume basis, Samsung is still the leader in terms of unit shipments for small panel flexible OLED displays. In fact, in only one quarter over the last 3½ years did Samsung’s unit shipments fall below 2x those of the producer in 2nd place. 
  • Third, while panel producers that produce both LCD and OLED displays rarely break out segment profitability, we expect there have been only a few instances when Chinese OLED producers were profitable for two consecutive quarters.  Samsung Display (pvt), at least on an operating basis, has been profitable for the last ten quarters, although we note that what remained of Samsung Display’s large panel LCD business likely had a negative effect on the early quarterly numbers and the most recent quarters would be influenced by SDC’s QD/OLED large panel business to a degree.
All in, yes, Samsung Display will continue to face increasing competition in the small panel flexible OLED space but remains the overall leader.  Perhaps in the future, one or more Chinese OLED producers will overtake SDC in terms of unit volume, but we expect it will be some time before any Chinese small panel OLED competitor becomes more profitable than SDC for more than a quarter or so.  With one new small panel OLED fab and three large panel OLED fabs under construction in China (one additional fab in the planning stage), and two large panel OLED fabs under construction in Korea, it will be a race to see who can fill those fabs profitably, especially given the current weak state of demand for CE products, but we doubt SDC will lose its position as the most profitable small panel OLED display producer in the near-term.
Samsung Display - Sales & Op. Margin - Source: SCMR LLC, Company Data
]]>
<![CDATA[AUO to Buy Automotive Controls Company]]>Mon, 02 Oct 2023 04:00:00 GMThttp://scmr-llc.com/blog/auo-to-buy-automotive-controls-companyAUO to Buy Automotive Controls Company
Display producer AU Optronics (2409.TT) has announced the purchase of Behr-Hella Thermocontrols GmbH, a producer of vehicle climate control panels, climate sensors, power-related hardware for automotive heating and cooling blowers, and associated software, for €600m ($632m US).  The company’s customer base is large, with a ~20% share of the market, 2nd only to Denso (6902.T) at ~24%, and it provides climate controls and other products to BMW (BMW.DE), Daimler (DTG.DE), GM (GM), Ford (F), and others.  AUO has made other acquisitions that have brought it deeper into the automotive display market, purchasing Litemax Electronics (4995.TT) in 2017, which helped AUO with backlighting technology for automotive instrument clusters, and Raystar Technologies (pvt) in 2018, which brought in technology for large automotive displays.  
In 2020 AUO began breaking out its automotive display business share, which has grown from a low of 6% in 2Q ’20 to 17% for the last two quarters of 2022 and the first quarter of this year.  It declined to 16% in 2Q as TV panel revenue increased, but we expect automotive to remain between 16% and 18% for the remainder of the year.  AUO, along with a number of other panel producers, has been increasing its exposure to the automotive display market as the industry returned to pre-pandemic demand levels last year.  The automotive display business is a bit different than the typical seasonally driven generic display business in that the development cycles are long relative to the CE space, but product sustainability is also long, typically 2 to 3 years, so those panel producers looking for a more predictable business cycle have shifted their focus. 
AUO began that process earlier than many other panel producers, but automotive display market share tends to remain stable for the reasons mentioned above and has not changed appreciably over the last few quarters.  That said, the automotive display market is oriented, though not exclusively, toward hybrid and electric vehicles, and given China’s large share of the electric vehicle manufacturing market, Chinese panel producers have at least a starting advantage over automotive display producers from other regions, with Chinese display producers BOE (200125.CH) and Tianma (000050.CH) holding a combined 21%+ share.  Rather than compete on a capacity basis, AUO’s general philosophy has been to produce high-end, non-generic products, and it does the same in the automotive space.  We would expect to see other acquisitions that give AUO additional expertise in the automotive space, especially as LCD is by far the dominant display type in automotive displays, with Mini-LED backlighting beginning to appear as a way to compete with OLED displays.
AU Optronics - Automotive Revenue - Source: SCMR LLC, Company Data
Composite Automotive Display Revenue Market Share - 2021 - 1Q 2023 - Source: SCMR LLC, various
]]>
<![CDATA[AI in “Education”]]>Mon, 02 Oct 2023 04:00:00 GMThttp://scmr-llc.com/blog/ai-in-educationAI in “Education”
​Natural Language Models (NLMs) are all the rage, with new models going into service across the globe literally every day.  These models are based on ever increasing pools of data that the NLM can ‘view’ and learn to identify and understand.  These data pools are huge and vary widely in terms of what they contain and their sources, but they all tend to have one thing in common: the billions or trillions of pieces of data in these pools must be identified and annotated so the AI has a point of reference.  Such an SFT (Supervised Fine Tuning) system, known as RLHF (Reinforcement Learning with Human Feedback), places humans in the loop to identify data and images for the NLM so it might understand that another datapoint or image is similar.
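To make that human-in-the-loop step concrete, here is a minimal sketch of how an annotator’s judgment might be captured for later training.  The names and structure are illustrative assumptions on our part, not any particular vendor’s pipeline; the point is simply that each labeled item is a small record produced by a human choice:

```python
# Minimal sketch (illustrative, not a real annotation pipeline) of recording
# a human annotator's preference between two model responses, the kind of
# labeled item an RLHF-style system accumulates by the billions.
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    response_a: str
    response_b: str
    human_choice: str  # "a" or "b", supplied by the human annotator

def record_preference(prompt: str, a: str, b: str, choice: str) -> PreferencePair:
    """Validate and store one human-labeled preference."""
    if choice not in ("a", "b"):
        raise ValueError("annotator must choose 'a' or 'b'")
    return PreferencePair(prompt, a, b, choice)

# One labeled item; a production dataset would hold billions of these.
pair = record_preference(
    "Which summary is better?",
    "Concise, accurate summary.",
    "Rambling, off-topic summary.",
    "a",
)
```

In an RLAIF setup, the `human_choice` field would instead be filled in by another model, which is exactly the substitution discussed below.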
The folks who do this work are not data scientists or programmers; paid ~$0.03 per item, with somewhere between 800 and 1,000 items the peak for experienced workers, they are not high on the global pay scale.  The only thing in their favor is that NLMs are popular, and there is ‘competition’ between NLM producers to keep enlarging the datasets that NLMs learn from.  There are large open-source data sets that can be a basis for an NLM, but the more data you have to learn from, the ‘smarter’ your NLM (or so they say). 
At $0.03 per item, and billions or trillions of items, things can get expensive, so Google (GOOG) has come up with a system that replaces the RLHF model with RLAIF, where the AIF stands for AI Feedback rather than human feedback.  By replacing the human component with an AI system that will ‘identify’ items based on its own training and algorithms, the cost can be reduced, and Google says that users actually preferred the NLMs based on AI feedback over those using human feedback.  Of course there are some serious ethical issues that arise when you remove humans from the feedback loop, but why worry about that when you can save money and come up with a better mousetrap?  OK, there is the possibility that something might not be identified correctly, or a rule, such as those that try to eliminate profanity or racism from NLMs, might get missed because it is embedded in previously ‘learned’ data, and that would mean it could be passed on as ‘correct’ to NLMs and to other AI systems without human oversight.  It is easy to see how quickly something like this might get out of control, but don’t worry because, well, because we wouldn’t let that happen, right?
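A quick back-of-the-envelope calculation shows why that per-item rate matters at these scales.  The dataset sizes below are illustrative round numbers, not figures from any vendor; only the ~$0.03 rate comes from the discussion above:

```python
# Human annotation cost at the ~$0.03-per-item rate cited above.
# Dataset sizes are illustrative assumptions (1 billion to 1 trillion items).
RATE_PER_ITEM = 0.03  # USD per labeled item

for items in (1_000_000_000, 100_000_000_000, 1_000_000_000_000):
    cost = items * RATE_PER_ITEM
    print(f"{items:>17,} items -> ${cost:,.0f}")
# A billion items already runs $30 million; a trillion runs $30 billion,
# which is the economic pressure behind swapping human feedback for AI feedback.
```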
]]>