Supply Chain Market Research - SCMR LLC

Welcome to the App Store

5/7/2025

A new bill has been introduced in the House of Representatives that would require owners of operating systems with over 100 million US users (such as iOS or Android) to allow users to choose their own default applications or app stores, rather than having the choice made for them via pre-loaded applications.  It also requires that OS owners allow users to hide or delete pre-installed applications or app stores, and that developers be given access to the OS interfaces, hardware, and software at the same level as the company or its 3rd-party partners.  Under the bill, OS companies cannot require that developers use the company’s in-app payment system as a condition of access to the OS or app store, cannot require pricing terms more favorable than those on other comparable sites, and are restricted from using non-public information collected from an application on their platform to compete with that application.
Currently a number of states have passed or are discussing similar legislation, but most are focused on age verification (Utah has put it into law).  Florida has proposed a law that would force Apple to open access to outside app stores and payment systems, and a number of other states have proposed legislation that allows sideloading and prohibits mandating OS-owned applications.  This bill, however, would set rules at the federal level and would prohibit states from enacting laws counter to its provisions; it is the 2nd of its kind to be introduced in Congress.  Similar bills have been subject to aggressive lobbying, partisan politics, and challenges from companies and 1st Amendment supporters, and we expect this one will face the same, but as politicians look for causes that are popular with consumers, we expect Apple and others will ultimately lose some of their proprietary ‘rights’ and face a more level competitive playing field.  It’s hard to tell if the time is right for such a bill to pass, but the bills that came before and failed have at least built some awareness, so if consumers get more involved in the conflict, a law like this will eventually take effect.  Whether it would be better as a federal law or a state law depends on who you ask.  Full text of the bill here

Light Me Up

4/7/2025


The US government has been working toward providing high-speed internet (100 Mbps) to all Americans, but while the FCC shows that only 1.5% of those living in urban areas lack such access, 22.5% of those in rural areas are without it, and much of that population has 25 Mbps or less.  The BEAD program, a $42.45b government-funded program that provides subsidy capital to each state to expand high-speed internet access to rural communities, is a major driver behind the efforts to broaden high-speed internet coverage.  However, while the program was signed into law on 11/15/21 and funded by Congress on 6/26/23, progress has been slow: preparation for construction is underway, but few providers are willing to step up before capital is allocated.
Here's the current BEAD status:
  • 56 of 56 state and territory applications have been approved by the NTIA
  • 47 of 56 entities have completed the state challenge process
  • 32 of 56 entities have begun selecting service providers
  • 4 of 56 entities have completed service provider selection
  • 3 of 56 have released a final proposal for public comment
In most cases the construction will primarily be the laying or stringing of fiber, a complex and costly process that entails gaining right-of-way access, obtaining local permits, and trenching the fiber itself, which can run between $15,000 and $20,000/km depending on the location and can take weeks to install.  In some cases it is just not feasible to bury fiber optic cable, so it must be strung, leading to higher annual maintenance costs.  But there are alternatives.
Elon Musk’s Starlink (pvt) system, a group of LEO satellites that provide coverage in certain areas, presents one alternative.  However, the service is relatively expensive at $300 - $400 for the basic hardware and between ~$80 and $120/month for residential customers.  While Starlink has brought down the cost of placing satellites in orbit, the current network would likely have to grow to a much larger satellite count to provide full global service, and at a cost of ~$1 million each (that’s the reduced cost) to launch new Starlink satellites, with a ~5-year life for existing ones, the cost of such a system would seem to make it unsuitable for rural areas where much of the population is low-income.  Unless funded by the government, Starlink would seem to be out of reach for many, and funding for such programs is currently quite difficult, as Congress recently let the Affordable Connectivity Program, which supplemented the monthly cost of internet access for low-income families, expire.
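To put those satellite economics in perspective, here is a back-of-the-envelope sketch using only the rough figures cited above (launch cost, lifetime, monthly fee); the constellation size is our own placeholder assumption, not a company number, and the math ignores ground stations, hardware subsidies, bandwidth, and operations entirely:

```python
# Back-of-the-envelope constellation economics. All figures are the rough
# assumptions discussed above, not company data.
SATS = 10_000              # assumed satellite count for global coverage
LAUNCH_COST = 1_000_000    # ~$1M per satellite (the reduced launch cost)
LIFETIME_YEARS = 5         # ~5-year satellite life

# Steady-state replacement cost per year:
annual_capex = SATS * LAUNCH_COST / LIFETIME_YEARS

monthly_fee = 100          # midpoint of the ~$80-120/month residential range

# Subscribers needed just to cover satellite replacement:
subs_needed = annual_capex / (monthly_fee * 12)
print(f"Annual replacement capex: ${annual_capex / 1e9:.1f}B")
print(f"Subscribers to cover it:  {subs_needed:,.0f}")
```

Even this stripped-down math shows why a satellite constellation is a hard fit for a low-income rural customer base without subsidy.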
There is another alternative when the cost of fiber installation is prohibitive or installation is virtually impossible, and the customer economics do not make sense for satellite.  The technology is FSOC, aka Free Space Optical Communication, and it is used commercially as a last-mile alternative in fiber systems, for disaster recovery when other means of communication are unavailable, and in military tactical situations, as it provides a high-bandwidth point-to-point link that is inexpensive and easy to deploy.  The technology is relatively simple to understand as it consists of:
  • Optical Transceiver – The laser source that generates and carries the signal, paired with a photodetector that receives the incoming beam.
  • Modulator – Takes the incoming signal and encodes it into the laser beam using intensity or other optical properties.
  • Optics – Lenses and mirrors that shape and direct the laser beam.
  • PAT System (Pointing, Acquisition, & Tracking) – This system has movable platforms that can steer the laser beam.  Sensors and cameras detect the position of the incoming signal and control hardware and software that calculates position and adjusts the beam.
  • Optical Amplifier – Increases the signal strength in longer range FSOC systems and compensates for atmospheric changes.
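The PAT system described above is, at its core, a feedback loop: a sensor reports how far the incoming beam has drifted from center, and the steering platform nudges it back.  A toy sketch of that idea (the gain value and units here are arbitrary illustrations, not real system parameters):

```python
# Toy sketch of a PAT (Pointing, Acquisition & Tracking) loop: a sensor
# reports the beam's offset from center, and a proportional controller
# nudges the steering platform to re-center it. Real systems use far more
# sophisticated estimation; this just shows the feedback idea.
def pat_step(offset_x, offset_y, gain=0.5):
    """Return steering corrections that move the beam toward center."""
    return -gain * offset_x, -gain * offset_y

# Simulate a beam drifting off-center and the loop pulling it back:
x, y = 2.0, -1.5          # initial misalignment (arbitrary units)
for _ in range(10):
    dx, dy = pat_step(x, y)
    x, y = x + dx, y + dy
print(f"residual misalignment: ({x:.4f}, {y:.4f})")
```

Each pass halves the remaining error, so the misalignment shrinks geometrically; this is the same self-correcting behavior that, in hardware, keeps the line-of-sight link solid.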
Depending on the installation, commercial FSOC systems have a maximum range of 20 km (~12 miles) on a direct line-of-sight basis, offer lower latency than RF (satellite), and have a very low installation cost (~$1,000/km).  Here’s how it stacks up against fiber and satellite:
[Table: FSOC vs. fiber vs. satellite comparison]
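Using the per-kilometer installation figures cited above (trenched fiber at $15,000-$20,000/km, FSOC at ~$1,000/km), the gap for a rural link is easy to quantify; this sketch covers installation only and excludes maintenance and terminal hardware:

```python
# Rough installation-cost comparison for a 20 km rural link, using the
# per-km figures cited in the text (installation only).
FIBER_PER_KM = (15_000, 20_000)   # trenched fiber, $/km (low, high)
FSOC_PER_KM = 1_000               # FSOC, $/km
distance_km = 20

fiber_cost = tuple(c * distance_km for c in FIBER_PER_KM)
fsoc_cost = FSOC_PER_KM * distance_km
lo_ratio = fiber_cost[0] // fsoc_cost
hi_ratio = fiber_cost[1] // fsoc_cost
print(f"Fiber: ${fiber_cost[0]:,} - ${fiber_cost[1]:,}")
print(f"FSOC:  ${fsoc_cost:,}  ({lo_ratio}-{hi_ratio}x cheaper)")
```

A 15x-20x installed-cost gap is the core of the FSOC pitch for spans where fiber trenching is difficult.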
As with all communication that travels through the atmosphere, different conditions can cause issues with key variables in optical systems.  FSOC systems meant for short distances (campus, etc.) typically operate at 850nm, while longer-distance systems, where the effects of weather are more pronounced, operate at 1310nm and 1550nm, which are less affected by rain or snow and are compatible with fiber operating wavelengths.  When the weather is poor (droplets reflect and absorb laser light), FSOC systems can adapt in a number of ways, primarily by increasing power, similar to RF systems (satellite).  However, RF systems are power-limited so as not to overpower adjacent bands, while FSOC is limited only by laser (eye) safety requirements.
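The reason weather dominates FSOC planning is that atmospheric loss compounds with distance, so it is usually quoted in dB/km and subtracted from the link's power margin.  A minimal sketch of that budgeting, where the attenuation figures and the 30 dB margin are illustrative assumptions rather than vendor specifications:

```python
# Sketch of an FSOC weather link budget: atmospheric loss scales with
# path length (dB/km), and the link stays up only while the remaining
# margin is positive. Attenuation values below are illustrative.
def received_margin_db(link_margin_db, atten_db_per_km, distance_km):
    """Margin left after atmospheric loss over the path."""
    return link_margin_db - atten_db_per_km * distance_km

conditions = {"clear": 0.2, "light rain": 3.0, "moderate fog": 20.0}
for name, atten in conditions.items():
    m = received_margin_db(30.0, atten, distance_km=5)  # assume 30 dB margin
    status = "link up" if m > 0 else "link DOWN"
    print(f"{name:12s}: {m:6.1f} dB margin -> {status}")
```

The fog row is the instructive one: at tens of dB/km, no realistic power increase rescues a multi-kilometer path, which is why the mitigation techniques below matter.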
Other techniques can contribute to offsetting atmospheric issues: spatial diversity (similar to MIMO), which essentially creates multiple optical paths (multiple lasers) or multiple frequency paths for the data; repeaters, which shorten the signal path; specialized modulation techniques that vary the data rate depending on weather; and predictive atmospheric modeling for proactive adjustments.  We note also that maintaining precise alignment with the sensors at the receiver end is a key function, as small variations can interrupt transmission.  However, FSOC systems have progressed substantially, allowing them to self-correct and maintain a solid line-of-sight connection in almost all circumstances, and new chip-level systems reduce the lens and mirror count considerably, making the systems even more stable.
Interestingly, what brought FSOC technology back to our attention was the fact that Alphabet (GOOGL) recently spun out Taara (pvt), an FSOC company, from its “Moonshot Incubator” (where the ‘Google Brain’ lab and Waymo (pvt) came from), retaining a minority stake along with initial VC funding, with the idea that the company can now look at other financing options.  We originally heard about Taara years ago (2018) when Google (GOOG) mentioned a project (Loon) that was attempting to create a system of high-altitude (stratosphere) balloons that could deliver internet service to remote locations across the globe, an idea we thought foolish.  While the laser technology (free-space optics) has been used to communicate with the International Space Station, the idea seemed a bit on the ridiculous side, and eventually the Loon project was cancelled due to regulatory issues about flying large balloons in public airspace.
That said, the laser technology continued to be developed and refined, and the company has announced (2026 release) a silicon photonics chip that will replace the mirrors, sensors, and hardware used to ‘steer’ the laser light with software, reducing the overall size of the Lightbridge (the company’s current product), which is roughly the size of a traffic light, to a single chip containing the emitters and all of the directional correcting mechanisms.  The new chip is the size of a fingernail and is expected to be incorporated into a new (much smaller) device.  This will allow the company to lower the hardware and installation price and create modules that can be deployed indoors, such as on a factory floor (ceiling).
Figure 1 – Current Lightbridge FSOC Unit - Source: Taara
Figure 2 - 2026 FSOC Chip - Source: Taara
Taara has deployed its systems with a number of partners in some unusual instances.  One such was in Pipeline Estate, a low-income, densely populated community in Nairobi, Kenya.  Taara’s partner in Nairobi expected to see a large number of sign-ups when the service was connected, but after a few early new-customer sign-ups, things tailed off quickly.  On closer examination it turned out those new users were consuming ~10 times the normal bandwidth, and more detailed scrutiny revealed that each of the new users was reselling pieces of the large bandwidth now available.  At a cost of $10 to the primary user and a similar price to their shadow customers, they were making a substantial profit on the new service.  As this was not the model expected, the company created software that allowed ISPs and local residents to ‘micro-share’ the bandwidth and manage the usage.  Taara capitalized on the experience and devised a system that allowed retail store owners to resell connectivity from the Taara system as an extra income source.
In India, Airtel (532454.IN) has used Taara’s system for FWA backhaul in conjunction with microwave links, as a hybrid solution in areas where physical obstacles (rivers, mountains, etc.) have limited capacity, and to provide service in locations where India’s well-known bureaucracy made it difficult or impossible to get microwave licensing within a reasonable amount of time.
T-Mobile (TMUS) has used the technology to provide cell service at Coachella and the Albuquerque International Balloon Fiesta, where large crowds typically overload cell systems and laying fiber was definitely not cost-effective.  As the Taara system could be set up quickly, T-Mobile was able to provide 5G service for over a million calls over the nine-day festivals at 99.9% uptime.  The Taara system was attached to a fiber link 2 km from the festival and paired with a T-Mobile Cell-on-Wheels (COW) station near the event and a second COW about a mile away.
There are others working on FSOC technology, but as can be seen below, most are looking at the technology from a military perspective, with few focused on backhaul as an adjunct to fiber.
  • Raytheon (RTX) – Has developed “NexGen Optix”, an FSOC tactical communication system for “challenging environments” to provide secure connectivity.

Figure 3 - Raytheon NexGen Optix field communication system - Source: Raytheon
  • Viasat (VSAT) – Viasat’s Mercury FSOC is designed for military applications, including ground-to-ground, ground-to-air, ship-to-ship, and ship-to-ground secure communication based on DoD requirements.
Figure 4 - Viasat Mercury FSOC system - Source: Viasat
  • Mynaric (pvt) – Laser-based communication (FSOC) between flying objects (satellites, planes, drones, etc.) to transfer data between objects and the earth.
Figure 5 - Mynaric aerial communication module - Source: Mynaric
  • fSona Networks (pvt) – An FSOC backhaul system, but no active website; the company recently completed a reverse merger.
  • CaiLabs (pvt) – Primarily aerospace and defense base stations and fiber lasers, but the company also has TILBA-LOS, an FSOC point-to-point system to be released this year.
Most of the FSOC development we have seen is oriented toward military applications and air-to-ground communication, likely the reason why Alphabet figures it’s time to move Taara from the lab to a potentially more financially viable position.  The technology is relatively new to the backhaul market and seems to have become robust enough to have a place among other high-bandwidth communication technologies, along with some more specific applications where laying fiber is not practical due to physical issues.
While the more obvious cases relate to geophysical obstacles (rivers, offshore islands) and emergency disaster communication, short-haul links in urban areas, where the cost and time to lay fiber is prohibitive but line-of-sight is available, also make sense if the technology proves to be as cost-effective as Taara seems to indicate.  With the ability to eliminate much of the hardware needed to maintain alignment, the cost of base stations and repeaters should continue to drop, making FSOC viable in more instances and certainly more competitive against regulated systems (microwave, RF).  While FSOC would not completely solve the need for high-speed, low-cost internet across the US, used in conjunction with fiber it has the potential to make a dent, if the silicon proves to be as cost-effective as it is promoted to be.  It’s a long way from the balloon network…
 
Please note that we do not receive any compensation from or have any connection to the companies or technology we write about.  We look for interesting products, companies, and technologies that we believe might be of interest to our readers.  While we might speak with some of these companies, we receive no proprietary information, and all opinions are our own.

All Around the Mulberry Bush

3/31/2025


Apple (AAPL) has changed its healthcare focus from the meager “Project Quartz” to a more meaningful and robust “Project Mulberry”, which includes AI agents to collect and process the data that Apple devices gather about you.  This is not the ‘secret stuff’ brands collect, like what OS you are using, what device you are on, your search results (if you let them), and almost everything about what you have bought[1], but rather the data that you allow Apple to collect by using the Apple Watch, your iPhone, your AirPods, and even some 3rd-party applications.  This is ‘health’ data that includes sleep patterns, steps, calories, heart rate, weight, and a variety of other metrics about your bodily functions.
The objective is to provide Apple users with information that will make them healthier and more fit, but Apple, even before the platform is available, has made the upgrade to AI agents and an integration with Apple Intelligence to make that information more ‘real-time’, personal, and meaningful.  The agents are the scavengers that will poll your Apple devices for the health information they collect and bring it to Apple Intelligence for monitoring and evaluation.  It is thought that Apple will not only offer evaluations of your nutritional and sleep habits but could even offer camera-based assessments of your workouts and access to educational videos put together by internal and external health experts.
While the range of detail is thought to delve into physical therapy, mental health, and even cardiology, the initial focus is thought to be nutritional, with monitoring and alerts leading to personalized health advice based on your data, although there has been talk of AI-based mental health counseling and chronic disease predictive analysis.  As one might expect, Apple’s focus seems to be on the ‘user experience’, the part of the Apple persona that allows it to charge a premium for its products, but Apple is certainly not the first to go in this direction in this new age of AI.  Google’s (GOOG) Fit is a similar collector of personal health data through Android’s Health Connect.  This platform allows permitted 3rd-party apps to supply and collect data that feeds the Google Fit app, but it is more a collector, aggregator, and visualizer than an advice tool, although Google is currently working to integrate that data into its other health-related services, with a tie-in referencing ‘reputable sources’ on YouTube.
Amazon (AMZN) also has a health program, but its focus is more oriented toward B2B, with Amazon Pharmacy supplying information on medications and interactions, and Amazon Clinic and One Medical able to set up virtual video or text sessions with clinicians (some on staff) who can evaluate conditions, make diagnoses, and prescribe medication for relatively common illnesses.  There are also companies like Noom (pvt) or MyFitnessPal (pvt) that are more specific to food and calorie management, but given the enthusiasm for AI that seems rampant across the health sector, we expect almost every health-related application to leverage AI to stay competitive.
There are a few caveats here, particularly HIPAA, which regulates any health information that is maintained or transferred.  Entities involved must encrypt health data, limit access, perform risk assessments, maintain audit trails and breach-notification procedures, and take ‘reasonable steps’ to prevent access to or disclosure of patient information.  HIPAA is difficult enough to understand and maintain, but adding AI to the mix opens everything up to new legal questions, many of which have yet to reach the courts.  As liability becomes a potential issue when health-related advice is being given, we expect many new court cases that will focus not only on the potential liability of poor or incorrect data, but also on questions of algorithmic bias, inadequate software testing, and the fact that AI systems are essentially ‘black boxes’ that make it impossible to derive where or how an AI arrived at a particular diagnosis or conclusion.
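One of the controls mentioned above, the audit trail, is worth a concrete illustration.  The sketch below (our own toy example, not a compliance recipe; the key handling in particular is a placeholder) shows a tamper-evident log, where each entry is chained to the previous entry's authentication code so that altering any record invalidates everything after it:

```python
# Minimal sketch of a tamper-evident audit trail: each entry's HMAC
# covers both the record and the previous entry's HMAC, so any edit to
# history breaks the chain. Key handling here is illustrative only.
import hashlib
import hmac
import json

KEY = b"demo-key-use-a-real-secret-store"

def append_entry(log, record):
    prev = log[-1]["mac"] if log else ""
    payload = json.dumps(record, sort_keys=True) + prev
    mac = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    log.append({"record": record, "mac": mac})

def verify(log):
    prev = ""
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True) + prev
        expected = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["mac"]):
            return False
        prev = entry["mac"]
    return True

log = []
append_entry(log, {"user": "clinician1", "action": "viewed", "patient": "p42"})
append_entry(log, {"user": "app", "action": "updated", "patient": "p42"})
print(verify(log))                        # intact chain
log[0]["record"]["action"] = "deleted"    # tamper with history
print(verify(log))                        # chain now fails
```

The point is not this particular construction but that "maintain audit trails" is an engineering requirement with teeth: regulators and courts will expect logs whose integrity can be demonstrated.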
Smart lawyers will name not only site owners but also those who wrote the algorithms that run them, looking for biases that could cause hallucinations, errors in judgment, or flawed diagnoses based on poor human vetting.  When AI developers are called into court to defend issues like what data was included in an AI’s training or what process was used to draw a conclusion, high-level math will not be how they are judged by a jury.  So while Apple jumps into the fray to provide a positive health experience through Project Mulberry and Apple Intelligence, this is not like Wikipedia, where you take things with a grain of salt: healthcare decisions affect people’s lives, and some can be significantly influenced by the information given by AI healthcare.  There are good and bad doctors, and sometimes doctors make mistakes, which is why malpractice insurance exists, but will there be malpractice insurance for an application that gives incorrect advice or misdiagnoses an ailment or mental condition?


[1] IP Address
Device Type & Model
Operating System
Device Identifiers (trackers like AAID, IMEI)
Screen Resolution
Installed apps (some)
Browser type & version
Cookies (optional)
Browsing History (Optional)
Location Data (Optional)
Referring websites
App usage
Contacts & Calendar (Optional)
Photos & Videos (Optional)
In-app purchases
Search queries (Optional)
Social Media Activity
Shopping activity
Form submissions
Wi-Fi network name
Data usage
Bluetooth data
Sensor tracking
Accelerometer & Gyroscope data
Ambient Lighting data
‘like’ data
DNS lookups
…To name a few.

Business Models?

1/2/2025


Alibaba (BABA) Cloud announced that it was lowering the price of its LLM for the 3rd time to remain competitive in the Chinese AI market.  The model, known as Qwen-VL, has a number of primary features, such as multi-modality (it can accept both text and image input), high-resolution processing (>1M pixels), enhanced image extraction, and multilingual support (English, Chinese, Japanese, Korean, Arabic, Vietnamese), and is closest to Google’s (GOOG) Gemini, which has similar features.  The model is part of Alibaba’s cloud-based AI chatbot family, which focuses on enterprise customers rather than the consumer market as a way to differentiate itself from Chinese and other AI competitors.
While much has been said about the competitive nature of Chinese companies, that rhetoric has typically focused on manufacturing; however, it seems that the AI market in China has spurred an even more intense competition for share in its own market.  In June of 2024 there were over 230 million AI product and service users in China, according to state-sponsored data, a figure that grew to over 600 million by the end of October, with almost 200 commercially available LLMs to choose from.  While we believe the share of the potential user base using an AI at least weekly is currently higher in the US than in China, and the US generated more sales in 2023, expectations for industry growth over the next seven years are higher for China (25.6% CAGR for China v. 23.3% for the US[1]), which is the impetus for the even more aggressive posture of Chinese AI chatbot brands.
With this intense level of competition among AI chatbot model providers, we were curious not only to see if we could quantify the rate of price reductions but also to compare those to model price reductions outside of China.  We note that this is an unscientific comparison, as each of the models has its own set of features and characteristics, and the availability of this data is, at best, poor, but we gathered as much data as possible and converted the Chinese price data to US dollars for comparison.  Most notable is that the price of the most recent Tencent (700.HK) model, which has been available for roughly one month, is now the same as that of the Alibaba Qwen-VL model, which has been available for well over a year, and while the non-Chinese model prices have come down at a similar rate to the Chinese models, the current prices of the non-Chinese models are appreciably higher.  Overall, one might question the viability of the current business models behind commercial chatbots based on the data.
We note also that Baidu’s (BIDU) ERNIE model is now free, and Google’s Bard has morphed into Gemini.  We can find no specific data on how the Meta AI (FB) models are broken out price-wise, and we note that there are newer models for some (GPT-4, for example) that have much higher performance and cost, but this is as close to a comparison as we could make given the time involved.  The prices are for 1,000 tokens of input data in all cases.
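Why per-1,000-token price cuts matter so much to the business model is easy to see with a little arithmetic.  This sketch uses made-up placeholder prices and volumes, not the actual vendor prices discussed above:

```python
# Illustrative cost math for chatbot APIs priced per 1,000 input tokens.
# Prices and volumes are placeholders, not actual vendor figures.
def monthly_cost(price_per_1k_tokens, tokens_per_request, requests_per_month):
    """Total monthly spend for a fixed request volume."""
    return price_per_1k_tokens * tokens_per_request / 1000 * requests_per_month

# Example: 500-token prompts, 1M requests/month, at two hypothetical prices
for price in (0.0005, 0.01):
    cost = monthly_cost(price, 500, 1_000_000)
    print(f"${price}/1k tokens -> ${cost:,.0f}/month")
```

A 20x price difference compounds directly into a 20x difference in an enterprise customer's monthly bill, which is why the price war compresses so quickly once one vendor cuts.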


[1] GrandView Research
[Table: AI chatbot model pricing per 1,000 input tokens]

Google Joins the Fold

5/11/2023


Google (GOOG) has now (almost) joined the list of smartphone brands that have released foldable smartphones, with the announcement of the Google Pixel Fold, the company’s first foldable model.  With Samsung on its 4th yearly iteration, and foldables from 7 other major smartphone brands, Google and Apple (AAPL) were among the few that had not joined the foldable craze.  The Pixel Fold itself is as close as one might come to a Galaxy Z Fold 4 clone, with only fractional differences in size and features, and, not surprisingly, the same price as the Z Fold 4, although if you pre-order, you can get a Google watch for free.
As the Pixel Fold is still to be delivered, there is little real-world data to show how consumers find the software and applications, but there is one feature that seems noteworthy.  When using the main screen to make a presentation, the presenter is able to see the same images on the back screen, making it possible to keep the verbiage in sync with the display presentation.  It’s a simple application, but a helpful one, and it gives some indication that Google is thinking more about applications than hardware, which is likely a better direction for them than battling Samsung point for point.
Growth expectations are considerable for foldable smartphones this year, with estimates ranging between 30% and 51% unit growth y/y, but the absolute numbers are still relatively small, and even smaller when compared to the overall smartphone market.  Even at the top end of the growth estimates, foldable smartphone unit volume is likely to be less than 2% of the total, but with the average smartphone selling for ~$300, foldables sell at a huge premium.  Samsung Electronics, which has been the leader in the foldable smartphone space for years, had an over 80% share last year, and with such premium pricing and a weak smartphone market, we expect more of the same this year, likely along with a new model (aside from the Z Fold/Flip 5 series) for the holidays, which will reinvigorate the hardware competition between brands and potentially give those who have passed on foldables thus far a reason to become more interested.

Navigating without GPS

9/29/2022


In the pre-GPS world, day-to-day navigation was done with a combination of landmarks, maps, signs, and a good sense of direction, a struggle for the many who lack that last item, but with the advent of GPS systems, the necessity for a good sense of direction has become almost moot, other than keeping you from driving into a lake when the navigation system is incorrect.  Life without GPS is difficult for those who have grown up with it, and the anxiety of finding yourself without your smartphone when traveling almost anywhere is testament to our dependency on GPS navigation, but there are alternatives on the horizon.
In VR systems it is absolutely necessary for the user (and the system) to know where they are spatially, not only for the safety of the VR user but for others who might be negatively affected by wildly flailing arms or other body parts, or for the possible destruction of nearby furniture or pets.  Kidding aside, there are a number of systems used to gather position information for VR, primarily the requirement to mark absolute boundaries before a VR session, or the system’s ability to sense objects around the user using RF, although more sophisticated sensing systems are needed for the advancement of VR into society.  That said, the positioning requirements of AR systems are far more related to everyday location information, as the evolution of AR into daily life will necessitate precise positioning data in addition to the visual information seen by the user.
Without location data an AR system will not be able to point you toward a particular destination or help you pick out your rideshare in an airport waiting line, and more importantly, it could point you in the wrong direction in a driving situation.  We have noted that Google (GOOG) has been collecting visual data for its ‘Street View’ database since 2007, which it incorporates into Google Maps and Google Earth, and it more recently made the 20+m GB of data and 10+m miles of roadway imagery available to developers under the “Live View” API.  While Google is the leader in non-GPS location data, it is certainly not alone: Apple’s geo-anchors use the company’s “Look Around” data, movement indicators, and user imagery to develop a global 3D map, while Facebook (FB) focuses on “Live Maps”, a collection of physical and geometric information, along with a host of other social-media-oriented companies that are looking for ways to generate location data without using GPS.
The problem stems from the fact that the GPS system relies on signals from at least 4 of the 30+ GPS satellites that orbit our world.  There are instances when atmospheric conditions or signal blockage can compromise GPS signals, and smartphone GPS data has a 4.9 m (16 foot) accuracy radius, which could make it a bit difficult to pinpoint a specific parked vehicle or an item in a large warehouse.  Niantic (pvt), a small company that was spun out of Google and financed by Google, Nintendo (7974.JP), and The Pokémon Company (pvt), recently bought 8th Wall (pvt), a company that created an interactive AR development platform that is browser-based rather than a standalone application.  Niantic’s system is based on geometrics and on the system’s understanding of its surroundings, along with an understanding of objects themselves, essentially ‘does that object look like something in my database that is defined as a…’, with some of its data collected from Pokémon Go users, a vast network of players who play individually but are now able to play ‘in-network’.  As much of the game is based on finding Pokémon hidden in various locations, the visual data collected during the games is added to the Niantic database to build out the 3D maps.
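The "at least 4 satellites" requirement comes from the geometry: each satellite range constrains the receiver to a sphere, and the receiver's clock error adds a fourth unknown.  A toy 2-D version of the position fix, with exact ranges and no clock error (so three anchors suffice), can be sketched with a standard least-squares solve; the anchor coordinates and true position below are arbitrary illustrations:

```python
# Toy 2-D trilateration: locate a point from distances to known anchors.
# Subtracting the first circle's equation from the others linearizes the
# problem, which least-squares then solves. Real GPS adds a third spatial
# dimension plus the receiver clock bias, hence at least 4 satellites.
import numpy as np

def trilaterate_2d(anchors, ranges):
    (x0, y0), r0 = anchors[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

anchors = [(0, 0), (10, 0), (0, 10)]
truth = np.array([3.0, 4.0])
ranges = [float(np.hypot(*(truth - np.array(a)))) for a in anchors]
print(trilaterate_2d(anchors, ranges))   # recovers approximately (3, 4)
```

With noisy ranges, the same least-squares machinery simply returns the best-fit point, which is where the 4.9 m smartphone accuracy radius comes from: range errors map directly into position error.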
While all of the players in the non-GPS location space are approaching it from different angles, the important factor is that the data is robust, giving object recognition and spatially oriented systems more information on which to rely and making their ability to recognize an exact location more precise.  As noted, Google has a huge amount of data against which its systems can match, and it certainly has an advantage over smaller data sets, but tapping into social media or other image sources can build maps quickly with sophisticated algorithms, so there is still no foregone conclusion that one system will rule.  The good news is that with all of this data collection, and more socially acceptable hardware for AR, the idea that you could walk down a street wearing AR glasses that tell you how to get to your destination by painting a red dotted line on the sidewalk, or indicate which way to go to find that food truck that used to be nearby, is becoming more of a reality.

Lost in Translation – Ask the Machine

7/8/2022


There is a war going on that rarely makes it into the press, as the battlefield is not on the ground, in the air, or in space, but inside the guts of massive processing nodes that are used to understand the nuance of language.  The algorithms that direct these processors are based on a subset of AI that deals directly with syntax, expression, and a host of other language variables that make understanding other languages a challenge for humans and a monumental task for digital entities.  The opponents here are companies rather than political entities, and well-known ones at that, with Microsoft (MSFT), Google (GOOG), and Meta (FB) all pitted against each other, each focused on being the dominant force in the digital translation market.
There are no casualties or battle lines in this war, as it is fought with processing metrics and advertising, which makes it hard to know who is winning, but the participants seem to have settled on a particular metric to make the public aware of who might be in the lead.  That metric is the number of languages a system can translate, which seems to be used as a gauge of whose system is the most advanced.  While the number of languages a system can translate is certainly important, especially for languages that might be considered secondary, by far the most important metric for translation services is accuracy.
Recently Meta indicated that its NLLB-200 AI model has increased the number of languages it can translate to 200, a feat accomplished in two years, while Google Translate works with only 133 languages and Microsoft’s system translates only 111 (although that count includes two Klingon dialects), and Meta is clearly staking a claim as the world’s most advanced translation tool.  While the number of languages a system can handle is easily understood by the general public, the quality of the translation, which is based on the algorithm and the sample base, is far more important, and there are two ways it can be evaluated: by humans or by machines.  Using humans to evaluate translation quality throws subjectivity into the mix, while automated machine evaluation does not; a machine scores a translation by comparing its words and sentences against human reference translations, with the idea that the closer the machine output is to the human reference, the better the translation is.
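To make the machine-evaluation idea concrete, the sketch below counts n-gram overlap between a candidate translation and a human reference, in the style of the widely used BLEU metric.  This is our own minimal illustration, not the scoring code Meta, Google, or Microsoft actually use, which is far more elaborate.

```python
# Minimal BLEU-style scoring sketch: overlap of word n-grams between a
# candidate translation and one human reference, with a brevity penalty.
# Illustrative only -- production metrics handle multiple references,
# tokenization, smoothing, and much more.
from collections import Counter
import math

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate, reference, n):
    # Fraction of candidate n-grams that also appear in the reference,
    # clipped so repeated words cannot inflate the score.
    cand = Counter(ngrams(candidate, n))
    ref = Counter(ngrams(reference, n))
    overlap = sum(min(count, ref[g]) for g, count in cand.items())
    return overlap / max(sum(cand.values()), 1)

def bleu(candidate, reference, max_n=2):
    # Geometric mean of n-gram precisions, penalized for short output.
    precisions = [modified_precision(candidate, reference, n)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0:
        return 0.0
    brevity = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "ask the machine".split()
good = "ask the machine".split()
poor = "the machine is asked".split()
print(bleu(good, ref) > bleu(poor, ref))  # the closer match scores higher
```

The key design point is the clipping in `modified_precision`: without it, a candidate that simply repeats a common reference word would score artificially well.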
With all of this beyond the scope or desire of the general population, the translation giants will continue to use the simplest metrics to give credence to their systems, even though those metrics have little correlation to real-world results, as the ability of the AI to understand nuance, and what to do with that understanding, is really the key.  All three companies mentioned here have access to vast stores of speech, which certainly helps an AI learn, but the algorithm is the key, and that is something that not only grows with more resources but must change as humans better learn how to convert their subjective views into terms a machine can understand.  So the question is: does the AI need more language to read, or does it need more human evaluations in order to take the subjectivity out of its scoring?  The only way to know is…
E ninau i ka Mīkini - Hawaiian
Spurðu vélina - Icelandic
Faighnich dhan Inneal – Scottish Gaelic
quaeritur machina – Latin
Ask the Machine - English

Google Again

7/7/2022


AR, or Augmented Reality, needs one very important piece of information if it is to provide a service to consumers: where it is.  While you can see where you are through AR lenses, if the system is to project other images or data onto your actual view, it has to know where in space the glasses are in order to position things correctly, especially directional information, which we expect to be an integral part of AR for consumers.  Without location data the AR system will not be able to point you toward a particular destination or help you pick out your rideshare in an airport waiting line, and more importantly, it could point you in the wrong direction if used in a driving situation.  At a recent conference, Google (GOOG) highlighted its work on what it calls Google Live View for developers, which is based on Google’s Street View database, a 20m+ GB collection of satellite imagery, geological surveys, municipal maps, and the 10m+ miles of roadway captured by its Street View cars, all combined to produce accurate maps, turn-by-turn navigation, business locations, and real-time traffic information, much of it sourced from GPS-enabled smartphone users.  All of this will be licensed (via API) to developers who wish to use this incredibly accurate mapping data.
While we expect Google will certainly be a player in the AR application market, the company is limited in what applications it can develop, both by resources and by the necessity of not undercutting its client base, so by licensing this vast data resource, the API gives developers direct access without having to build their own interface, which might require built-in sensors or pre-defined ‘anchors’ that would be expensive and complicated to develop on their own.  Google seems to have stepped forward as the de facto supplier of the geospatial information needed to make AR work, and is already using the API as part of the Live View mode inside the Google Maps application and as the AR Places filter in Google Lens.  The idea of the API is to free developers from building out their own spatial information interfaces by simply using (and licensing) the Geospatial API.
GPS is commonly used for location data, but that data is not always accurate, especially in densely built-out areas, where positional accuracy can degrade to between 5 and 10 meters and rotational accuracy to 35 to 40⁰.  That could leave AR images out of view, or force consumers to place anchors at a particular location to give the system a point of reference, which might work well in a small space but not in a larger venue.  By using the Google Geospatial API, all of the location and image data Google has amassed over the last 15 years becomes available to the developer’s application, at least anywhere Google has Street View data (not in Germany, North Korea, China, and a number of other countries), which gives almost pinpoint accuracy and an easy way to guide users to locations where they might spend some money.
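Those error ranges matter more than they might sound, because a heading error grows with the distance to whatever the overlay is attached to.  A quick back-of-envelope calculation, using the 5-10 meter and 35-40 degree figures cited above (the scenario and function are our own illustration):

```python
# Back-of-envelope: how far an AR overlay can land from its target given
# typical GPS error. Our own illustration; only the 5-10 m positional and
# 35-40 degree rotational error ranges come from the figures cited above.
import math

def overlay_offset(target_distance_m, position_error_m, heading_error_deg):
    # Lateral miss from a heading error grows with distance to the target;
    # the positional error adds directly in the worst case.
    heading_miss = target_distance_m * math.tan(math.radians(heading_error_deg))
    return heading_miss + position_error_m

# A storefront 20 m away, with 7 m position error and 35 deg heading error:
miss = overlay_offset(20, 7, 35)
print(round(miss, 1))  # roughly 21 m -- the label lands on the wrong building
```

With a miss that large, an uncorrected overlay is not just imprecise but actively misleading, which is why a visual correction layer on top of raw GPS is essential.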
After spending some time understanding how the Google API works from a technical standpoint, we were even more impressed than we were just listening to the company’s promo.  Without going into detail, the API links the user’s mobile camera to the company’s servers and matches the camera’s image to the Google data.  Given that the Street View data has been filtered by Google AI to eliminate non-stationary objects like cars and people, the algorithm matches the camera image against the buildings in its database, which covers buildings all over the world.  Once a match is made, the detailed coordinates are sent back to the API from the Google servers, and the AR application can then accurately overlay the new information on the camera image.  Considering the billions of buildings the algorithm has to ‘look at’ to match the camera image, the technology is quite incredible.
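The lookup described above, matching features from a camera frame against a database of building signatures and returning stored coordinates, can be sketched in miniature.  Everything below (the toy three-number descriptors, the database entries, the function names) is our own hypothetical illustration, not Google’s API, which runs server-side at vastly larger scale with real image features.

```python
# Toy sketch of a visual-positioning lookup: match a camera frame's
# feature descriptor against a small database of building descriptors
# and return the coordinates stored with the best match. All names and
# data here are illustrative, not Google's.
import math

# Hypothetical database: building descriptor -> (lat, lon, heading_deg)
BUILDING_DB = {
    (0.9, 0.1, 0.4): (40.7580, -73.9855, 112.0),
    (0.2, 0.8, 0.5): (48.8584,   2.2945, 271.5),
    (0.6, 0.6, 0.1): (51.5007,  -0.1246,  90.0),
}

def descriptor_distance(a, b):
    # Euclidean distance between two feature descriptors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def localize(frame_descriptor, db=BUILDING_DB, max_dist=0.5):
    # Pick the closest stored descriptor, but reject weak matches so the
    # system never returns a confidently wrong position.
    best = min(db, key=lambda d: descriptor_distance(d, frame_descriptor))
    if descriptor_distance(best, frame_descriptor) > max_dist:
        return None  # no confident match -> fall back to raw GPS
    return db[best]

# A frame whose descriptor is close to the first database entry:
print(localize((0.85, 0.15, 0.38)))  # -> (40.758, -73.9855, 112.0)
```

The rejection threshold mirrors the point made above: filtering out weak matches (like Google filtering out cars and people before matching) is what keeps the returned coordinates trustworthy.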
While it might seem that we are favorable toward Google, having singled out both its AR translation application previously and the API mentioned above, Google has amassed such a large volume of disparate data, and seems so able to coordinate it, that the company can easily create applications and data for developers and customers that are unavailable from other sources.  In reality, our favorable view of Google from an AR point of view comes from the fact that the company not only collects user information, as many social platforms do, but has been collecting other, sometimes seemingly strange, data for so many years that its databases are the most fertile grounds for AI learning, whether image related or data related.  The edge this gives the company, for things like the positional data mentioned above or the translation data we have mentioned in the past, is a distinct advantage over other data collectors that have not been at it as long or to the extent Google has.
As the globe’s primary search engine, the company has had access to so much consumer data unavailable to others that it sits squarely in the sights of those concerned with anti-competitive behavior.  But the company’s foresight in spending the time and resources to collect what might have seemed irrelevant data is more the reason it was collected, and not so much the world domination some consider Google’s goal.  They had the opportunity and were willing to allocate the resources, likely without the ultimate goal of using the data for such specific applications.  That’s smart thinking, and while it might lead to world domination in data resources, it was good planning years ago that made it possible.

Game Changer for Google

5/12/2022


While much of the press after Google’s (GOOG) I/O conference focused on the potential for Google’s entry into the smartwatch market and its potential to challenge Apple (AAPL), we noted that the company teased conference viewers with a quick look at a prototype of new AR glasses that are about as natural looking as one might hope while still providing the features of current AR glasses.  While the idea of ‘regular’-looking AR glasses was originally Google’s, with the ill-fated “Google Glass” product concept, what underlies the style of the prototypes is their focus on utility, with Google looking to promote AR by making it more than a way to fit furniture into your living room.  The company ‘focused’ on language translation as an application ripe for AR, with all of Google’s work on translation technology incorporated in the AR glasses, meaning you get an instant transcription of the conversation you are having if you both speak the same language, and a fully translated, real-time transcription in your language if the speaker does not.
While this is a prototype, the effect of having a direct translation of a conversation is enormous in our view, and while these glasses and the applications associated with them are not an actual product yet, the implications of just this application for AR are enormous.  Merely by looking at a person during a conversation you can see the translated words in front of your eyes, which makes any conversation, regardless of the participants’ ability to translate, incredibly rich.  Of course there will be detractors who will say you miss the nuance that speaking other languages might give, but for everyday conversations, including ASL, the short video below opened our eyes to this game-changing application for AR.  Hopefully Google is far enough along to actually produce such a product sometime in the next few years, as we believe it will have a lasting effect on both the AR world and those that face the challenge of speaking a single language in a multi-language world.  If clicking the link below takes you to a browser check on YouTube, click ‘Browse YouTube’ to see the actual video.
https://youtu.be/lj0bFX9HXeE
Google AR Teaser - Source: Google

Google Buys Micro-LED Start-up

3/18/2022


While not announced officially, it seems Google (GOOG) has bought a small California start-up, Raxium (pvt), that specializes in the development of micro-LED displays, particularly for AR applications.  The company’s technology is based on light field displays, which can produce 3D images without the optics normally associated with 3D viewing.  A normal display scans the image by its volume (picture the slices of a loaf of bread), while light field displays scan radially (picture the layers of a cake), allowing at least two ‘layers’ to reach each eye.  Since those images come from slightly different locations on a horizontal plane, you are able to ‘see’ around an object just by moving your position.
One issue with light field imaging is that it requires a large number of ‘elements’ to capture the ‘cake slices’, some 2 to 3 times what might normally be used in volumetric imaging, and that is where micro-LEDs come into the picture: they are considerably smaller than typical pixels and sub-pixels, allowing designers to place a greater number of imaging elements in the same space as more typical self-emissive display systems.  That said, producing micro-LEDs and placing them on control circuitry is a difficult task and makes such a process an expensive one, which is reflected in the high cost of micro-LED displays, and those use much larger micro-LEDs than would be needed here.  Raxium is also developing a process for producing ‘monolithic’ micro-LEDs (see Figure 5), essentially LEDs that are produced directly on a silicon substrate, as opposed to sapphire substrates from which they must be transferred.
The application we expect has interested Google here is the use of light field imaging in an AR device, which would allow someone using such a device to see a far more realistic image overlaid on the real world; in theory, the user should be able to see more than just a ‘flat’ overlay, looking ‘around’ the object to get a better perspective on how it fits with the actual scene seen through the glasses.  As an example, an engineer wearing such glasses while replacing a part on a mechanical device could bring in the image of a replacement part to see what other parts might need to be removed to install it, and would also be able to see around the part just by shifting his view, rather than having to load static images of its sides or back.
This is only one application of such an AR device, and there are so many more that large companies like Google, Apple (AAPL), Microsoft (MSFT), and Meta (FB), along with a slew of smaller companies, have been trying to develop micro-display technology that will enable AR since ~2013, when Google announced the abortive ‘Google Glass’ device.  We expect Google sees Raxium as another step toward such a device, and while the valuation for Raxium might be high, if it advances Google’s efforts toward a ‘revolutionary’ AR device, the cost is worth it in the long run.
Raxium Micro-LED Pixel Comparison - Source: Raxium
3 Dimensional Monolithic Micro-LED Display & Transistor Matrix - Source: Meng, W., Xu, F., Yu, Z. et al. Three-dimensional monolithic micro-LED display driven by atomically thin transistor matrix. Nat. Nanotechnol. 16, 1231–1236 (2021). https://doi.or

    Author

    We publish daily notes to clients.  We archive selected notes here, please contact us at: ​[email protected] for detail or subscription information.
