Supply Chain Market Research - SCMR LLC

Can AI Create IP?

10/3/2022


WIPO (the World Intellectual Property Organization) is a body that represents 77 member nations, including China, the US, and Russia. It creates IP policy under which its members are supposed to operate and maintains an IP database for the many members that do not have substantial IP systems of their own.  While policy still varies from country to country, one topic has become a talking point across the IP world: artificial intelligence.  “The Next Rembrandt”, a portrait created by an AI system that sampled 168,263 Rembrandt painting fragments coupled with a facial recognition system, and Google’s (GOOG) DeepMind AI system WaveNet, which has been used to create short snippets of “Chopin-like” music based on millions of Chopin samples, raise questions about the ownership of such IP.  That alone creates considerable controversy, but the issue goes much further.
"The Next Rembrandt" - Source: Bas Korsten
Things get really confusing when the AI itself is named as the creator of its own works, and while discussions at WIPO and other IP venues continue, it has been up to the courts in each country to rule on whether such applications are valid.  AI developer Stephen Thaler submitted applications in 16 countries last year listing the inventor's name as "DABUS".  He claimed no knowledge of the devices described in the applications, asserting that DABUS had created the two inventions after developing the general knowledge needed to devise such items.  The Korean Intellectual Property Office asked Mr. Thaler to amend the application to name a natural person as the inventor; he did not make such a change, and the application was rejected.
The basis for the rejection was that patent law and precedent hold that only 'natural persons' may be inventors, a guideline followed by a number of countries, including the US, Germany, and the UK.  A lower court in Australia did recognize an AI as an inventor last year, but the ruling was later overturned by a higher court.  A conference held late last year representing seven patent offices (including the US, China, and Europe) concluded that technology has not yet reached the level of sophistication that would allow an AI system to 'invent' without human intervention, and therefore an AI could not be considered a natural person/inventor.  But as AI systems become more refined and make an increasing number of decisions on their own, the definition of 'natural person' does not seem broad enough for such cases.
In most instances where such decisions have to be made, the application is either denied or the work is attributed to the developer of the software that is the basis for the AI creation[1].  Copyrighting of works generated by AI does not seem to have been specifically prohibited, although many countries, including the US, specify that a copyright must be created by a human being, or as US case law[2] put it, “…the fruits of intellectual labor that are founded in the creative powers of the mind”.  Hong Kong goes in the other direction, along with India, Ireland, New Zealand, and the UK, whose Copyright, Designs, and Patents Act states: “In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken”.
But attributing the patent to the software's author would be the same as assigning a patent to the person who wrote up the patent rather than the one who came up with the invention, and it would call into question an almost infinite number of semiconductor patents for designs produced with software developed by a relatively small number of companies.  Should all of those semiconductors be the property of the companies that developed the design software?  It's not a question we can answer, and one that will get harder to answer as AI systems become more able to make basic decisions on their own.  The underlying program that runs the basic functions of the AI is the framework under which the system operates, but that framework allows an increasing number of decisions to be made by the system itself as it learns.  There will be a point at which AI systems come close to synthesizing human thought or creativity, at which point the question becomes far more difficult to answer, but it will always be the case that the software is the basis for the higher-level AI decisions or creativity, which gives credence to the 'human only' ideology.


[1] Guadamuz, Andres. “Artificial Intelligence and Copyright.” WIPO Magazine, May 2017.

[2] Feist Publications v. Rural Telephone Service Company, Inc., 499 U.S. 340 (1991)

Even John Draper Would be Worried

7/25/2022


In November 2018 the Chinese state-run news agency Xinhua (state) unveiled its first AI-generated news anchor, based on an actual Chinese newscaster named Zhang Zhao.  The artificially generated newscaster mimics the actual broadcaster's movements and speech, allowing 'him' to be available to the news desk 24 hours a day; a female AI broadcaster, based on another well-known Xinhua broadcaster, was developed about a year later.  These clone reporters can mimic complex facial and body movements, derived from training the AI on video images of the specific person, and they fill in for the broadcaster when an event happens quickly or the actual person is unavailable or on assignment.
Xinhua AI News Anchor - Source: Xinhua via CNBC
Xinhua Female AI News Anchor - Source: Interesting Engineering
Real & AI Version of Chinese newscaster - Source: Richard Aguilar Podcast
In April 2016 a Japanese advertising agency pitted a human creative director against an AI system in a competition to develop program ideas for Clorets Mint Tab gum, with the AI 'creative director' developed by the Japanese arm of McCann (IPG).  The human CD won the contest, although the vote was close (54% to 46%), and similar contests for two other advertising programs went to the AI.  Taking the AI concept a bit further in the advertising world are a number of AI companies that specialize in creating digital models that can be used in advertising without the time and expense of the photoshoots typically required.  The models shown below were generated by Rosebud.ai (pvt), a company backed by a 'Who's Who' of Silicon Valley cognoscenti and now seemingly focused on helping users generate NFTs, while companies like DataGrid (pvt) https://youtu.be/8siezzLXbNo and NEON (pvt) https://youtu.be/2UlBFiL6noU are creating even more lifelike models that are more expressive than those mentioned earlier.
AI generated Digital models - Source: Topbots.com
In 2017 Japanese advertising agency Dentsu (4324.JP) introduced AICO, short for AI Copywriter, a system able to generate vast amounts of advertising copy after researching (being trained on) specific topics and being given a bit of context on the target idea.  Since then many AI-based copywriting tools have been developed, a number of which are easily accessible online.  Here's a demo from Anyword (pvt) on “Coffee-Making Software” generated by their AI system:
“Everybody needs coffee.  Our state-of-the-art machine does not discriminate.
No matter the type of coffee you enjoy, there is a setting for it on our coffee maker.  We priced ourselves on being able to offer options for everyone, making our machine versatile enough to be used by anyone.
The process couldn’t be easier!  Choose your favorite roast from our custom selection, then choose which brew method fits your lifestyle best.  From there the AI will take care of the rest, using precision technology that won’t burn your coffee beans or brew too fast so your grounds can properly steep to release their flavor before being round.
Give us a try today!”
The ultimate (for the time being) escalation of the use of AI in the advertising world is Rozy, a creation of Korea's Sidus Studio X (035420.KS) and a 'virtual influencer and model' who earned close to $1m last year.  While the 22-year-old influencer looks and acts like the more typical influencers and pop stars who sign deals to hawk various items to those predisposed to follow whatever trend is current, she is not a real person.  She is therefore available 24 hours a day and does not need to be flown (along with a retinue of make-up artists, wardrobe specialists, hair designers, and aides) to exotic locations for a few hours of shooting; only the background footage is necessary, and that can be shot by local videographers.  She does not age, nor does she get negative publicity after heavy partying with other celebs, and she has picked up over 100 sponsorships since her 'creation' in 2020.  But the ultimate in AI-based 'advertising' is expected to be MAVE, a virtual-only all-girl group devised by Kakao Entertainment (103260.KS) that will follow in the footsteps of 'aespa', the SM Entertainment (SMCE) actual girl group that has a virtual equivalent.
Virtual Influencer "Rozy" - Source: KoreaBoo.com
'aespa' - The Physical Band - Source: Billboard
'aespa' - The avatars - Source: soompi.com
All in, the use of AI in broadcast and advertising continues to increase, and as the technology improves it will become increasingly difficult for consumers to know whether they are being served by a human or a virtual copywriter, newscaster, product influencer, or even a friend, as some of the AI imaging products can be used to 'prank your friends' by creating virtual versions of online videos and changing dialogue or actions.  While advertisers extol the use of AI to 'personalize' the advertising experience, having the systems write 'better' and higher volumes of copy and targeting that copy using the data collected on individuals, and news networks justify their AI use as a cost-saving measure, we wonder how much humanity gets lost when an AI does the interpretation of an image or a headline, and whether the AI has even the smallest bit of conscience when it comes to advertising (not that most advertisers have much).  In 1937, radio journalist Herb Morrison's on-site broadcast of the crash of the airship Hindenburg, which killed 35 people in a fiery explosion, might have sounded like this:
“The airship is combusting and falling down with flames and smoke as it nears the mooring mast.  There are passengers screaming and the air is filled with acrid smoke”
Instead of the historic rendition that expressed a bit more emotion:
“It’s fire and it’s crashing! . . . This is the worst of the worst catastrophes in the world! Oh, it’s crashing . . . oh, four or five hundred feet into the sky, and it’s a terrific crash, ladies and gentlemen. There’s smoke, and there’s flames, now, and the frame is crashing to the ground, not quite to the mooring mast. Oh, the humanity, and all the passengers screaming around here!
. . . I can’t talk, ladies and gentlemen. Honest, it’s just laying there, a mass of smoking wreckage, and everybody can hardly breathe and talk . . . Honest, I can hardly breathe. I’m going to step inside where I cannot see it. . . .”

Lost in Translation – Ask the Machine

7/8/2022


There is a war going on that rarely makes it into the press, as the battlefield is not on the ground, in the air, or in space, but inside the guts of massive processing nodes used to understand the nuance of language.  The algorithms that direct these processors are based on a subset of AI that deals directly with syntax, expression, and a host of other language variables that make understanding other languages a challenge for humans and a monumental task for digital entities.  The opponents here are companies rather than political entities, and well-known ones at that, with Microsoft (MSFT), Google (GOOG), and Meta (FB) all pitted against each other, each focused on becoming the dominant force in the digital translation market.
There are no casualties or battle lines in this war, as it is fought with processing metrics and advertising, which makes it hard to know who is winning, but the participants seem to have settled on a particular metric to make the public aware of who might be in the lead: the number of languages a system can translate, which is used as a gauge of whose system is the most advanced.  While the number of languages a system can translate is certainly important, especially for languages that might be considered secondary, by far the most important metric for translation services is accuracy.
Recently Meta indicated that its NLLB-200 AI model has increased the number of languages it can translate to 200, a milestone it reached in two years, while Google Translate works with 133 languages and Microsoft's system translates only 111 (although that list includes two Klingon dialects); Meta is clearly staking a claim as the world's most advanced translation tool.  While the number of languages a system can handle is easily understood by the general public, the quality of the translation, which depends on the algorithm and the sample base, is far more important, and there are two ways it can be evaluated: by humans or by machines.  Using humans to evaluate translation quality throws subjectivity into the mix, while automated machine evaluation does not; a machine scores a translation by comparing its words and sentences against human reference translations, with the idea that the closer the machine output is to the human reference, the better the translation.
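Automated metrics of this kind (BLEU is the best known) boil down to counting n-gram overlap between the machine's output and a human reference translation.  A minimal sketch, assuming a single reference, unigram/bigram precision only, and no smoothing (real BLEU uses up to 4-grams and multiple references):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Simplified BLEU: geometric mean of clipped n-gram precisions
    (n = 1..max_n) times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count by its count in the reference,
        # so repeating a correct word cannot inflate the score
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages translations shorter than the reference
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * geo_mean

reference = "ask the machine"
print(bleu("ask the machine", reference))   # identical -> 1.0
print(bleu("the machine asks", reference))  # partial overlap -> lower score
```

The weakness the text alludes to is visible here: the metric only measures surface overlap with the reference, so a fluent translation that uses a valid synonym scores worse than a clumsy one that happens to reuse the reference's exact words.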
With all of this beyond the scope or interest of the general population, the translation giants will continue to use the simplest metrics to give credence to their systems, but those metrics will have little correlation to real-world results, as the ability of the AI to understand nuance, and what to do with that understanding, is really the key.  All three companies mentioned here have access to vast stores of speech, which certainly helps an AI learn, but the algorithm is the key, and that is something that not only grows with more resources but must change as humans learn better how to convert their subjective views into language a machine can understand.  So the question is: does the AI need more language to read, or does it need more human evaluations in order to take the subjectivity out of its scoring?  The only way to know is…
E ninau i ka Mīkini - Hawaiian
Spurðu vélina - Icelandic
Faighnich dhan Inneal – Scottish Gaelic
quaeritur machina – Latin
Ask the Machine - English

AI At Fashion Week

2/15/2022


At the end of last year LG (003550.KS), parent of LG Electronics (066570.KS), LG Display (LPL), LG Chem (051910.KS), and a host of other subsidiaries, announced its “ultra-large” AI model, Exaone, which is said to process 300 billion parameters, giving it a higher level of language skill than other AI systems.  As most human knowledge is contained in language, the current thinking is that developing more comprehensive, more efficient, and faster AI systems based on language will lead to systems that come closer to the human characteristics that seem to be the goal of AI developers.  According to LG, rather than searching through images based on the text surrounding them, Exaone creates new images based on the data it has examined, and it can show and describe its creations in both Korean and English.
While perhaps not the application that many AI experts were counting on, LG recently used Exaone to answer the question, “What would it look like if there were flowers on Venus?”  We are sure that is not the only question asked of this AI system, which cost many millions in R&D and training, and it seems a bit strange that this is the direction taken for Exaone, at least at this point.  But Exaone not only came up with an answer (actually 3,000 answers), it also spawned 'Tilda', a virtual “environmentally-conscious fashion designer” created to help us humans understand what Exaone came up with.
Tilda collaborated with Korean fashion designer Park Youn-hee to create a Fall/Winter ready-to-wear collection of 200 outfits, “Greedilous by Tilda – Flowers on Venus”, which is being highlighted at New York Fashion Week, running through Wednesday.  Tilda supplied ideas for patterns based on the question mentioned above, and Park took those ideas and created the collection in 45 days, a process that typically takes many months.  Tilda sourced her ideas from the 250 million high-resolution images and 600 billion pieces of text data that were fed to Exaone.  After Fashion Week, Tilda plans to launch an independent eco-friendly fashion brand to deliver her message about the environment through fashion.
Officials from the LG AI Research Institute state that “Within a year, you will be able to meet Tilda’s unique fashion products and art works that embody the philosophy of Tilda both online and offline”, and the institute plans to create more expert AI ‘humans’ (their words) to help and cooperate with humans in fields such as manufacturing, research, service, education, and finance: “We are also planning to venture into the Metaverse, where we can communicate with the Gen Z and take part in more creative processes.”
John McCarthy, a professor emeritus at Stanford, coined the term “artificial intelligence” in 1955 in a proposal for a two-month, 10-person summer research conference, according to the university, and went on to invent LISP, an early programming language still used for AI today, and to found the MIT Artificial Intelligence Project and the Stanford Artificial Intelligence Lab.  While we don't know for sure, we doubt John was thinking about how artificial intelligence could be used to create a stir at Fashion Week 2022, but we expect he would have liked to spend time speaking with Tilda once she stopped talking about how drip the new collection was and how jaboni the other designers were.
'Tilda' - Virtual Fashion Designer - Source: Pulse News
Image blending by Tilda - Source: Pulse News
Even More 'Tilda' based fashions - Source: Koreajoongangdaily

AI Goes Gaming

2/11/2022


Remember Seymour Cray, the guy who built the CDC 6600 in the mid-1960s, a 'popular' supercomputer that outperformed IBM's (IBM) 7030, the leader at the time, by a significant margin?  He eventually formed Cray Research, and later Cray Computer, which he led until his death following a car accident.  Cray and others with an undying aspiration to create faster supercomputers laid the groundwork for the larger and faster machines that now make those early systems look like car radios.  That said, statistics about supercomputers are bandied about among the AI cognoscenti but have little relevance to the average global citizen unless they can equate those machines to something practical, like a game.
Enter Deep Blue (a descendant of Deep Thought), the IBM creation that became the first computer to win a game against a reigning world chess champion, beating Garry Kasparov in 1996, followed by Watson, which beat Jeopardy master Ken Jennings in 2011.  Since then there have been innumerable face-offs between humans and computers, the most recent involving an AI agent specifically designed to play the Sony (SNE) PlayStation-based game Gran Turismo.  Named Sophy, and trained on data collected from over 1,000 PlayStation 4 consoles, it has gained the ability to “…learn an integrated control policy that combines exceptional speed with impressive tactics”, according to the company, with Sony adding, “In addition, we construct a reward function that enables the agent to be competitive while adhering to racing’s important, but underspecified, sportsmanship rules.”
Sophy has beaten four of the world's best Gran Turismo drivers in direct contests, proving her computational worth, but without the crushing psychological blow to her opponents that the obvious superiority of a computer would normally bring.  Sophy has a heart, which seemed to spark a flame in the minds of her opponents, a crowd bored after defeating player after player for years, opening them up to a new challenge in a game that already requires considerable skill in balancing judgement against physical constraints.  Sony built 'penalties' into Sophy's algorithms that keep her from colliding with others to push them off the track, in other words (Sony's words) “…to embody the subtle nuances of human character.”
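Sony has not published Sophy's exact reward terms, but the idea of building 'penalties' into a racing agent's reward function can be sketched with a simple shaped reward: progress along the track earns positive reward, while collisions and off-track excursions subtract from it.  The signal names and weights below are illustrative assumptions, not Sophy's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class StepSignals:
    """Per-timestep telemetry the simulator would report (hypothetical names)."""
    progress_m: float   # meters advanced along the racing line this step
    collision: bool     # did the car contact another car?
    off_track: bool     # did the car leave the track limits?

def shaped_reward(s: StepSignals,
                  progress_weight: float = 1.0,
                  collision_penalty: float = 5.0,
                  off_track_penalty: float = 2.0) -> float:
    """Competitive term (progress) minus sportsmanship penalties.
    Raising collision_penalty makes the learned policy more 'polite',
    usually at some cost in lap time."""
    reward = progress_weight * s.progress_m
    if s.collision:
        reward -= collision_penalty
    if s.off_track:
        reward -= off_track_penalty
    return reward

# A clean, fast step is rewarded; a bump that gains ground still nets negative
print(shaped_reward(StepSignals(3.0, False, False)))  # 3.0
print(shaped_reward(StepSignals(3.5, True, False)))   # -1.5
```

With a penalty large enough that ramming never pays, an agent trained to maximize this sum learns to overtake cleanly, which is the behavior Sony describes, even though the sportsmanship rules themselves are never spelled out explicitly.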
So if you have a young one who sits in front of a screen with a controller in hand for hours at a time, perhaps they will learn a bit of humanity if Sony builds a bit of Sophy into the next Gran Turismo iteration, or perhaps they will quickly learn that the only way to beat such an emotional machine is to let loose more of that 'killer instinct' that is a prized commodity in many circles and drive Sophy off the road.  If it turns out that little Bobby or Susie is able to defeat Sophy by bending those sportsmanship-like rules a bit, perhaps Sony should add a little of that killer instinct back into Sophy II, although a kinder, gentler AI could be a good thing, sort of a C-3PO without all the issues.  It's a strange world we live in…
Sophy's 'School'- CPUs & GPUs used to collect PS4 data - Source: granturismproject.com

    Author

We publish daily notes to clients.  We archive selected notes here; please contact us at [email protected] for details or subscription information.
