Tech Industry in Silicon Valley is in a bubble that is about to burst later this year


We will recap the highlights of this increasingly dated post, "OpenAI, Humane and Apple's Shadow," published back in 2023:

Today, we will talk about OpenAI (developer conference), Humane (AI hardware) and Apple’s AI efforts.

OpenAI hosted its first developer day last week.

The leading AI startup announced its new GPT-4 Turbo model, which has a 128k context window (in layman's terms: users can input up to a 300-page book and prompt the model against that massive text, say, to write their college essay).

GPT-4 Turbo is also faster and cheaper than the previous model that powered ChatGPT.
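For a rough sense of what a 128k-token window holds, here is a minimal back-of-the-envelope sketch. It assumes the common heuristic of roughly 4 characters per English token; real counts depend on the tokenizer, so treat the numbers as illustrative:

```python
# Rough estimate of whether a document fits in GPT-4 Turbo's 128k-token window.
# Assumes ~4 characters per token, a common English-text heuristic; actual
# counts vary by tokenizer.

CONTEXT_WINDOW_TOKENS = 128_000
CHARS_PER_TOKEN = 4  # heuristic, not exact

def estimated_tokens(text: str) -> int:
    """Approximate token count from character length."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(text: str) -> bool:
    """Check the estimate against the 128k window."""
    return estimated_tokens(text) <= CONTEXT_WINDOW_TOKENS

# A 300-page book at ~1,500 characters per page is 450k chars.
book = "x" * (300 * 1500)
print(estimated_tokens(book))  # 112500 tokens
print(fits_in_context(book))   # True: it fits in the 128k window
```

Under these assumptions, a 300-page book lands at roughly 112k tokens, which is why "input a whole book" is the standard shorthand for this context size.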

Another significant launch was the GPT Builder, allowing users to create personalized AI agents (e.g., a paleo Vietnamese food recipe tool) using natural language. There will be a marketplace where users can list their agents, called GPTs, for sale (yes, SatPost readers will get a discount on my paleo Vietnamese recipe tool upon its release).

A viral clip from the presentation featured OpenAI CEO Sam Altman creating a GPT that offers startup advice based on his writings. The demo only took 4 minutes and required no coding.

Combined with audio and visual AI tools — I saw one demo where someone snapped a photo of a landing page and had code spit out for how to recreate it — the possibilities for a GPT Builder feel endless.

Some heralded the keynote as Altman’s “Steve Jobs moment”.

The most salient example for this comparison was when Altman received a calendar notification during another GPT agent demo. He then held up his phone and even nailed the 83° arm bend that Jobs did when he unveiled the iPhone in January 2007.

In a thorough review of the OpenAI keynote, Ben Thompson was impressed by the GPT agents but believes that ChatGPT — which is still just an app (albeit one that is swallowing up other traditional apps) — is the wrong AI interface.

“AI is truly something new and revolutionary and capable of being something more than just a homework aid, but I don’t think the existing interfaces are the right ones,” Thompson highlighted from a previous article he wrote on the topic. “Talking to ChatGPT is better than typing, but I still have to launch the app and set the mode; vision is an amazing capability, but it requires even more intent and friction to invoke.”

When it comes to technology interfaces, no one has been better than Apple. Although its generative AI offerings are currently slim and Siri somehow keeps messing up my alarm clock times, the iPhone maker has all the pieces to create the best AI interface.

To understand why, let’s look at:

  • Lessons from Steve Jobs keynotes
  • Humane’s AI Pin
  • The state of Apple AI

Lessons from Steve Jobs keynotes

Let’s rewind to 2008: back then, Altman presented his social app Loopt in front of Steve Jobs at Apple’s Worldwide Developer Conference (WWDC). The Apple founder said the product was “cool”.

Reflecting on the event years later, Altman — who idolized Jobs — told CNN that it was the only time he had been “frozen out of nervousness in any business context.”

One lesson he drew from the Apple co-founder was the psychology of how people want to interact with technology.

During his appearance on the Lex Fridman podcast, Altman shared an anecdote about Apple’s colorful and translucent iMac desktop:

“A story that has always stuck with me [and] I don’t know if it’s true. I hope it is true.

[The story is] that the reason Steve Jobs put that handle on the back of the first iMac was [because] you should never trust a computer you couldn’t throw out a window.

Of course, not that many people actually throw their computers out a window, but it’s sort of nice to know that you can. And it’s nice to know that this is a tool very much in my control. And this is a tool that does things to help me.”

I did a bit of Googling and found that the “throw out of a window” quote is actually attributed to Apple’s other co-founder, Steve Wozniak. However, there doesn’t seem to be any evidence that Wozniak even said that.

While Altman’s referenced story may be apocryphal, the main lesson is still valid: the user’s comfort level with a computing interface matters a lot.

Apple’s former design chief, Jony Ive, has his own version of why there was a handle on the first iMac. It doesn’t mention throwing hardware out of windows, but instead, it is about personal comfort levels.

Per Ive:

“Back [when we launched iMac in 1998], people weren’t comfortable with technology. If you’re scared of something, then you won’t touch it. I could see my mum being scared to touch it. So I thought, if there’s this handle on it, it makes a relationship possible. It’s approachable. It’s intuitive. It gives you permission to touch. It gives a sense of its deference to you.”

Remember, the iMac is a desktop computer.

Why do you need a handle? To move it 3 feet from one end of your desk to the other? No, the handle is about the comfort factor and forming a connection with technology.

“Deference to you” sounds like another way of saying “I can throw this f**king thing out the window anytime I want”.

As many of you know, Jobs returned to Apple in 1997 — and with the iMac’s success — kicked off the greatest run in corporate history. The Jobs-Ive combo created a series of approachable, intuitive and deference-giving blockbuster consumer products: iPod (2001), iPhone (2007), MacBook Air (2008) and iPad (2010).

I re-watched the keynote unveilings for these products. Jobs and Ive loved tactile designs and emphasized the “fiddle factor”, in which users can touch and feel the technology.

Here is a play-by-play of some classic Jobs announcements:

  • iPod Nano: “Where is this thing? There’s no way Steve is hiding it in his 5th jean pocket. You can’t fit anything in there. Maybe a twenty-dollar bill to grease the bouncer at the nightclub, but nothing else. There’s no way he’s sliding something out of that pocket. There’s no way he’s…HOLY CRAP, IT’S IN HIS 5th JEAN POCKET! THERE’S A F**CKING IPOD NANO IN HIS JEAN POCKET!! AND HE’S HOLDING IT WITH TWO FINGERS!! LOOKS LIKE HE COULD TOSS IT OUT THE WINDOW ON THE OTHER SIDE OF THIS ROOM IF HE HAS TO!
  • iPad: “What’s that leather lounge chair doing on the stage. Kinda looks like a mini-casting couch. That’s gotta be an uncomfortable place to use a laptop. Looks great for a Whiskey and Diet Coke. Not for a computing device, though. No way Steve sits there. Wait. WHAT?!? STEVE IS USING THE IPAD ON THE LEATHER LOUNGE CHAIR!! THAT IS INCREDIBLY INTUITIVE!!!
  • MacBook Air: “Wonder why Steve has a yellow envelope on stage. Probably his taxes. Weird to have during a keynote but, yeah, whatever. Wait, he’s twirling that little red string that keeps envelopes closed. Paper isn’t metallic. PAPER ISN’T METAL GREY! HE’S HOLDING A F**KING LAPTOP!!! SO APPROACHABLE!

Since the company's founding in 1976, Apple's products have aimed to reduce computing complexity while getting more and more personal: Mac (graphical user interface and mouse), iMac (handle, unibody), iPod (wheel), iPhone / iPad (multi-touch) and Wearables (Watches and AirPods that sync with everything else and — mostly — "just work").

With that background, it is unsurprising that Altman is reportedly in talks with Ive — who left Apple in 2019 to form his own consulting firm LoveFrom — to raise $1B+ from Softbank’s Masayoshi Son and build a hardware AI product.

What will they make? What is the correct generative AI interface?

Existing smartphones? Next-gen AR glasses? Brain implants? AI-assistant earpiece like in the film Her?


Humane’s AI Pin

One new AI hardware example dropped last week.

It is called the AI Pin and was created by Humane. The startup was founded by Imran Chaudhri and Bethany Bongiorno, both former veteran Apple designers.

Chaudhri worked with Steve Jobs on all of his post-comeback hits and was part of an Apple duo known as the “Lennon and McCartney of design”. Bongiorno was instrumental in the development of the iPad.

So, what is AI Pin?

It’s a square computing device — packed with a camera and sensors — that you can pin to your shirt. The main computing input is voice and the device can project digital ink onto your palm.

The AI Pin shows how generative AI enables post-smartphone and post-app interfaces, as described by Om Malik:

So far, we have used mobile apps to get what we want, but the next step is to just talk to the machine. Apps, at least for me, are workflows set to do specific tasks. Tidal is a “workflow” to get us music. Calm or Headspace are workflows for getting “meditation content.” In the not-too-distant future, these workflows leave the confines of an app wrapper and become executables where our natural language will act as a scripting language for the machines to create highly personalized services (or apps) and is offered to us as an experience. […]

The way I see it, the evolution of apps to “experiences” means that we are seeing the end of the line for the App Store as we know it.

“It’s not about declaring app stores obsolete; it’s about moving forward because we have the capability for new ways,” Chaudhri argued.

Humane’s idea is to make these workflows (aka apps in smartphone terms) available to us through its myriad interfaces — primarily voice. 
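Malik's "natural language as a scripting language" idea can be sketched as a toy intent router: a spoken request gets mapped to a workflow rather than to a launched app. The keyword matching and workflow names below are hypothetical stand-ins for the LLM-based intent parsing a real assistant would use:

```python
# Toy intent router: map a natural-language request to a "workflow"
# instead of launching a dedicated app. Keyword matching stands in for
# real LLM-based intent parsing; all names here are illustrative.

WORKFLOWS = {
    "music": lambda req: f"Playing music for: {req}",
    "meditation": lambda req: f"Starting a guided session for: {req}",
    "weather": lambda req: f"Fetching the forecast for: {req}",
}

KEYWORDS = {
    "music": ["play", "song", "album"],
    "meditation": ["meditate", "calm", "breathe"],
    "weather": ["weather", "forecast", "rain"],
}

def route(request: str) -> str:
    """Pick the workflow whose keywords appear in the request."""
    lowered = request.lower()
    for name, words in KEYWORDS.items():
        if any(word in lowered for word in words):
            return WORKFLOWS[name](request)
    return f"No workflow matched: {request}"

print(route("Play some jazz"))          # routed to the music workflow
print(route("Will it rain tomorrow?"))  # routed to the weather workflow
```

The point of the sketch: the "app" disappears behind the router. The user states an outcome and the system composes the workflow, which is the shift Malik describes from apps to experiences.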

Another term tossed around during Humane’s launch was “ambient intelligence”, as in the AI is just kind of there when you need it.

The AI Pin is an interesting idea but the 10-minute demo video did not leave me thinking “I need this”, especially at the current price of $699 (+ $24 a month for text/talk/data).

Avi Schiffmann — who works on AI wearables — noted design problems with Humane in relation to human psychology:

  • New form-factor: The iPhone, AirPod and Watch were all products that fell into established categories where people immediately knew how to use them. The AI Pin requires completely new behaviour.
  • Socially acceptable? People don’t wear pins in everyday life (I think). Also, the Pin has privacy concerns with its audio and visual inputs (the camera has a “trust” feature — a green light indicates it’s on and a red light indicates it’s off — which looks a bit creepy). Speaking out loud to an inanimate object in public also seems like a mental leap.
  • Decision fatigue: The daily exercise of “where do I stick this thing on my shirt” is another thing to worry about. (I can barely figure out which socks to wear each morning. This is why Jobs only wore black turtlenecks and Levis jeans with a fifth pocket that could fit iPod Nanos.)

These issues do not make the AI Pin sound very “approachable”, “intuitive” or “deference-giving”. One alternative that Schiffmann offers is a necklace form factor. I do not have strong feelings about necklaces, but at least people do wear them and know how to put them on the same way every time.

Another hiccup with the Humane launch is its competitive positioning.

I like the idea of not carrying dopamine-dripping smartphones everywhere and my solution is much cheaper than $699: I started using a Kale Phone and a Cocaine Phone.

Who is Humane actually for?

It is obviously unfair to compare most things to the iPhone as the device is literally the greatest product in the history of capitalism with 2B+ units sold and $1T+ in lifetime sales.

But it is worth noting that this crystal clear 2×2 matrix was used by Steve Jobs in the iPhone launch keynote.

The axes on the matrix were “smart / not so smart” and “hard to use / easy to use”. In the least shocking corporate decision ever, Jobs slotted the iPhone in the upper right corner.

The iPad also had a great positioning slide, which highlighted what Jobs thought the tablet form factor did better than a smartphone or laptop. Even though the iPad didn’t fulfill the vision, it was at least clear how the tablet could substitute or augment existing devices.


The state of Apple AI

Unlike the iPhone and iPad slides, the AI Pin’s demo left more questions than answers.

After the Humane launch, any future AI hardware offering has to answer this question: If someone is spending up to $699 on an “ambient intelligent” device in addition to a smartphone — because this is not enough to replace a smartphone — then why wouldn’t they just wait for Apple’s version?

I imagine that is the calculation that the 1B+ people who already own an iPhone will make. Out of those, 200m+ have bought a Watch and 400m+ have bought AirPods (I am responsible for 10% of these purchases because I keep losing the left bud).

There are valid questions as to whether Apple actually wants to live in a post-App world. Multiple sources familiar with the matter tell me that the iPhone and App Store combo makes quite a bit of money.

Tim Cook & Co. have been complacent about generative AI since the launch of ChatGPT. According to Bloomberg’s Mark Gurman, the company has been caught “flatfooted” and its “only significant AI release” in the past year was an improved auto-correct system.

However, the market is forcing Apple into action and it is committing at least $1B a year — which is a drop in the bucket — to spread generative AI across its products:

  • Apple GPT: Apple has its own large language model (LLM) called Ajax and could release it soon.
  • iOS: Ajax will improve auto-complete and suggestions within Siri and the Messaging app.
  • Xcode: Integrating AI into the platform for Apple-product app developers (similar to Microsoft with GitHub Copilot).
  • AI across consumer products including Apple Music (auto-generated playlists like Spotify) and the productivity suite (writing tools for Pages, auto-generated slides for Keynote)

For those who are wondering “Trung, what about Google, Android and its various AI projects?”

Yes, they are in the mix (on a side note: my 2010 Samsung Galaxy Note with the stylus was so baller).

Meta has an open-source LLM (Llama) and wearables hardware (Quest, Meta Ray-Bans). Another under-the-radar contender could be a cross-pollination of Elon’s companies, including Teslabot/xAI/X/Neuralink.

However, Apple has always been the best at integrating hardware and software.

Last month, Microsoft CEO Satya Nadella — who has invested $10B+ into OpenAI and is the startup’s major strategic partner — said one of his biggest regrets was shutting down Windows Phone and leaving the mobile market. Nadella called it a missed opportunity to define the next generation of computing, which almost certainly includes personalized AI-first devices.

And do you know what personalized AI-first devices need? Powerful chips that are energy efficient and can optimize compute, power and memory trade-offs.

Kind of like Apple’s line of custom chips.

Case in point: the new M3 chip that powers the Mac and MacBook. While the M3 is designed for desktops and laptops, its performance — which Apple claims is 3-4 years ahead of competitors on key metrics — will find its way into A-series (iPhone, iPad), H-series (headphones) and S-series (Watch) chips.

Here is another article from Om Malik on the M3 chip and Apple’s silicon efforts:

AI algorithms function with extreme parallelism. While adding more compute power (and GPUs) can address this, the real challenge lies in how quickly data can be moved, how promptly and extensively memory can be accessed, and the amount of energy required to operate these algorithms efficiently. Apple’s strategy with its Silicon has been remarkably prescient, taking into account these realities even in their latest GPU updates. […]

Apple has a substantial opportunity to integrate generative AI into its core platform, mainly because of its chip and hardware-level integration. For example, by actively incorporating open-source generative AI models into their SDK and developer tools, Apple can leverage the evolving nature of the interaction between humans and machines in the digital world.

Apple’s silicon gives the company options for powerful on-device processing, which offers more privacy than running everything through the cloud. Privacy will be important for any “ambient intelligent” device and Apple seems to have the edge on this front.
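The compute-versus-data-movement trade-off Malik describes can be made concrete with a back-of-the-envelope roofline check: a chip is memory-bound whenever a workload's arithmetic intensity (FLOPs per byte moved) is too low to saturate its compute units. The chip figures below are illustrative placeholders, not published Apple specs:

```python
# Back-of-the-envelope roofline model: is a workload compute-bound or
# memory-bound on a given chip? All numbers are illustrative, not real specs.

def attainable_tflops(peak_tflops: float, bandwidth_tb_s: float,
                      flops_per_byte: float) -> float:
    """Roofline model: achievable performance is capped by either peak
    compute or (memory bandwidth * arithmetic intensity), whichever is
    smaller."""
    return min(peak_tflops, bandwidth_tb_s * flops_per_byte)

# Hypothetical chip: 10 TFLOPs peak compute, 0.4 TB/s memory bandwidth.
PEAK, BANDWIDTH = 10.0, 0.4

# LLM token generation has low arithmetic intensity (roughly 2 FLOPs per
# weight byte for a matrix-vector product), so it is bandwidth-limited:
print(attainable_tflops(PEAK, BANDWIDTH, 2.0))    # 0.8 TFLOPs
# Big matrix-matrix multiplies reuse data heavily and hit the compute roof:
print(attainable_tflops(PEAK, BANDWIDTH, 100.0))  # 10.0 TFLOPs
```

This is why "how quickly data can be moved" matters more than raw GPU count for on-device AI: under these assumed numbers, the low-intensity workload uses less than a tenth of the chip's peak compute, and unified, high-bandwidth memory (Apple Silicon's signature design choice) attacks exactly that bottleneck.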

However, both Apple and Nvidia face a deadly competitor from Shenzhen: Huawei, which is vertically integrated from edge computing to on-device edge AI across the Internet of Things, while also pushing horizontal, open-source industry standards through leading Chinese and global consortiums. Neither company can simply walk into this arena without acknowledging its most dangerous competitor in the tech industry, which one of them already has.

  • HiSilicon: Huawei produces Ascend AI chips for business servers and, for consumer products, Kirin chips that include NPU processing alongside in-house custom CPU cores and a custom GPU, much like Apple Silicon. Huawei is also transitioning from ARM to its in-house Lingxi instruction set, a leap that takes its chips beyond the reach of US sanctions tied to ARM. This year, Huawei is shifting priorities from the first half to the second: ramping 7nm chips for the successor to the Ascend 910 while preparing debut 5nm consumer chips for its flagship products, from high-end Mate smartphones to MateBooks and MateStation PCs, all of which will play a massive role in its vertically integrated software and hardware. Like Apple, Huawei’s silicon gives the company options for powerful on-device processing, which offers more privacy than running everything through the cloud. Privacy will be important for any “ambient intelligent” device, and Huawei, with its second-half-of-2024 lineup, may have the edge on this front against Apple’s vision for the next versions of its OSes.
  • Huawei GPT: Huawei has its own large language model (LLM) called PanGu, and it is already shipping. It is used in business applications such as the European Centre for Medium-Range Weather Forecasts in the UK, and it first reached consumer products in August 2023, when the upgraded Chinese version of Celia (called Xiaoyi) shipped with the PanGu-Σ 3.0 model on HarmonyOS 4.0. That made the assistant far smarter than the Celia that launched on EMUI handsets in 2020, in China and internationally, and put Huawei ahead of Apple and Google as the first to ship an LLM-powered assistant. PanGu will also be embedded into Huawei’s horizontal platforms, OpenHarmony and the mirrored Oniro OS for global IoT devices, and into the flagship HarmonyOS NEXT core operating system (due in China by Q4 2024 and in global markets afterwards), with the MindSpore framework providing the NNRt (Neural Network Runtime) AI APIs for both closed commercial and open-source operating systems.
  • HarmonyOS: PanGu will improve auto-complete and suggestions within the native Celia assistant, the Messaging app and the keyboard of Huawei’s in-house operating system.
  • DevEco Studio: Integrating AI into Huawei’s in-house integrated development environment for Huawei-product app developers (similar to Microsoft with GitHub Copilot), via NNRt (Neural Network Runtime), which is part of the OS alongside the ArkCompiler and runtime. This extends to building HarmonyOS, OpenHarmony and Oniro OS applications on PC, using the TypeScript-based ArkTS and the in-house Cangjie programming language.
  • AI across consumer products: Including Huawei Music (auto-generated playlists like Spotify), productivity suites (writing tools, auto-generated slides) and faster OS performance on everyday tasks, extending to third-party mobile, wearable, PC and IoT native applications. There are also install-free native mini HarmonyOS apps that replace constant web searching through a browser: a “WeChat of an operating system” with AppGallery at the centre. Instead of the store merely promoting apps, mini apps become a native part of the OS, which could change the way we browse on mobile and use apps. Huawei has a bigger vision here (Web 3.0, the Metaverse) that could spell trouble for Silicon Valley players such as Google in IoT. The same feature will go into headsets, cars, PCs, tablets and TVs, not just phones.