Gavin Guo (国振)


zguo0525@berkeley.edu


Jumping from Apple to Meta: When Everything Changed

In June 2025, Apple published “The Illusion of Thinking”, a paper questioning whether AI models actually reason. Days later, WWDC 2025’s headline announcement was Liquid Glass, a visual redesign rather than an AI advance. The stock dropped 1.2% (CNBC).

That’s when I realized: the company I joined to build AI products was moving in a different direction. I’d spent a year shipping Visual Intelligence to millions of users, but the strategic shift was clear. The question wasn’t whether to stay or go—it was whether I wanted to build the future or watch it get built elsewhere.

The Challenge of Building AI Products at Scale

At Apple, I worked on Visual Intelligence—on-screen search and smart event creation that launched at WWDC 2025. We shipped features, but the deeper challenge was organizational: how do you maintain urgency when a company has been dominant for over a decade?

Apple’s smartphone dominance created comfort. When the AI revolution accelerated, the cultural readiness wasn’t there. The response was telling: rather than building AI that works, the company published research questioning whether AI works at all.

The Competitive Landscape

While Apple was publishing critiques, DeepSeek R1, OpenAI’s Ghibli-style image generation, and Gemini’s Nano Banana were pushing boundaries. The competition in AI is unforgiving: you either ship something remarkable, or you fall behind.

Visual Intelligence shipped, and I’m proud of what we built. But the signals were clear: when your company publishes papers questioning whether AI works instead of building AI that works, that’s not a technical problem—it’s a strategic one.

My research background (MIT-IBM Watson, papers like JetMoE and API Pack) showed me how fast the field moves. My product experience at Apple showed me what it takes to ship. The combination made the decision clear.

The Strategic Pivot

Meta’s approach was different. In 2025, Mark Zuckerberg entered “founder’s mode”—personally spearheading AGI and ASI (Bloomberg).

The commitment was visible: Zuckerberg established Meta Superintelligence Labs (MSL), recruited Alexandr Wang (Scale AI) and Nat Friedman (GitHub), and offered $200-300M compensation packages to elite researchers (Bloomberg, CNBC, AM World Group). Infrastructure matched: “Prometheus” (1 GW by 2026) and “Hyperion” (up to 5 GW) data centers, plus custom silicon via MTIA (Tom’s Hardware, Meta Engineering). The company is building models from scratch, training the next generation of Llama models, and aligning teams with AGI objectives (Axios).

I joined Meta’s Superintelligence Labs. At Apple, I learned what it takes to ship AI products at scale. At Meta MSL, I’m learning what it takes to push capability frontiers. We’re working on multimodal embodiment—fundamental questions about how AI systems understand and interact with the physical world. This positions me for where AI is heading.

What I Learned

My experience spans research (MIT, MIT-IBM Watson, papers like JetMoE, API Pack) and product (Apple’s Visual Intelligence, notification summaries). What works:

1. You need both capability and integration, but most teams optimize for only one. At MIT-IBM Watson, I pushed capability frontiers (synthetic data, model training). At Apple, I learned integration (privacy, latency, organizational alignment). The teams that win do both. Apple had integration but wasn’t pushing capability—Visual Intelligence shipped but wasn’t groundbreaking. Meta builds models from scratch while also building integration systems.

2. Constraints teach optimization, but they can limit ambition. JetMoE taught me efficiency (Llama 2-level performance for under $0.1M in training cost). Apple’s on-device constraints forced similar optimization. But constraints can become excuses. The best teams use constraints to force creativity, not to limit ambition. Meta’s approach, infrastructure first and then capability, avoids this trap.

3. Integration is harder than capability. Notification summaries seemed straightforward but exposed the real challenge: making AI work reliably across edge cases, privacy boundaries, and organizational silos. Demos wow, but shipping is harder. At Apple, I learned benchmarks don’t matter if features break in production. At Meta MSL, I’m pushing capability frontiers, but now I understand what it takes to turn capability into product.

4. The gap between research and product is organizational, not technical. At Apple, split ownership between research (training models) and product (shipping features) created delays. At Meta MSL, research and product are integrated—we’re building capabilities that will become products. This alignment matters more than model size.

5. The best career moves align with where the industry is heading, not where it is. I left Apple not because the work was bad—Visual Intelligence shipped—but because I saw the strategic shift. When Apple published “The Illusion of Thinking” and announced Liquid Glass instead of AI breakthroughs, the signal was clear. Meta was going all-in. My research background gave me technical foundation; my product experience gave me judgment to recognize the moment.

The Alpha vs. Beta Play

If you want to create something great, you need to train your own models and build new capabilities from the ground up. You cannot win by adding rewrite or summarization features to existing apps.

The market rewards first-movers who create eye-catching products. It does not reward incremental add-ons. In an exponentially growing industry, the gap between the best product and everything else widens with each passing month.

In AI, either you are the best, or you are nothing. Companies that build foundational capabilities capture value. Companies that add AI features to existing products get left behind. This is why Meta’s approach matters: building models from scratch, investing in infrastructure, and assembling top talent isn’t just about competing—it’s about having a chance to win.

Looking Forward

At Meta MSL, we’re building capabilities that don’t exist yet, not improving features that already do. We’re asking fundamental questions about multimodal understanding, embodiment, and how AI systems interact with the world.

We’re positioned to build something that matters—something that could be the best, not just good enough. The difference between shipping features and building the future: are you training models and creating new capabilities, or adding AI to what already exists?

That’s what I’m betting on.

What This Means for Founders and Career Decisions

For founders: The alpha vs. beta play matters more in AI. Companies building foundational capabilities (training models, creating new architectures) capture value. Companies adding AI features to existing products get left behind. JetMoE taught me efficiency matters, but so does building something new. At Apple, incremental features shipped but didn’t change the game. The companies that win are building capabilities that don’t exist yet.

For career decisions: Look for strategic signals, not role descriptions. When deciding to leave Apple, I looked at three things: (1) What is the company actually doing? Apple published “The Illusion of Thinking” and announced Liquid Glass—actions signaling a shift away from AI. (2) Where is the industry heading? Meta was going all-in on AGI; Apple was questioning whether AI works. (3) Does the culture match the ambition? Apple optimized for polish and stability; Meta MSL optimizes for speed and exploration.

Signals to watch: are they publishing papers questioning AI, or building AI that works? Investing in infrastructure, or cutting costs? Recruiting aggressively, or slowing hiring? The answers reveal strategic commitment.

For evaluating opportunities: Capability and integration both matter. When I evaluate AI companies, I look for teams that do both. Apple had integration but wasn’t pushing capability. Some research labs push capability but can’t integrate. The teams that win do both. Ask: are they training models and building new capabilities, or adding AI to existing products? Building infrastructure to support ambitions, or constrained by existing systems?

For timing: In an exponentially growing industry, the gap widens fast. I left Apple because I saw the strategic shift happening. In AI, timing matters more. The difference between joining a company going all-in versus one questioning fundamentals compounds over time. The question isn’t just “is this a good role?” but “is this where the industry is heading?”

For founders building AI companies or anyone making career decisions in this field: align with where the industry is going, not where it is.


These are my own views, not those of Apple or Meta. I’m grateful for both experiences and the exceptional people I’ve worked with at each company. If you’re building an AI startup or navigating a career transition in this field, I’m always open to conversations: zguo0525@berkeley.edu.

Liked this? Follow on X: @Zhen4good. Collaborations/advising: zguo0525@berkeley.edu · LinkedIn
