
What Investing in AI Actually Means, In Plain English

By Felipe Sinisterra · September 11, 2025 · 27 min read

You’ve probably seen the photo by now.

A grand table stretching across the White House, high ceilings overhead, and subtle, carefully controlled smiles all around. In the center, the seat of power itself, represented by President Trump and the First Lady. Surrounding them, the most influential names shaping our digital lives. A who’s who roster of the heavy hitters in Big Tech. Tim Cook (Apple), Mark Zuckerberg (Meta), Sundar Pichai (Google), Satya Nadella and Bill Gates (Microsoft), Sam Altman (OpenAI), and others. The headlines quickly labeled it a “tech dinner,” but that’s a shallow interpretation.

This wasn’t just dinner. It was a signal, a coordination point. Build in America. Now.

Beneath the optics and handshakes, there was a deeper alignment taking shape. Tech rivals, often bitter competitors, were suddenly pulling in the same direction: pledging unprecedented billions toward AI investments in America. Apple and Meta each threw out eye-popping figures: $600 billion. Sundar Pichai from Google followed closely with a $250 billion promise.

But why does this matter?

Because the race to dominate AI isn’t just about scientific breakthroughs or model parity. It’s about infrastructure. The tangible assets that turn AI potential into reality. Steel, servers, power plants, and data centers. The next phase of the AI race will be defined more by physical construction than by scientific discovery. It’s about who can build fastest and most reliably, not just who can innovate best. The future of American AI hinges not only on software but also on concrete and electricity.

How to go from headlines to action:

But let’s pause.

Simply knowing that “AI investment is increasing” is not an investment case. Neither are 30,000-foot conversations between industry and government. So, how do we unpack flashy announcements and turn them into something genuinely useful for our portfolios? How do we bridge the gap between headlines and investment decisions?

Here’s the actionable path we’ll follow:

  1. Figure out the facts. What, specifically, was pledged, proposed, and launched?
  2. Understand the value chain. AI isn’t just models. It’s not just the headline players like OpenAI and NVIDIA. It’s data, model training, chips & equipment, cloud & data infrastructure, applications and services. Plus all the activities upstream enabling this AI build: land, power, cooling, and more.
  3. Corroborate with public filings. Do words match capital allocation and company strategy as discussed in earnings calls? Where is everyone spending dollars?
  4. Sequence the dominoes. Map out the first-, second-, and third-order impacts of massive capex announcements.
  5. Generate a targeted idea map. How do we turn this thematic discovery into actionable ideas for longs? What specific opportunities can we identify from these first, second, and third-order effects?

Of course, we’ll use AI itself to help us uncover what this should actually mean for our portfolios.

TL;DR: if you have limited time, skim the ChatGPT outputs summarized under each step below. Otherwise, keep reading for the meat and potatoes of our process to find investment ideas.

Without further ado, let’s get into it.

Step 1: Understanding the Facts

There’s a lot of fluff in meetings like this. We all know that the US wants AI leadership. It’s not news to anyone that President Trump and every Big Tech CEO considers it a tier 1 priority to invest in AI in America. But that won’t do anything for us. We need to know actions and numbers: programs, task forces, permitting changes, and concrete spending signals.

So let’s start by extracting a clean fact base. Run the below on ChatGPT (make sure to enable Web Search).

# ROLE
You are an investigative journalist.
# TASK
Discuss the specific comments, policy proposals, and actionable outcomes from the input event.
# INPUT
White House dinner with Big Tech Executives. Date: September 4
# OUTPUT
A markdown table with one row per responsible party (government entities or specific companies) as the key. In the values, have quantifiable or actionable takeaways, including but not limited to:
- Investment pledges
- Hiring targets
- Specific strategic initiatives or projects
- New regulations
- Task forces announced
# WARNINGS
Focus on hard facts and agreements, not general discussion points.
# AUDIENCE & TONE
- Audience: Professional public market investors with a vested interest in learning the hard facts about this event.
- Tone: Professional, yet explanatory and conversational
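If you prefer to run these prompts programmatically rather than pasting them into ChatGPT, the ROLE/TASK/INPUT/OUTPUT template is easy to assemble with a small helper. Here is a minimal sketch; the function name and parameters are my own illustrative choices, and sending the string to a model (via the OpenAI SDK or any other client) is left out since API details vary.

```python
def build_prompt(role, task, input_, output, warnings=None, audience_tone=None):
    """Assemble the ROLE/TASK/INPUT/OUTPUT prompt template into one string."""
    sections = [("ROLE", role), ("TASK", task), ("INPUT", input_), ("OUTPUT", output)]
    if warnings:
        sections.append(("WARNINGS", warnings))
    if audience_tone:
        sections.append(("AUDIENCE & TONE", audience_tone))
    # Each section becomes a "# HEADER" line followed by its body.
    return "\n".join(f"# {name}\n{body.strip()}" for name, body in sections)

prompt = build_prompt(
    role="You are an investigative journalist.",
    task="Discuss the specific comments, policy proposals, and actionable outcomes from the input event.",
    input_="White House dinner with Big Tech Executives. Date: September 4",
    output="A markdown table with one row per responsible party...",
    warnings="Focus on hard facts and agreements, not general discussion points.",
)
# `prompt` can now be pasted into ChatGPT or sent through any LLM API client.
```

The payoff of templating is consistency: every step in this article reuses the same section skeleton, so the only thing that changes between steps is the content of each section.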

Result (read the first response):

Here’s what we found when distilling the meeting down to actionable commitments:

  • White House (President Trump): Clear signal on expedited permitting and power interconnects for AI infrastructure, making it easier to secure electricity and permits for data centers.
  • Meta (Mark Zuckerberg): Pledged at least $600B through 2028 in US infrastructure and data centers, implying enormous GPU demand (~1.3M GPUs by year‑end).
  • Apple (Tim Cook): Reiterated a $600B US commitment, tied to its American Manufacturing Program (server builds, chip packaging in Arizona, Corning glass expansion, rare‑earth sourcing, plus 20,000 new U.S. jobs).
  • Google (Sundar Pichai): Committed $250B in US spend over two years; separately pledged $1B for education/job training, with $150M for AI education and free Gemini for Education access.
  • Microsoft (Satya Nadella): Said $75–80B annual US spend; plus free Copilot/LinkedIn Learning programs and educator prizes under the AI Challenge.

The trillion‑dollar‑scale pledges ($600B/$250B) are envelopes, not incremental capex guidance. Treat them as directional. The single clearest near‑term policy tailwind is permitting and power support, which could accelerate grid interconnects, power equipment orders, and data‑center development. Education pledges are strategically useful for workforce building and platform adoption, but less material to near‑term P&L.
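As a quick sanity check on scale, summing the headline figures quoted above shows why "envelopes" is the right word. Note the back-of-envelope mixes time horizons (multi-year pledges alongside Microsoft's annual run rate, which I simplify to the midpoint of the stated $75–80B range), which is exactly why these numbers are directional rather than comparable capex guidance.

```python
# Headline U.S. pledges quoted above, in billions of dollars.
pledges_bn = {
    "Meta": 600,        # through 2028
    "Apple": 600,       # American Manufacturing Program commitment
    "Google": 250,      # over two years
    "Microsoft": 77.5,  # midpoint of the $75-80B *annual* figure (one year only)
}

total_bn = sum(pledges_bn.values())
print(f"Combined headline pledges: ${total_bn:,.1f}B (~${total_bn / 1000:.1f}T)")
```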

Step 2: Understanding the Value Chain

What does it mean to invest in AI? Well, it can mean lots of things considering how deep the value chain goes. The number of beneficiaries from the tailwinds is much greater than is obvious.

Samuel Brannan became California’s first millionaire not by mining gold, but by selling supplies to miners. Levi Strauss struck it rich selling durable jeans, not swinging a pickaxe. The California Gold Rush taught a timeless lesson: the greatest fortunes often come from understanding the ecosystem around the boom, not chasing the boom itself. AI is no different. Before we can figure out what to invest in, we need to understand the entire value chain: who provides the steel, servers, and electricity as much as who builds the models. That perspective will help us later map first, second, and third-order effects.

We can now use ChatGPT Deep Research to do this mapping work for us, surfacing the players, contracts, and bottlenecks that matter most.

# ROLE
You are a McKinsey consultant specializing in industry analysis and competitive intelligence. Your expertise lies in deconstructing complex industries into their core value chains, identifying key players, mapping economic flows, and pinpointing strategic opportunities and risks. Your analysis is clear, structured, and insightful, suitable for strategic decision-makers.
# TASK
Perform a comprehensive value chain analysis for the specified industry. The goal is to map the entire process from raw material inputs to the final product or service delivery to the end consumer. For each major stage of the value chain, you must identify the key activities, major players, economic drivers, and where value is created.
# INPUT
Target Industry: US AI industry
# OUTPUT STRUCTURE
Please generate the analysis in a structured report format with the following sections:
1. Executive Summary
- A concise, high-level overview of the industry’s value chain.
- Briefly describe the flow of value and identify the most and least profitable segments.
2. Industry Primer
Provide a brief (5-6 sentence) description of the industry, its primary function, and its overall market size/significance.
3. Core Value Chain Analysis
Break down the industry into its primary stages.
For each stage, provide the following details in a clear, bulleted format:
- Stage Name: (e.g., Raw Material Sourcing, R&D and Design, Manufacturing, Distribution & Logistics, Marketing & Sales, After-Sales Support). Use the nomenclature common to this industry
- Key Activities: What specific actions and processes occur in this stage?
- Major Players: List 3-5 key companies or types of companies that dominate this stage.
- Economic Drivers & Profitability: How do companies make money here? What are the main cost drivers? Is this a high-margin or low-margin segment? Explain why.
4. Visualization: Value Chain Map
Create a simple, text-based flow diagram or a markdown table to visually represent the stages of the value chain and the key players in each.
5. Key Trends & Disruptors
Identify 3-5 major trends (technological, regulatory, consumer behavior, etc.) that are currently impacting or disrupting the entire value chain. For each trend, briefly explain its impact.
# WARNINGS
This analysis should be based on publicly available information and general industry knowledge. The value chain is a simplified model; in reality, many companies are vertically integrated across multiple stages. This is an educational overview and does not constitute investment or financial advice.
# AUDIENCE & TONE
- Audience: A public markets investor who is knowledgeable about business but is not an expert in this specific industry.
- Tone: Analytical, educational, and professional. Use clear and precise language, avoiding overly technical jargon where possible. Structure the response for maximum clarity and scannability.

Result (read the second and third responses):

Here’s a simple way to think about the AI value chain, broken into six layers:

  • Data Acquisition & Preparation: The starting point of AI is gathering and refining raw data: text, images, audio, and more. Companies like Scale AI provide annotation services, while Google and Meta rely on proprietary user data. This stage is essential but labor-intensive, with lower margins because it often depends on human labeling. The firms that can automate or access unique, high-quality datasets can create powerful moats.
  • Model R&D & Training: This is where raw data and compute turn into intelligence. It’s what most people think of when they think of AI companies. Labs like OpenAI, Anthropic, and Google DeepMind, along with big tech research arms, design and train frontier models. Training requires top-tier researchers and massive infrastructure: tens of millions of dollars per model. It’s risky and capital-intensive, but successful models become strategic assets that can be monetized via APIs, licensing, or integration into broader ecosystems.
  • Hardware (Chips & Equipment): At the core of AI performance are GPUs, TPUs, and accelerators. These are specialized semiconductor chips designed to run complex algorithms in parallel. NVIDIA dominates, with AMD, Intel, and in-house chips from Google and Amazon playing key roles. On the manufacturing side, leading semiconductor foundries such as TSMC are critical for producing advanced AI chips (even though not US-based, they supply US industry demand). Revenue in this layer is driven by volume sales of chips and equipment to cloud providers and large enterprises building AI capacity. This layer is currently the most profitable, thanks to differentiated technology and relatively low competition: the compute and hardware stack is a lucrative choke point in the AI value chain, capturing a significant share of value because every AI model and service ultimately runs on these physical processors.
  • Cloud & Data Infrastructure: Hyperscalers AWS, Azure, Google Cloud rent out compute power and operate the vast data centers where AI runs. They provide the cloud computing platforms and data infrastructure that host AI development and deployment. This stage involves operating hyperscale data centers packed with AI hardware, and offering on-demand computing power, storage, and AI services to customers. They’re joined by platforms like Snowflake and Databricks that manage data at scale. Once built, these services enjoy economies of scale and sticky, usage-based revenue. The cloud layer is critical for scaling adoption, capturing value by aggregating demand across industries.
  • Applications & Solutions: This is where AI meets consumers and enterprises, through copilots, assistants, and industry-specific tools. Big firms like Microsoft, Google, and Adobe compete here, as do vertical players like Hebbia and Rogo for finance or Harvey for law. The TAM is massive, but competition is fierce and many offerings risk commoditization. Differentiation comes from proprietary data and deep workflow integration.
  • Services & Integration: Beyond products, companies need help adopting AI responsibly. Consultants like Accenture and Deloitte, plus cloud providers’ professional services teams, handle integration, training, and compliance. It’s less scalable and margins are lower than software, but demand is strong as enterprises lean on outside expertise. These services are crucial for turning AI technology into real business outcomes.

Equally important is the massive data center infrastructure: servers, storage systems, networking, and cooling solutions built by tech giants like AWS, Azure, and Google. These facilities are the backbone of AI, consuming enormous amounts of power and requiring sophisticated thermal management. The build-out doesn’t just stay within tech: it ripples into heavy industries like construction, steel, and electricity generation, since each new site demands concrete, steel, and huge amounts of reliable power. In other words, the AI boom is driving an industrial boom alongside it, linking digital growth directly to America’s physical economy.

So now we know how the industry value chain works. What do we do next? We need to see how the heavy hitters, Big Tech, are actually spending to assess where the most focus is along the chain.

Step 3: Corroborating with Public Filings

Now that we know how the value chain works, we also know what to look for in the expenditures of the best‑capitalized players in tech. Let’s look at the heavy hitters from the White House dinner: NVDA and AMD on the compute side, and the giants Google, Meta, Microsoft, Apple that span models, data, and applications. The question is: what did Zuckerberg and Tim Cook really mean when they each said they would invest $600B in the U.S.? To answer that, we need to see how those pledges line up against what management teams have already told us in past earnings calls and filings. Where are they actually allocating capex and P&L dollars? By following the money, we can understand what to expect, how the dominoes will fall, and later map the second‑ and third‑order effects to uncover the less obvious opportunities.

So let’s run a deep research prompt on GPT‑5 to dig in. Gemini also works well for this type of prompt.

# ROLE
You are a Senior Equity Research Analyst with a specialization in the TMT (Technology, Media, and Telecom) sector. Your core competency is performing deep forensic analysis of public SEC filings (10-K, 10-Q) and earnings call transcripts to uncover specific, data-driven insights into a company’s strategic initiatives and capital allocation. You excel at finding direct figures, making well-reasoned estimates from broader data, and synthesizing management commentary into a clear investment narrative.
# TASK
Analyze the public disclosures for the listed companies to identify and quantify their investments in Artificial Intelligence (AI) and related strategic initiatives. You must extract specific dollar amounts where available, provide direct quotes from management, and structure the findings into a detailed brief for each company.
# INPUT
- Target Companies (Tickers): NVIDIA (NVDA), Advanced Micro Devices (AMD), Apple (AAPL), Meta Platforms (META), Microsoft (MSFT), Alphabet (GOOGL)
- Primary Data Sources: For each company, analyze the most recent 10-K annual report, all subsequent 10-Q quarterly reports, and earnings call transcripts from the last four fiscal quarters.
# OUTPUT STRUCTURE
Generate a detailed, separate brief for each company using the following strict format. If a specific data point is not explicitly disclosed, state “Not Explicitly Disclosed” and provide the closest available context or data.
[Company Name] ([Ticker]) - AI Investment Brief
1. Executive Summary
A concise paragraph summarizing the company’s overall AI investment strategy, the scale of its spending (e.g., billions in infrastructure), and the primary focus of its initiatives as described by management.
2. Quantitative Analysis: Estimated AI-Related Expenditures
Present the findings in a markdown table. Focus on figures from the last twelve months (LTM) or the most recent reporting period.
| Expenditure Category | Reported / Estimated Value ($) | Source & Filing Reference | Direct Management Quote or Commentary |
| :--- | :--- | :--- | :--- |
| **Capex: Data Centers & Infrastructure** | e.g., $10.5B | 10-K, p. 45 | "Our capital expenditures were primarily for servers and data center infrastructure..." |
| **Capex: GPUs, TPUs, Custom Silicon** | *Estimate or specify if included above* | Earnings Call Q4 | "We significantly increased our investment in compute to support our AI development..." |
| **Personnel (AI/ML Specific)** | Not Explicitly Disclosed | N/A | *Note any commentary on hiring trends for AI talent.* |
| **Model Training & Compute Costs** | Not Explicitly Disclosed | N/A | *Note any qualitative statements on the rising cost of training foundation models.* |
| **Electricity & Utilities** | Not Explicitly Disclosed | N/A | *Note any discussion of energy efficiency or data center power usage.* |
| **Other AI-Related Investments (e.g., M&A)** | e.g., $500M | 10-Q, Note 3 | "Acquisition of AI startup 'InnovateAI' to bolster our research division." |
3. Qualitative Analysis: Strategic Commentary from Management
Synthesize key themes from earnings calls and filings. Use direct quotes where possible.
- Core Strategic Narrative on AI: What is management’s high-level story about AI’s importance to the company’s future? How are they framing their competitive advantage in the “AI race”?
- Key AI Products & Initiatives Mentioned: List the specific AI-driven products or services that management highlighted (e.g., Copilot, Gemini, Meta AI). Provide any commentary on the performance or adoption of these products.
- Forward-Looking Guidance & Outlook: What has management guided for future capex related to AI? Are they signaling an acceleration or deceleration in AI-related spending? Provide direct quotes on their outlook.
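A side benefit of asking for a markdown table in the OUTPUT structure is that the response is trivial to parse back into structured records for your own spreadsheets or screens. Here is a minimal parser sketch, assuming a well-formed pipe table (real LLM output can be messier, so production use would need more defensive handling, e.g., for cells containing `|`):

```python
def parse_markdown_table(text):
    """Parse a pipe-delimited markdown table into a list of dicts (header -> cell)."""
    rows = [line.strip() for line in text.strip().splitlines()
            if line.strip().startswith("|")]
    cells = [[c.strip() for c in row.strip("|").split("|")] for row in rows]
    header, body = cells[0], cells[2:]  # cells[1] is the :--- separator row
    return [dict(zip(header, row)) for row in body]

# A toy two-column slice of the template above.
sample = """
| Expenditure Category | Reported / Estimated Value ($) |
| :--- | :--- |
| Capex: Data Centers & Infrastructure | $10.5B |
"""
records = parse_markdown_table(sample)
```

From here, `records` can be loaded straight into pandas or a spreadsheet for cross-company comparison.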

Result (read the fourth response):

If you don’t want to read all of it, you could just ask the LLM to synthesize where the industry is spending most. We go back over to ChatGPT and run the following prompt.

Based on the previous response, your task is to synthesize where in the AI value chain each player and the industry in general is spending most and strategically focused. Answer in full sentences. Make sure to use an explanatory tone, avoiding industry jargon, such that investors who are not super familiar with AI industry terminology are able to follow.

The picture is straightforward: most dollars are flowing into infrastructure. Across the sector, the biggest checks are for (1) specialized AI chips, (2) the servers and networking gear that house them, and (3) the data‑center buildings and power needed to run them. Think of this as “paving the highways” for AI. Companies are also locking in future chip supply (long‑term purchase commitments) and signing multi‑year cloud‑compute contracts to train and run models. By comparison, spending on headcount, acquisitions, and data labeling is meaningful but small relative to the infrastructure build‑out.

The chess game that Big Tech is playing is to secure scarce compute, expand capacity fast, and push AI features into core products. Basically expand compute and power today, and reap the rewards with better models and applications in the future. The near‑term constraint is not ideas; it’s capacity and power. That’s why you see eye‑popping capital budgets for servers and data centers, plus prepayments to foundries and packaging partners to guarantee chip availability.

Step 4: Sequencing the Dominoes

We now clearly understand that most investment today flows upstream into the physical infrastructure of AI: chips, data centers, and power. But what we really need to grasp next is the ripple effect of these investments. AI represents a huge economic tailwind for America, and it won’t just benefit those building chips or data centers; the impact will extend far beyond the core infrastructure.

To really capitalize on this, we need to map out exactly how and when these investments will influence other sectors, who will benefit next, and who will benefit after that. By understanding the timing and nature of these ripple effects, we can develop a clear framework to evaluate whether a company truly has an advantage. The goal is to clearly understand how opportunities shift over time, allowing us to identify winners early and see beyond the obvious first-order beneficiaries.

Explain the first, second, and third-order effects of this event: Big tech companies are spending heavily on the infrastructure layer needed to run artificial intelligence: things like chips, servers, data centers, power. Use the context from this chat.
For each stage (first, second, and third-order effects), do the following:
1. Explain in plain English what this stage means (first-order = the direct impact, second-order = the ripple effects that follow, third-order = the long-term transformation).
2. Describe the types of companies that are impacted, what they actually do, and why they benefit (or lose). Assume the reader does not know how this industry works; be very clear and simple. For example, don’t just say “data center builders”; explain that these are companies that design and construct the massive warehouse-like facilities where AI servers are housed.
3. Explain the nature of the impact. Is it a short-term surge in demand? A new source of recurring revenue? A risk to existing business models?
4. Give key ways for an investor to think about it. What should they watch for? How do they know if a company really has an edge? What risks or bottlenecks might appear?
The goal is to make the whole chain reaction understandable to a smart investor who is not an industry expert; showing who wins at each stage, why they win, and how the opportunity shifts over time.

Result (read the fifth response):

Here’s how the dominoes will fall as Big Tech pours billions into AI infrastructure.

  • First-order effects: Big Tech is writing huge checks for the “picks and shovels” of AI: specialized chips, the racks of servers that hold them, the warehouse‑like buildings that house those servers, and the electricity and cooling to run them. The immediate beneficiaries of Big Tech’s spending: the chip designers, memory and networking suppliers, server builders, data center developers, and power/cooling vendors. This immediately drives orders, backlogs, and prices across the physical stack. They benefit because scarcity gives them pricing power and long‑term visibility through multi‑year contracts and prepayments. On the flip side, the ones left behind are general‑purpose server vendors with little AI exposure and data centers that can’t secure enough power to scale—both risk losing relevance as the industry races ahead.
  • Second-order effects: Once the first wave of hardware orders hits, the shock spreads to the grid, construction, real estate, and connectivity, as companies upstream and downstream reorganize to serve AI data centers: utilities, independent power producers, EPC contractors, fiber carriers, and permitting specialists. Financing, permitting, and long‑lead equipment become as critical as chips. In practical terms, second‑order players benefit from long‑term power contracts, multi‑year equipment order books, and rising demand for high‑capacity connectivity. Utilities lock in recurring revenue through PPAs, contractors and equipment vendors secure backlog visibility, and fiber providers see traffic growth. The flip side is that delays in grid interconnections, regional water scarcity, or tougher permitting rules can slow projects and push revenue out. This stage is less about immediate scarcity pricing and more about recurring infrastructure economics, and the execution risks that come with it.
  • Third-order effects: As capacity scales, AI becomes woven into daily software and business workflows. Compute prices may gradually fall, models get more efficient, and the center of gravity shifts from “can we get capacity?” to “who turns capacity into durable, high‑margin products?” Winners move up the stack, from hardware scarcity to software, data advantages, and distribution.

Step 5: Generating Specific Investment Ideas

Finally, armed with clarity from previous steps, we can pinpoint specific investment opportunities. We’ve now mapped the ripple effects across the value chain and identified the types of companies that stand to benefit at each stage. The next step is to focus on finding concrete long ideas: names that can capture upside from the first‑, second‑, and third‑order effects we outlined.

To do this, we’ll run a deep research prompt to sift through potential candidates.

# ROLE
You are a skeptical, thesis-driven Hedge Fund Analyst. Your job is to synthesize detailed industry analysis into a curated list of high-conviction, actionable investment ideas. You focus on identifying companies with a clear, material exposure to a trend and a defensible edge.
# CONTEXT
You will analyze the following detailed breakdown of the chain reaction from Big Tech’s heavy spending on AI infrastructure. This is your primary source material.
First‑order effects (direct impact: what happens now)
* What this stage means: Big Tech is writing huge checks for the “picks and shovels” of AI—specialized chips, servers, data centers, and the electricity/cooling to run them. This immediately drives orders and backlogs across the physical stack.
* Who is impacted: AI chip designers, memory/networking suppliers, server manufacturers, data center developers, and power/cooling equipment vendors.
* Nature of the impact: A short‑term surge in demand and pricing power for scarce components. Multi‑year visibility from long-term agreements. Key bottlenecks are chip packaging, transformers, and grid interconnects.
Second‑order effects (ripples: the ecosystem rearranges)
* What this stage means: The shock spreads to the grid, construction, and real estate. Financing, permitting, and long‑lead equipment become as critical as chips.
* Who is impacted: Electric utilities, independent power producers, grid equipment makers, EPC contractors, and telecom/fiber backbones.
* Nature of the impact: Recurring revenue via long‑dated power purchase agreements (PPAs). Cyclicality and schedule risk in construction. High regulatory exposure.
Third‑order effects (long‑term transformation: who captures value)
* What this stage means: AI becomes woven into daily software. The center of gravity shifts from hardware scarcity to who can turn compute capacity into durable, high‑margin software products.
* Who is impacted: AI‑powered software platforms (productivity, CRM, etc.) and enterprises with proprietary data.
* Nature of the impact: A shift from scarcity pricing to scale economics. Business model risks from platform bundling. Policy and sustainability concerns (carbon, water) become more prominent. —
# TASK
Your task is to identify 8–10 compelling LONG investment ideas that will directly benefit from the effects described in the CONTEXT.
- Be Selective: Focus on public companies with clear, material exposure to the trend.
- Span the Orders: Ensure your list includes a mix of First, Second, and Third-order beneficiaries to capture the trend at different stages.
- Use Plain English: Assume the reader is a generalist investor. Avoid jargon.
# OUTPUT STRUCTURE
For each of the 8-10 companies, generate a concise brief using the following strict format:
[Company Name] ([Ticker])
Market Cap:
Order of Effect: (First / Second / Third)
What They Do: (A simple, 1-2 sentence description of their business).
Long Thesis: (2-3 bullets explaining why they benefit from the trend and what their specific competitive edge is).
Potential Catalysts: (1-2 bullets on specific, near-term events that could cause the stock to re-rate, e.g., major contract wins, backlog updates, product launches, regulatory approvals).
Key Risks: (1-2 bullets on the most significant risks to the thesis, e.g., customer concentration, project delays, competition, a slowdown in tech capex).
Key Diligence Questions: (1-2 bullets on the most critical questions an analyst must answer to validate the investment thesis).

Result (read the last response):

Here are a few concrete examples of long ideas across the first, second, and third‑order effects:

  • Nvidia (NVDA): Nvidia is central to AI growth as the primary designer of GPUs: the essential chips for running AI models. Its industry-leading performance, software ecosystem (CUDA), and strategic chip supply deals give it exceptional pricing power and order visibility, making it an indispensable supplier in the AI boom.
  • Arista Networks (ANET): Arista provides high-speed networking hardware critical for connecting servers within AI data centers. With increasing market share and strong relationships with cloud giants, it directly benefits from the rapid build-out of AI infrastructure.
  • Super Micro Computer (SMCI): Supermicro specializes in assembling servers optimized for AI workloads, particularly leveraging advanced GPUs and efficient cooling technologies. Its agility in quickly adopting the latest components makes it a preferred vendor for large hyperscalers and government-backed AI projects.
  • Equinix (EQIX): As the world’s largest provider of data center colocation services, Equinix benefits from surging demand driven by AI workloads. Its partnership with Nvidia offers turnkey AI infrastructure, positioning it strongly as companies rapidly expand their AI compute capacity.
  • Vertiv (VRT): Vertiv manufactures essential power and cooling equipment needed to operate high-density AI data centers. Its strong backlog growth and specialized expertise in efficient liquid cooling solutions provide a strategic advantage in an increasingly tight market.
  • NextEra Energy (NEE): NextEra is a leader in renewable energy generation, with a large pipeline of wind and solar projects. Growing AI-related electricity demand from data centers positions NextEra as a key supplier of renewable power through long-term power purchase agreements, benefiting from increased sustainability goals in the tech industry.
  • Quanta Services (PWR): Quanta specializes in constructing the critical electrical infrastructure and high-capacity telecom networks that support new AI data centers. Its significant scale and proven expertise in high-voltage transmission and fiber deployment uniquely position it to benefit from expanding grid and telecom requirements driven by AI.
  • Microsoft (MSFT): Microsoft integrates advanced AI capabilities across its Office productivity suite and Azure cloud platform, monetizing these enhancements through recurring subscriptions and increased cloud usage. Its strategic partnership with OpenAI provides a competitive first-mover advantage, making it a clear beneficiary in the AI-driven software transformation.
  • Adobe (ADBE): Adobe has successfully embedded generative AI (“Firefly”) within its creative software tools like Photoshop and Illustrator. Its control over proprietary, licensed training data and deep integration within user workflows create significant competitive moats, enabling Adobe to drive additional subscription revenue and retain customers.
  • Palantir (PLTR): Palantir provides secure, enterprise-grade AI platforms tailored for sensitive and regulated environments, such as government and commercial clients. Its Artificial Intelligence Platform (AIP) is gaining substantial adoption due to its robust security, governance, and ease of integration, positioning Palantir uniquely to capitalize on enterprise AI adoption.

These names represent the kinds of companies that could benefit directly from AI infrastructure build‑outs, the ripple effects into power and connectivity, and the long‑term shift toward software and data moats. But it’s important to emphasize: this thematic process generates top‑of‑funnel ideas, not final investment decisions. The work doesn’t stop here; each company needs to be diligenced one by one, digging into financials, competitive positioning, and key diligence questions before putting real capital to work. How you think about the universe of opportunities will also differ depending on whether you lean more toward value or growth investing. Thematic frameworks like this are about surfacing the less obvious potential winners; the diligence and conviction building comes next.

Conclusion: The Bottom Line

The White House dinner was more than a photo‑op; it was a turning point that made clear America’s AI push is real. By working through a structured framework and leaning on AI research tools, we went from vague headlines to a grounded understanding of what investing in AI actually means. The money is flowing upstream into chips, data centers, servers, and power. And that’s where the story begins.

The next phase of the AI race isn’t just about new breakthroughs in labs. It’s about the physical build‑out: the servers humming in warehouses, the power lines feeding them, the cooling systems keeping them alive. In many ways, chips and data centers are today’s steel mills and railroads: the infrastructure on which the digital economy will be built. The future of AI isn’t only about smarter algorithms; it’s about construction, energy, and the heavy industries that have always powered America forward.

Enjoyed this article?

Get more AI for finance content delivered to your inbox.