
Tesla’s chip roadmap just got clearer. Elon Musk said the design review for the next-generation AI5 chip is finished, and Tesla plans production with both TSMC and Samsung. At the same time, Tesla is already mapping out AI6, a follow-on chip aimed at roughly twice the performance of AI5 and expected for volume production around mid-2028. These moves matter because Tesla uses its own chips for self-driving, data centres, and Optimus robots, so faster, domestic supply of chips is central to its strategy.
Elon Musk: At Tesla, we basically had two different chip programs: Dojo on the training side, and then what we call AI4, which is our inference chip
— X Freeze (@XFreeze) November 5, 2025
The AI4 is what's currently shipping in all vehicles, and we're finalizing the design of AI5, which will be an… pic.twitter.com/9YM2KJBXK9
Why Tesla builds its own AI chips
Tesla designs its chips in-house, matching hardware and software tightly. This helps it run large neural networks efficiently for Full Self-Driving (FSD) and for AI model training. Tesla argues that co-designing hardware and software yields much more performance per watt. Using internal chips also helps control costs and supply, which is vital when demand for AI compute is booming across the industry. The AI5 and planned AI6 chips are part of that push.
Production: two foundries, slightly different parts
Elon Musk and subsequent reporting indicate that Tesla will split AI5 production between TSMC and Samsung. Each foundry will make a slightly different version, the same design translated to each fab's process, so Tesla can scale volume and reduce single-source risk. Limited or sample production is expected in 2026, with mass production ramping in 2027. For AI6, Tesla and its partners are aiming for mid-2028 volume output, using the same foundries for continuity of supply. This split approach also lets Tesla draw on both suppliers' strengths and avoid bottlenecks.
What AI6 promises, compared to AI5
Tesla claims that AI6 will deliver about 2× the performance of AI5. In practice, that means models will run faster, training will be quicker, and on-device tasks can handle larger or more complex networks. For cars, that could mean richer perception stacks and better real-time decision-making; for robots like Optimus, it could translate to smoother motion control and more advanced behaviors. Doubling performance per generation is a common industry goal, and it keeps Tesla competitive as model sizes and compute needs expand.
Why the timeline and partners matter for industry and policy
The choice of foundries and timing has wider implications. By selecting Samsung's Texas fab and TSMC's US facilities, Tesla taps foundries located in or near the US, in line with supply-chain and policy priorities that favor domestic chipmaking. These choices also shape market dynamics: huge orders for AI5/AI6 chips alter capacity planning at Samsung and TSMC and can shift pricing or availability across the industry. For governments, large domestic contracts like these help justify investments in local fabs and skills.
Practical implications for Tesla products
- Full Self-Driving (FSD): More compute allows more complex neural nets, extra sensors, or redundancy layers to enhance capability and safety.
- Optimus robots: Higher-throughput chips help with perception, balance, and real-time control needed for humanoid robots.
- Dojo/training fleets: Faster chips speed up model training cycles and reduce cloud costs.
- In-car features: Improved on-device AI makes it possible to offer richer voice or gesture interfaces without sending data to the cloud.
All these product gains depend on software that uses the hardware well; Tesla's advantage is its integrated stack.
Risks and open questions
A few risks remain. First, production delays or yield problems at the foundries could push timelines; Musk indicated AI5 sample runs in 2026 and mass production in 2027, but those dates can shift. Second, translating one design across two fabs requires careful validation to ensure identical behavior. Third, competition for advanced nodes is fierce: other companies also need leading-edge fabs. Finally, the real performance gain depends not just on chip speed but also on memory, interconnect, and software efficiency; if any of those lag, the full benefit of AI6 may not materialize.
What to watch next
- Sample validation: Look for early AI5 samples in 2026 and benchmark reports.
- Foundry updates: Samsung and TSMC statements on yields and fab readiness.
- Musk posts and Tesla earnings calls: Musk often gives timing updates, and investor calls may reveal production targets.
- Industry reaction: How Tesla's fab choices and order volumes affect capacity planning at both suppliers and the responses of competitors.
Conclusion
Tesla's completion of the AI5 design review and its public roadmap to AI6 show that the company is serious about owning the hardware layer of future AI systems. Partnering with both Samsung and TSMC reduces risk and increases scale. If timelines hold and yields are healthy, AI5 will enter limited production in 2026 and mass volumes in 2027, while AI6 could deliver a 2× step up by mid-2028. For Tesla, that's not just faster chips; it's infrastructure for the next generation of self-driving cars and robots.

