Peak XV Partners has led a 15 million dollar Series A round in Bengaluru-based semiconductor startup C2i, placing a calculated bet on what may become the defining constraint of the AI era. The firm’s thesis is blunt: the next bottleneck in AI infrastructure is not GPU supply but power delivery efficiency.
If that view proves correct, C2i’s attempt to redesign how electricity moves from the grid to the accelerator rack could matter far beyond its current size. The company is positioning itself not as another component vendor but as a system-level power architecture player. That is an ambitious lane to choose.
C2i Semiconductors has raised 15 million dollars in a Series A round led by Peak XV Partners, with participation from Yali Deeptech and TDK Ventures. The round brings the company’s total funding to about 19 million dollars since its founding in 2024.
The Bengaluru-headquartered startup has built a team of roughly 65 engineers and is expanding customer-facing operations in the United States and Taiwan. The goal is straightforward: get close to hyperscalers and large data center operators early in the validation cycle.
• Series A size: 15 million dollars
• Total funding to date: about 19 million dollars
• Headquarters: Bengaluru
• Team size: about 65 engineers
• Focus: end-to-end power delivery for AI data centers
C2i, short for Control, Conversion and Intelligence, is building plug-and-play power delivery systems that span the full path from the data center power bus to the GPU. That positioning matters because most current solutions optimize only individual stages, such as power supply units or board-level converters.
In modern AI data centers, electricity typically passes through multiple voltage conversion steps before it reaches the accelerators. Each step introduces losses. According to the company’s leadership, the current stack can waste roughly 15 to 20 percent of incoming power before it ever reaches the GPU.
C2i’s integrated approach aims to cut end-to-end losses by about 10 percentage points. On paper, that translates to roughly 100 kilowatts saved for every megawatt consumed. In hyperscale environments, those numbers compound quickly.
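The arithmetic behind those claims can be sketched as a chain of stage efficiencies that multiply together. The specific stage values below are illustrative assumptions chosen to land in the 15 to 20 percent loss range the company describes; they are not C2i's actual figures or architecture.

```python
# Illustrative model of cascaded power-conversion losses in a data center.
# The stage efficiencies below are assumptions for illustration only,
# not figures disclosed by C2i.

def delivered_fraction(stage_efficiencies):
    """Fraction of input power that survives a chain of conversion stages."""
    frac = 1.0
    for eff in stage_efficiencies:
        frac *= eff
    return frac

# A hypothetical legacy chain: UPS, rack PSU, intermediate bus converter,
# point-of-load regulator.
legacy = [0.96, 0.94, 0.96, 0.94]
# A hypothetical integrated chain with fewer, more efficient stages.
integrated = [0.98, 0.97, 0.97]

loss_legacy = 1 - delivered_fraction(legacy)          # about 18.6% lost
loss_integrated = 1 - delivered_fraction(integrated)  # about 7.8% lost

# Savings per megawatt of facility power, expressed in kilowatts.
saved_kw = (loss_legacy - loss_integrated) * 1000

print(f"legacy loss: {loss_legacy:.1%}")
print(f"integrated loss: {loss_integrated:.1%}")
print(f"saved per MW: {saved_kw:.0f} kW")
```

Under these assumed numbers the savings come out near 108 kilowatts per megawatt, in the same ballpark as the roughly 100 kilowatts per megawatt the company cites. The point of the sketch is the shape of the math: because stage efficiencies multiply, removing or improving even one conversion stage moves the end-to-end figure noticeably.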
The investment reflects a broader shift in how large AI infrastructure is being evaluated. For years, the industry narrative focused almost entirely on compute scarcity. That conversation is now widening.
Several macro signals are driving the change:
• Data center electricity demand is projected to surge sharply over the next decade
• Some hyperscale projects have already faced delays due to grid capacity limits
• Major cloud players are increasingly constrained by local power availability
• Cooling and thermal management costs continue to climb alongside GPU density
BloombergNEF projections suggest global data center electricity use could nearly triple by 2035. Separately, Goldman Sachs Research estimates power demand from data centers could rise about 175 percent by 2030 compared with 2023 levels.
In that context, even modest efficiency gains become economically meaningful.
Peak XV managing director Rajan Anandan framed the opportunity in practical terms. After the capital expense of servers and facilities, energy becomes the dominant recurring cost in AI data centers. Any meaningful reduction in energy loss scales into very large dollar savings at global deployment levels.
From an investor perspective, several elements likely made C2i attractive:
• The company is targeting a structural bottleneck rather than incremental optimization
• Its approach spans silicon, packaging, and system architecture
• Power delivery remains deeply inefficient in many deployments
• Hyperscalers are actively searching for efficiency gains
The flip side is equally clear. Power delivery infrastructure is conservative, qualification cycles are long, and incumbents are well entrenched. This is not a quick validation market.

C2i expects its first two silicon designs to return from fabrication between April and June 2026. That milestone will mark the beginning of real world validation rather than the end of the story.
The company’s near term roadmap includes:
• Benchmarking power loss improvements
• Demonstrating thermal and cooling benefits
• Working with hyperscalers on total cost of ownership models
• Validating reliability under production conditions
Even with strong early results, widespread adoption in data center power stacks typically unfolds over multiple years. Operators are cautious by design, especially when core infrastructure is involved.
Industry observers increasingly describe C2i’s approach as grid-to-GPU optimization. The phrase captures the company’s central argument: meaningful efficiency gains require redesigning the entire conversion chain, not just polishing individual components.
That systems-level framing aligns with a broader trend across AI infrastructure. As compute density rises, second order constraints such as power delivery, cooling, and physical footprint are becoming first order problems.
C2i is entering the market at a moment when those pressures are becoming difficult to ignore.
The startup’s work sits within a wider scramble to stretch limited power capacity. Across the industry, multiple strategies are emerging in parallel:
• On site energy generation including solar and gas
• Exploration of small modular nuclear options
• Advanced liquid cooling and heat reuse
• More efficient accelerator and server designs
• Power delivery optimization, where C2i is focused
No single lever will solve the coming energy crunch. The companies that win will likely be those that improve efficiency across multiple layers of the stack.
For now, C2i remains an early stage infrastructure bet. The core questions are still ahead.
Key milestones to monitor:
• Silicon performance once samples return in 2026
• Validation results with hyperscale partners
• Evidence of sustained loss reduction in production settings
• Design wins inside major data center architectures
If the company can demonstrate real world efficiency gains at scale, the upside is significant. If integration proves difficult, adoption timelines could stretch.
Peak XV’s investment in C2i reflects a growing recognition that the AI boom is colliding with physical infrastructure limits, especially around power. The startup’s grid-to-GPU approach is technically ambitious and strategically well timed.
Whether it becomes a meaningful part of future data center design will depend less on theory and more on measured performance over the next several years. In AI infrastructure, efficiency claims are easy. Proven watts saved in production environments are what ultimately matter.