Feed aggregator

AI workloads demand smarter SoC interconnect design

EDN Network - Fri, 01/16/2026 - 11:39

Artificial intelligence (AI) is transforming the semiconductor industry from the inside out, redefining not only what chips can do but how they are created. This impacts designs from data centers to the edge, including endpoint applications such as autonomous vehicles, drones, gaming systems, robotics, and smart homes. As complexity pushes beyond the limits of conventional engineering, a new generation of automation is reshaping how systems come together.

Instead of manually placing every switch, buffer, and timing pipeline stage, engineers can now use automation algorithms to generate optimal network-on-chip (NoC) configurations directly from their design specifications. The result is faster integration and shorter wirelengths, driving lower power consumption and latency, reduced congestion and area, and a more predictable outcome.

Below are the key takeaways of this article about AI workload demands in chip design:

  1. AI workloads have made manual SoC interconnect design impractical.
  2. Intelligent automation applies engineering heuristics to generate and optimize NoC architectures.
  3. Physically aware algorithms enhance timing closure, reduce power consumption, and shorten design cycles.
  4. Network topology automation is enabling a new class of AI system-on-chips (SoCs).


Machine learning guides smarter design decisions

As SoCs become central to AI systems, spanning high-performance computing (HPC) to low-power devices, the scale of on-chip communication now exceeds what traditional methods can manage effectively. Integrating thousands of interconnect paths has created data-movement demands that make automation essential.

Engineering heuristics analyze SoC specifications, performance targets, and connectivity requirements to make design decisions. This automation optimizes the resulting interconnect for throughput and latency within the physical constraints of the device floorplan. While engineers still set objectives such as bandwidth limits and timing margins, the automation engine ensures the implementation meets those goals with optimized wirelengths, resulting in lower latency and power consumption and reduced area.

This shift marks a new phase in automation. Decades of accumulated engineering heuristics are now captured in algorithms that design the very silicon enabling AI. By automatically exploring thousands of variations, NoC automation determines optimal topology configurations that meet bandwidth goals within the physical constraints of the design. This front-end intelligence enables earlier architectural convergence and provides the stability needed to manage the growing complexity of SoCs for AI applications.
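The exploration described above can be pictured as a search over candidate topologies, each scored against bandwidth goals and physical cost. The following is a minimal toy sketch, not the commercial algorithm: the switch counts, link widths, traffic figures, assumed 2 GHz clock, and cost weights are all illustrative assumptions.

```python
# Toy sketch of NoC topology exploration (illustrative only): enumerate
# candidate switch counts and link widths, reject any candidate that
# misses the bandwidth goal, and keep the lowest-cost feasible topology.
from itertools import product

# Hypothetical aggregate bandwidth demands (GB/s) between endpoints.
TRAFFIC_GBPS = {("cpu", "dram"): 32, ("npu", "dram"): 64, ("dma", "sram"): 8}

def evaluate(num_switches, link_width_bits):
    """Cost of one candidate topology, or None if it misses bandwidth."""
    link_gbps = link_width_bits * 2.0 / 8       # assumed 2 GHz NoC clock
    demand_per_switch = sum(TRAFFIC_GBPS.values()) / num_switches
    if demand_per_switch > link_gbps:           # infeasible: link saturated
        return None
    wire_cost = num_switches * link_width_bits  # proxy for total wirelength
    contention_cost = 100.0 / num_switches      # proxy for queuing latency
    return wire_cost + contention_cost

def explore():
    """Enumerate all candidates; keep the lowest-cost feasible one."""
    best = None
    for n, w in product(range(1, 9), (32, 64, 128, 256)):
        cost = evaluate(n, w)
        if cost is not None and (best is None or cost < best[0]):
            best = (cost, n, w)
    return best

cost, n, w = explore()
print(f"chosen topology: {n} switches, {w}-bit links (cost {cost:.1f})")
```

Real NoC generators explore far richer spaces (pipeline stages, clock domains, routing), but the shape is the same: constrain by performance goals, minimize physical cost.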

Accelerating design convergence

In practice, automation generates and refines interconnect topologies based on system-level performance goals, eliminating laborious, repeated manual engineering adjustments, as shown in Figure 1. These capabilities enable rapid exploration and convergence across multiple design configurations, shortening NoC iteration times by up to 90%. The benefits compound as designs scale, allowing teams to evaluate more options within a fixed schedule.

Figure 1 Automation replaces manual NoC generation, reducing power and latency while improving bandwidth and efficiency. Source: Arteris

Equally important, automation improves predictability. Physically aware algorithms recognize layout constraints early, minimizing congestion and improving timing closure. Teams can focus on higher-level architectural trade-offs rather than debugging pipeline delays or routing conflicts late in the flow.

AI workloads place extraordinary stress on interconnects. Training and inference involve moving vast amounts of data between compute clusters and high-bandwidth memory, where even microseconds of delay can affect throughput. Automated topology optimization balances traffic flows to maintain consistent operation under heavy loads.

Physical awareness drives efficiency

In 3-nm technologies and beyond, routing wire parasitics are a significant factor in energy use. Automated NoC generation incorporates placement and floorplan awareness, optimizing wirelength and minimizing congestion to improve overall power efficiency.

Physically guided synthesis accelerates final implementation, allowing designs to reach timing closure faster, as Figure 2 illustrates. This approach provides a crucial advantage as interconnects now account for a large share of total SoC power consumption.
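To illustrate what floorplan awareness buys, consider the simplest possible placement decision: where to put one switch so that total Manhattan wirelength to its endpoints is minimized. This is a toy sketch under stated assumptions (hypothetical coordinates in mm, a single switch, exhaustive grid search); production flows optimize far more than one placement.

```python
# Toy placement-aware sketch: endpoint coordinates (hypothetical, mm)
# are fixed by the floorplan; grid-search the switch location that
# minimizes total Manhattan wirelength, a first-order proxy for both
# routing power and latency at advanced nodes.

ENDPOINTS = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0), (2.0, 5.0)]

def total_wirelength(sx, sy):
    """Sum of Manhattan distances from switch (sx, sy) to every endpoint."""
    return sum(abs(sx - x) + abs(sy - y) for x, y in ENDPOINTS)

def best_switch_location(step=0.5, extent=5.0):
    """Exhaustive grid search over the floorplan at the given pitch."""
    n = int(extent / step)
    candidates = [(i * step, j * step) for i in range(n + 1) for j in range(n + 1)]
    return min(candidates, key=lambda p: total_wirelength(*p))

sx, sy = best_switch_location()
print(f"switch at ({sx}, {sy}): total wirelength {total_wirelength(sx, sy):.1f} mm")
```

The optimum lands at the component-wise median of the endpoint coordinates, which is exactly why a physically unaware topology, generated before placement is known, tends to leave wirelength (and therefore power) on the table.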

Figure 2 Smart NoC automation optimizes wirelength, performance, and area, delivering faster topology generation and higher-capacity connectivity. Source: Arteris

The outcome is silicon optimized for both computation and data movement. Automation enables every signal to take the best route possible within physical and electrical limits, maximizing utilization and overall system performance.

Additionally, automation delivers measurable gains in AI architectures. For example, in data centers, automated interconnect optimization manages multi-terabit data flows among heterogeneous processors and high-bandwidth memory stacks.

At the edge, where latency and battery life are critical, automation enables SoCs to process data locally without relying on the cloud. Across both environments, interconnect fabric automation ensures that systems meet escalating computational demands while remaining within realistic power envelopes.

Automation in designing AI

Automation has become both the architect and the workload. Automated systems can be used to explore multiple design options, optimize for power and performance simultaneously, and reuse verified network templates across derivative products. These advances redefine productivity, allowing smaller engineering teams to deliver increasingly complex SoCs in less time.

By embedding intelligence into the design process, automation transforms the interconnect from a passive conduit into an active enabler of AI performance. The result is a new generation of optimized silicon, where the foundation of computing evolves in step with the intelligence it supports.

Automation has become indispensable for next-generation SoCs, where the pace of architectural change exceeds traditional design capacity. By combining data analysis, physical awareness, and adaptive heuristics, engineers can build systems that are faster, leaner, and more energy efficient. These qualities define the future of AI computing.

Rick Bye is director of product management and marketing at Arteris, overseeing the FlexNoC family of non-coherent NoC IP products. Previously, he was a senior product manager at Arm, responsible for a demonstration SoC and compression IP. Rick has extensive product management and marketing experience in semiconductors and embedded software.

Special Section: AI Design

The post AI workloads demand smarter SoC interconnect design appeared first on EDN.

SST & UMC Release 28nm SuperFlash Gen 4 for Next-Gen Automotive Controllers

ELE Times - Fri, 01/16/2026 - 08:01
Silicon Storage Technology (SST), a subsidiary of Microchip Technology, and United Microelectronics Corporation (UMC), a leading global semiconductor foundry, announced the completion of qualification and release to production of SST’s embedded SuperFlash Gen 4 (ESF4) with full automotive grade 1 (AG1) capability on UMC’s 28HPC+ foundry process platform, as the automotive industry’s demand for increasingly performant vehicle controllers relentlessly drives ahead.
SST developed ESF4 in close partnership with UMC to deliver enhanced embedded non-volatile memory (eNVM) performance and demonstrated reliability for automotive controllers, while significantly reducing the number of additional masking steps compared with other foundries’ 28nm High-k/Metal-Gate (HKMG) eFlash offerings, bringing customers cost advantages and greater manufacturing efficiency.
Customers currently manufacturing automotive controller products using foundry 40nm ESF3 AG1 platforms are encouraged to explore the UMC 28nm ESF4 AG1 platform as they look to scale to the next process node.
“As automotive requirements accelerate, developers need solutions that drive efficiency, speed up time to market and satisfy stringent industry standards. To meet these needs, UMC and SST have delivered a robust 28nm AG1 solution which is now ready for the production of customer designs,” said Mark Reiten, Vice President of Microchip’s licensing business unit. “UMC has been a valuable partner for SST and SuperFlash innovation, and the companies continue to jointly address the rapidly evolving market requirements and deliver technically and economically advanced offerings.”
“As the automotive industry rapidly advances toward more connected, autonomous, and shared vehicles, the demand for highly reliable data storage and high-capacity data updates continues to grow. This has driven customer demand for scaling SuperFlash to the 28nm process,” said Steven Hsu, Vice President of Technology Development at UMC. “Through our close collaboration with SST, we have successfully launched the ESF4 solution, which has been fully integrated into the widely adopted 28HPC+ platform. This enables our customers to leverage the extensive models and IP available in our portfolio to address key markets while simultaneously scaling to a more advanced process node.”
Key SuperFlash performance and reliability metrics for UMC’s 28HPC+ ESF4 AG1 platform include:
  • Automotive Electronics Council (AEC) Q-100 Grade 1 qualified for operating temperatures of -40°C to +150°C (Tj)
  • Read access time < 12.5ns
  • 100K+ endurance cycles
  • Data retention of > 10 years @ 125°C
  • Only 1-bit ECC required
  • Qualification of 32Mb macro at auto grade 1 conditions:
    • Zero bit failures (no ECC applied)
    • Peak yield reached 100%
Automotive controller shipment volumes continue to increase rapidly year after year, as the transportation industry demands innovative solutions for a widening variety of vehicle applications. Embedding a performant and highly reliable eNVM for code and data storage within the controller is essential to serve this expanding market effectively. SST’s ESF4 solution on UMC’s 28HPC+ AG1 platform is designed for customers whose requirements may include high-capacity controller firmware with over-the-air (OTA) update flexibility.

The post SST & UMC Release 28nm SuperFlash Gen 4 for Next-Gen Automotive Controllers appeared first on ELE Times.
