Feed aggregator

Mythic and Microchip Partner to Redefine AI Processing with Next-Gen Analogue Compute-in-Memory Technology.

ELE Times - 59 min 1 sec ago
Mythic has chosen memBrain neuromorphic hardware intellectual property (IP) from Microchip Technology’s Silicon Storage Technology (SST) subsidiary for its next-generation edge-to-enterprise Analogue Processing Units (APUs). Mythic will utilise SST’s SuperFlash embedded non-volatile memory (eNVM) bitcells to deliver high levels of analogue compute-in-memory (aCIM) performance per watt. The partnership enables Mythic to achieve 120 TOPS/watt inference processing for power-efficient AI acceleration at the edge and in the data centre: Mythic’s APUs are targeted to be up to 100 times more energy-efficient than conventional digital Graphics Processing Units (GPUs).

The SST SuperFlash technology that Mythic is licensing has shipped 150 billion units to date. SuperFlash technology is the de facto eNVM solution for a broad spectrum of industries, including industrial, automotive, consumer, and computing, for critical data and code storage, and is licensed by all of the top 10 semiconductor foundries worldwide.

“Mythic is pioneering innovative solutions in AI inference processing and AI sensor fusion for industrial, automotive and data centre applications, effectively overcoming current AI power limitations,” said Mark Reiten, vice president of Microchip’s Edge AI business unit. “As the core memory technology for Mythic’s next-generation products, memBrain delivers significant power efficiency and high performance for both edge and data centre applications.”

The memBrain cell features:

  • Up to 8 data bits per bitcell (8 bpc) storage
  • Single-digit nanoamp (nA) bitcell read current
  • 10-year data retention at operating temperature
  • 100,000 endurance cycles
  • Full state machine control of the 8 bpc multi-state write operation
  • Single-cycle multiply-and-accumulate operations for aCIM
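As a rough mental model of what a single-cycle aCIM multiply-and-accumulate means physically (an illustrative sketch, not Mythic’s or SST’s actual circuit), each bitcell’s stored state acts as a conductance, word-line voltages encode activations, and Kirchhoff’s current law sums the products along each bitline in one step:

```python
# Toy model of an analogue compute-in-memory MAC. Weights are stored as
# bitcell conductances G[i][j] (siemens); activations are applied as
# word-line voltages V[j] (volts). Each bitline current is then
# I[i] = sum_j G[i][j] * V[j] -- a full dot product in a single cycle.
# All numeric values are invented for illustration.

def bitline_currents(G, V):
    """Kirchhoff-sum the per-cell currents G*V along each bitline."""
    return [sum(g * v for g, v in zip(row, V)) for row in G]

G = [[1.0e-9, 2.0e-9, 0.5e-9],   # single-digit-nA-scale read currents
     [0.0,    1.0e-9, 1.0e-9]]
V = [0.5, 0.25, 1.0]

print(bitline_currents(G, V))    # two MAC results, computed "in memory"
```

A digital accelerator would need one multiply and one add per weight; here the array produces every row’s dot product simultaneously, which is where the TOPS/W advantage comes from.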

“Mythic selected SST after an industry-wide search of eNVM technologies and determined the memBrain cell technology best enabled us to achieve the ultra-low-power and high performance required by our customers,” said Dr Taner Ozcelik, Mythic’s chief executive officer. “Additionally, the wide foundry availability of its industry-proven SuperFlash technology, coupled with the outstanding support of the SST engineering team, has been invaluable during our product development cycle.”

SST’s memBrain technology has been developed and deployed in 40 nm and 28 nm foundry processes using production-ready SuperFlash memory. 22 nm memBrain development is planned to extend the technology roadmap. Designed to provide reliable, high-performance and low-power non-volatile storage directly on the chip, SuperFlash memory is widely used in applications that require fast access times, high endurance and data retention without the need for external memory components.

The post Mythic and Microchip Partner to Redefine AI Processing with Next-Gen Analogue Compute-in-Memory Technology. appeared first on ELE Times.

🐣 Come and see the exhibition "Temari: A Diversity of Colours"!

News - 1 hour 55 min ago
kpi Thu, 03/19/2026 - 11:38

The exhibition "Temari: A Diversity of Colours" has opened! Temari is an ancient Japanese art of embroidery on balls wound from thread, which has come down to us through the centuries.

KPI students win Säkerhets-SM CTF 2026!

News - 2 hours 5 min ago
kpi Thu, 03/19/2026 - 11:27

🏆 The dcua team from the Institute of Physics and Technology (IPT) of Kyiv Polytechnic delivered a brilliant performance in the final of the cyber-security competition in Stockholm, establishing itself as the strongest team not only in Ukraine but in the world. Our students outpaced their peers from Sweden, Denmark, Finland, Iceland, Norway, Estonia, Latvia, and Lithuania, as well as the other Ukrainian teams.

Guerrilla RF expands aerospace & defense focus with new SatCom initiative

Semiconductor today - 2 hours 34 min ago
Guerrilla RF Inc (GRF) of Greensboro, NC, USA — which develops and manufactures radio-frequency integrated circuits (RFICs) and monolithic microwave integrated circuits (MMICs) for wireless applications — says that it has expanded its focus and readiness to support the rapidly evolving satellite communications (SatCom) market. With a portfolio spanning low-noise small-signal devices through high-power RF power amplifiers, Guerrilla RF now offers more than 100 component solutions engineered for mission-critical SatCom applications across both ground-based infrastructure and spaceborne platforms...

Scoping out the chiplet-based design flow

EDN Network - 2 hours 1 min ago

Today, the design of most monolithic SoCs follows a familiar pattern. Requirements definition leads to an architectural design. Then, the design team selects and qualifies the necessary IP blocks, assembles them into the architecture, and floorplans the die. Functional verification and early power and timing estimation can begin at this point.

The team can now begin RTL synthesis, rough placement, and at least preliminary routing. As these tasks finish, most SoC design teams will bring in physical-design specialists to complete the work until signoff.

But what about a multi-die design based on chiplets? At first glance, the sequence of tasks seems nearly identical to the one for a monolithic SoC. Just substitute chiplets for IP blocks and interposer design for physical chip design, right?

Well, no. Issues and corresponding tasks in chiplet-based design diverge significantly from the flow of most monolithic chip designs. Unless you intend to build a great deal of specialized multi-die expertise in-house, these issues make it vitally important to engage, from the beginning of the project, with a design partner experienced in both chiplets and interposer design and one with deep relationships across the multi-die, global supply chain.

The chiplet path

The two paths diverge early in the design project. In concept, selecting chiplets sounds much like IP selection. However, the IP market is mature: there are sources for almost any common IP function, and specialist IP firms are willing to undertake nearly anything. And usually, IP is highly configurable, either by setting parameters for an RTL generator or by working with the provider.

Only when the SoC requirements demand a unique function or unusual operating constraints, such as market-leading performance or extreme low power, would the SoC team consider designing its own IP internally.

In contrast, the chiplet market, while growing, is still immature. Some combinations of functions may not be available. And chiplets—which are finished dies, after all—cannot be as flexible as an RTL generator tool. You may find an I/O hub chiplet with the right kinds of inputs and outputs, but you may not find one with the correct configuration, the right power, or the proper pad placement for your design.

For these reasons, chiplet-based designs often require the design of one or more chiplets, and chiplets can have very different constraints from stand-alone ICs—they aren’t just little SoCs. Chiplets usually have very high I/O densities, high-speed drivers or serial transceivers tuned to the very short interconnect runs on interposers, and precise pad placement requirements dictated by an interposer layout.

Also, because the finished module will have to be tested when test equipment has limited access to the dies, chiplets often emphasize built-in self-test (BiST) more than a conventional chip. Having a design partner familiar with these issues from the outset can save time and energy.

Memory has issues, too

One type of die in chiplet-based design deserves special mention: memory. In this era of AI everywhere, many chiplet-based architectures will include high-bandwidth memory (HBM). This is undoubtedly true for datacenter processors, but increasingly just as true for edge AI applications such as vision processing or robotics.

Unfortunately, HBM interface design, placement on the interposer, routing, and thermal analysis are all challenges that differ significantly from the issues with logic chiplets. Requirements vary from generation to generation of the HBM standard, and even from vendor to vendor. Given the intense competition for HBM, securing a stable supply of dies or die stacks is essential before locking down the interposer design.

A design partner with deep HBM experience and strong supply-chain connections can ensure your design delivers the memory bandwidth you need with HBM dies you can acquire without having to respin an interposer design.

Interposer design

That brings us to the interposer. Conceptually, interposer design is not unlike IP placement and routing on an SoC. But here, we are talking about placing physical dies on a piece of silicon—usually—and routing between physical pads that can’t be moved. In practice, the constraints and analysis tools differ from those for chip design.

Also, decisions made at this stage can impact earlier and later stages in the design flow. The limited bandwidth between chiplets may influence how the architecture is partitioned across the dies. Even spatial issues, such as how close processor chiplets may be placed to HBM stacks and how far away they may be, can influence architectural partitioning and chiplet designs.

Interposer design also includes tasks that are unfamiliar to most chip design teams. These include signal and power integrity analysis, 3D electromagnetic field modeling, and thermal and mechanical analysis of the 3D structure. Furthermore, design-for-test becomes an issue. A test strategy for the completed module must reasonably achieve the required coverage and be consistent with the assembly power budget. The test strategy will also influence the choice of OSAT vendors for the assembly.

Finally, the package must be designed, not chosen off the shelf. This will require yet another set of tools and analyses. Packaging decisions will echo up and down the supply chain: interposer design, availability of materials, geographic location of capable OSAT facilities, and more will be influenced by packaging choices.

It takes a platform

The range of tasks and specialized skills necessary to bring a chiplet-based design to a global market is significantly broader than the set required for a modest SoC design. The fact that many tasks interact up and down the design flow further complicates the project. If too many specialist parties are involved, communications and change management can become a nightmare.

The best solution is not a go-it-alone effort, nor a scramble to pull together a horde of best-in-class specialist consultants. Nor is it turning the whole challenge over to a powerful foundry partner with limited global flexibility.

We have found that the optimum solution is a consolidation platform. This organization combines rich IP access, chiplet design experience, interposer expertise, strong relationships with HBM suppliers, multiple interposer foundries, and chip-on-wafer-capable OSATs worldwide. You need a partner with a platform to address the global challenge of chiplet-based products.

The consolidated platform offers a global ecosystem of IP and design expertise together with foundry and OSAT service partners. Source: Faraday Technology Corp.


Kenneth Lu, marketing manager at Faraday Technology, has over 20 years of experience in the semiconductor industry, spanning product engineering, IP design, and marketing for various application ICs. He currently focuses on business development in advanced packaging, processes, and related innovations.

Special Section: Chiplets Design

The post Scoping out the chiplet-based design flow appeared first on EDN.

Halo selects Eyelit to power scalable SiC wafering production with composable MES

Semiconductor today - Wed, 03/18/2026 - 23:12
Eyelit Technologies of Holmdel, NJ, USA (which provides AI-powered optimized planning, scheduling and execution systems) says that its software solution suite has been selected by laser-based silicon carbide (SiC) wafering firm Halo Industries Inc of Santa Clara, CA, USA (a 2014 spin-out from Stanford University) to support its rapidly scaling production needs...

NVIDIA and ST present new delivery boards for 800VDC architectures

Semiconductor today - Wed, 03/18/2026 - 22:21
NVIDIA of Santa Clara, CA, USA and STMicroelectronics of Geneva, Switzerland are presenting two new delivery boards for 800VDC architectures...

Infineon introduces CoolGaN-based high-voltage intermediate bus converter reference designs

Semiconductor today - Wed, 03/18/2026 - 22:12
Infineon Technologies AG of Munich, Germany has introduced two new high-voltage intermediate bus converter (HV IBC) reference designs to help customers accelerate the transition to AI server power architectures powered by ±400V and 800V DC. Enabled by Infineon’s 650V CoolGaN switches, the designs target hyperscalers, power architects, and server OEMs seeking higher rack power, lower power distribution losses, and improved thermal performance at rising AI workloads...

Built an online stripboard layout editor with live net colouring and conflict checking

Reddit:Electronics - Wed, 03/18/2026 - 17:35

About once a year I have to solder up a smallish stripboard. I used to design them on paper, which is kind of annoying if you make a mistake or want to change something. So this time I tried to find a simple stripboard editor, but I couldn't really find one that's easy and fast to use for simple projects. So I just decided to create my own.

It uses a split-screen layout with a very basic schematic editor on the left and a stripboard editor on the right. You first design a schematic and then place the components on the stripboard. Having the schematic allows for conflict detection, strip colouring and checking for unfinished nets on the stripboard.

You can check it out here: https://stripboard-editor.com

My goal was to create a fast, simple-to-use editor for small projects where it's not worth the trouble of using a complex editor, but which are fiddly enough that paper or just your head would be annoying. (I don't make any money off this in any way; it's just a personal hobby project that I think could be useful.)

If you have any feedback, I'd love to hear it.

Greetings, Karl

submitted by /u/Karlomatiko

Teradyne launches Photon 100 opto-electric automated test platform

Semiconductor today - Wed, 03/18/2026 - 16:10
Automated test equipment and advanced robotics provider Teradyne Inc of North Reading, MA, USA has launched the Photon 100, a comprehensive opto-electric automated test platform purpose-built to accelerate high-volume silicon photonics (SiPh) and co-packaged optics (CPO) manufacturing...

The new performance bottleneck: How more GPU memory unlocks next-gen gaming and AI PCs

ELE Times - Wed, 03/18/2026 - 14:31

Courtesy: Micron

The next era of PC performance will be defined not by more compute, but by memory scale. Until now, the rising size of game assets and AI models has outpaced GPU memory capacity. Micron’s latest evolution of GDDR7 marks a pivotal shift for next-generation GPUs by combining higher memory density with the scalability that modern gaming and AI workloads demand. With expanded capacity options supporting configurations up to 96GB of graphics memory, this generation of GDDR lets systems keep vastly larger worlds, richer textures, and growing AI models resident in memory, reducing bottlenecks and unlocking more consistent real-time performance across high-fidelity games and AI-enhanced applications.

Visual computing: The convergence of graphics and intelligence

Visual computing is entering a new era as graphics and intelligence converge. Modern systems must not only render high-fidelity scenes in real time, but also interpret, enhance, and generate content using increasingly complex AI models. Two forces are accelerating this shift: the push toward cinematic-quality gaming and the rapid emergence of AI-powered PCs. As worlds grow larger, textures more detailed, and on-device AI more integral to responsiveness and personalisation, the demands placed on GPU memory have surged. In practice, memory capacity and efficiency now determine how smoothly a system can deliver immersive gameplay, intelligent creation tools, and real-time simulation, making memory a foundational enabler of next-generation visual computing.

Delivering unprecedented performance for high-resolution gaming

Modern games are pushing GPU architectures harder than ever. Real-time ray tracing demands continuous access to massive datasets (geometry, materials, lighting maps, and shadows), while high-refresh-rate displays and ultra-resolution textures multiply the data the GPU must process each frame. Add in sprawling open worlds and increasingly AI-assisted rendering techniques, and the result is a workload that easily overwhelms traditional memory limits.

The problem is that when GPU memory can’t hold all this data at once, the system is forced to constantly swap assets in and out. That leads to the issues gamers know too well: texture pop-in, mid-frame stutters, uneven frame times, and sudden drops during intense ray-traced scenes. AI-generated frames and upscaling pipelines also become less consistent when memory is constrained, because the models and intermediate buffers they rely on are constantly competing for space.

This is where next-generation GDDR capacity and bandwidth become critical. By enabling far larger datasets to remain resident in memory, GDDR7 keeps the entire visual pipeline (textures, lighting data, geometry sets, and AI inference models) fed, without the bottlenecks that cause visual artefacts or performance instability. The result is smoother, more predictable real-time rendering at 4K, 5K, and 8K, even in the most demanding scenes.

To keep these visual pipelines running efficiently, the memory subsystem must deliver data rapidly and consistently.

Enabling larger, more detailed worlds with 24Gb die density

As game environments expand and visual assets grow, memory capacity becomes critical to maintaining seamless, artefact-free experiences. Micron’s new 24Gb die density enables up to 96GB of graphics memory, giving GPUs significantly more space for high-resolution textures, expansive worlds, and advanced visual effects.

This increased capacity matters to gamers because:

  • Reduces asset swapping and texture pop-in
  • Supports larger frame buffers for high-resolution displays
  • Enables richer, more detailed environments with fewer loading transitions

Creators and professional users also benefit from faster real-time rendering, more responsive GPU-accelerated workflows, and improved handling of large datasets.
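The capacity arithmetic behind the 96GB figure is straightforward. The sketch below assumes a hypothetical 512-bit GPU memory bus populated with x32 GDDR7 devices in clamshell mode; the bus width, device organisation, and clamshell assumption are illustrative, not figures from Micron:

```python
# Back-of-the-envelope check: how 24Gb dies reach a 96GB configuration.
# Bus width and device organisation are assumed for illustration.
die_gb = 24 / 8                  # 24Gb die = 3GB per device
bus_width = 512                  # hypothetical GPU memory bus, bits
device_width = 32                # x32 GDDR7 device
devices = (bus_width // device_width) * 2   # clamshell doubles device count
total_capacity = devices * die_gb           # total graphics memory, GB

# Peak bandwidth at an assumed 36 Gbps per pin (clamshell shares pins,
# so the bus width, not the device count, sets the pin total):
bandwidth_gbs = 36 * bus_width / 8          # GB/s

print(devices, total_capacity, bandwidth_gbs)  # 32 devices, 96.0 GB, 2304.0 GB/s
```

Halve the bus or drop clamshell and the capacity scales down proportionally, which is why die density, not just per-pin speed, gates how much of a game world or AI model can stay resident.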

Fueling AI-enhanced graphics and the rise of AI PCs

AI is rapidly becoming integral to personal computing. Neural rendering, real-time media enhancement, content generation, and AI-assisted workflows place new demands on system memory. Micron GDDR7 is built to support these emerging workloads with increased bandwidth, lower latency, and improved efficiency.

Why GDDR7 matters for AI PCs

AI-driven graphics and compute tasks rely on continuous movement of large datasets. GDDR7 accelerates these operations by improving throughput and responsiveness across GPU pipelines.

Systems built with GDDR7 benefit from:

  • Faster on-device AI inference for creation, media, and collaboration
  • Lower-latency performance across hybrid CPU-GPU-NPU workflows
  • Higher throughput for neural graphics and generative AI models
  • Improved power efficiency thanks to architectural refinements and reduced operating voltages

As AI becomes embedded into everyday PC tasks, from writing, coding, and editing to presenting and gaming, memory performance will heavily influence the immediacy, intelligence, and fluidity of the experience.

Enabling the future of immersive and intelligent computing

Micron GDDR7 is more than a performance improvement; it is a foundational technology for the next decade of visual and AI computing. With 36 Gbps bandwidth, 24Gb die density, and improved efficiency, GDDR7 empowers GPU and AI PC vendors to deliver richer, more dynamic, and more intelligent computing experiences.

While NPUs are becoming essential for power-efficient, on-device AI acceleration, the most demanding visual and AI workloads still rely on the scale and parallelism of a discrete GPU. NPUs excel at sustained, low-power inference, but GPUs deliver significantly higher throughput for large models, neural graphics, advanced rendering, and gaming workloads. By pairing NPUs with discrete GPUs equipped with GDDR7, AI PCs can intelligently distribute tasks, assigning lightweight inference to the NPU while leveraging the GPU’s computing power and memory bandwidth for operations that require maximum performance. This combination unlocks capabilities far beyond what NPUs can achieve alone.

Together, Micron GDDR7 and the next wave of discrete GPUs set the stage for a new era of immersive graphics and high-performance AI computing.

The post The new performance bottleneck: How more GPU memory unlocks next-gen gaming and AI PCs appeared first on ELE Times.

Improve 555 frequency linearity

EDN Network - Wed, 03/18/2026 - 14:00

After more than fifty years of continuous production in bipolar and fully half that in CMOS, there’s really neither room nor reason to question the value and versatility of the venerable 555 analog timer. But if it has any significant limitation, it probably lies in the category of raw speed. Still, the LMC555 datasheet tells (albeit in a rather obscure footnote) of an impressive 3-MHz capability. The details (including the 3-MHz test circuit) appear in Figure 6-2 on page 6 of this 2024 datasheet.

Wow the engineering world with your unique design: Design Ideas Submission Guide

3 MHz for a decades-old, low-power, geriatric analog part isn’t too shabby. It suggests the delightfully simple topology of Figure 1 for a precision 5-decade 1-MHz current-controlled oscillator, where:

F = 1/(Vth·Ct/Ic) = 1/(3.33 V·Ct/Ic) = Ic × 1000 MHz/A (for Ct = 300 pF)

Figure 1 A super simple, 5-decade LMC555 current-controlled oscillator.

Figure 1’s LMC555 is doing duty as a current-controlled oscillator with only two external components. It boasts a frequency range spanning 5 decades from 10Hz to (approximately) 1MHz. Cool!

But wait. What’s this “approximately” thing? How problematic is it, and mainly, how can we fix it if it is a problem? Here’s how.

The usual data sheet expression for LMC555 frequency of oscillation (FOO) is:

FOO = 1/(ln(2)·R·C) = 1/(ln(2)·(Ra + 2Rb)·C)

But in Figure 6-2, the 3-MHz test circuit, they show Ra = 470 Ω, Rb = 200 Ω, and C = 200 pF. Those numbers, when plugged into the data sheet arithmetic, yield an RC time constant of 121 ns and therefore predict that the oscillator frequency should hit, not just 3 MHz, but a figure nearly three times faster.

FOO = 1/(ln(2) × (470 + 400) × 200 pF) = 1/120.8 ns = 8.28 MHz

Hold the phone! If 3 MHz is as fast as they can really go, what happened to the missing 5 MHz?

What’s happening is simply that, besides the explicit 121 ns external RC time constant, there’s an implicit time delay (Td) internal to the device of:

Td = 1/(3 MHz) − 1/(8.28 MHz) = 333 ns − 121 ns = 212 ns

These 212 ns of internal delay, while short enough to keep the datasheet cookbook arithmetic accurate for low to moderate frequency, need attention if we want to push things anywhere near pedal-to-the-metal multi-MHz limits. A formula for usefully accurate high-frequency FOO prediction thus becomes more like:

FOO = 1/(Vth·Ct/Ic + Td) = 1/(3.33 V·Ct/Ic + 212 ns)

When plotted out, this equation generates the droopy red curve in Figure 2, with a >20% error at 1 mA: the output should be 1 MHz but is really only ~825 kHz. Okay. That is pretty pitiful.

Figure 2 Nonlinear red curve versus ideal black shows ~20% error from LMC555 internal delay. The y-axis is the output frequency. The x-axis is the control current.
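The droop is easy to reproduce numerically. A quick sketch of the ideal, delay-limited, and Rlin-compensated frequency formulas, using Vth = 3.33 V, Ct = 300 pF, and Td = 212 ns from the article (the code itself is illustrative, not from the design):

```python
# Sketch of the LMC555 current-controlled-oscillator math from the article.
# Constants taken from the text: Vth = 3.33 V, Ct = 300 pF, Td = 212 ns.
VTH = 3.33      # threshold voltage, V
CT = 300e-12    # timing capacitor, F
TD = 212e-9     # internal propagation delay, s

def foo_ideal(ic):
    """Ideal frequency: FOO = Ic / (Vth * Ct)."""
    return ic / (VTH * CT)

def foo_actual(ic, rlin=0.0):
    """Frequency including internal delay Td; Rlin shortens the timing
    ramp by Rlin * Ct, cancelling Td when Rlin = Td / Ct (~706 ohms)."""
    return 1.0 / (VTH * CT / ic + TD - rlin * CT)

ic = 1e-3  # 1 mA control current
print(foo_ideal(ic) / 1e6)             # ~1.0 MHz ideal
print(foo_actual(ic) / 1e6)            # ~0.83 MHz uncompensated droop
print(foo_actual(ic, TD / CT) / 1e6)   # ~1.0 MHz with Rlin compensation
```

Because Td is a fixed additive term in the period, its relative effect grows with frequency, which is why the red curve sags only at the top of the 5-decade range.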

Luckily, a fix is both available and absurdly easy. It consists of a single resistor, Rlin, added between the Dch (discharge) and Thr (threshold) pins. It linearizes the current-versus-frequency function by biasing the Thr pin upward by a voltage Ic·Rlin, which abbreviates the duration of the sawtooth timing ramp by:

ΔT = Ic·Rlin/(Ic/Ct) = Rlin·Ct = Td

thus cancelling the 555’s internal delay.

Therefore, if Rlin is chosen so that RlinCt = Td as shown in Figure 3, nonlinearity compensation will be (at least theoretically) complete over the full range of control current as shown in Figure 4. Note: 

FOO = 1/(Vth·Ct/Ic + 212 ns − Td) = 1/(3.33 V·Ct/Ic + 212 ns − 212 ns) = 1/(3.33 V·Ct/Ic) = Ic × 1000 MHz/A

Figure 3 Nonlinearity compensation for 555 internal delays when Rlin·Ct = Td = 212 ns.

Figure 4 Frequency of oscillation nonlinearity is foregone and forgotten if Rlin = Td/Ct = 212 ns/300 pF = 706 Ω.

Theoretically.

So the question arises: Can anything practical be made of this theory? More on this soon.

Stephen Woodward‘s relationship with EDN’s DI column goes back quite a long way. Over 200 submissions have been accepted since his first contribution back in 1974. They have included the best Design Idea of the year in 1974 and 2001.

Related Content

The post Improve 555 frequency linearity appeared first on EDN.

Thermal Management in 3D-IC: Modelling Hotspots, Materials, & Cooling Strategies

ELE Times - Wed, 03/18/2026 - 13:49

Courtesy: Cadence

As three-dimensional integrated circuit (3D-IC) technology becomes the architectural backbone of AI, high-performance computing (HPC), and advanced edge systems, thermal management has shifted from a downstream constraint to a fundamental design driver. The dense vertical integration that enables unprecedented performance also concentrates heat at levels that traditional two-dimensional design methodologies cannot anticipate or mitigate. In fact, the temperatures and heat fluxes inside localised 3D-IC hotspots can approach a meaningful fraction of those encountered in rocket-engine thermal zones, only here the challenge unfolds on a microscopic silicon landscape rather than within a combustion chamber. This extreme thermal intensity makes early, predictive planning essential rather than optional.

Effective thermal management now begins at the architecture definition stage, where designers evaluate stack feasibility, power distribution, and allowable thermal envelopes before committing to partitioning decisions. These early insights directly shape block placement, power-delivery topology, and the choice of materials, interposers, and packaging technologies. As the industry increasingly relies on vertically integrated systems to achieve performance-per-watt gains, thermal awareness emerges as an architectural discipline in its own right, one that guides every subsequent stage of the 3D-IC design flow.

This article offers guidance on modelling, estimating, and mitigating thermal challenges in dense stacks and interposer-based 3D-ICs, with an emphasis on early electrothermal strategies that scale with complexity.

Sources of Heat in Stacked Architectures

Heat in 3D-ICs arises from a combination of device activity, vertical power density, and material constraints. When logic, memory, and accelerators are stacked, the total power per unit footprint increases dramatically. Upper dies, which are furthest from the heatsink, experience higher thermal resistance and reduced cooling efficiency, creating natural hotspots even when their individual power numbers appear modest.

The placement of through-silicon via (TSV) arrays, micro-bumps, and interconnect pillars also shapes the heat landscape. These structures act not only as electrical conduits but also as thermal conduits, depending on the material and density. Die-to-die interfaces with bonding layers often introduce thermal bottlenecks, and when chiplets operate at different power states, steep thermal gradients can trigger stress and reliability concerns. Understanding these interactions early is essential for setting realistic thermal limits and performance expectations.

Early Compact Models and Power Map Estimation

Thermal analysis must begin in parallel with the architectural definition itself. Early-stage compact models enable architects to approximate temperature distributions using only high-level power budgets, long before physical implementation. By capturing the combined influence of die thickness, material stacks, bonding interfaces, and interposer conductivity, these models reveal whether planned power densities or proposed die-stack configurations are thermally realistic. They help flag infeasible assumptions early, ensuring that functional partitioning and stacking choices are guided by thermally credible boundaries rather than late-stage surprises.

Creating usable power maps at this stage does not require full register transfer level (RTL) activity vectors. Coarse workload profiles can yield first-order estimates of dynamic and leakage power. When combined with simplified geometry models, they highlight thermally sensitive regions, enabling design teams to adjust block partitioning, die assignment, and approximate placement before entering the detailed implementation phase.
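As a concrete example of such a first-order estimate, dynamic power can be approximated as α·C·V²·f plus a leakage term. The sketch below uses invented per-block numbers purely to illustrate how coarse workload profiles become an early power map; nothing here is Cadence tooling or real silicon data:

```python
# First-order block power: P = alpha * C_sw * Vdd^2 * f + leakage.
# All parameter values below are invented placeholders.

def block_power(alpha, c_switched, vdd, freq, leakage):
    """alpha: activity factor, c_switched: switched capacitance (F),
    vdd: supply voltage (V), freq: clock (Hz), leakage: static power (W)."""
    return alpha * c_switched * vdd ** 2 * freq + leakage

# Coarse per-block estimates feeding an early power map (watts):
power_map = {
    "cpu_cluster": block_power(0.15, 2e-9, 0.75, 2.0e9, 0.8),
    "npu":         block_power(0.30, 4e-9, 0.70, 1.2e9, 1.2),
    "io_hub":      block_power(0.05, 1e-9, 0.75, 0.8e9, 0.3),
}
print({name: round(p, 2) for name, p in power_map.items()})
```

Even numbers this rough, spread over approximate block geometry, are enough to flag which regions of which die will dominate the thermal picture before RTL exists.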

Cadence’s multiphysics system analysis ecosystem connects power estimation, compact thermal model (CTM) creation, and system-level thermal analysis, ensuring that signal, power, electromagnetic (EM), and thermal assumptions remain aligned throughout the early design phase. This early visibility reduces late-stage thermal surprises, which are often the costliest to rectify.

Heat Paths Through Dies, Interposers, and Package

Heat does not follow a single escape route in a 3D-IC. Instead, it propagates through a network of vertical and lateral paths whose efficiency depends on materials, die arrangement, and the package environment. Lower dies may benefit from direct contact with the heatsink, while upper dies rely on indirect conduction through intermediate layers. Thermal resistance builds cumulatively across each interface.
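The cumulative build-up is easy to see in a one-dimensional compact model. The sketch below treats a two-die stack as a series thermal-resistance ladder in which all heat exits through the heatsink below the stack; every Rθ value is an invented placeholder, not data for any real process:

```python
# 1-D compact thermal model of a two-die stack: a series Rtheta ladder.
# All resistance values (K/W) are illustrative placeholders.
R_HS = 0.20       # lid + TIM + heatsink path to ambient
R_DIE_BOT = 0.10  # bottom die bulk silicon
R_BOND = 0.30     # die-to-die bonding layer (a common bottleneck)
R_DIE_TOP = 0.10  # top die bulk silicon
T_AMB = 45.0      # ambient / coolant reference, deg C

def junction_temps(p_bot, p_top):
    """Both dies' power crosses the heatsink path and bottom die;
    only the top die's power crosses the bond and top-die layers."""
    t_bot = T_AMB + (p_bot + p_top) * (R_HS + R_DIE_BOT)
    t_top = t_bot + p_top * (R_BOND + R_DIE_TOP)
    return t_bot, t_top

t_bot, t_top = junction_temps(p_bot=40.0, p_top=10.0)
print(t_bot, t_top)   # 60.0 64.0 -- the upper die runs hotter at 1/4 the power
```

Even this toy ladder reproduces the qualitative point above: the upper die dissipates a quarter of the power yet reaches the higher temperature, because every watt it produces must cross the bonding interface and the entire lower stack.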

Interposers, whether made of silicon, glass, or organic materials, play a significant role in the heat flow picture. Silicon interposers offer superior thermal conductivity, enabling heat spreading but also concentrating thermal load where chiplets cluster. Organic interposers introduce more thermal resistance but offer other integration advantages. Achieving the correct tradeoff means modelling these layers as active participants in heat distribution, not static mechanical components.

The entire package, including substrate layers, heat spreaders, and lid materials, must also be included in thermal simulation. When package effects are omitted in early analysis, temperature predictions often skew optimistic, masking hotspots that emerge only after assembly-level modelling is performed.

Materials, TIMs, and Cooling Options for Stacks

Thermal simulation heavily relies on the structural definition of a product because the geometry, material properties, and assembly details directly dictate how heat is generated, transferred, and dissipated.

High-conductivity silicon, optimised interconnect materials, and improved underfill or bonding layers can lower the vertical thermal resistance of a stack. Thermal interface materials (TIMs) exhibit significant variations in performance, and even slight differences in thickness or coverage can result in substantial temperature differences across dies.

Cooling strategies for 3D-ICs are evolving rapidly. Traditional air cooling can be sufficient for moderate power budgets, but high-performance AI and HPC systems often require advanced approaches such as direct liquid cooling or vapour chamber solutions. The choice of cooling strategy should align with the power roadmap, not just the current generation’s requirements. Once a die stack is assembled, cooling options become constrained, so decisions made early influence the thermal feasibility of future product iterations.

Co-Optimisation with Placement and PDN Design

Thermal constraints directly influence floorplanning, macro placement, and power delivery network (PDN) topology in 3D-ICs. Efficient heat spreading is achieved when high-power blocks are positioned to maximise vertical conduction paths and lateral spreading through metal layers. If a block is placed too far from major thermal conduits, even robust cooling cannot compensate for the heat.

The PDN adds additional complexity. Power delivery structures, including TSVs, bumps, and interposer redistribution layers, introduce their own resistive heating. When modelled jointly with thermal effects, the combined electro-thermal behaviour reveals interactions that neither analysis can capture alone. Co-optimisation across these domains ensures that thermal mitigation does not compromise power integrity and vice versa.

A tightly integrated workflow enables round-trip refinement as power, placement, and package assumptions evolve. Without this iterative co-design, late-stage violations become inevitable, requiring disruptive redesigns.
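The electro-thermal coupling described above can be made concrete with a toy fixed-point iteration (all numbers assumed): PDN metal resistance rises with temperature, Joule heating rises with resistance, and temperature rises with total power, so the solver loops until the two domains agree.

```python
# Illustrative sketch (all values assumed) of why PDN and thermal analysis
# must be solved jointly: an uncoupled estimate holds metal resistance at
# its room-temperature value and so underpredicts the operating point.

def solve_electrothermal(i_amps, r0_ohms, alpha, r_th, t_amb, p_core,
                         tol=1e-6, max_iter=100):
    """Fixed-point iteration to the self-consistent temperature (C) and PDN loss (W)."""
    t = t_amb
    for _ in range(max_iter):
        r = r0_ohms * (1.0 + alpha * (t - 25.0))  # copper resistance vs temperature
        p_pdn = i_amps ** 2 * r                   # Joule heating in the PDN
        t_new = t_amb + r_th * (p_core + p_pdn)   # lumped thermal model
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t_new, p_pdn

# Uncoupled estimate: resistance frozen at its 25 C value
t_uncoupled = 45.0 + 0.5 * (20.0 + 40.0 ** 2 * 5e-3)
t_coupled, p_loss = solve_electrothermal(
    i_amps=40.0, r0_ohms=5e-3, alpha=0.0039, r_th=0.5, t_amb=45.0, p_core=20.0)
print(f"uncoupled: {t_uncoupled:.1f} C   coupled: {t_coupled:.2f} C "
      f"(PDN loss {p_loss:.2f} W)")
```

Even with these mild assumed parameters the converged point sits about half a degree above the uncoupled estimate; with aggressive current densities and weaker cooling, the gap widens and can push a marginal design over its limit.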

Electro-Thermal Readiness for Signoff

Before committing a 3D-IC to final signoff, teams must verify that the design can withstand realistic thermal stress across operating modes and process corners. This includes validating that estimated power profiles align with actual activity, ensuring that predicted peak temperatures remain within safe limits, and confirming that no layer or interface exceeds its thermal reliability threshold.

Die-to-die boundaries, micro-bump arrays, TSV clusters, and package interconnects must be evaluated holistically, since minor thermal mismatches can accumulate into significant mechanical strain. Long-term reliability also depends on understanding how temperature interacts with electromigration, ageing, and performance drift over the product lifetime.

A complete electro-thermal signoff process provides the confidence needed before entering manufacturing, reducing field failures and ensuring long-term stability.
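In practice, the mode-and-corner sweep described above often reduces to a margin check over predicted peak temperatures. The following sketch uses placeholder temperatures, limits, and guard-bands, not data from any real signoff flow.

```python
# Hypothetical signoff-style check: peak temperature must stay within a
# guard-banded limit for every (operating mode, process corner) pair.
# All temperatures below are invented placeholder values.

TJ_MAX_C = 105.0
MARGIN_C = 5.0  # required guard-band below the absolute limit

peak_temps = {  # (mode, corner) -> predicted peak die temperature, C
    ("idle", "tt"): 58.0,      ("idle", "ff"): 63.0,
    ("burst", "tt"): 91.0,     ("burst", "ff"): 98.0,
    ("sustained", "tt"): 96.0, ("sustained", "ff"): 101.0,
}

violations = [(mode, corner, t) for (mode, corner), t in peak_temps.items()
              if t > TJ_MAX_C - MARGIN_C]
for mode, corner, t in violations:
    print(f"FAIL {mode}/{corner}: {t:.1f} C exceeds {TJ_MAX_C - MARGIN_C:.1f} C limit")
print("signoff clean" if not violations else f"{len(violations)} violation(s)")
```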

Designing for Thermal Scalability

3D-ICs deliver unprecedented performance, but they require a disciplined and predictive approach to thermal management. Success depends on treating heat as a first-order design variable, not a late-stage correction. Early modelling, accurate power estimation, careful material and stack selection, and co-optimisation across placement, PDN, interposer, and package all contribute to thermal resilience.

As system complexity continues to climb, teams that embed electro-thermal planning into their architecture and implementation flows will deliver higher-performing, more reliable, and scalable 3D-IC designs. Thermal awareness is no longer a specialisation; it is a foundational competency for next-generation semiconductor design.

The post Thermal Management in 3D-IC: Modelling Hotspots, Materials, & Cooling Strategies appeared first on ELE Times.

ROHM introduces reference designs for three-phase inverters featuring new SiC power modules

Semiconductor today - Wed, 03/18/2026 - 13:32
Japan-based ROHM has released, via its website, the reference designs REF68005, REF68006 and REF68004 for three-phase inverter circuits featuring EcoSiC-brand silicon carbide (SiC) molded modules HSDIP20, DOT-247 and TRCDRIVE pack. Designers can use the data provided in the reference designs to create the drive circuit boards. When combined with ROHM's SiC modules, the designs help to reduce the person-hours required for device evaluation...

Keysight Launches AI Inference Emulation Platform to Validate and Optimise AI Infrastructure

ELE Times - Wed, 03/18/2026 - 13:09

Keysight Technologies has introduced Keysight AI Inference Builder (KAI Inference Builder), an emulation and analytics platform designed to validate inference-optimised AI infrastructure at scale. Keysight will demonstrate the solution at NVIDIA GTC, showcasing operation within NVIDIA DSX Air AI factory simulation environments to model and optimise AI data centre infrastructure, architectures, and performance.

As the AI industry shifts from training large language models (LLMs) to deploying them, optimising inference has become a crucial factor for ROI. However, inference behaviour is highly dynamic and difficult to emulate. Traditional testing methods like synthetic traffic generation or GPU benchmarks cannot accurately reproduce the latency-sensitive workload behaviour of AI inferencing across compute, networking, memory, storage, and security layers.

KAI Inference Builder closes that gap by recreating realistic inference workload patterns and modelling industry-specific usage patterns to validate AI infrastructure, applications, and data centre deployments. The platform gives AI cloud providers, hardware vendors, and application developers a scalable solution for measuring, validating, and optimising real-world inference performance.

Key benefits of KAI Inference Builder include:

  • Built for the Inference Era: As part of the Keysight Artificial Intelligence (KAI) portfolio, KAI Inference Builder emulates AI inference workloads at scale and validates full-stack deployments under real-world conditions to optimise performance, scale, and security.
  • Industry- and Application-Specific Benchmarking: Instead of generic emulations, KAI Inference Builder emulates industry-specific usage patterns and LLM architectures for AI models seen in finance, healthcare, and other verticals, enabling organisations to model and analyse infrastructure and application behaviour across different types of AI data centre deployments.
  • End-to-End Validation and Optimisation: KAI Inference Builder evaluates inference workflows from user request to model response, helping teams reduce costly rework by identifying and resolving bottlenecks early across compute, network, and security layers.
  • Subsystem Isolation and Root-Cause Precision: KAI Inference Builder also supports client-only emulation, which pinpoints where performance bottlenecks emerge across the AI infrastructure stack under load, enabling targeted optimisation that reduces overprovisioning, lowers costs, and improves overall efficiency.
  • NVIDIA DSX Air Integration and Live GTC Demo: Keysight will showcase KAI Inference Builder’s turnkey integration with NVIDIA Air at NVIDIA GTC, generating realistic inference workloads throughout NVIDIA’s data centre simulation environment so operators can validate inference infrastructure before deploying physical equipment.

Ram Periakaruppan, Vice President and General Manager, Network Test & Security Solutions at Keysight, said: “Inference is the key to unlocking AI’s ROI, but that can be challenging to achieve when system resources aren’t optimised for capacity and performance. KAI Inference Builder provides visibility into real-world inference performance across the full stack, enabling customers to validate and optimise deployments before hardware reaches the rack. Showcasing this capability at NVIDIA GTC using NVIDIA’s Air platform demonstrates how organisations can accelerate the path to production while reducing risk and cost.”

Amit Katz, VP of Networking at NVIDIA, said: “As AI data centres scale to unprecedented levels, pre-deployment validation has transitioned from a best practice to a mission-critical requirement. The integration of KAI Inference Builder with NVIDIA DSX Air provides the essential environment needed to eliminate performance volatility and enables NVIDIA AI Factory partners and customers to emulate real inference workloads and preemptively resolve bottlenecks, ensuring optimised AI services reach the market quickly.”

The post Keysight Launches AI Inference Emulation Platform to Validate and Optimise AI Infrastructure appeared first on ELE Times.

POET and LITEON to co-develop optical modules for AI applications

Semiconductor today - Wed, 03/18/2026 - 12:20
POET Technologies Inc of Toronto, Ontario, Canada, which designs and implements highly integrated optical engines and light sources for artificial intelligence networks, has announced a strategic collaboration with optoelectronic and power management firm LITEON Technology of Hsinchu, Taiwan. The partnership aims to co-develop next-generation optical communication modules built on POET’s patented optical interposer technology and integration platform...

STMicroelectronics accelerates global adoption and market growth of Physical AI with NVIDIA

ELE Times - Wed, 03/18/2026 - 12:17

STMicroelectronics announced the acceleration of global development and adoption of physical AI systems, including humanoid, industrial, service and healthcare robots. ST is integrating its comprehensive portfolio for advanced robotics into the reference set of components compatible with the NVIDIA Holoscan Sensor Bridge (HSB). In parallel, high-fidelity NVIDIA Isaac Sim models of ST components are being integrated into both companies’ robotics ecosystems to support faster, more accurate sim-to-real research and development. The first deliverables available to developers today include the integration of Leopard’s depth camera enabled by ST with the NVIDIA HSB and the high-fidelity model of an ST IMU into NVIDIA’s Isaac Sim ecosystem.

“ST is well engaged within the robotics community, providing robust support and a well-established ecosystem,” said Rino Peruzzi, Executive Vice President, Sales & Marketing, Americas & Global Key Account Organization at STMicroelectronics. “Our collaboration with NVIDIA aims to unleash the next wave of cutting-edge robotics innovation with developer and customer experience streamlined at every step, from the inception of AI algorithms to the seamless integration of sensors and actuators. This will accelerate the evolution of sophisticated AI-driven physical platforms.”

“Accelerating the development of next-generation autonomous systems requires high-fidelity simulation and seamless hardware integration to bridge the gap between virtual training and real-world deployment,” said Deepu Talla, Vice President of Robotics and Edge AI at NVIDIA. “The integration of STMicroelectronics’ sensor and actuator technologies with NVIDIA Isaac Sim, Holoscan Sensor Bridge and Jetson platforms provides developers with a unified foundation to build, simulate and deploy physical AI at scale.”

Simplifying sensor and actuator integration with the Holoscan Sensor Bridge

With the NVIDIA HSB, developers can unify, standardise, synchronise, and streamline data acquisition and logging from multiple ST sensors and actuators, a critical foundation for building high-fidelity NVIDIA Isaac models, accelerating learning, and minimising the sim-to-real gap.

The goal is to simplify the process of connecting ST sensors and actuators to NVIDIA Jetson platforms through pre-integrated solutions combining STM32 MCUs, advanced sensors (including IMUs, imagers, and ToF devices), and motor-control solutions, particularly for humanoid robot designs. Leopard Imaging’s stereo depth camera for robots is a prime example: using ST imaging, depth, and motion-sensing technologies, it is expected to support a broad wave of designs across Physical AI OEMs, academic research groups, and the industrial robotics community.

Reducing cost, complexity, and challenges with high-fidelity modelling for Omniverse Isaac

Advanced robotics developers face high development costs in addition to modelling challenges. High-fidelity simulations with extensive randomisation demand substantial GPU and CPU resources and large datasets. Selecting which parameters to randomise, and over what ranges, requires deep domain expertise: poor choices can produce unrealistic scenarios or inefficient training, and excessive variability can confuse models, slow convergence, and degrade real-world performance when randomisation no longer reflects plausible conditions.
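As a concrete (and entirely hypothetical) sketch of the parameter-randomisation problem described above, a simulation harness might draw sensor noise parameters from bounded ranges each episode. The parameter names and ranges here are invented for illustration and are not taken from any ST datasheet or Isaac Sim API.

```python
# Hypothetical domain-randomisation sketch: sample IMU noise parameters
# within plausible (assumed) bounds per simulation episode, so a trained
# policy tolerates unit-to-unit variation on real devices.
import random

RANGES = {  # parameter -> (low, high); bounds are assumed, not datasheet values
    "gyro_noise_density":  (0.002, 0.006),   # dps/sqrt(Hz)
    "gyro_bias":           (-0.5, 0.5),      # dps
    "accel_noise_density": (60e-6, 120e-6),  # g/sqrt(Hz)
    "accel_bias":          (-0.02, 0.02),    # g
}

def sample_imu_params(rng):
    """Draw one randomised IMU configuration for a simulation episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANGES.items()}

rng = random.Random(0)  # seeded for reproducible experiments
episode_params = [sample_imu_params(rng) for _ in range(3)]
for p in episode_params:
    print({k: round(v, 6) for k, v in p.items()})
```

Hardware-calibrated models of the kind ST and NVIDIA describe effectively replace such guessed ranges with distributions measured on real silicon, which is precisely what narrows the sim-to-real gap.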

ST and NVIDIA’s objective is to provide accurate, hardware-calibrated models of the comprehensive ST component portfolio, matching the requirements of advanced robotics. Following the availability of the first IMU model, ST is working to bring developers models of ToF sensors, actuators, and other ICs derived from benchmark data collected on real ST hardware, using ST tools to capture accurate parameters and realistic behaviour, resulting in models optimised for NVIDIA’s Isaac Sim ecosystem. The two companies are also collaborating to integrate NVIDIA HSB into ST’s toolchain.

As a result, ST and NVIDIA envision that more accurate models will significantly improve robot learning. With models that closely mirror real-world device behaviour, robots can learn from simulations that better reflect actual conditions, shortening training cycles and lowering the cost of building and refining humanoid robotics applications.

The post STMicroelectronics accelerates global adoption and market growth of Physical AI with NVIDIA appeared first on ELE Times.

Coherent demos InP technology innovation at OFC

Semiconductor today - Wed, 03/18/2026 - 12:00
In booth 1401 at the Optical Fiber Communications Conference and Exhibition (OFC 2026) at the Los Angeles Convention Center (17–19 March), materials, networking and laser technology firm Coherent Corp of Saxonburg, PA, USA is highlighting the breadth and scalability of its indium phosphide (InP) innovations, showcasing a broad portfolio of lasers, modulators, photodiodes and subsystems for powering next-generation data-center architectures...

Chiplet innovation isn’t waiting for perfect standards

EDN Network - Wed, 03/18/2026 - 11:50

Across markets such as AI, high-performance computing (HPC), and automotive, the demand for computational power continues to accelerate. This demand spans everything from compact edge devices to massive data center servers. Traditionally, that capacity was delivered by monolithic systems-on-chip (SoCs) implemented on a single silicon die. While manufacturing trade-offs can ease some pressures, a large die still limits optimization, forcing designers to balance power and performance across the entire chip rather than fine-tuning each function individually.

The problem is structural. Monolithic SoCs have reached physical and economic limits. As shown in Figure 1, reticle size is fixed, yields decline as die size grows, and the cost of large devices is prohibitively high.

Figure 1 Multi-die architectures are emerging as monolithic scaling reaches its limits. Source: Arteris Inc.

Multi-die systems offer a practical path forward. By breaking a large SoC into smaller chips, teams gain better yields, leverage proven components, and combine diverse process technologies in a single package. Additionally, chiplets can be reused across product lines, improving scalability and reducing cost.
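The yield argument can be quantified with the simple Poisson defect model, a common first-order approximation; the defect density and die areas below are illustrative, not foundry data.

```python
# Rough sketch of the yield argument using the Poisson model
# Y = exp(-A * D0): die area A (cm^2), defect density D0 (defects/cm^2).
import math

D0 = 0.2  # defects per cm^2, assumed

def poisson_yield(area_cm2, d0=D0):
    return math.exp(-area_cm2 * d0)

mono_area = 8.0     # one 800 mm^2 monolithic die
chiplet_area = 2.0  # each of four 200 mm^2 chiplets

y_mono = poisson_yield(mono_area)
y_chiplet = poisson_yield(chiplet_area)

# With known-good-die (KGD) testing, each chiplet is screened before
# assembly, so silicon cost scales with per-die yield rather than the
# probability that one huge die is entirely defect-free.
silicon_per_good_mono = mono_area / y_mono
silicon_per_good_set = 4 * chiplet_area / y_chiplet

print(f"monolithic yield:  {y_mono:.1%}")
print(f"per-chiplet yield: {y_chiplet:.1%}")
print(f"silicon per good system: {silicon_per_good_mono:.1f} "
      f"vs {silicon_per_good_set:.1f} cm^2")
```

Under these assumed numbers, splitting one large die into four chiplets roughly triples the per-die yield and cuts the silicon consumed per good system by more than 3x, before accounting for packaging and D2D interface overheads.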

The semiconductor industry has long envisioned chiplets as modular and interoperable, backed by fully proven standards. Companies are not waiting for that vision to materialize fully. They are already moving ahead with chiplet adoption while standards remain in flux.

Why chiplets, and why now?

Until recently, the world’s largest semiconductor companies were the predominant users of chiplet technology. These companies could control every aspect of the design, integration, and packaging processes.

Mid-size and startup companies also long for this future to be realized. However, lacking the resources of industry giants, they must adapt and take incremental steps today, even as the whole framework evolves.

Disaggregating a monolithic design into chiplets offers multiple advantages. By mounting these components on a common silicon substrate, the resulting multi-die systems can be manufactured at the most appropriate technology node.

For example, memory might be fabricated at 28 nm, a high-performance processor at 7 nm, and a cutting-edge CPU at 2 nm. Combining all dies into a single package creates a multi-die system that outperforms a monolithic design.

Standards: Ideal vs. actual

One of the issues is that the standards needed to make chiplets broadly interchangeable are not yet fully baked. They still need to be implemented, validated, and tested across different pieces of silicon before designers can count on them.

Even when two companies follow the exact specification, small details such as sideband signals or initialization steps can differ enough to cause unexpected failures. Until compatibility is proven at scale, design teams need to remain pragmatic in their approach to developing multi-die systems.

The ideal case is often described as chiplets that fit together like Lego bricks, highlighting the requirement that they are straightforward to combine and verified so that they work reliably together. Achieving that vision will ultimately depend on widely adopted industry standards that enable dies from different sources to function as one system.

Initiatives such as AMBA CHI Chip-to-Chip (C2C), Bunch of Wires (BoW), and Universal Chiplet Interconnect Express (UCIe) are helping to define the physical and protocol layers for die-to-die (D2D) links. Yet many challenges remain in areas such as system-level verification, latency optimization, power efficiency, security, and ensuring that chiplets from different vendors perform cohesively, as shown in Figure 2.

Figure 2 Multi-die SoC adoption is expanding across multiple markets. Source: Arteris Inc.

Companies can turn to multi-die systems

Progress can’t be delayed until standards are finalized, so design teams are advancing with innovation. Some of the ways system architects are tackling multi-die design are as follows:

  • Design for modularity: Partition compute, memory, and IO into reusable blocks. Utilize silicon-proven network-on-chip (NoC) interconnect IP that supports multiple die-to-die (D2D) protocols and topologies.
  • Build with interoperability in mind: Utilize tools and IP that are co-validated with major electronic design automation (EDA), physical layer (PHY), and foundry partners to align chiplet workflows and ensure IP, tool, and foundry compatibility.
  • Automate integration: Hand-stitching chiplets together is a time-consuming and error-prone nightmare. Employ tools that automate HW/SW interface definition and assembly, which is essential for fast iteration and derivative design creation.
  • Use coherency only where it matters: Certain functions, such as CPU and GPU clusters, may require coherent chiplets and D2D interfaces that necessitate the use of a coherent NoC. By comparison, functions like AI/ML accelerators may be satisfied by non-coherent chiplets and D2D interfaces. These are simpler and more power-efficient and can be addressed with a non-coherent NoC.
  • Reuse what works: Adopt chiplet templates that can scale across product families and incorporate proven monolithic dies alongside new multi-die IP in derivative designs.
  • Accept that the ecosystem is co-evolving: Standards are years away from full maturity. And companies are just beginning to explore building modular, standard-aware designs, laying the groundwork for the ecosystem’s future.

Build now, don’t wait

Multi-die system development teams should adopt modular design principles, utilize proven IP blocks with flexible D2D support, implement automated integration tools, and embrace ecosystem-aware development flows. Designers should also collaborate with like-minded innovators, partners, and customers to deliver tomorrow’s complex systems today.

Chiplets design solutions show how multi-die architectures can be built and deployed now. They enable companies to address today’s performance and scalability needs while laying the groundwork for seamless interoperability in the future.

Andy Nightingale, VP of Product Management and Marketing at Arteris, has over 39 years of experience in the high-tech industry, including 23 years in various engineering and product management roles at Arm.


Special Section: Chiplets Design

The post Chiplet innovation isn’t waiting for perfect standards appeared first on EDN.

Socomec Expands Power Solutions Portfolio in India, Launches MASTERYS GP4 UPS and ATyS a M Automatic Transfer Switch

ELE Times - Wed, 03/18/2026 - 11:16

Socomec has announced the launch of its new advanced MASTERYS GP4 UPS and ATyS a M Automatic Transfer Switch, further strengthening its portfolio of reliable power management solutions. The launch builds on the company’s more than 25 years in the industry and reinforces its focus on innovative, efficient technologies for modern infrastructure.

Mr. Meenu Singhal, Regional Managing Director, Socomec Innovative Power Solutions, said,
“The launch of the MASTERYS GP4 UPS and ATyS a M Automatic Transfer Switch strengthens our portfolio with solutions that drive operational continuity and efficiency. From data centres and IT rooms to commercial buildings, organisations require resilient power infrastructure to ensure uninterrupted operations and protect critical systems. These products help optimise power supply while supporting reliable performance. We remain focused on innovation and committed to delivering dependable, future-ready power solutions for our customers.”

MASTERYS GP4 UPS, Designed for Critical Power Environments:

The Socomec MASTERYS GP4 200–250 kVA UPS is a high-performance uninterruptible power supply designed to ensure reliable power continuity for mission-critical environments. Built with advanced power protection and high-efficiency SiC technology, it delivers superior energy efficiency, consistent power quality, and dependable performance for data centres, industrial operations, and commercial infrastructure requiring uninterrupted operation.

  • Reliable power protection: Ensures uninterrupted power for critical infrastructure such as data centres, IT rooms, industrial processes, and commercial facilities, helping maintain operational continuity during grid disturbances.
  • Advanced double-conversion technology: Provides stable and high-quality power output while minimising energy losses and supporting lower CO₂ emissions.
  • High efficiency and robust design: Combines high efficiency levels with a resilient architecture, leveraging advanced SiC technology to reduce downtime and support continuous operations in demanding environments.
  • Optimised for modern digital infrastructure: Designed to meet the growing power reliability needs of expanding digital ecosystems and industrial facilities.

ATyS a M Automatic Transfer Switch: Compact, Reliable Source Switching

Socomec’s ATyS a M Automatic Transfer Switch enables automatic and seamless switching between two power sources, such as the main utility supply and a backup generator, ensuring uninterrupted power for commercial buildings, industrial facilities and other critical installations where continuous operations are essential.

  • Automatic Source Transfer: Automatically switches between the main power source and backup supply, ensuring continuity of operations during power interruptions.
  • Compact Modular Design: More compact than similar solutions, enabling easier integration within electrical panels and helping save valuable installation space.
  • Quick & Easy Commissioning: An integrated pre-configured controller automatically manages parameters and source transfers, reducing setup time and the risk of manual error.
  • Proven Reliability for Low-Voltage Installations: Designed and tested according to international standards, supporting reliable switching for commercial and industrial facilities.

Socomec offers support in design and commissioning, ensuring compliant, high-performing, and sustainable electrical installations. These solutions improve power supply continuity and strengthen resilience across data centres, industrial facilities, commercial buildings, and other critical infrastructure.

The post Socomec Expands Power Solutions Portfolio in India, Launches MASTERYS GP4 UPS and ATyS a M Automatic Transfer Switch appeared first on ELE Times.
