Feed aggregator
BluGlass partners with US government relations, corporate advisory and public affairs firm
MACOM’s microwave and optical solutions on display at SATShow Week
Mythic and Microchip Partner to Redefine AI Processing with Next-Gen Analogue Compute-in-Memory Technology.
To date, 150 billion units of the SST SuperFlash technology that Mythic is licensing have been shipped. SuperFlash technology is the de facto eNVM solution for critical data and code storage across a broad spectrum of industries, including industrial, automotive, consumer, and computing, and is licensed by all of the top 10 semiconductor foundries worldwide.
“Mythic is pioneering innovative solutions in AI inference processing and AI sensor fusion for industrial, automotive and data centre applications, effectively overcoming current AI power limitations,” said Mark Reiten, vice president of Microchip’s Edge AI business unit. “As the core memory technology for Mythic’s next-generation products, memBrain delivers significant power efficiency and high performance for both edge and data centre applications.”
The memBrain cell features:
- Up to 8 data bits per bitcell (8 bpc) storage
- Single-digit nanoamp (nA) bitcell read current
- 10-year data retention at operating temperature
- 100,000 endurance cycles
- Full state machine control of the 8 bpc multi-state write operation
- Single-cycle multiply-and-accumulate operations for aCIM
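For context on what the last bullet implies, here is a toy numerical sketch of the analogue compute-in-memory principle: weights held as bitcell conductances (256 levels for 8 bpc), inputs applied as read voltages, and each bitline current summing into a multiply-accumulate. All values are illustrative placeholders, not SST or Mythic parameters.

```python
import numpy as np

# Toy analogue compute-in-memory (aCIM) sketch: hypothetical numbers only.
rng = np.random.default_rng(0)

levels = 256                       # 8 bpc -> 256 conductance states
g_max = 5e-9                       # assumed full-scale conductance (siemens)

weights = rng.uniform(-1, 1, size=(64, 16))         # ideal weights
codes = np.round((weights + 1) / 2 * (levels - 1))  # quantise to 8-bit codes
g = codes / (levels - 1) * g_max                    # programmed conductances

v_in = rng.uniform(0, 0.5, size=64)  # input activations as read voltages

# Each column's bitline current is the analogue dot product sum(G_ij * V_i),
# computed in a single read cycle by Ohm's law plus Kirchhoff's current law.
i_bitline = v_in @ g                 # amperes, one MAC result per column

# Cross-check against the ideal digital MAC after undoing the weight encoding
w_hat = codes / (levels - 1) * 2 - 1
print("analogue matches digital MAC (scaled):",
      np.allclose(i_bitline / g_max, v_in @ (w_hat + 1) / 2))
```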
“Mythic selected SST after an industry-wide search of eNVM technologies and determined the memBrain cell technology best enabled us to achieve the ultra-low-power and high performance required by our customers,” said Dr Taner Ozcelik, Mythic’s chief executive officer. “Additionally, the wide foundry availability of its industry-proven SuperFlash technology, coupled with the outstanding support of the SST engineering team, has been invaluable during our product development cycle.”
SST’s memBrain technology has been developed and deployed in 40 nm and 28 nm foundry processes using production-ready SuperFlash memory. 22 nm memBrain development is planned to extend the technology roadmap. Designed to provide reliable, high-performance and low-power non-volatile storage directly on the chip, SuperFlash memory is widely used in applications that require fast access times, high endurance and data retention without the need for external memory components.
The post Mythic and Microchip Partner to Redefine AI Processing with Next-Gen Analogue Compute-in-Memory Technology. appeared first on ELE Times.
🐣 Join us for the exhibition "Temari: A Diversity of Colours"!
The exhibition "Temari: A Diversity of Colours" has opened! Temari is an ancient Japanese art of embroidery on thread balls that has come down to us from the depths of the centuries.
KPI students' victory at Säkerhets-SM CTF 2026!
🏆 The dcua team from the Educational and Scientific Institute of Physics and Technology of Kyiv Polytechnic delivered a brilliant performance in the final of the cyber competition in Stockholm, establishing itself as the strongest team not only in Ukraine but also worldwide. Our students outperformed their peers from Sweden, Denmark, Finland, Iceland, Norway, Estonia, Latvia, and Lithuania, as well as the other Ukrainian teams.
Guerrilla RF expands aerospace & defense focus with new SatCom initiative
Scoping out the chiplet-based design flow

Today, the design of most monolithic SoCs follows a familiar pattern. Requirements definition leads to an architectural design. Then, the design team selects and qualifies the necessary IP blocks, assembles them into the architecture, and floorplans the die. Functional verification and early power and timing estimation can begin at this point.
The team can now begin RTL synthesis, rough placement, and at least preliminary routing. As these tasks finish, most SoC design teams will bring in physical-design specialists to complete the work until signoff.
But what about a multi-die design based on chiplets? At first glance, the sequence of tasks seems nearly identical to the one for a monolithic SoC. Just substitute chiplets for IP blocks and interposer design for physical chip design, right?
Well, no. The issues and corresponding tasks in chiplet-based design diverge significantly from the flow of most monolithic chip designs. Unless you intend to build a great deal of specialized multi-die expertise in-house, these issues make it vitally important to engage, from the beginning of the project, with a design partner experienced in both chiplet and interposer design, one with deep relationships across the global multi-die supply chain.
The chiplet path
The two paths diverge early in the design project. In concept, selecting chiplets sounds much like IP selection. However, the IP market is mature: there are sources for almost any common IP function, and specialist IP firms are willing to undertake nearly anything. And usually, IP is highly configurable, either by setting parameters for an RTL generator or by working with the provider.
Only when the SoC requirements demand a unique function or unusual operating constraints, such as market-leading performance or extreme low power, would the SoC team consider designing its own IP internally.
In contrast, the chiplet market, while growing, is still immature. Some combinations of functions may not be available. And chiplets—which are finished dies, after all—cannot be as flexible as an RTL generator tool. You may find an I/O hub chiplet with the right kinds of inputs and outputs, but you may not find one with the correct configuration, the right power, or the proper pad placement for your design.
For these reasons, chiplet-based designs often require the design of one or more chiplets, and chiplets can have very different constraints from stand-alone ICs—they aren’t just little SoCs. Chiplets usually have very high I/O densities, high-speed drivers or serial transceivers tuned to the very short interconnect runs on interposers, and precise pad placement requirements dictated by an interposer layout.
Also, because the finished module will have to be tested when test equipment has limited access to the dies, chiplets often emphasize built-in self-test (BiST) more than a conventional chip. Having a design partner familiar with these issues from the outset can save time and energy.
Memory has issues, too
One type of die in chiplet-based design deserves special mention: memory. In this era of AI everywhere, many chiplet-based architectures will include high-bandwidth memory (HBM). This is undoubtedly true for datacenter processors, but increasingly just as true for edge AI applications such as vision processing or robotics.
Unfortunately, HBM interface design, placement on the interposer, routing, and thermal analysis are all challenges that differ significantly from the issues with logic chiplets. Requirements vary from generation to generation of the HBM standard, and even vendor to vendor. In the intense competition for supply, securing a stable supply of HBM dies or die stacks is essential before locking down the interposer design.
A design partner with deep HBM experience and strong supply-chain connections can ensure your design delivers the memory bandwidth you need with HBM dies you can acquire without having to respin an interposer design.
Interposer design
That brings us to the interposer. Conceptually, interposer design is not unlike IP placement and routing on an SoC. But here, we are talking about placing physical dies on a piece of silicon—usually—and routing between physical pads that can’t be moved. In practice, the constraints and analysis tools differ from those for chip design.
Also, decisions made at this stage can impact earlier and later stages in the design flow. The limited bandwidth between chiplets may influence how the architecture is partitioned across the dies. Even spatial issues, such as how close processor chiplets may be placed to HBM stacks and how far away they may be, can influence architectural partitioning and chiplet designs.
Interposer design also includes tasks that are unfamiliar to most chip design teams. These include signal and power integrity analysis, 3D electromagnetic field modeling, and thermal and mechanical analysis of the 3D structure. Furthermore, design-for-test becomes an issue. A test strategy for the completed module must reasonably achieve the required coverage and be consistent with the assembly power budget. The test strategy will also influence the choice of OSAT vendors for the assembly.
Finally, the package must be designed, not chosen off the shelf. This will require yet another set of tools and analyses. Packaging decisions echo up and down the supply chain, influencing interposer design, the availability of materials, the geographic location of capable OSAT facilities, and more.
It takes a platform
The range of tasks and specialized skills necessary to bring a chiplet-based design to a global market is significantly broader than the set required for a modest SoC design. The fact that many tasks interact up and down the design flow further complicates the project. If too many specialist parties are involved, communications and change management can become a nightmare.
The best solution is neither a go-it-alone effort nor a scramble to pull together a horde of best-in-class specialist consultants. Nor is it turning the whole challenge over to a powerful foundry partner with limited global flexibility.
We have found that the optimum solution is a consolidation platform. This organization combines rich IP access, chiplet design experience, interposer expertise, strong relationships with HBM suppliers, multiple interposer foundries, and chip-on-wafer-capable OSATs worldwide. You need a partner with a platform to address the global challenge of chiplet-based products.

The consolidated platform is an ecosystem solution: a global network of IP and design expertise with foundry and OSAT service partners. Source: Faraday Technology Corp.
Kenneth Lu, marketing manager at Faraday Technology, has over 20 years of experience in the semiconductor industry, spanning product engineering, IP design, and marketing for various application ICs. He currently focuses on business development in advanced packaging, processes, and related innovations.
Special Section: Chiplets Design
- What the special section on chiplets design has to offer
- Chiplet innovation isn’t waiting for perfect standards
The post Scoping out the chiplet-based design flow appeared first on EDN.
Halo selects Eyelit to power scalable SiC wafering production with composable MES
NVIDIA and ST present new delivery boards for 800VDC architectures
Infineon introduces CoolGaN-based high-voltage intermediate bus converter reference designs
Built an online stripboard layout editor with live net colouring and conflict checking
About once a year or so I have to solder up a smallish stripboard. I used to design them on paper, which is kind of annoying if you make a mistake or want to change something. So this time I tried finding a simple stripboard editor but couldn't really find one that's easy and fast to use for simple projects. Therefore I just decided to create my own. It uses a split-screen layout with a very basic schematic editor on the left and a stripboard editor on the right. You first design a schematic and then place the components on the stripboard. Having the schematic allows for conflict detection, strip colouring and checking for unfinished nets on the stripboard. You can check it out here: https://stripboard-editor.com My goal was to create a fast, simple-to-use editor for small projects where it's not worth the trouble to use a complex editor but hard enough that using paper or just your head would be annoying. (I don't make any money off this in any way; it's just a personal hobby project I think could be useful.) If you have any feedback, I'd love to hear it. Greetings, Karl
Teradyne launches Photon 100 opto-electric automated test platform
The new performance bottleneck: How more GPU memory unlocks next-gen gaming and AI PCs
Courtesy: Micron
The next era of PC performance will be defined not by more compute, but by memory scale. Until now, the rising size of game assets and AI models has outpaced GPU memory capacity. Micron’s latest evolution of GDDR7 marks a pivotal shift for next-generation GPUs by combining higher memory density with the scalability that modern gaming and AI workloads now demand. With expanded capacity options built to support configurations up to 96GB of graphics memory, this generation of GDDR empowers systems to keep vastly larger worlds, richer textures, and growing AI models resident in memory, reducing bottlenecks and unlocking more consistent real-time performance across high-fidelity games and AI-enhanced applications.
Visual computing: The convergence of graphics and intelligence
Visual computing is entering a new era as graphics and intelligence converge. Modern systems must not only render high-fidelity scenes in real time, but also interpret, enhance, and generate content using increasingly complex AI models. Two forces are accelerating this shift: the push toward cinematic quality gaming and the rapid emergence of AI-powered PCs. As worlds grow larger, textures more detailed, and on-device AI more integral to responsiveness and personalisation, the demands placed on GPU memory have surged. In practice, memory capacity and efficiency now determine how smoothly a system can deliver immersive gameplay, intelligent creation tools, and real-time simulation, making memory a foundational enabler of next-generation visual computing.
Delivering unprecedented performance for high-resolution gaming
Modern games are pushing GPU architectures harder than ever. Real-time ray tracing demands continuous access to massive datasets (geometry, materials, lighting maps, and shadows), while high refresh rate displays and ultra-resolution textures multiply the data the GPU must process each frame. Add in sprawling open worlds and increasingly AI-assisted rendering techniques, and the result is a workload that easily overwhelms traditional memory limits.
The problem is that when GPU memory can’t hold all this data at once, the system is forced to constantly swap assets in and out. That leads to the issues gamers know too well: texture pop-in, mid-frame stutters, uneven frame times, and sudden drops during intense ray-traced scenes. AI-generated frames and upscaling pipelines also become less consistent when memory is constrained, because the models and intermediate buffers they rely on are constantly competing for space.
This is where next-generation GDDR capacity and bandwidth become critical. By enabling far larger datasets to remain resident in memory, GDDR7 keeps the entire visual pipeline (textures, lighting data, geometry sets, and AI inference models) fed, without the bottlenecks that cause visual artefacts or performance instability. The result is smoother, more predictable real-time rendering at 4K, 5K, and 8K, even in the most demanding scenes.
To keep these visual pipelines running efficiently, the memory subsystem must deliver data rapidly and consistently.
Enabling larger, more detailed worlds with 24Gb die density
As game environments expand and visual assets grow, memory capacity becomes critical to maintaining seamless, artefact-free experiences. Micron’s new 24Gb die density enables up to 96GB of graphics memory, giving GPUs significantly more space for high-resolution textures, expansive worlds, and advanced visual effects.
This increased capacity matters to gamers because it:
- Reduces asset swapping and texture pop-in
- Supports larger frame buffers for high-resolution displays
- Enables richer, more detailed environments with fewer loading transitions
Creators and professional users also benefit from faster real-time rendering, more responsive GPU-accelerated workflows, and improved handling of large datasets.
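A quick back-of-the-envelope sketch of how 24Gb dies add up to a 96GB configuration; the bus width and clamshell arrangement assumed below are one plausible layout, not a disclosed product specification:

```python
# Capacity arithmetic (illustrative, not a specific product)
die_bits = 24 * 2**30             # 24Gb die density
die_gbytes = die_bits / 8 / 2**30 # = 3 GB per die

# One plausible route to 96GB: a 512-bit bus with 16 x32 placements,
# doubled to 32 devices in clamshell mode.
devices = 32
print(devices * die_gbytes, "GB")  # 96.0 GB
```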
Fueling AI-enhanced graphics and the rise of AI PCs
AI is rapidly becoming integral to personal computing. Neural rendering, real-time media enhancement, content generation, and AI-assisted workflows place new demands on system memory. Micron GDDR7 is built to support these emerging workloads with increased bandwidth, lower latency, and improved efficiency.
Why GDDR7 matters for AI PCs
AI-driven graphics and compute tasks rely on continuous movement of large datasets. GDDR7 accelerates these operations by improving throughput and responsiveness across GPU pipelines.
Systems built with GDDR7 benefit from:
- Faster on-device AI inference for creation, media, and collaboration
- Lower-latency performance across hybrid CPU-GPU-NPU workflows
- Higher throughput for neural graphics and generative AI models
- Improved power efficiency thanks to architectural refinements and reduced operating voltages
As AI becomes embedded into everyday PC tasks, from writing and coding to editing, presenting, and gaming, memory performance will heavily influence the immediacy, intelligence, and fluidity of the experience.
Enabling the future of immersive and intelligent computing
Micron GDDR7 is more than a performance improvement; it is a foundational technology for the next decade of visual and AI computing. With 36 Gbps bandwidth, 24Gb die density, and improved efficiency, GDDR7 empowers GPU and AI PC vendors to deliver richer, more dynamic, and more intelligent computing experiences.
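To put the quoted per-pin rate in perspective, peak bandwidth is a simple multiplication of pin speed by bus width; the bus widths below are common GPU configurations assumed purely for illustration:

```python
# Peak-bandwidth arithmetic from the quoted 36 Gbps per-pin rate
# (bus widths are assumptions, not Micron specifications)
gbps_per_pin = 36
for bus_bits in (128, 256, 384, 512):
    gb_per_s = gbps_per_pin * bus_bits / 8
    print(f"{bus_bits}-bit bus: {gb_per_s:.0f} GB/s peak")
# e.g. 384-bit -> 1728 GB/s; 512-bit -> 2304 GB/s
```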
While NPUs are becoming essential for power-efficient, on-device AI acceleration, the most demanding visual and AI workloads still rely on the scale and parallelism of a discrete GPU. NPUs excel at sustained, low-power inference, but GPUs deliver significantly higher throughput for large models, neural graphics, advanced rendering, and gaming workloads. By pairing NPUs with discrete GPUs equipped with GDDR7, AI PCs can intelligently distribute tasks, assigning lightweight inference to the NPU while leveraging the GPU’s computing power and memory bandwidth for operations that require maximum performance. This combination unlocks capabilities far beyond what NPUs can achieve alone.
Together, Micron GDDR7 and the next wave of discrete GPUs set the stage for a new era of immersive graphics and high-performance AI computing.
The post The new performance bottleneck: How more GPU memory unlocks next-gen gaming and AI PCs appeared first on ELE Times.
Improve 555 frequency linearity

After more than fifty years of continuous production in bipolar and fully half that in CMOS, there’s really neither room nor reason to question the value and versatility of the venerable 555 analog timer. But if it has any significant limitation, it probably lies in the category of raw speed. Still, the LMC555 datasheet tells (albeit in a rather obscure footnote) of an impressive 3MHz capability. The details (including the 3MHz test circuit) appear in Figure 6-2 on page 6 of this 2024 datasheet.
Wow the engineering world with your unique design: Design Ideas Submission Guide
3 MHz for a decades-old, low-power, geriatric analog part isn’t too shabby. It suggests the delightfully simple topology of Figure 1 for a precision 5-decade 1-MHz, current-controlled oscillator, where:
F = 1/(VthCt/Ic) = 1/(3.33V × Ct/Ic) = Ic × 1000 MHz/A (with Ct = 300 pF)
Figure 1 A super simple, 5-decade LMC555 current-controlled oscillator.
Figure 1’s LMC555 is doing duty as a current-controlled oscillator with only two external components. It boasts a frequency range spanning 5 decades from 10Hz to (approximately) 1MHz. Cool!
But wait. What’s this “approximately” thing? How problematic is it, and mainly, how can we fix it if it is a problem? Here’s how.
The usual data sheet expression for LMC555 frequency of oscillation (FOO) is:
FOO = 1/(ln(2)RC) = 1/( ln(2) (Ra + 2Rb)C)
But in Figure 6-2, the 3-MHz test circuit, they show Ra = 470, Rb = 200, and C = 200 pF. Those numbers, when plugged into the data sheet arithmetic, yield an RC time constant of 121 ns and therefore predict that the oscillator frequency should hit, not just 3 MHz, but a figure nearly three times faster.
FOO = 1/(ln(2) × (470 + 400) × 200pF) = 1/120.8ns = 8.28 MHz.
Hold the phone! If 3 MHz is as fast as they can really go, what happened to the missing 5 MHz?
What’s happening is simply that, besides the explicit 121 ns external RC time constant, there’s an implicit time delay (Td) internal to the device of:
Td = 1/3MHz – 1/8.28MHz = 333ns – 121ns = 212ns
These 212 ns of internal delay, while short enough to keep the datasheet cookbook arithmetic accurate for low to moderate frequency, need attention if we want to push things anywhere near pedal-to-the-metal multi-MHz limits. A formula for usefully accurate high-frequency FOO prediction thus becomes more like:
FOO = 1/(VthCt/Ic + Td) = 1/(3.33V × Ct/Ic + 212ns)
When plotted out, this equation generates the droopy red curve in Figure 2 with a >20% error at 1 mA, which should yield 1 MHz but really delivers only ~800 kHz. Okay. That is pretty pitiful.

Figure 2 Nonlinear red curve versus ideal black shows ~20% error from LMC555 internal delay. The y-axis is the output frequency. The x-axis is the control current.
Luckily, a fix is both available and absurdly easy. It consists of merely a single resistor Rlin added between the Dch (discharge) and Thr (threshold) pins. It works to linearize the current vs frequency function by biasing the Thr pin upward by a voltage = IcRlin. This abbreviates the duration of the sawtooth timing ramp by:
T = IcRlin/(Ic/Ct) = RlinCt = Td
This cancels the 555's internal delay.
Therefore, if Rlin is chosen so that RlinCt = Td as shown in Figure 3, nonlinearity compensation will be (at least theoretically) complete over the full range of control current as shown in Figure 4. Note:
FOO = 1/(VthCt/Ic + 212ns - Td) = 1/(3.33V × Ct/Ic + 212ns - 212ns) = 1/(3.33V × Ct/Ic) = Ic × 1000 MHz/A

Figure 3 Nonlinearity compensation for 555 internal delays when RlinCt = Td = 212ns.

Figure 4 Frequency of oscillation nonlinearity is foregone and forgotten if Rlin = Td/Ct = 212 ns/300 pF = 706 ohms.
Theoretically.
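As a quick numeric sanity check of that theory, here is a short sketch using the article's values (the helper-function names are mine):

```python
# Numeric check of the Rlin compensation, values from the article
VTH = 3.33        # threshold ramp, volts (2/3 of a 5V supply)
CT = 300e-12      # timing capacitor, farads
TD = 212e-9       # internal delay, seconds

def f_ideal(ic):                 # FOO with no internal delay
    return 1 / (VTH * CT / ic)

def f_actual(ic):                # FOO including the 212ns delay
    return 1 / (VTH * CT / ic + TD)

def f_comp(ic, rlin=TD / CT):    # Rlin = Td/Ct = 706 ohms cancels Td
    return 1 / (VTH * CT / ic + TD - rlin * CT)

ic = 1e-3  # 1 mA full-scale control current
print(f_ideal(ic))   # ~1.00 MHz ideal
print(f_actual(ic))  # ~0.83 MHz, the droop of Figure 2
print(f_comp(ic))    # back to ~1.00 MHz, as in Figure 4
```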
So the question arises: Can anything practical be made of this theory? More on this soon.
Stephen Woodward‘s relationship with EDN’s DI column goes back quite a long way. Over 200 submissions have been accepted since his first contribution back in 1974. They have included best Design Idea of the year in 1974 and 2001.
Related Content
- Tune 555 frequency over 4 decades
- 555 VCO revisited
- Inverted MOSFET helps 555 oscillator ignore power supply and temp variations
- Gated 555 astable hits the ground running
The post Improve 555 frequency linearity appeared first on EDN.
Thermal Management in 3D-IC: Modelling Hotspots, Materials, & Cooling Strategies
Courtesy: Cadence
As three-dimensional integrated circuit (3D-IC) technology becomes the architectural backbone of AI, high-performance computing (HPC), and advanced edge systems, thermal management has shifted from a downstream constraint to a fundamental design driver. The dense vertical integration that enables unprecedented performance also concentrates heat at levels that traditional two-dimensional design methodologies cannot anticipate or mitigate. In fact, the temperatures and heat fluxes inside localised 3D-IC hotspots can reach a significant fraction of those encountered in rocket-launch thermal zones; only here the challenge unfolds on a microscopic silicon landscape rather than within a combustion chamber. This extreme thermal intensity makes early and predictive planning essential rather than optional.

Effective thermal management now begins at the architecture definition stage, where designers evaluate stack feasibility, power distribution, and allowable thermal envelopes before committing to partitioning decisions. These early insights directly shape block placement, power-delivery topology, and the choice of materials, interposers, and packaging technologies. As the industry increasingly relies on vertically integrated systems to achieve performance-per-watt gains, thermal awareness emerges as an architectural discipline in its own right, one that guides every subsequent stage of the 3D-IC design flow.
This article offers guidance on modelling, estimating, and mitigating thermal challenges in dense stacks and interposer-based 3D-ICs, with an emphasis on early electrothermal strategies that scale with complexity.
Sources of Heat in Stacked Architectures
Heat in 3D-ICs arises from a combination of device activity, vertical power density, and material constraints. When logic, memory, and accelerators are stacked, the total power per unit footprint increases dramatically. Upper dies, which are furthest from the heatsink, experience higher thermal resistance and reduced cooling efficiency, creating natural hotspots even when their individual power numbers appear modest.
The placement of through-silicon via (TSV) arrays, micro-bumps, and interconnect pillars also shapes the heat landscape. These structures act not only as electrical conduits but also as thermal conduits, depending on the material and density. Die-to-die interfaces with bonding layers often introduce thermal bottlenecks, and when chiplets operate at different power states, steep thermal gradients can trigger stress and reliability concerns. Understanding these interactions early is essential for setting realistic thermal limits and performance expectations.
Early Compact Models and Power Map Estimation
Thermal analysis must begin in parallel with the architectural definition itself. Early-stage compact models enable architects to approximate temperature distributions using only high-level power budgets, long before physical implementation. By capturing the combined influence of die thickness, material stacks, bonding interfaces, and interposer conductivity, these models reveal whether planned power densities or proposed die-stack configurations are thermally realistic. They help flag infeasible assumptions early, ensuring that functional partitioning and stacking choices are guided by thermally credible boundaries rather than late-stage surprises.
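To make this concrete, below is a minimal sketch of such a compact model: a 1D series thermal-resistance network for a hypothetical three-die stack. All resistances and powers are placeholders, not calibrated values; the point is that every watt generated above an interface must cross it, so upper dies run hotter even at modest power.

```python
# Minimal 1D compact thermal model of a three-die stack (placeholder values)
thetas = [0.10, 0.30, 0.30]   # K/W: heatsink->die0, die0->die1, die1->die2
powers = [20.0, 8.0, 4.0]     # W dissipated in die0 (base), die1, die2
t_ambient = 45.0              # degC at the heatsink boundary

temps = []
t = t_ambient
for i, theta in enumerate(thetas):
    # all power generated at level i and above flows through interface i
    t += theta * sum(powers[i:])
    temps.append(t)

for i, t in enumerate(temps):
    print(f"die {i}: {t:.1f} degC")
# The top die sees the full series resistance despite the lowest power.
```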

Creating usable power maps at this stage does not require full register transfer level (RTL) activity vectors. Coarse workload profiles can yield first-order estimates of dynamic and leakage power. When combined with simplified geometry models, they highlight thermally sensitive regions, enabling design teams to adjust block partitioning, die assignment, and approximate placement before entering the detailed implementation phase.

Cadence’s multiphysics system analysis ecosystem connects power estimation, compact thermal model (CTM) generation, and system-level thermal analysis, ensuring that signal, power, electromagnetic (EM), and thermal assumptions remain aligned throughout the early design phase. This early visibility reduces late-stage thermal surprises, which are often the costliest to rectify.
Heat Paths Through Dies, Interposers, and Package
Heat does not follow a single escape route in a 3D-IC. Instead, it propagates through a network of vertical and lateral paths whose efficiency depends on materials, die arrangement, and the package environment. Lower dies may benefit from direct contact with the heatsink, while upper dies rely on indirect conduction through intermediate layers. Thermal resistance builds cumulatively across each interface.
Interposers, whether made of silicon, glass, or organic materials, play a significant role in the heat flow picture. Silicon interposers offer superior thermal conductivity, enabling heat spreading but also concentrating thermal load where chiplets cluster. Organic interposers introduce more thermal resistance but offer other integration advantages. Achieving the correct tradeoff means modelling these layers as active participants in heat distribution, not static mechanical components.
The entire package, including substrate layers, heat spreaders, and lid materials, must also be included in thermal simulation. When package effects are omitted in early analysis, temperature predictions often skew optimistic, masking hotspots that emerge only after assembly-level modelling is performed.
Materials, TIMs, and Cooling Options for Stacks
Thermal simulation heavily relies on the structural definition of a product because the geometry, material properties, and assembly details directly dictate how heat is generated, transferred, and dissipated.
High-conductivity silicon, optimised interconnect materials, and improved underfill or bonding layers can lower the vertical thermal resistance of a stack. Thermal interface materials (TIMs) exhibit significant variations in performance, and even slight differences in thickness or coverage can result in substantial temperature differences across dies.
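A worked example of that sensitivity, using representative rather than vendor-specific numbers: for 1D conduction through a TIM, the thermal resistance is theta = t/(k·A), so bond-line thickness (BLT) scales it directly.

```python
# TIM resistance vs bond-line thickness (representative numbers, not vendor data)
k = 3.0        # W/(m*K), a mid-range TIM conductivity
area = 1e-4    # 1 cm^2 die footprint in m^2
for t_um in (25, 50, 100):          # bond-line thickness in microns
    theta = (t_um * 1e-6) / (k * area)   # theta = t / (k * A)
    print(f"{t_um} um BLT: {theta*1000:.0f} mK/W "
          f"-> {theta*100:.1f} K rise at 100 W")
# Doubling the BLT from 50 to 100 um adds roughly 17 K at 100 W.
```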
Cooling strategies for 3D-ICs are evolving rapidly. Traditional air cooling can be sufficient for moderate power budgets, but high-performance AI and HPC systems often require advanced approaches such as direct liquid cooling or vapour chamber solutions. The choice of cooling strategy should align with the power roadmap, not just the current generation’s requirements. Once a die stack is assembled, cooling options become constrained, so decisions made early influence the thermal feasibility of future product iterations.
Co-Optimisation with Placement and PDN Design
Thermal constraints directly influence floorplanning, macro placement, and power delivery network (PDN) topology in 3D-ICs. Efficient heat spreading is achieved when high-power blocks are positioned to maximise vertical conduction paths and lateral spreading through metal layers. If a block is placed too far from major thermal conduits, even robust cooling cannot compensate for the heat.

The PDN adds additional complexity. Power delivery structures, including TSVs, bumps, and interposer redistribution layers, introduce their own resistive heating. When modelled jointly with thermal effects, the combined electro-thermal behaviour reveals interactions that neither analysis can capture alone. Co-optimisation across these domains ensures that thermal mitigation does not compromise power integrity and vice versa.
A tightly integrated workflow enables round-trip refinement as power, placement, and package assumptions evolve. Without this iterative co-design, late-stage violations become inevitable, requiring disruptive redesigns.
Electro-Thermal Readiness for Signoff
Before committing a 3D-IC to final signoff, teams must verify that the design can withstand realistic thermal stress across operating modes and process corners. This includes validating that estimated power profiles align with actual activity, ensuring that predicted peak temperatures remain within safe limits, and confirming that no layer or interface exceeds its thermal reliability threshold.

Die-to-die boundaries, micro-bump arrays, TSV clusters, and package interconnects must be evaluated holistically, since minor thermal mismatches can accumulate into significant mechanical strain. Long-term reliability also depends on understanding how temperature interacts with electromigration, ageing, and performance drift over the product lifetime.
A complete electro-thermal signoff process provides the confidence needed before entering manufacturing, reducing field failures and ensuring long-term stability.
Designing for Thermal Scalability
3D-ICs deliver unprecedented performance, but they require a disciplined and predictive approach to thermal management. Success depends on treating heat as a first-order design variable, not a late-stage correction. Early modelling, accurate power estimation, careful material and stack selection, and co-optimisation across placement, PDN, interposer, and package all contribute to thermal resilience.
As system complexity continues to climb, teams that embed electro-thermal planning into their architecture and implementation flows will deliver higher-performing, more reliable, and scalable 3D-IC designs. Thermal awareness is no longer a specialisation; it is a foundational competency for next-generation semiconductor design.
The post Thermal Management in 3D-IC: Modelling Hotspots, Materials, & Cooling Strategies appeared first on ELE Times.
ROHM introduces reference designs for three-phase inverters featuring new SiC power modules
Keysight Launches AI Inference Emulation Platform to Validate and Optimise AI Infrastructure
Keysight Technologies has introduced Keysight AI Inference Builder (KAI Inference Builder), an emulation and analytics platform designed to validate inference-optimised AI infrastructure at scale. Keysight will demonstrate the solution at NVIDIA GTC, showcasing operation within NVIDIA DSX Air AI factory simulation environments to model and optimise AI data centre infrastructure, architectures, and performance.
As the AI industry shifts from training large language models (LLMs) to deploying them, optimising inference has become a crucial factor for ROI. However, inference behaviour is highly dynamic and difficult to emulate. Traditional testing methods like synthetic traffic generation or GPU benchmarks cannot accurately reproduce the latency-sensitive workload behaviour of AI inferencing across compute, networking, memory, storage, and security layers.
KAI Inference Builder closes that gap by recreating realistic inference workload patterns and modelling industry-specific usage patterns to validate AI infrastructure, applications, and data centre deployments. The platform gives AI cloud providers, hardware vendors, and application developers a scalable solution for measuring, validating, and optimising real-world inference performance.
Key benefits of KAI Inference Builder include:
- Built for the Inference Era: As part of the Keysight Artificial Intelligence (KAI) portfolio, KAI Inference Builder emulates AI inference workloads at scale and validates full-stack deployments under real-world conditions to optimise performance, scale, and security.
- Industry- and Application-Specific Benchmarking: Instead of generic emulations, KAI Inference Builder emulates industry-specific usage patterns and LLM architectures for AI models seen in finance, healthcare, and other verticals, enabling organisations to model and analyse infrastructure and application behaviour across different types of AI data centre deployments.
- End-to-End Validation and Optimisation: KAI Inference Builder evaluates inference workflows from user request to model response, helping teams reduce costly rework by identifying and resolving bottlenecks early across compute, network, and security layers.
- Subsystem Isolation and Root-Cause Precision: KAI Inference Builder can also do client-only emulation, which identifies where performance bottlenecks emerge across the AI infrastructure stack under load, enabling targeted optimisation that reduces overprovisioning, lowers costs, and improves overall efficiency.
- NVIDIA DSX Air Integration and Live GTC Demo: Keysight will showcase KAI Inference Builder’s turnkey integration with NVIDIA Air at NVIDIA GTC, generating realistic inference workloads throughout NVIDIA’s data centre simulation environment so operators can validate inference infrastructure before deploying physical equipment.
Ram Periakaruppan, Vice President and General Manager, Network Test & Security Solutions at Keysight, said: “Inference is the key to unlocking AI’s ROI, but that can be challenging to achieve when system resources aren’t optimised for capacity and performance. KAI Inference Builder provides visibility into real-world inference performance across the full stack, enabling customers to validate and optimise deployments before hardware reaches the rack. Showcasing this capability at NVIDIA GTC using NVIDIA’s Air platform demonstrates how organisations can accelerate the path to production while reducing risk and cost.”
Amit Katz, VP of Networking at NVIDIA, said: “As AI data centres scale to unprecedented levels, pre-deployment validation has transitioned from a best practice to a mission-critical requirement. The integration of KAI Inference Builder with NVIDIA DSX Air provides the essential environment needed to eliminate performance volatility and enables NVIDIA AI Factory partners and customers to emulate real inference workloads and preemptively resolve bottlenecks, ensuring optimised AI services reach the market quickly.”
The post Keysight Launches AI Inference Emulation Platform to Validate and Optimise AI Infrastructure appeared first on ELE Times.
POET and LITEON to co-develop optical modules for AI applications
STMicroelectronics accelerates global adoption and market growth of Physical AI with NVIDIA
STMicroelectronics announced the acceleration of global development and adoption of physical AI systems, including humanoid, industrial, service and healthcare robots. ST is integrating its comprehensive portfolio for advanced robotics into the reference set of components compatible with the NVIDIA Holoscan Sensor Bridge (HSB). In parallel, high-fidelity NVIDIA Isaac Sim models of ST components are being integrated into both companies’ robotics ecosystems to support faster, more accurate sim-to-real research and development. The first deliverables available to developers today include the integration of Leopard’s depth camera enabled by ST with the NVIDIA HSB and the high-fidelity model of an ST IMU into NVIDIA’s Isaac Sim ecosystem.
“ST is well engaged within the robotics community, providing robust support and a well-established ecosystem,” said Rino Peruzzi, Executive Vice President, Sales & Marketing, Americas & Global Key Account Organization at STMicroelectronics. “Our collaboration with NVIDIA aims to unleash the next wave of cutting-edge robotics innovation with developer and customer experience streamlined at every step, from the inception of AI algorithms to the seamless integration of sensors and actuators. This will accelerate the evolution of sophisticated AI-driven physical platforms.”
“Accelerating the development of next-generation autonomous systems requires high-fidelity simulation and seamless hardware integration to bridge the gap between virtual training and real-world deployment,” said Deepu Talla, Vice President of Robotics and Edge AI at NVIDIA. “The integration of STMicroelectronics’ sensor and actuator technologies with NVIDIA Isaac Sim, Holoscan Sensor Bridge and Jetson platforms provides developers with a unified foundation to build, simulate and deploy physical AI at scale.”
Simplifying sensor and actuator integration with the Holoscan Sensor Bridge
With the NVIDIA HSB, developers can unify, standardise, synchronise, and streamline data acquisition and logging from multiple ST sensors and actuators, a critical foundation for building high-fidelity NVIDIA Isaac models, accelerating learning, and minimising the sim-to-real gap.
The goal is to simplify the process of connecting ST sensors and actuators to NVIDIA Jetson platforms through pre-integrated solutions for the combination of STM32 MCUs, advanced sensors (including IMUs, imagers, and ToF devices) and motor‑control solutions, particularly for humanoid robot designs. Leopard Imaging’s stereo depth camera for robots is the perfect example. Using ST imaging, depth and motion-sensing technologies, it is expected to support a broad wave of designs across Physical AI OEMs, academic research groups and the industrial robotics community.
Reducing cost, complexity, and challenges with high-fidelity modelling for Omniverse Isaac
Advanced robotics developers face high development costs, in addition to modelling challenges. High‑fidelity simulations with extensive randomisation demand substantial GPU and CPU resources and large datasets. Selecting which parameters to randomise, and over what ranges, requires deep domain expertise. Poor choices can result in unrealistic scenarios or inefficient training. Finally, excessive variability can confuse models, slow convergence, and degrade real‑world performance when randomisation no longer reflects plausible conditions.
ST and NVIDIA’s objective is to provide accurate, hardware-calibrated models for the comprehensive portfolio of ST components, matching the requirements of advanced robotics. Following the availability of the first model of an IMU, ST is working to bring developers models of ToF sensors, actuators and other ICs derived from benchmark data collected on real ST hardware, using ST tools to capture accurate parameters and realistic behaviour, resulting in models optimised to NVIDIA’s Isaac Sim ecosystem. NVIDIA HSB is being integrated into ST’s toolchain collaboratively.
As a result, ST and NVIDIA envision that more accurate models will significantly improve robot learning. With models that closely mirror real-world device behaviour, robots can learn from simulations that better reflect actual conditions, shortening training cycles and lowering the cost of building and refining humanoid robotics applications.
The post STMicroelectronics accelerates global adoption and market growth of Physical AI with NVIDIA appeared first on ELE Times.



