Feed aggregator

CGD wins Hyundai Open Innovation challenge

Semiconductor today - 2 hours 13 min ago
Fabless firm Cambridge GaN Devices Ltd (CGD) — which was spun out of the University of Cambridge in 2016 to design, develop and commercialize power transistors and ICs that use GaN-on-silicon substrates — has been named as one of the winners of Hyundai’s Open Innovation Challenge on the future of sporty driving. Out of nearly 50 companies, CGD’s one-chip ICeGaN solution has been selected for its robustness and ease of use, showing high potential for utilization in power modules for EV traction inverters. This technology enables GaN to be considered as a cost-effective alternative to expensive silicon carbide (SiC) solutions in the high-power EV inverter market, says CGD...

Just Finished Some Automated PCBA Test Fixtures!

Reddit:Electronics - Wed, 12/17/2025 - 23:50

One of 4 automated PCBA test fixtures I have just completed. The entire design is from scratch, and pretty much everything you see is 3D printed or laser cut!

There are 2x PCBAs inside, lots of wires, and an additional switching PSU with a dummy load to simulate a battery for the UUT!

submitted by /u/Future_Ball_9094

Just made my first 4 layer design

Reddit:Electronics - Wed, 12/17/2025 - 23:27

Hello, this is a radiophone project I'm working on while in my second year of ECE.

I came up with this new design, this time on 4 layers, as impedances are much smaller.

The first part of the circuit (bottom left) is an LC tank that tunes close to 1 MHz using an old-school variable capacitor. Next, the signal gets demodulated, amplified, powered, and output (bottom middle); the rest is a simple power rectifier, with an IC driving a cool LED volume bar.
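For reference, the tank's resonant frequency follows f = 1/(2π√(LC)). A quick sketch with illustrative component values (not the actual parts in this build) shows how a variable capacitor sweeps roughly the AM band:

```python
import math

# Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C)).
# Component values below are illustrative, not taken from this build.
def lc_resonance_hz(l_henry, c_farad):
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

# A 240 uH coil with a 50-350 pF variable capacitor sweeps roughly the AM band:
f_low = lc_resonance_hz(240e-6, 350e-12)   # cap fully meshed -> lowest frequency
f_high = lc_resonance_hz(240e-6, 50e-12)   # cap fully open  -> highest frequency
```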

Pics are in order of layers. I used GND/SIGNAL - GND - POWER/SIGNAL - GND, with a keepout zone below the transformer to reduce capacitive noise.

Schematics

Layer 1 gnd/signal

Layer 2 GND

Layer 3 power/signal

Layer 4 gnd

submitted by /u/S4vDs

DVD Burner Laser w/CC Power Supply

Reddit:Electronics - Wed, 12/17/2025 - 23:07

This is a small laser module built around a laser diode recovered from a DVD burner. The power supply is based on an STMicroelectronics LM317T adjustable voltage regulator set up in a constant-current configuration.

Picture 3 shows the output next to that of a ~5 mW laser pointer. I don't think my phone camera liked taking this picture.

Schematic included.
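For anyone reproducing this: in the LM317's constant-current configuration, the regulator holds its reference voltage (typically 1.25 V) across a single programming resistor, so I_out = 1.25 / R_set. A quick sketch; the 100 mA target below is an assumed example, not the poster's actual setting:

```python
# LM317 as a constant-current source: the regulator maintains ~1.25 V (typical
# reference voltage) across the programming resistor, so I_out = 1.25 / R_set.
LM317_VREF = 1.25  # volts, typical; check the datasheet for min/max spread

def set_resistor_ohms(i_out_amps):
    """Programming resistor for a target diode current."""
    return LM317_VREF / i_out_amps

def power_rating_watts(i_out_amps):
    """The resistor dissipates Vref * I; size it with margin."""
    return LM317_VREF * i_out_amps

# e.g. an assumed 100 mA drive current:
r = set_resistor_ohms(0.100)      # 12.5 ohms
p = power_rating_watts(0.100)     # 0.125 W, so use a 1/4 W part or larger
```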

submitted by /u/AwesomeAvocado

Looking up what component you have to get a pinout......

Reddit:Electronics - Wed, 12/17/2025 - 22:43

Why the F did they decide to. No, no, listen: "we need 36 different pinouts on the same IC, with no ID code on it either, making it impossible to know which 'style' of IC you got. Now that's what we need." Looking for help identifying the gate, drain, and source (GDS) on this NMOS somehow, by circuit or instrument, no problem either way.

submitted by /u/Whyjustwhydothat

rosco_m68k debugging story — two LEDs on, no boot

Reddit:Electronics - Wed, 12/17/2025 - 19:47

I recently assembled a rosco_m68k THT kit. It took around 4 hours; I tried to keep everything as clean and careful as possible.

Ironically, I’m also working on my own soldering-related project called SolderDemon, so this failure was a good reminder that even clean work can hide stupid problems.

After powering it on, the board wouldn’t boot. Only the START and RESET LEDs were on. Measuring the CPU RESET pin showed ~2V, which made no sense.

First suspect was the RESET button, I desoldered it completely. No change.

While reflashing the PLD, I finally noticed the real issue: one of the IC sockets had a bad pin. The chip looked seated properly, but that pin wasn’t making contact at all.

I fixed the contact temporarily just to test it and the system booted immediately.

Lesson learned: don’t just inspect solder joints. Check IC socket pins too.
Even when the board looks clean, a single bad contact can make a system look completely dead.

submitted by /u/kynis45

NUBURU secures $25m financing to complete acquisitions

Semiconductor today - Wed, 12/17/2025 - 17:57
NUBURU Inc of Centennial, CO, USA — which was founded in 2015 and developed and previously manufactured high-power industrial blue lasers — has entered into a securities purchase agreement with YA II PN Ltd pursuant to which it will receive a gross cash infusion of $23.25m in exchange for the issuance of a $25m unsecured debenture and related warrant packages...

Enabling a variable output regulator to produce 0 volts? Caveat, designer!

EDN Network - Wed, 12/17/2025 - 15:00

For some time now, many of EDN’s Design Ideas (DIs) have dealt with ground-referenced, single-power-supplied voltage regulators whose outputs can be configured to produce zero or near-zero volts [1][2].

In this mode of operation, regulation in response to an AC signal is problematic. This is because the regulator output voltage can’t be more negative than zero. For the many regulators with totem pole outputs, at zero volts, we could hope for the ground-side MOSFET to be indefinitely enabled, and the high side disabled. But that’s not a regulator, it’s a switch.

Wow the engineering world with your unique design: Design Ideas Submission Guide

There might be some devices that act this way when asked to produce 0 volts, but in general, the best that could be hoped for is that the output is simply disabled. In such a case, a load that is solely an energy sink would pull the voltage to ground (woe unto any that are energy sources!).

But is it lollipops and butterflies all the way down to and including zero volts? I decided to test one regulator to see how it behaves.

Testing the regulator

A TPS54821EVM-049 evaluation module employs a TPS54821 buck regulator. I configured its PCB for a 6.3-V output and connected it to an 8-Ω load. I also connected a function generator through a 22-kΩ resistor to the regulator's V_SNS (feedback) pin.

The generator is set to produce a 360 mVp-p square wave across the load. It also provides a variable offset voltage, which is used to set the minimum voltage Vmin of the regulator output’s square-wave. Figure 1 contains several screenshots of regulator operation while it’s configured for various values of Vmin.

Figure 1 Oscilloscope screenshots with Vmin set to (a) 400 mV, (b) 300 mV, (c) 200 mV, (d) 100 mV, (e) 30 mV, (f) 0 mV, (g) below 0 mV. See text for further discussion. The scales of each screenshot are 100 mV and 1 ms per large division. An exception is (g), whose timescale is 100 µs per large division.

As can be seen, the output is relatively clean when Vmin is 400 mV, but gets progressively noisier as Vmin is reduced in 100-mV steps down to 100 mV (Figures 1a–1d).

But the real problems start when Vmin is set to about 30 mV and some kind of AC signal replaces what would preferably be a DC one; the regulator is switching between open and closed-loop operation (Figure 1e).  

We really get into the swing of things when Vmin is set to 0 mV and intermittent signals of about 150 mVp-p arise and disappear (Figure 1f). As the generator continues to be changed in the direction that would drive the regulator output more negative if it were capable, the amplitude of the regulator’s ringing immediately following the waveform’s falling edge increases (Figure 1g). Additionally, the overshoot of its recovery increases.

Why isn’t it on the datasheet?

This behavior might or might not disturb you. But it exists. And there are no guarantees that things would not be worse with different lots of TPS54821 or other switcher or linear regulator types altogether. These could be operating with different loads, feedback networks, and input voltage supplies with varying DC levels and amounts of noise.

There might be a very good reason that typical datasheets don’t discuss operation with output voltages below their references—it might not be possible to specify an output voltage below which all is guaranteed to work as desired. Or maybe it is.

But if it is, then why aren’t such capabilities mentioned? Where is there an IC manufacturer’s datasheet whose first page does not promise to kiss you and offer you a chocolate before you go to bed? (That is, list every possible feature of a product to induce you to buy it.)

Finding the lowest guaranteed output level

Consider a design whose intent is to allow a regulator to produce a voltage near or at zero. Absent any help from the regulator’s datasheet, I’m not sure I’d know how to go about finding a guaranteed output level below which bad things couldn’t happen.

But suppose this could be done. The “Gold-Plated” [1] DI was updated under this assumption. It provides a link to a spreadsheet that accepts the regulator reference voltage and its tolerance, a minimum allowed output voltage, a desired maximum one, and the tolerance of the resistors to be used in the circuit.

It calculates standard E96 resistor values of a specified precision along with the limits of both the maximum and the minimum output voltage ranges [3].  
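The standard-value rounding step that such a spreadsheet performs can be sketched as below. Note this generates the E96 series from the 10^(i/96) formula, which tracks the published IEC 60063 table closely but should be verified against the real table before committing a design:

```python
import math

# Nearest standard E96 value (sketch). The series is generated from
# round(10**(i/96), 2); verify against the published IEC 60063 table
# before using the result in a real design.
def nearest_e96(value):
    series = [round(10 ** (i / 96), 2) for i in range(96)]
    decade = math.floor(math.log10(value))
    candidates = [s * 10 ** decade for s in series]
    candidates.append(series[0] * 10 ** (decade + 1))  # allow rounding up a decade
    return min(candidates, key=lambda c: abs(c - value))
```

For example, nearest_e96(4700) lands on 4750, since 4.70 is not an E96 value; its nearest neighbors in the series are 4.64 and 4.75.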

“Standard” regulator results

A similar spreadsheet has been created for the more general "standard" regulator circuit in Figure 2. The latter can be found at [4].

Figure 2 The “standard” regulator in which a reference voltage Vext, independent of the regulator, is used in conjunction with Rg2 to drive the regulator output to voltages below its reference voltage. For linear regulators, L1 is replaced with a short.

The spreadsheet [4] was run with the following requirements in Figure 3.

Figure 3 Sample input requirements for the spreadsheet to calculate the resistor values and minimum and maximum output voltage range limits for a Standard regulator design.

The spreadsheet’s calculated voltage limits are shown in Figure 4.

Figure 4 Spreadsheet calculations of the minimum and maximum output voltage range limits for the requirements of Figure 3.

A Monte Carlo simulation was run 10000 times. The limits were confirmed to be close to and within the calculated ones (Figure 5).

Figure 5 Monte Carlo simulation results confirming the limits were consistent with the calculated ones.

A visual of the Monte Carlo results is helpful (Figure 6).

Figure 6 A graph of the Monte Carlo minimum output voltage range and the maximum one for the standard regulator. See text.

The minimum range is larger than the maximum range. This is because two large signals with tolerances are being subtracted to produce relatively small ones. The signals’ nominal values interfere destructively as intended. Unfortunately, the variations due to the tolerances of the two references do not:

OUT = Vref · ( 1 + Rf/Rg1 + Rf/Rg2 ) – Vext · PWM · Rf/Rg2
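To make that destructive-interference effect concrete, here is a minimal Monte Carlo sketch of the equation above. All component values, tolerances, and the 0-V nominal operating point are illustrative assumptions, not the spreadsheet's actual inputs:

```python
import random

# OUT = Vref*(1 + Rf/Rg1 + Rf/Rg2) - Vext*PWM*Rf/Rg2, evaluated at PWM = 1
# with nominal values chosen so the two large terms cancel to 0 V exactly.
def out_standard(vref, vext, rf, rg1, rg2, pwm=1.0):
    return vref * (1 + rf / rg1 + rf / rg2) - vext * pwm * rf / rg2

def mc_minimum_range(n=10_000, tol_r=0.001, tol_v=0.005, seed=1):
    rng = random.Random(seed)
    tol = lambda t: 1 + rng.uniform(-t, t)   # random multiplicative tolerance
    samples = [
        out_standard(
            vref=0.6 * tol(tol_v), vext=3.3 * tol(tol_v),
            rf=10e3 * tol(tol_r), rg1=10e3 * tol(tol_r), rg2=22.5e3 * tol(tol_r),
        )
        for _ in range(n)
    ]
    return min(samples), max(samples)

lo, hi = mc_minimum_range()
# Although the nominal output is 0 V, the spread is tens of millivolts:
# the two reference tolerances add even though the nominal terms cancel.
```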

“Gold-Plated” regulator results

When I released the “Gold-Plated” DI whose basic concept is seen in Figure 7, I did so as a lark. But after applying the aforementioned “standard” regulator’s design criteria to the Gold-Plated design’s spreadsheet [3], it became apparent that the Gold-Plated design has a real value—its ability to more greatly constrain the limits of the minimum output voltage range.

Figure 7 The basic concept of the Gold-Plated regulator. K = 1 + R3/R4 .

The input to the Gold-Plated spreadsheet is shown in Figure 8.

Figure 8 The inputs to the Gold-Plated spreadsheet.

Its calculations of the minimum and maximum output voltage range limits are shown in Figure 9.

Figure 9 The results for the “Gold-Plated” spreadsheet showing maximum and minimum voltage range limits when PWM inputs are at minimum and maximum duty cycles.

The limits resulting from its 10000 run Monte Carlo simulation were again confirmed to be close to and within those calculated by the spreadsheet:

Figure 10 Monte Carlo simulation results of the Gold-Plated spreadsheet, confirming the limits were consistent with the calculated ones.

Again, a visual is helpful, with the Gold-Plated results on the left and the Standard on the right.

 Figure 11 Graphs of the Monte Carlo simulation results of the Gold-Plated (left) and Standard (right) designs. The minimum voltage range of the Gold-Plated design is far smaller than that of the Standard.

The Standard regulator’s minimum range magnitude is 161 mV, while that of the Gold-Plated version is only 33 mV. The Gold-Plated’s advantage will increase as the desired Vmin approaches 0 volts. Its benefits are due to the fact that only a single reference is involved in the subtraction of terms:

OUT = Vref · ( 1 + Rf/Rg1 + Rf/Rg2 · PWM · ( 1 – K ) )
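A quick numeric check of that single-reference property, using an illustrative near-zero design point rather than the article's spreadsheet values; K is chosen here so the nominal output is exactly 0 V:

```python
# Standard: two references subtract; Gold-Plated: Vref multiplies the whole bracket.
def out_standard(vref, vext, rf=10e3, rg1=10e3, rg2=22.5e3, pwm=1.0):
    return vref * (1 + rf / rg1 + rf / rg2) - vext * pwm * rf / rg2

def out_gold(vref, k, rf=10e3, rg1=10e3, rg2=22.5e3, pwm=1.0):
    return vref * (1 + rf / rg1 + rf / rg2 * pwm * (1 - k))

# Both circuits tuned for a 0-V nominal output, then Vref is perturbed by +0.5%:
d_std = out_standard(0.6 * 1.005, vext=3.3) - out_standard(0.6, vext=3.3)
d_gold = out_gold(0.6 * 1.005, k=5.5) - out_gold(0.6, k=5.5)
# d_std is ~7 mV, while d_gold is essentially zero: the Vref error in the
# Gold-Plated design multiplies a bracket that is itself ~0 at this point.
```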

Belatedly, another advantage of the Gold-Plated was discovered: When a load is applied to any regulator, its output voltage falls by a small amount, causing a reduction of ΔV at the Vref feedback pin.

In the Gold-Plated, there is an even larger reduction at the output of its op-amp because of its gain. The result is a reduced drop across Rg2. This acts to increase the output voltage, improving load regulation.

In contrast, while the Standard regulator also sees a ΔV drop at the feedback pin, the external regulator voltage remains steady. The result is an increase in the drop across Rg2, further reducing the output voltage and degrading load regulation.

Summing up

The benefits of the Gold-Plated design are clear, but it's not a panacea. Whether a Gold-Plated or Standard design is used, designers still must address the question: How low should you go? Caveat, designer!

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

Related Content/References

  1. Gold-Plated PWM-control of linear and switching regulators
  2. Accuracy loss from PWM sub-Vsense regulator programming
  3. Gold-Plated DI Github
  4. Enabling a variable output regulator to produce 0 volts DI Github


RENA joins UK-funded consortium to strengthen national semiconductor metrology capabilities

Semiconductor today - Wed, 12/17/2025 - 13:55
RENA Technologies GmbH of Gütenbach, Germany (which supplies production machines for wet chemical surface preparation) is a key industrial partner in a new €1.3m (£1.2m) Government-funded project led by the National Physical Laboratory (NPL) – the UK’s National Metrology Institute (NMI) – and supported by the Department for Science, Innovation and Technology (DSIT). The initiative aims to establish critical new metrology capabilities to strengthen the UK’s semiconductor innovation infrastructure and accelerate the development and adoption of next-generation semiconductor materials and processes...

Why memory swizzling is a hidden tax on AI compute

EDN Network - Wed, 12/17/2025 - 13:47

Walk into any modern AI lab, data center, or autonomous vehicle development environment, and you’ll hear engineers talk endlessly about FLOPS, TOPS, sparsity, quantization, and model scaling laws. Those metrics dominate headlines and product datasheets. If you spend time with the people actually building or optimizing these systems, a different truth emerges: Raw arithmetic capability is not what governs real-world performance.

What matters most is how efficiently data moves. And for most of today's AI accelerators, data movement is tangled up with something rarely discussed outside compiler and hardware circles: memory swizzling.

Memory swizzling is one of the biggest unseen taxes paid by modern AI systems. It doesn’t enhance algorithmic processing efficiency. It doesn’t improve accuracy. It doesn’t lower energy consumption. It doesn’t produce any new insight. Rather, it exists solely to compensate for architectural limitations inherited from decades-old design choices. And as AI models grow larger and more irregular, the cost of this tax is growing.

This article looks at why swizzling exists, how we got here, what it costs us, and how a fundamentally different architectural philosophy, specifically, a register-centric model, removes the need for swizzling entirely.

The problem nobody talks about: Data isn’t stored the way hardware needs it

In any AI tutorial, tensors are presented as ordered mathematical objects that sit neatly in memory in perfect layouts. These layouts are intuitive for programmers, and they fit nicely into high-level frameworks like PyTorch or TensorFlow.

The hardware doesn’t see the world this way.

Modern accelerators—GPUs, TPUs, and NPUs—are built around parallel compute units that expect specific shapes of data: tiles of fixed size, strict alignment boundaries, sequences with predictable stride patterns, and layouts that map into memory banks without conflicts.

Unfortunately, real-world tensors never arrive in those formats. Before the processing even begins, data must be reshaped, re-tiled, re-ordered, or re-packed into the format the hardware expects. That reshaping is called memory swizzling.

You may think of it this way: The algorithm thinks in terms of matrices and tensors; the computing hardware thinks in terms of tiles, lanes, and banks. Swizzling is the translation layer—a translation that costs time and energy.

Why hierarchical memory forces us to swizzle

Virtually every accelerator today uses a hierarchical memory stack whose layers, from the top down, are: registers; shared or scratchpad memory; L1 and L2 cache, sometimes even L3 cache; high-bandwidth memory (HBM); and, at the bottom of the stack, external dynamic random-access memory (DRAM).

Each level has a different size, latency, bandwidth, and access energy consumption and, rather importantly, different alignment constraints. This is a legacy of CPU-style architecture, where caches hide memory latency. See Figure 1 and Table 1.

Figure 1 See the capacity and bandwidth attributes of a typical hierarchical memory stack in all current hardware processors. Source: VSORA

Table 1 Capacity, latency, bandwidth, and access energy dissipation of a typical hierarchical memory stack in all current hardware processors are shown here. Source: VSORA

GPUs inherited this model, then added single-instruction multiple-thread (SIMT) execution on top. That makes them phenomenally powerful—but also extremely sensitive to how data is laid out. If neighboring threads in a warp don’t access neighboring memory locations, performance drops dramatically. If tile boundaries don’t line up, tensor cores stall. If shared memory bank conflicts occur, everything waits.

TPUs suffer from similar constraints, just with different mechanics. Their systolic arrays operate like tightly choreographed conveyor belts. Data must arrive in the right order and at the right time. If weights are not arranged in block-major format, the systolic fabric can’t operate efficiently.

NPU-based accelerators—from smartphone chips to automotive systems—face the same issues: multi-bank SRAMs, fixed vector widths, and 2D locality requirements for vision workloads. Without swizzling, data arrives "misaligned" for the compute engine, and performance nosedives.

In all these cases, swizzling is not an optimization—it’s a survival mechanism.

The hidden costs of swizzling

Swizzling takes time, sometimes a lot

In real workloads, swizzling often consumes 20–60% of total runtime. That's not a typo. In a convolutional neural network, half the time may be spent doing NHWC ↔ NCHW conversions, that is, converting between two different ways of laying out 4D tensors in memory. In a transformer, vast amounts of time are wasted on reshaping Q/K/V tensors, splitting heads, repacking tiles for GEMMs, and reorganizing outputs.
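For the curious, the NHWC → NCHW conversion mentioned above is pure index shuffling with no arithmetic. A plain-Python sketch over a flat buffer (shapes are illustrative):

```python
# NHWC -> NCHW repack over a flat buffer: every element is read once and
# written once somewhere else. No math happens; this movement is the tax.
def nhwc_to_nchw(x, n, h, w, c):
    out = [None] * (n * h * w * c)
    for ni in range(n):
        for hi in range(h):
            for wi in range(w):
                for ci in range(c):
                    src = ((ni * h + hi) * w + wi) * c + ci   # NHWC index
                    dst = ((ni * c + ci) * h + hi) * w + wi   # NCHW index
                    out[dst] = x[src]
    return out

# 1x2x2x2 example: channels become the slowest axis after the batch.
print(nhwc_to_nchw(list(range(8)), 1, 2, 2, 2))  # [0, 2, 4, 6, 1, 3, 5, 7]
```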

Swizzling burns energy and energy is the real limiter

A single MAC consumes roughly a quarter of a picojoule, while fetching a value from DRAM can cost 500 picojoules. Moving data from DRAM thus dissipates in the ballpark of 2,000 times more energy than performing a basic multiply-accumulate operation.

Swizzling requires reading large blocks of data, rearranging them, and writing them back. And this often happens multiple times per layer. When 80% of your energy budget goes to moving data rather than computing on it, swizzling becomes impossible to ignore.

Swizzling inflates memory usage

Most swizzling requires temporary buffers: packed tiles, staging buffers, and reshaped tensors. These extra memory footprints can push models over the limits of L2, L3, or even HBM, forcing even more data movement.

Swizzling makes software harder and less portable

Ask a CUDA engineer what keeps them up at night. Ask a TPU compiler engineer why XLA is thousands of pages deep in layout inference code. Ask anyone who writes an NPU kernel for mobile why they dread channel permutations.

It’s swizzling. The software must carry enormous complexity because the hardware demands very specific layouts. And every new model architecture—CNNs, LSTMs, transformers, and diffusion models—adds new layout patterns that must be supported.

The result is an ecosystem glued together by layout heuristics, tensor transformations, and performance-sensitive memory choreography.

How major architectures became dependent on swizzling

  1. Nvidia GPUs

Tensor cores require specific tile-major layouts. Shared memory is banked; avoiding conflicts requires swizzling. Warps must coalesce memory accesses; otherwise, efficiency tanks. Even cuBLAS and cuDNN, the most optimized GPU libraries on Earth, are filled with internal swizzling kernels.

  2. Google TPUs

TPUs rely on systolic arrays. The flow of data through these arrays must be perfectly ordered. Weights and activations are constantly rearranged to align with the systolic fabric. Much of XLA exists simply to manage data layout.

  3. AMD CDNA, ARM Ethos, Apple ANE, and Qualcomm AI engine

Every one of these architectures performs swizzling: Morton tiling, interleaving, channel stacking, and so on. It's a universal pattern. Every architecture that uses hierarchical memory inherits the need for swizzling.
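As one concrete instance, Morton (Z-order) tiling interleaves the bits of the row and column indices so that 2D-adjacent elements stay close together in linear memory. A minimal sketch:

```python
# Morton (Z-order) index: interleave the bits of x and y so that elements
# that are neighbors in 2D land near each other in the 1D address space.
def morton2(x, y, bits=16):
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)       # x bits occupy even positions
        z |= ((y >> i) & 1) << (2 * i + 1)   # y bits occupy odd positions
    return z

# The first 2x2 block maps to linear indices 0..3:
# morton2(0,0)=0, morton2(1,0)=1, morton2(0,1)=2, morton2(1,1)=3
```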

A different philosophy: Eliminating swizzling at the root

Now imagine stepping back and rethinking AI hardware from first principles. Instead of accepting today’s complex memory hierarchies as unavoidable—the layers of caches, shared-memory blocks, banked SRAMs, and alignment rules—imagine an architecture built on a far simpler premise.

What if there were no memory hierarchy at all? What if, instead, the entire system revolved around a vast, flat expanse of registers? What if the compiler, not the hardware, orchestrated every data movement with deterministic precision? And what if all the usual anxieties—alignment, bank conflicts, tiling strategies, and coalescing rules—simply disappeared because they no longer mattered?

This is the philosophy behind a register-centric architecture. Rather than pushing data up and down a ladder of memory levels, data simply resides in the registers where computation occurs. The architecture is organized not around the movement of data, but around its availability.

That means:

  • No caches to warm up or miss
  • No warps to schedule
  • No bank conflicts to avoid
  • No tile sizes to match
  • No tensor layouts to respect
  • No sensitivity to shapes or strides, and therefore no swizzling at all

In such a system, the compiler always knows exactly where each value lives, and exactly where it needs to be next. It doesn’t speculate, prefetch, tile, or rely on heuristics. It doesn’t cross its fingers hoping the hardware behaves. Instead, data placement becomes a solvable, predictable problem.

The result is a machine where throughput remains stable, latency becomes predictable, and energy consumption collapses because unnecessary data motion has been engineered out of the loop. It’s a system where performance is no longer dominated by memory gymnastics—and where computing, the actual math, finally takes center stage.

The future of AI: Why a register-centric architecture matters

As AI systems evolve, the tidy world of uniform tensors and perfectly rectangular compute tiles is steadily falling away. Modern models are no longer predictable stacks of dense layers marching in lockstep. Instead, they expand in every direction: They ingest multimodal inputs, incorporate sparse and irregular structures, reason adaptively, and operate across ever-longer sequences. They must also respond in real time for safety-critical applications, and they must do so within tight energy budgets—from cars to edge devices.

In other words, the assumptions that shaped GPU and TPU architectures—the expectation of regularity, dense grids, and neat tiling—are eroding. The future workloads are simply not shaped the way the hardware wants them to be.

A register-centric architecture offers a fundamentally different path. Because it operates directly on data where it lives, rather than forcing that data into tile-friendly formats, it sidesteps the entire machinery of memory swizzling. It does not depend on fixed tensor shapes.

It doesn't stumble when access patterns become irregular or dynamic. It avoids the costly dance of rearranging data just to satisfy the compute units. And as models grow more heterogeneous and more sophisticated, such an architecture scales with their complexity instead of fighting against it.

This is more than an incremental improvement. It represents a shift in how we think about AI compute. By eliminating unnecessary data movement—the single largest bottleneck and energy sink in modern accelerators—a register-centric approach aligns hardware with the messy, evolving reality of AI itself.

Memory swizzling is the quiet tax that every hierarchical-memory accelerator pays. It is fundamental to how GPUs, TPUs, NPUs, and nearly all AI chips operate. It’s also a growing liability. Swizzling introduces latency, burns energy, bloats memory usage, and complicates software—all while contributing nothing to the actual math.

One register-centric architecture eliminates swizzling at the root by removing the hierarchy that makes it necessary. It replaces guesswork and heuristics with deterministic dataflow. It prioritizes locality without requiring rearrangement. It lets the algorithm drive the hardware, not vice versa.

As AI workloads become more irregular, dynamic, and power-sensitive, architectures that keep data stationary and predictable—rather than endlessly reshuffling it—will define the next generation of compute.

Swizzling was a necessary patch for the last era of hardware. It should not define the next one.

Lauro Rizzatti is a business advisor to VSORA, a technology company offering silicon semiconductor solutions that redefine performance. He is a noted chip design verification consultant and industry expert on hardware emulation.

 

Related Content


NHanced supporting mixed-material heterogeneous hybrid bonding production with copper or nickel bonds

Semiconductor today - Wed, 12/17/2025 - 12:48
NHanced Semiconductors Inc of Batavia, IL (the first US-based pure-play advanced packaging foundry) says that it uniquely supports mixed-material hybrid bonding with either copper or nickel bonds. Its new Besi bonding system further expands its advanced packaging yield and throughput...

NEHU’s indigenous chip against Red Spider Mite for tea gardens

ELE Times - Wed, 12/17/2025 - 12:12

The North-Eastern Hill University (NEHU) in Meghalaya has developed an innovative indigenous semiconductor chip aimed at repelling the Red Spider Mite, one of the most destructive pests affecting tea gardens across the Northeast and other tea-growing regions of India.

This tech-driven and eco-friendly innovation was developed entirely at the Department of Electronics and Communication Engineering, NEHU, through the collaborative research efforts of Pankaj Sarkar, Sushanta Kabir Dutta, Sangeeta Das, and Bhaiswajyoti Lahon.

The chip’s fabrication was undertaken at the Semiconductor Laboratory, Mohali, a premier Government of India facility for semiconductor manufacturing.

Semiconductor technology is increasingly relevant to the agricultural sector through sensors, drones, edge computing, IoT, and artificial intelligence. These technologies have modernized agricultural infrastructure, enabling precision farming and predictive analysis to boost yield.


Renesas Launches R-Car Gen 5 Platform for Multi-Domain SDVs

ELE Times - Wed, 12/17/2025 - 10:57

Renesas Electronics Corporation is expanding its software-defined vehicle (SDV) solution offerings centered around the fifth-generation (Gen 5) R-Car family. The latest device in the Gen 5 family, the R-Car X5H, is the industry's first multi-domain automotive system-on-chip (SoC) manufactured with advanced 3nm process technology. It is capable of simultaneously running vehicle functions across advanced driver assistance systems (ADAS), in-vehicle infotainment (IVI), and gateway systems.

 

Renesas has begun sampling Gen 5 silicon and now offers full evaluation boards and the R-Car Open Access (RoX) Whitebox Software Development Kit (SDK) as part of the next phase of development. Renesas is also driving deeper collaboration with customers and partners to accelerate adoption. At CES 2026, Renesas will showcase AI-powered multi-domain demonstrations of the R-Car X5H in action.

 

The R-Car X5H leverages one of the most advanced process nodes in the industry to offer the highest level of integration, performance and power efficiency, with up to 35 percent lower power consumption than previous 5nm solutions. As AI becomes integral to next-generation SDVs, the SoC delivers powerful central compute targeting multiple automotive domains, with the flexibility to scale AI performance using chiplet extensions. It delivers up to 400 TOPS of AI performance, with chiplets boosting acceleration by four times or more. It also features 4 TFLOPS equivalent of GPU power for high-end graphics and over 1,000k DMIPS powered by 32 Arm Cortex-A720AE CPU cores and six Cortex-R52 lockstep cores with ASIL D support. Leveraging mixed criticality technology, the SoC executes advanced features in multiple domains without compromising safety.

  

Accelerating Automotive Innovation with an Open, Scalable RoX Whitebox SDK

To accelerate time-to-market, Renesas now offers the RoX Whitebox Software Development Kit (SDK) for the R-Car X5H, an open platform built on Linux, Android, and the XEN hypervisor. Additional support for partner OS and solutions is available, including AUTOSAR, EB corbos Linux, QNX, Red Hat and SafeRTOS. Developers can jumpstart development out of the box using the SDK to build ADAS, L3/L4 autonomy, intelligent cockpit, and gateway systems. An integrated stack of AI and ADAS software enables real-time perception and sensor fusion, while generative AI and Large Language Models (LLMs) enable intelligent human-machine interaction for next-generation AI cockpits. The SDK integrates production-grade application software stacks from leading partners such as Candera, DSPConcepts, Nullmax, SmartEye, STRADVISION and ThunderSoft, supporting end-to-end development of modern automotive software architectures and faster time to market.

 

“Since introducing our most advanced R-Car device last year, we have been steadfast in developing market-ready solutions, including delivering silicon samples to customers earlier this year,” said Vivek Bhan, Senior Vice President and General Manager of High-Performance Computing at Renesas. “In collaboration with OEMs, Tier-1s and partners, we are rapidly rolling out a complete development system that powers the next generation of software-defined vehicles. These intelligent compute platforms deliver a smarter, safer and more connected driving experience and are built to scale with future AI mobility demands.”

 

“Integrating Renesas’ R-Car X5 generation series into our high-performance compute portfolio is a natural next step that builds on our existing collaboration,” said Christian Koepp, Senior Vice President Compute Performance at Bosch’s Cross-Domain Computing Solutions Division. “At CES 2026, we look forward to showcasing this powerful solution with Renesas X5H SoC, demonstrating its fusion capabilities across multiple vehicle domains, including video perception for advanced driver assistance systems.”

 

“Innovative system-on-chip technology, such as Renesas’ R-Car X5H, is paving the way for ZF’s software-defined vehicle strategy,” said Dr. Christian Brenneke, Head of ZF’s Electronics & ADAS division. “Combining Renesas’ R-Car X5H with our ADAS software solutions enables us to offer full-stack ADAS capabilities with high computing power and scalability. The joint platform combines radar localization and HD mapping to provide accurate perception and positioning for reliable ADAS performance. At CES 2026, we’ll showcase our joint ADAS solution.”

 

First Fusion Demo on R-Car X5H with Partner Solutions at CES 2026

The new multi-domain demo upscales from R-Car Gen 4 to the next-generation R-Car X5H on the RoX platform, integrating ADAS and IVI stacks, RTOS, and edge AI functionality on Linux and Android with XEN hypervisor virtualization. Supporting input from eight high-resolution cameras and up to eight displays with resolutions reaching 8K2K, the platform delivers immersive visualization and robust sensor integration for next-generation SDVs. Combined with the RoX Whitebox SDK and production-grade partner software stacks, the platform is engineered for real-world deployment covering multiple automotive domains.


Availability

Renesas is shipping R-Car X5H silicon samples and evaluation boards, along with the RoX Whitebox SDK, to select customers and partners.

The post Renesas Launches R-Car Gen 5 Platform for Multi-Domain SDVs appeared first on ELE Times.

Why Frugal engineering is a critical aspect for advanced materials in 2026

ELE Times - Wed, 12/17/2025 - 08:14

by Vijay Bolloju, Director R&D, iVP Semiconductor

The widespread electrification of everything is pushing the boundaries of power electronics systems. The demand for higher power density and lower system weight necessitates the use of novel materials.


Newer-generation power semiconductor devices can operate at higher temperatures, which increases power density and reduces the power-device cost of the system. At the same time, high-temperature operation raises reliability concerns: dielectric breakdown, deformation, and increased leakage currents caused by ionic contamination of the moulding compounds. Packaging materials capable of operating reliably at higher temperatures are needed to exploit these devices to the fullest.

Recent trends also show that the operating voltages of systems such as EVs, data centres, and telecom equipment are on the rise. Higher operating voltages demand a higher degree of safety compliance to protect users.

The cost breakdown of high-power electronic systems shows that more than half of the cost comes from non-semiconductor materials: plastics used for packaging, thermal interface materials (TIMs), sealing compounds, heat dissipators such as heat sinks, cooling liquids, substrates, connectors, and so on.

Substrates play a major role in the thermal performance, structural stability, and reliability of these systems. FR4 PCBs, commonly used for low-power systems, have very poor thermal conductivity (0.24 W/m-K). FR4 also has a low glass-transition temperature (Tg ~130 °C), which limits the operating range of the power semiconductors. These substrates are not recommended for high-power applications.

Aluminium metal-core PCBs (MCPCBs) are also widely used for building electronic circuits. These substrates have higher thermal conductivity (~2 W/m-K) and higher Tg, offering better mechanical stability and thermal performance. Though multi-layer MCPCBs are available, most MCPCBs are single-layer for cost reasons, which limits how compact the system can be made.

Ceramic substrates such as alumina (Al2O3) and aluminium nitride (AlN) have excellent thermal conductivity and mechanical stability: alumina's thermal conductivity (24 W/m-K) is 100X that of FR4, and aluminium nitride's (240 W/m-K) is 1000X. They also offer superior reliability and high-temperature operation, making them well suited to high-power systems. Like MCPCBs, they are typically single-layer for cost reasons.

Substrate materials should be selected on cost-performance criteria. Substrate cost increases in the order FR4 PCBs, MCPCBs, ceramic substrates, but power semiconductor cost falls in the reverse order thanks to the improved thermal conductivity. System reliability also depends on the substrate choice, with ceramics offering the best and FR4 the least. A sensible trade-off must therefore be made.
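The thermal side of this trade-off can be illustrated with a simple one-dimensional conduction estimate, R_th = t / (k·A). The conductivities below are the figures quoted above; the substrate thickness and heat-flow area are purely illustrative assumptions, not recommended design values.

```python
# Rough 1-D conduction estimate of substrate thermal resistance:
# R_th = t / (k * A). Conductivities (k) are taken from the article;
# thickness (t) and heat-flow area (A) are assumed example values.
substrates = {
    "FR4":    0.24,   # W/m-K
    "MCPCB":  2.0,    # W/m-K (aluminium metal-core)
    "Al2O3":  24.0,   # W/m-K (alumina)
    "AlN":    240.0,  # W/m-K (aluminium nitride)
}

t = 1.0e-3  # assumed substrate thickness: 1 mm
A = 1.0e-4  # assumed heat-flow area: 1 cm^2

for name, k in substrates.items():
    r_th = t / (k * A)  # thermal resistance in K/W
    print(f"{name:6s}: {r_th:8.3f} K/W")
```

Even with these rough numbers, the spread is two to three orders of magnitude, which is why a more expensive substrate can pay for itself by allowing smaller or fewer power devices.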

Thermal interface materials (TIMs) also have a profound effect on system performance, reliability, and cost, yet they are often neglected. A well-chosen TIM can significantly enhance the thermal performance of the system and even reduce the number of power devices needed to implement a design. TIMs also provide dielectric insulation, so an ideal TIM combines high thermal conductivity with high dielectric strength. Choosing a TIM that meets the system requirements helps reduce overall system cost and size.
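The effect of the TIM on the thermal budget can be sketched with the usual series thermal path, T_j = T_a + P·(R_jc + R_tim + R_hs). All resistance and power values below are assumed example figures chosen only to show the sensitivity to the TIM term, not datasheet numbers.

```python
# Illustrative junction-temperature estimate for a single series
# thermal path: T_j = T_a + P * (R_jc + R_tim + R_hs).
# All numeric values are assumed examples, not datasheet figures.
def t_junction(p_loss, r_jc, r_tim, r_hs, t_ambient=40.0):
    """Steady-state junction temperature (°C) for a series thermal path."""
    return t_ambient + p_loss * (r_jc + r_tim + r_hs)

P = 50.0      # assumed device power loss, W
R_JC = 0.5    # assumed junction-to-case resistance, K/W
R_HS = 1.0    # assumed heatsink-to-ambient resistance, K/W

for label, r_tim in [("basic pad", 1.5), ("high-k TIM", 0.3)]:
    print(f"{label:10s}: Tj = {t_junction(P, R_JC, r_tim, R_HS):.1f} degC")
```

With these assumed values, swapping the TIM alone moves the junction temperature by tens of degrees, which is exactly the headroom that lets a design use fewer or cheaper power devices.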

Choosing the right substrate materials, TIM, and heat dissipator can reduce system cost and size considerably and lead to frugal designs.

A holistic approach to design, from the selection of power device technologies through substrates, TIMs, and heat dissipators, can yield high-performance, reliable, lower-cost systems.

Currently, the Indian materials ecosystem is underdeveloped and needs to be revamped to serve the power electronics industry and achieve higher performance metrics. Plastics, substrates, TIMs, and other materials can be developed locally using advances in nano-materials, carbon compounds, engineering plastics, composite materials, and the like. India has a mature ceramics industry serving the energy and medical sectors; those technologies can be applied to make substrate materials for power electronics. Metallization of ceramic substrates to print circuits is another essential skill set to be developed.

High-thermal-conductivity composite materials, metal foam forming, and phase-change materials can elevate the thermal performance of these systems. If a system can be cooled using advanced materials, without the need for liquid cooling, the system cost falls considerably and reliability improves.

All the materials described above that can improve system performance and reliability while reducing cost (Frugal innovations) can be developed and manufactured locally. A concerted and collaborative effort is all it needs.

The post Why Frugal engineering is a critical aspect for advanced materials in 2026 appeared first on ELE Times.
