Feed aggregator
rosco_m68k debugging story — two LEDs on, no boot
I recently assembled a rosco_m68k THT kit. It took around 4 hours, and I tried to keep everything as clean and careful as possible. Ironically, I’m also working on my own soldering-related project called SolderDemon, so this failure was a good reminder that even clean work can hide stupid problems. After powering it on, the board wouldn’t boot. Only the START and RESET LEDs were on. Measuring the CPU RESET pin showed ~2 V, which made no sense. My first suspect was the RESET button, so I desoldered it completely. No change. While reflashing the PLD, I finally noticed the real issue: one of the IC sockets had a bad pin. The chip looked seated properly, but that pin wasn’t making contact at all. I fixed the contact temporarily just to test it, and the system booted immediately. Lesson learned: don’t just inspect solder joints. Check IC socket pins too.
More vintage electronics
submitted by /u/Green-Pie4963
NUBURU secures $25m financing to complete acquisitions
Enabling a variable output regulator to produce 0 volts? Caveat, designer!

For some time now, many of EDN’s Design Ideas (DIs) have dealt with ground-referenced, single-power-supplied voltage regulators whose outputs can be configured to produce zero or near-zero volts [1][2].
In this mode of operation, regulation in response to an AC signal is problematic. This is because the regulator output voltage can’t be more negative than zero. For the many regulators with totem pole outputs, at zero volts, we could hope for the ground-side MOSFET to be indefinitely enabled, and the high side disabled. But that’s not a regulator, it’s a switch.
Wow the engineering world with your unique design: Design Ideas Submission Guide
There might be some devices that act this way when asked to produce 0 volts, but in general, the best that could be hoped for is that the output is simply disabled. In such a case, a load that is solely an energy sink would pull the voltage to ground (woe unto any that are energy sources!).
But is it lollipops and butterflies all the way down to and including zero volts? I decided to test one regulator to see how it behaves.
Testing the regulator
A TPS54821EVM-049 evaluation module employs a TPS54821 buck regulator. I’ve configured its PCB for 6.3-V out and connected it to an 8-Ω load. I’ve also connected a function generator through a 22-kΩ resistor to the regulator’s V_SNS (feedback) pin.
The generator is set to produce a 360 mVp-p square wave across the load. It also provides a variable offset voltage, which is used to set the minimum voltage Vmin of the regulator output’s square-wave. Figure 1 contains several screenshots of regulator operation while it’s configured for various values of Vmin.
Figure 1 Oscilloscope screenshots with Vmin set to (a) 400 mV, (b) 300 mV, (c) 200 mV, (d) 100 mV, (e) 30 mV, (f) 0 mV, (g) below 0 mV. See text for further discussion. The scales of each screenshot are 100 mV and 1 ms per large division. An exception is (g), whose timescale is 100 µs per large division.
As can be seen, the output is relatively clean when Vmin is 400 mV, but gets progressively noisier as it is reduced in 100-mV steps down to 100 mV (Figures 1a–1d).
But the real problems start when Vmin is set to about 30 mV and some kind of AC signal replaces what would preferably be a DC one; the regulator is switching between open and closed-loop operation (Figure 1e).
We really get into the swing of things when Vmin is set to 0 mV and intermittent signals of about 150 mVp-p arise and disappear (Figure 1f). As the generator continues to be changed in the direction that would drive the regulator output more negative if it were capable, the amplitude of the regulator’s ringing immediately following the waveform’s falling edge increases (Figure 1g). Additionally, the overshoot of its recovery increases.
Why isn’t it on the datasheet?
This behavior might or might not disturb you. But it exists. And there are no guarantees that things would not be worse with different lots of TPS54821 or other switcher or linear regulator types altogether. These could be operating with different loads, feedback networks, and input voltage supplies with varying DC levels and amounts of noise.
There might be a very good reason that typical datasheets don’t discuss operation with output voltages below their references—it might not be possible to specify an output voltage below which all is guaranteed to work as desired. Or maybe it is.
But if it is, then why aren’t such capabilities mentioned? Where is there an IC manufacturer’s datasheet whose first page does not promise to kiss you and offer you a chocolate before you go to bed? (That is, list every possible feature of a product to induce you to buy it.)
Finding the lowest guaranteed output level
Consider a design whose intent is to allow a regulator to produce a voltage near or at zero. Absent any help from the regulator’s datasheet, I’m not sure I’d know how to go about finding a guaranteed output level below which bad things couldn’t happen.
But suppose this could be done. The “Gold-Plated” [1] DI was updated under this assumption. It provides a link to a spreadsheet that accepts the regulator reference voltage and its tolerance, a minimum allowed output voltage, a desired maximum one, and the tolerance of the resistors to be used in the circuit.
It calculates standard E96 resistor values of a specified precision along with the limits of both the maximum and the minimum output voltage ranges [3].
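As a rough illustration of what such a spreadsheet does, here is a minimal Python sketch that snaps an ideal feedback-divider resistance to the nearest standard E96 value and then checks the worst-case output limits over component tolerance. The divider topology, reference voltage, tolerances, and target values below are illustrative placeholders, not the inputs of the DI’s spreadsheet [3].

```python
import math

# E96 base values, approximated by the standard 10^(i/96) progression
# rounded to three significant figures.
E96 = sorted({round(10 ** (i / 96), 2) for i in range(96)})

def nearest_e96(value_ohms):
    """Snap an ideal resistance to the nearest E96 value (any decade)."""
    decade = 10 ** math.floor(math.log10(value_ohms))
    candidates = [b * decade for b in E96] + [E96[0] * decade * 10]
    return min(candidates, key=lambda r: abs(r - value_ohms))

# Placeholder design: a simple divider Vout = Vref * (1 + Rf/Rg),
# 1% resistors, 0.8-V +/-1% reference, ~5.0-V nominal output.
vref_nom, vref_tol, r_tol = 0.8, 0.01, 0.01
rf, rg = nearest_e96(52_300.0), nearest_e96(10_000.0)

def vout(vref, rf_, rg_):
    return vref * (1 + rf_ / rg_)

# Worst-case corners over reference and resistor tolerance.
corners = [vout(vref_nom * (1 + sv * vref_tol),
                rf * (1 + sf * r_tol),
                rg * (1 + sg * r_tol))
           for sv in (-1, 1) for sf in (-1, 1) for sg in (-1, 1)]
print(f"Rf = {rf:.0f} ohm, Rg = {rg:.0f} ohm, "
      f"Vout range {min(corners):.3f} V .. {max(corners):.3f} V")
```

The DI’s spreadsheets do the equivalent search over the full Gold-Plated and Standard transfer functions, including the Vext and PWM terms.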
“Standard” regulator results
A similar spreadsheet has been created for the more general “standard” regulator circuit in Figure 2. The latter can be found at [4].
Figure 2 The “standard” regulator in which a reference voltage Vext, independent of the regulator, is used in conjunction with Rg2 to drive the regulator output to voltages below its reference voltage. For linear regulators, L1 is replaced with a short.
The spreadsheet [4] was run with the following requirements in Figure 3.

Figure 3 Sample input requirements for the spreadsheet to calculate the resistor values and minimum and maximum output voltage range limits for a Standard regulator design.
The spreadsheet’s calculated voltage limits are shown in Figure 4.

Figure 4 Spreadsheet calculations of the minimum and maximum output voltage range limits for the requirements of Figure 3.
A Monte Carlo simulation was run 10000 times. The limits were confirmed to be close to and within the calculated ones (Figure 5).

Figure 5 Monte Carlo simulation results confirming the limits were consistent with the calculated ones.
A visual of the Monte Carlo results is helpful (Figure 6).

Figure 6 A graph of the Monte Carlo minimum output voltage range and the maximum one for the standard regulator. See text.
The minimum range is larger than the maximum range. This is because two large signals with tolerances are being subtracted to produce relatively small ones. The signals’ nominal values interfere destructively as intended. Unfortunately, the variations due to the tolerances of the two references do not:
OUT = Vref · ( 1 + Rf/Rg1 + Rf/Rg2 ) – Vext · PWM · Rf/Rg2
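A quick Monte Carlo of this equation makes the effect visible. The sketch below uses made-up nominal values and 1% tolerances rather than the Figure 3 requirements; the point is only that at the minimum output (PWM = 1) the Vref and Vext tolerance terms stop cancelling, so the spread of the minimum is wider than the spread of the maximum.

```python
import random

# Standard-regulator output, per the equation above:
#   OUT = Vref*(1 + Rf/Rg1 + Rf/Rg2) - Vext*PWM*(Rf/Rg2)
def out(vref, vext, pwm, rf, rg1, rg2):
    return vref * (1 + rf / rg1 + rf / rg2) - vext * pwm * (rf / rg2)

def toleranced(nominal, tol, rnd):
    """A uniformly distributed value within +/-tol of nominal."""
    return nominal * (1 + rnd.uniform(-tol, tol))

# Illustrative values only (not the Figure 3 inputs): ~5 V at PWM = 0, ~0 V at PWM = 1.
VREF, VEXT = 0.8, 5.0                   # volts, each assumed +/-1%
RF, RG1, RG2 = 10_000, 2_353, 10_000    # ohms, assumed +/-1%
REF_TOL, R_TOL = 0.01, 0.01

rnd = random.Random(0)
for pwm, label in ((0.0, "maximum output"), (1.0, "minimum output")):
    samples = [out(toleranced(VREF, REF_TOL, rnd), toleranced(VEXT, REF_TOL, rnd), pwm,
                   toleranced(RF, R_TOL, rnd), toleranced(RG1, R_TOL, rnd),
                   toleranced(RG2, R_TOL, rnd))
               for _ in range(10_000)]
    print(f"{label}: {min(samples):+.3f} .. {max(samples):+.3f} V "
          f"(spread {(max(samples) - min(samples)):.3f} V)")
```

Running it prints a visibly wider spread for the minimum-output case, which mirrors the qualitative result in Figure 6.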
“Gold-Plated” regulator results
When I released the “Gold-Plated” DI whose basic concept is seen in Figure 7, I did so as a lark. But after applying the aforementioned “standard” regulator’s design criteria to the Gold-Plated design’s spreadsheet [3], it became apparent that the Gold-Plated design has real value—its ability to more tightly constrain the limits of the minimum output voltage range.
Figure 7 The basic concept of the Gold-Plated regulator. K = 1 + R3/R4 .
The input to the Gold-Plated spreadsheet is shown in Figure 8.

Figure 8 The inputs to the Gold-Plated spreadsheet.
Its calculations of the minimum and maximum output voltage range limits are shown in Figure 9.

Figure 9 The results for the “Gold-Plated” spreadsheet showing maximum and minimum voltage range limits when PWM inputs are at minimum and maximum duty cycles.
The limits resulting from its 10000 run Monte Carlo simulation were again confirmed to be close to and within those calculated by the spreadsheet:

Figure 10 Monte Carlo simulation results of the Gold-Plated spreadsheet, confirming the limits were consistent with the calculated ones.
Again, a visual is helpful, with the Gold-Plated results on the left and the Standard on the right.

Figure 11 Graphs of the Monte Carlo simulation results of the Gold-Plated (left) and Standard (right) designs. The minimum voltage range of the Gold-Plated design is far smaller than that of the Standard.
The Standard regulator’s minimum range magnitude is 161 mV, while that of the Gold-Plated version is only 33 mV. The Gold-Plated’s advantage will increase as the desired Vmin approaches 0 volts. Its benefits are due to the fact that only a single reference is involved in the subtraction of terms:
OUT = Vref · ( 1 + Rf/Rg1 + Rf/Rg2 · PWM · ( 1 – K ) )
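Because the single reference Vref multiplies every term here, its tolerance scales an output that is already near zero, instead of scaling two large terms whose difference is near zero as in the Standard equation. The sketch below compares the two expressions under one set of made-up conditions (1% references, 0.1% resistors, values chosen so both circuits sit near 0 V at full PWM, and K held at its nominal value for simplicity); it is not a reproduction of the spreadsheet results.

```python
import random

def standard_out(vref, vext, pwm, rf, rg1, rg2):
    # OUT = Vref*(1 + Rf/Rg1 + Rf/Rg2) - Vext*PWM*(Rf/Rg2)
    return vref * (1 + rf / rg1 + rf / rg2) - vext * pwm * (rf / rg2)

def gold_plated_out(vref, pwm, rf, rg1, rg2, k):
    # OUT = Vref*(1 + Rf/Rg1 + (Rf/Rg2)*PWM*(1 - K))
    return vref * (1 + rf / rg1 + (rf / rg2) * pwm * (1 - k))

def tol(nom, t, rnd):
    return nom * (1 + rnd.uniform(-t, t))

REF_T, R_T = 0.01, 0.001     # assumed 1% references, 0.1% resistors
rnd = random.Random(1)

def spread(fn, n=10_000):
    samples = [fn() for _ in range(n)]
    return max(samples) - min(samples)

# Both circuits nominally reach ~0 V at PWM = 1; K is held fixed for simplicity.
std = spread(lambda: standard_out(tol(0.8, REF_T, rnd), tol(5.0, REF_T, rnd), 1.0,
                                  tol(10e3, R_T, rnd), tol(2.353e3, R_T, rnd),
                                  tol(10e3, R_T, rnd)))
gold = spread(lambda: gold_plated_out(tol(0.8, REF_T, rnd), 1.0, tol(10e3, R_T, rnd),
                                      tol(1.905e3, R_T, rnd), tol(10e3, R_T, rnd),
                                      k=7.25))
print(f"Minimum-output spread  Standard: {std * 1e3:.0f} mV   Gold-Plated: {gold * 1e3:.0f} mV")
```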
Belatedly, another advantage of the Gold-Plated was discovered: When a load is applied to any regulator, its output voltage falls by a small amount, causing a reduction of ΔV at the Vref feedback pin.
In the Gold-Plated, there is an even larger reduction at the output of its op-amp because of its gain. The result is a reduced drop across Rg2. This acts to increase the output voltage, improving load regulation.
In contrast, while the Standard regulator also sees a ΔV drop at the feedback pin, the external regulator voltage remains steady. The result is an increase in the drop across Rg2, further reducing the output voltage and degrading load regulation.
Summing up
The benefits of the Gold-Plated design are clear, but it’s not a panacea. Whether a Gold-Plated or Standard design is used, designers still must address the question: How low should you go? Caveat, designer!
Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.
Related Content/References
- Gold-Plated PWM-control of linear and switching regulators
- Accuracy loss from PWM sub-Vsense regulator programming
- Gold-Plated DI Github
- Enabling a variable output regulator to produce 0 volts DI Github
RENA joins UK-funded consortium to strengthen national semiconductor metrology capabilities
Why memory swizzling is a hidden tax on AI compute

Walk into any modern AI lab, data center, or autonomous vehicle development environment, and you’ll hear engineers talk endlessly about FLOPS, TOPS, sparsity, quantization, and model scaling laws. Those metrics dominate headlines and product datasheets. If you spend time with the people actually building or optimizing these systems, a different truth emerges: Raw arithmetic capability is not what governs real-world performance.
What matters most is how efficiently data moves. And for most of today’s AI accelerators, data movement is tangled up with something rarely discussed outside compiler and hardware circles: memory swizzling.
Memory swizzling is one of the biggest unseen taxes paid by modern AI systems. It doesn’t enhance algorithmic processing efficiency. It doesn’t improve accuracy. It doesn’t lower energy consumption. It doesn’t produce any new insight. Rather, it exists solely to compensate for architectural limitations inherited from decades-old design choices. And as AI models grow larger and more irregular, the cost of this tax is growing.
This article looks at why swizzling exists, how we got here, what it costs us, and how a fundamentally different architectural philosophy, specifically, a register-centric model, removes the need for swizzling entirely.
The problem nobody talks about: Data isn’t stored the way hardware needs it
In any AI tutorial, tensors are presented as ordered mathematical objects that sit neatly in memory in perfect layouts. These layouts are intuitive for programmers, and they fit nicely into high-level frameworks like PyTorch or TensorFlow.
The hardware doesn’t see the world this way.
Modern accelerators—GPUs, TPUs, and NPUs—are built around parallel compute units that expect data in specific shapes: tiles of fixed size, strict alignment boundaries, predictable stride patterns, and arrangements that map into memory banks without conflicts.
Unfortunately, real-world tensors never arrive in those formats. Before the processing even begins, data must be reshaped, re-tiled, re-ordered, or re-packed into the format the hardware expects. That reshaping is called memory swizzling.
You may think of it this way: The algorithm thinks in terms of matrices and tensors; the computing hardware thinks in terms of tiles, lanes, and banks. Swizzling is the translation layer—a translation that costs time and energy.
Why hierarchical memory forces us to swizzle
Virtually every accelerator today uses a hierarchical memory stack whose layers, from the top down, comprise registers; shared or scratchpad memory; L1, L2, and sometimes even L3 cache; high-bandwidth memory (HBM); and, at the bottom of the stack, external dynamic random-access memory (DRAM).
Each level has a different size, latency, bandwidth, and access energy and, rather importantly, different alignment constraints. This is a legacy of CPU-style architecture where caches hide memory latency. See Figure 1 and Table 1.

Figure 1 See the capacity and bandwidth attributes of a typical hierarchical memory stack in all current hardware processors. Source: VSORA

Table 1 Capacity, latency, bandwidth, and access energy dissipation of a typical hierarchical memory stack in all current hardware processors are shown here. Source: VSORA
GPUs inherited this model, then added single-instruction multiple-thread (SIMT) execution on top. That makes them phenomenally powerful—but also extremely sensitive to how data is laid out. If neighboring threads in a warp don’t access neighboring memory locations, performance drops dramatically. If tile boundaries don’t line up, tensor cores stall. If shared memory bank conflicts occur, everything waits.
TPUs suffer from similar constraints, just with different mechanics. Their systolic arrays operate like tightly choreographed conveyor belts. Data must arrive in the right order and at the right time. If weights are not arranged in block-major format, the systolic fabric can’t operate efficiently.
NPU-based accelerators—from smartphone chips to automotive systems—face the same issues: multi-bank SRAMs, fixed vector widths, and 2D locality requirements for vision workloads. Without swizzling, data arrives “misaligned” for the compute engine, and performance nosedives.
In all these cases, swizzling is not an optimization—it’s a survival mechanism.
The hidden costs of swizzling
Swizzling takes time, sometimes a lot
In real workloads, swizzling often consumes 20–60% of the total runtime. That’s not a typo. In a convolutional neural network, half the time may be spent doing NHWC-to-NCHW conversions; that is, converting between two different ways of laying out 4D tensors in memory. In a transformer, vast amounts of time are wasted on reshaping Q/K/V tensors, splitting heads, repacking tiles for GEMMs, and reorganizing outputs.
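For readers who have not met these layouts, the short NumPy sketch below shows what an NHWC-to-NCHW “swizzle” actually is: an axis reorder followed by a copy into a new contiguous buffer. The tensor size is arbitrary; the point is that the conversion performs no arithmetic, yet it reads and rewrites every byte of the activation.

```python
import time
import numpy as np

# An activation tensor in NHWC layout (batch, height, width, channels).
x_nhwc = np.random.rand(8, 224, 224, 64).astype(np.float32)

start = time.perf_counter()
# The "swizzle": reorder axes to NCHW and materialize a new contiguous buffer.
# np.transpose alone only changes strides; ascontiguousarray forces the
# actual data movement that a real accelerator (or its driver) must perform.
x_nchw = np.ascontiguousarray(np.transpose(x_nhwc, (0, 3, 1, 2)))
elapsed = time.perf_counter() - start

bytes_moved = x_nhwc.nbytes * 2          # every byte read once, written once
print(f"converted {x_nhwc.shape} -> {x_nchw.shape} in {elapsed * 1e3:.1f} ms, "
      f"~{bytes_moved / 1e6:.0f} MB of pure data movement")
```

On an accelerator, the same reordering happens in DRAM or on-chip SRAM, where, as the next section argues, each value moved costs far more energy than a multiply-accumulate.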
Swizzling burns energy and energy is the real limiter
A single MAC consumes roughly a quarter of a picojoule. Moving a value in from DRAM can cost around 500 picojoules, on the order of a thousand times more energy than performing a basic multiply-accumulate operation.
Swizzling requires reading large blocks of data, rearranging them, and writing them back. And this often happens multiple times per layer. When 80% of your energy budget goes to moving data rather than computing on it, swizzling becomes impossible to ignore.
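A back-of-the-envelope budget using the figures above makes the imbalance concrete. The layer size, DRAM traffic, and number of re-layout passes below are invented for illustration; only the per-operation energies come from the text.

```python
# Rough energy budget for one layer (illustrative numbers only).
E_MAC_PJ = 0.25          # ~a quarter of a picojoule per multiply-accumulate
E_DRAM_PJ = 500.0        # ~500 pJ to move one value in from DRAM

macs = 2e9               # assumed MACs in the layer
values_from_dram = 1e6   # assumed operand values fetched from DRAM (good reuse)
swizzle_passes = 2       # assumed extra read+write passes for re-layout

compute_uj = macs * E_MAC_PJ * 1e-6
movement_uj = values_from_dram * E_DRAM_PJ * (1 + swizzle_passes) * 1e-6
total_uj = compute_uj + movement_uj
print(f"compute: {compute_uj:,.0f} uJ   "
      f"data movement incl. swizzle: {movement_uj:,.0f} uJ   "
      f"-> movement is {100 * movement_uj / total_uj:.0f}% of the budget")
```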
Swizzling inflates memory usage
Most swizzling requires temporary buffers: packed tiles, staging buffers, and reshaped tensors. These extra memory footprints can push models over the limits of L2, L3, or even HBM, forcing even more data movement.
Swizzling makes software harder and less portable
Ask a CUDA engineer what keeps him up at night. Ask a TPU compiler engineer why XLA is thousands of pages deep in layout inference code. Ask anyone who writes an NPU kernel for mobile why they dread channel permutations.
It’s swizzling. The software must carry enormous complexity because the hardware demands very specific layouts. And every new model architecture—CNNs, LSTMs, transformers, and diffusion models—adds new layout patterns that must be supported.
The result is an ecosystem glued together by layout heuristics, tensor transformations, and performance-sensitive memory choreography.
How major architectures became dependent on swizzling
- Nvidia GPUs
Tensor cores require specific tile-major layouts. Shared memory is banked; avoiding conflicts requires swizzling. Warps must coalesce memory accesses; otherwise, efficiency tanks. Even cuBLAS and cuDNN, the most optimized GPU libraries on Earth, are filled with internal swizzling kernels.
- Google TPUs
TPUs rely on systolic arrays. The flow of data through these arrays must be perfectly ordered. Weights and activations are constantly rearranged to align with the systolic fabric. Much of XLA exists simply to manage data layout.
- AMD CDNA, ARM Ethos, Apple ANE, and Qualcomm AI engine
Every one of these architectures performs swizzling: Morton tiling, interleaving, channel stacking, and so on. It’s a universal pattern. Every architecture that uses hierarchical memory inherits the need for swizzling.
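As one concrete example of this family of re-layouts, the sketch below computes a Morton (Z-order) index by interleaving the bits of a 2-D coordinate, a common way to tile data so that 2-D neighbors land close together in linear memory. It is a textbook illustration, not any particular vendor’s scheme.

```python
def morton_index(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of (x, y) into a single Z-order (Morton) index."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)        # x bits go to even positions
        z |= ((y >> i) & 1) << (2 * i + 1)    # y bits go to odd positions
    return z

# Reorder a small 2-D tile into Morton order.
W = H = 4
linear = [(y, x) for y in range(H) for x in range(W)]            # row-major
morton = sorted(linear, key=lambda p: morton_index(p[1], p[0]))  # Z-order
print("row-major:", linear[:8])
print("Z-order  :", morton[:8])
```

In practice this kind of bit shuffling is buried in DMA engines, compilers, and driver code, which is exactly why its cost is so easy to overlook.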
A different philosophy: Eliminating swizzling at the root
Now imagine stepping back and rethinking AI hardware from first principles. Instead of accepting today’s complex memory hierarchies as unavoidable—the layers of caches, shared-memory blocks, banked SRAMs, and alignment rules—imagine an architecture built on a far simpler premise.
What if there were no memory hierarchy at all? What if, instead, the entire system revolved around a vast, flat expanse of registers? What if the compiler, not the hardware, orchestrated every data movement with deterministic precision? And what if all the usual anxieties—alignment, bank conflicts, tiling strategies, and coalescing rules—simply disappeared because they no longer mattered?
This is the philosophy behind a register-centric architecture. Rather than pushing data up and down a ladder of memory levels, data simply resides in the registers where computation occurs. The architecture is organized not around the movement of data, but around its availability.
That means:
- No caches to warm up or miss
- No warps to schedule
- No bank conflicts to avoid
- No tile sizes to match
- No tensor layouts to respect
- No sensitivity to shapes or strides, and therefore no swizzling at all
In such a system, the compiler always knows exactly where each value lives, and exactly where it needs to be next. It doesn’t speculate, prefetch, tile, or rely on heuristics. It doesn’t cross its fingers hoping the hardware behaves. Instead, data placement becomes a solvable, predictable problem.
The result is a machine where throughput remains stable, latency becomes predictable, and energy consumption collapses because unnecessary data motion has been engineered out of the loop. It’s a system where performance is no longer dominated by memory gymnastics—and where computing, the actual math, finally takes center stage.
The future of AI: Why a register-centric architecture matters
As AI systems evolve, the tidy world of uniform tensors and perfectly rectangular compute tiles is steadily falling away. Modern models are no longer predictable stacks of dense layers marching in lockstep. Instead, they expand in every direction: They ingest multimodal inputs, incorporate sparse and irregular structures, reason adaptively, and operate across ever-longer sequences. They must also respond in real time for safety-critical applications, and they must do so within tight energy budgets—from cars to edge devices.
In other words, the assumptions that shaped GPU and TPU architectures—the expectation of regularity, dense grids, and neat tiling—are eroding. The future workloads are simply not shaped the way the hardware wants them to be.
A register-centric architecture offers a fundamentally different path. Because it operates directly on data where it lives, rather than forcing that data into tile-friendly formats, it sidesteps the entire machinery of memory swizzling. It does not depend on fixed tensor shapes.
It doesn’t stumble when access patterns become irregular or dynamic. It avoids the costly dance of rearranging data just to satisfy the compute units. And as models grow more heterogeneous and more sophisticated, such an architecture scales with their complexity instead of fighting against it.
This is more than an incremental improvement. It represents a shift in how we think about AI compute. By eliminating unnecessary data movement—the single largest bottleneck and energy sink in modern accelerators—a register-centric approach aligns hardware with the messy, evolving reality of AI itself.
Memory swizzling is the quiet tax that every hierarchical-memory accelerator pays. It is fundamental to how GPUs, TPUs, NPUs, and nearly all AI chips operate. It’s also a growing liability. Swizzling introduces latency, burns energy, bloats memory usage, and complicates software—all while contributing nothing to the actual math.
Such a register-centric architecture eliminates swizzling at the root by removing the hierarchy that makes it necessary. It replaces guesswork and heuristics with deterministic dataflow. It prioritizes locality without requiring rearrangement. It lets the algorithm drive the hardware, not vice versa.
As AI workloads become more irregular, dynamic, and power-sensitive, architectures that keep data stationary and predictable—rather than endlessly reshuffling it—will define the next generation of compute.
Swizzling was a necessary patch for the last era of hardware. It should not define the next one.
Lauro Rizzatti is a business advisor to VSORA, a technology company offering silicon semiconductor solutions that redefine performance. He is a noted chip design verification consultant and industry expert on hardware emulation.
Related Content
- Overcoming the AI memory bottleneck
- AI to Drive Surge in Memory Prices Through 2026
- HBM memory chips: The unsung hero of the AI revolution
- Generative AI and memory wall: A wakeup call for IC industry
- Breaking Through Memory Bottlenecks: The Next Frontier for AI Performance
NHanced supporting mixed-material heterogeneous hybrid bonding production with copper or nickel bonds
NEHU’s indigenous chip against Red Spider Mite for tea gardens
The North-Eastern Hill University (NEHU) in Meghalaya has developed an innovative indigenous semiconductor chip aimed at repelling the Red Spider Mite, one of the most destructive pests affecting tea gardens across the Northeast and other tea-growing regions of India.
This tech-driven, eco-friendly innovation was developed entirely at the Department of Electronics and Communication Engineering, NEHU, through the collaborative research efforts of Pankaj Sarkar, Sushanta Kabir Dutta, Sangeeta Das, and Bhaiswajyoti Lahon.
The chip’s fabrication was undertaken at the Semiconductor Laboratory, Mohali, a premier Government of India facility for semiconductor manufacturing.
Semiconductor technology is increasingly relevant to the agricultural sector through sensors, drones, edge computing, IoT, and artificial intelligence. These technologies have modernised agricultural infrastructure, enabling precision farming and predictive analysis to boost yields.
I Got Yer Vintage ICs
submitted by /u/jim11662418
Renesas Launches R-Car Gen 5 Platform for Multi-Domain SDVs
Renesas Electronics Corporation is expanding its software-defined vehicle (SDV) solution offerings centered around the fifth-generation (Gen 5) R-Car family. The latest device in the Gen 5 family, the R-Car X5H is the industry’s first multi-domain automotive system-on-chip (SoC) manufactured with advanced 3nm process technology. It is capable of simultaneously running vehicle functions across advanced driver assistance systems (ADAS), in-vehicle infotainment (IVI), and gateway systems.
Renesas has begun sampling Gen 5 silicon and now offers full evaluation boards and the R-Car Open Access (RoX) Whitebox Software Development Kit (SDK) as part of the next phase of development. Renesas is also driving deeper collaboration with customers and partners to accelerate adoption. At CES 2026, Renesas will showcase AI-powered multi-domain demonstrations of the R-Car X5H in action.
The R-Car X5H leverages one of the most advanced process nodes in the industry to offer the highest level of integration, performance and power efficiency, with up to 35 percent lower power consumption than previous 5nm solutions. As AI becomes integral to next-generation SDVs, the SoC delivers powerful central compute targeting multiple automotive domains, with the flexibility to scale AI performance using chiplet extensions. It delivers up to 400 TOPS of AI performance, with chiplets boosting acceleration by four times or more. It also features 4 TFLOPS equivalent of GPU power for high-end graphics and over 1,000k DMIPS powered by 32 Arm Cortex-A720AE CPU cores and six Cortex-R52 lockstep cores with ASIL D support. Leveraging mixed criticality technology, the SoC executes advanced features in multiple domains without compromising safety.
Accelerating Automotive Innovation with an Open, Scalable RoX Whitebox SDK
To accelerate time-to-market, Renesas now offers the RoX Whitebox Software Development Kit (SDK) for the R-Car X5H, an open platform built on Linux, Android, and XEN hypervisor. Additional support for partner OS and solutions is available, including AUTOSAR, EB corbos Linux, QNX, Red Hat and SafeRTOS. Developers can jumpstart development out of the box using the SDK to build ADAS, L3/L4 autonomy, intelligent cockpit, and gateway systems. An integrated stack of AI and ADAS software enables real-time perception and sensor fusion while generative AI and Large Language Models (LLMs) enable intelligent human-machine interaction for next-generation AI cockpits. The SDK integrates production-grade application software stacks from leading partners such as Candera, DSPConcepts, Nullmax, SmartEye, STRADVISION and ThunderSoft, supporting end-to-end development of modern automotive software architectures and faster time to market.
“Since introducing our most advanced R-Car device last year, we have been steadfast in developing market-ready solutions, including delivering silicon samples to customers earlier this year,” said Vivek Bhan, Senior Vice President and General Manager of High-Performance Computing at Renesas. “In collaboration with OEMs, Tier-1s and partners, we are rapidly rolling out a complete development system that powers the next generation of software-defined vehicles. These intelligent compute platforms deliver a smarter, safer and more connected driving experience and are built to scale with future AI mobility demands.”
“Integrating Renesas’ R-Car X5 generation series into our high-performance compute portfolio is a natural next step that builds on our existing collaboration,” said Christian Koepp, Senior Vice President Compute Performance at Bosch’s Cross-Domain Computing Solutions Division. “At CES 2026, we look forward to showcasing this powerful solution with Renesas X5H SoC, demonstrating its fusion capabilities across multiple vehicle domains, including video perception for advanced driver assistance systems.”
“Innovative system-on-chip technology, such as Renesas’ R-Car X5H, is paving the way for ZF’s software-defined vehicle strategy,” said Dr. Christian Brenneke, Head of ZF’s Electronics & ADAS division. “Combining Renesas’ R-Car X5H with our ADAS software solutions enables us to offer full-stack ADAS capabilities with high computing power and scalability. The joint platform combines radar localization and HD mapping to provide accurate perception and positioning for reliable ADAS performance. At CES 2026, we’ll showcase our joint ADAS solution.”
First Fusion Demo on R-Car X5H with Partner Solutions at CES 2026
The new multi-domain demo upscales from R-Car Gen 4 to the next-generation R-Car X5H on the RoX platform, integrating ADAS and IVI stacks, RTOS, and edge AI functionality on Linux and Android with XEN hypervisor virtualization. Supporting input from eight high-resolution cameras and up to eight displays with resolutions reaching 8K2K, the platform delivers immersive visualization and robust sensor integration for next-generation SDVs. Combined with the RoX Whitebox SDK and production-grade partner software stacks, the platform is engineered for real-world deployment covering multiple automotive domains.
Availability
Renesas is shipping R-Car X5H silicon samples and evaluation boards, along with the RoX Whitebox SDK, to select customers and partners.
Why Frugal engineering is a critical aspect for advanced materials in 2026
by Vijay Bolloju, Director R&D, iVP Semiconductor
Widespread electrification of everything is pushing the boundaries of Power Electronics systems. The demand for high power densities and lower weight in systems necessitates the use of novel materials.
Newer-generation power semiconductor devices can operate at higher temperatures. Operating at higher temperatures can increase power density and reduce the system’s power-device cost. At the same time, it raises reliability concerns: dielectric breakdown, deformation, and increased leakage currents caused by ionic contamination of the moulding compounds. Packaging materials capable of reliably operating at higher temperatures are needed to exploit these devices to the fullest.
It is also evident from the recent trends that the operating voltages of systems like EVs, Data Centres, telecom, etc, are on the rise. Higher operating voltages warrant a higher degree of compliance for the safety of the users.
The cost breakdown of high-power electronic systems shows that more than half of the cost comes from non-semiconductor materials: plastics used for packaging, thermal interface materials (TIM), sealing compounds, heat dissipators such as heat sinks, cooling liquids, substrates, connectors, and so on.
Substrates play a major role in the thermal performance, structural stability, and reliability of the systems. FR4 PCBs have very poor thermal conductivity (0.24 W/m-K) and are commonly used for low-power systems. FR4 also has low Tg (~ 130 °C) and limits the operating range for the power semiconductors. These substrates are not recommended for high-power applications.
Aluminium metal-core PCBs (MCPCBs) are also widely used for building electronic circuits. These substrates have relatively higher thermal conductivity (2 W/m-K) and higher Tg. MCPCBs offer better mechanical stability and thermal performance. Though multi-layer MCPCBs are available, the most common MCPCBs are single-layer due to cost considerations. This limits the ability to make the systems compact.
Ceramic substrates such as alumina (Al2O3) and aluminium nitride (AlN) have excellent thermal conductivity and mechanical stability. Alumina has 100X higher thermal conductivity (24 W/m-K) and aluminium nitride has 1000X higher thermal conductivity (240 W/m-K) than FR4 material. They also render superior reliability and high-temperature operation capability and are perfectly suited for high-power systems. They, too, are typically single-layer due to cost considerations.
The selection of the substrate material should be based on cost-performance criteria, as the estimate below illustrates. The cost of the substrates increases in this order: FR4 PCBs, MCPCBs, and ceramic substrates. But the power semiconductor cost decreases in the reverse order due to the improvement in thermal conductivity. The reliability of the system also depends on the substrate choice – ceramics offering the best, and FR4 the least. So, a sensible trade-off should be considered to make a suitable choice.
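A quick one-dimensional conduction estimate, R_th = t / (k·A), using the conductivity figures above shows how much the substrate choice alone changes the temperature rise for a given dissipation. The thicknesses, footprint, and power below are assumed for illustration; the FR4 case in particular ignores the thermal vias and copper pours that real designs rely on.

```python
# One-dimensional conduction estimate: R_th = thickness / (k * area).
# Conductivities (W/m-K) are the figures quoted above; all geometry is assumed.
substrates = {
    "FR4 (1.6 mm)":      {"k": 0.24,  "t_m": 1.6e-3},   # ignores thermal vias
    "MCPCB dielectric":  {"k": 2.0,   "t_m": 100e-6},   # thin dielectric layer only
    "Alumina (0.64 mm)": {"k": 24.0,  "t_m": 0.64e-3},
    "AlN (0.64 mm)":     {"k": 240.0, "t_m": 0.64e-3},
}
area_m2 = 10e-3 * 10e-3    # assumed 10 mm x 10 mm footprint under the device
power_w = 20.0             # assumed dissipation

for name, s in substrates.items():
    r_th = s["t_m"] / (s["k"] * area_m2)      # K/W through the substrate alone
    print(f"{name:18s} R_th = {r_th:6.2f} K/W -> rise = {r_th * power_w:7.1f} K at 20 W")
```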
Thermal interface materials (TIM) also have a profound effect on the system performance, reliability, and cost. They are often neglected and not paid due attention. But they can really help in enhancing the thermal performance of the system and even reducing the number of power devices needed to implement the designs. TIMs also help in providing dielectric insulation to the system. So, an ideal TIM has high thermal conductivity and high dielectric strength. Choosing a proper TIM that meets the system requirements can help in reducing overall system cost and size.
Choosing proper substrate materials, TIM, and heat dissipator can reduce the system cost and size considerably and lead to frugal designs.
A holistic approach to design from the selection of power device technologies, substrates, TIM, and power dissipators may result in high-performance, reliable, and lower-cost systems.
Currently, the Indian materials ecosystem is poor and needs to be revamped to serve the power electronics industry to achieve higher performance metrics. The plastics, substrates, TIM, and other materials can be locally developed using advances in materials such as nano-materials, carbon compounds, engineering plastics, composite materials, etc. India has a mature ceramics industry serving the energy sector, the medical industry, etc. The technologies can be used to make substrate materials for power electronics applications. Metallization of the ceramic substrates to print the circuits is also an essential skill set to be developed.
High thermal conductivity composite materials, metal foam forming, and phase change materials can elevate the thermal performance of the systems. If the system can be cooled using advanced materials without the need for a liquid cooling system, it can considerably reduce the system cost and improve the reliability of the system.
All the materials described above that can improve system performance and reliability while reducing cost (Frugal innovations) can be developed and manufactured locally. A concerted and collaborative effort is all it needs.
Every STM32 Project Begins with Optimism
Pain, Patience, and Persistence
I guess we're posting vintage ICs now?
submitted by /u/botman
When you use a standard electrolytic capacitor instead of a low-ESR one in a switching power supply.
submitted by /u/Electro-nut
Music with Flyback Transformer
submitted by /u/mosfet01
Ignoring the regulator’s reference

DAC control (via PWM or other source) of regulators is a popular design topic here in editor Aalyia Shaukat’s Design Ideas (DIs) corner. There’s even a special aspect of this subject that frequently provokes enthusiastic and controversial (even contentious) exchanges of opinion.
It’s the many and varied possibilities for integrating the regulator’s internal voltage reference into the design. The discussion tends to be extra energetic (and the resulting circuitry complex) when the design includes generating output voltages lower than the regulator’s internal reference.
Wow the engineering world with your unique design: Design Ideas Submission Guide
What can be done to make the discussion less complicated (and heated)?
An old rule of thumb suggests that when one facet of a problem makes the solution complex, sometimes a simple (and better) solution can be found by just ignoring that facet. So, I decided, just for fun, to give it a try with the regulator reference problem. Figure 1 shows the result.

Figure 1 DAC control of a regulator while ignoring its internal voltage reference where Vo = Vomax*(Vdac/Vdacmax). *±0.1%
Figure 1’s simple theory of operation revolves around the A1 differential amplifier.
Vo = Vomax(Vdac/Vdacmax)
If Vdacmax >= Vomax then R1a = R5/((Vdacmax/Vomax) – 1), omit R1b
If Vomax >= Vdacmax then R1b = R2/((Vomax/Vdacmax) – 1), omit R1a
A1 subtracts a (suitably attenuated) version of the control input signal (Vdac) from U1’s output voltage (Vo) and integrates the difference via the R3C3 feedback pair. The resulting negative feedback supplied to U1’s Vsense pin is independent of the Vsense voltage and is therefore independent of U1’s internal reference.
With the contribution of accuracy (and inaccuracy) from U1’s internal reference thus removed, the problem of integrating it into the design is therefore likewise removed.
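A quick worked example of the resistor formulas above, with made-up values rather than Figure 1’s: suppose Vomax = 5 V, Vdacmax = 3.3 V, and R2 = 10 kΩ. Since Vomax ≥ Vdacmax, the R1b formula applies.

```python
# Worked example of the divider formulas above, with assumed values
# (Vomax, Vdacmax, R2 are illustrative, not Figure 1's actual parts).
def r1a(vdacmax, vomax, r5):
    """Case Vdacmax >= Vomax (omit R1b)."""
    return r5 / ((vdacmax / vomax) - 1)

def r1b(vomax, vdacmax, r2):
    """Case Vomax >= Vdacmax (omit R1a)."""
    return r2 / ((vomax / vdacmax) - 1)

vomax, vdacmax, r2 = 5.0, 3.3, 10_000.0
print(f"R1b = {r1b(vomax, vdacmax, r2):,.0f} ohm")      # ~19.4 k
# The output then tracks the DAC linearly: Vo = Vomax * (Vdac / Vdacmax)
for vdac in (0.0, 1.65, 3.3):
    print(f"Vdac = {vdac:.2f} V -> Vo = {vomax * vdac / vdacmax:.2f} V")
```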
It turns out that the potential for really good precision is actually improved by ignoring the regulator reference, because internal references are seldom better than 1% anyway.
With the Figure 1 circuit, accuracy is ultimately limited only by the DAC and very high precision DACs can be assembled at reasonable cost. For an example see, “A nice, simple, and reasonably accurate PWM-driven 16-bit DAC.”
Another nice feature is that Figure 1 leaks no pesky bias current into the feedback network. This bias is typically scores of microamps and can prevent the output from getting any closer than tens of millivolts to a true zero when the output load is light. No such problem exists here, unless picoamps count (hint: they don’t).
And did I mention it’s simple?
Oh yeah, about R6: depending on the voltage supplied to A1’s pin 8 and the abs-max rating of U1’s Vsense pin, the possibility of an overvoltage might exist. If so, adjust the R4R6 ratio to prevent that. Otherwise, omit R6.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- A nice, simple, and reasonably accurate PWM-driven 16-bit DAC
- Active current mirror
- Gold-plated PWM-control of linear and switching regulators
- Accuracy loss from PWM sub-Vsense regulator programming
Expanding power delivery in systems with USB PD 3.1

The Universal Serial Bus (USB) started out as a data interface, but it didn’t take long before it progressed to powering devices. Initially, its maximum output was only 2.5 W; now it can deliver up to 240 W over USB Type-C cables and connectors that carry power, data, and video. This revision is known as Extended Power Range (EPR), or USB Power Delivery Specification 3.1 (USB PD 3.1), introduced by the USB Implementers Forum. EPR uses higher voltage levels (28 V, 36 V, and 48 V), which at 5 A deliver 140 W, 180 W, and 240 W, respectively.
USB PD 3.1 has an adjustable voltage supply mode, allowing for intermediate voltages between 9 V and the highest fixed voltage of the charger. This allows for greater flexibility by meeting the power needs of individual devices. USB PD 3.1 is backward-compatible with previous USB versions including legacy at 15 W (5 V/3 A) and the standard power range mode of below 100 W (20 V/5 A).
The ability to negotiate power for each device is an important strength of this specification: a device consumes only the power it needs, which varies depending on the application. The same applies to peripherals, where a power management process allows each device to take only the power it requires.
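The sketch below is a deliberately simplified, illustrative model of that negotiation: the source advertises a list of fixed (voltage, maximum current) offers, and each sink picks the lowest-voltage offer that still covers its power need. Real USB PD negotiation is a defined message exchange over the CC wire (Source_Capabilities, Request, Accept, and so on); only the selection logic is sketched here, and the device power figures are invented.

```python
# Illustrative only: a simplified model of per-device power negotiation.
# Offers include the EPR fixed levels mentioned above (28/36/48 V at 5 A);
# device power needs and voltage limits are invented.
SOURCE_PDOS = [          # (voltage_V, max_current_A) fixed-supply offers
    (5.0, 3.0), (9.0, 3.0), (15.0, 3.0), (20.0, 5.0),
    (28.0, 5.0), (36.0, 5.0), (48.0, 5.0),
]

def pick_pdo(pdos, needed_w, max_v):
    """Choose the lowest-voltage offer that covers the sink's power need."""
    usable = [(v, i) for v, i in pdos if v <= max_v and v * i >= needed_w]
    return min(usable, default=None)          # lowest voltage wins

SINKS = {"phone": (18, 12), "laptop": (96, 20), "e-bike charger": (180, 48)}
for device, (need_w, vmax) in SINKS.items():
    pdo = pick_pdo(SOURCE_PDOS, need_w, vmax)
    if pdo is None:
        print(f"{device:15s} no suitable offer; stays at the 5-V default")
    else:
        v, i_max = pdo
        print(f"{device:15s} requests {v:4.0f} V @ {need_w / v:.2f} A "
              f"(offer limit {i_max:.1f} A)")
```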
The USB PD 3.1 specification has found a place in a wide range of applications, including laptops, gaming stations, monitors, industrial machinery and tools, small robots and drones, e-bikes, and more.
Microchip USB PD demo board
Microchip provides a USB PD dual-charging-port (DCP) demonstration application, supporting the USB PD 3.1 specification. The MCP19061 USB PD DCP reference board (Figure 1) is pre-built to show the use of this technology in real-life applications. The board is fully assembled, programmed, and tested to evaluate and demonstrate digitally controlled smart charging applications for different USB PD loads, and it allows each connected device to request the best power level for its own operation.
Figure 1: MCP19061 USB DCP board (Source: Microchip Technology Inc.)
The board shows an example charging circuit with robust protections. It highlights charge allocation between the two ports as well as dynamically reconfigurable charge profile availability (voltage and current) for a given load. This power-balancing feature between ports provides better control over the charging process, in addition to delivering the right amount of power to each device.
The board provides output voltages from 3 V to 21 V and output currents from 0.5 A to 3 A. Its input voltage range is 6 V to 18 V, with 12 V being the recommended value.
The board comes with firmware designed to operate with a graphical user interface (GUI) and contains headers for in-circuit serial programming and I2C communication. An included USB-to-serial bridging board (such as the BB62Z76A MCP2221A breakout board USB) with the GUI allows different configurations to be quickly tested with real-world load devices charging on the two ports. The DCP board GUI requires a PC with Microsoft Windows operating system 7–11 and a USB 2.0 port. The GUI then shows parameter and board status and faults and enables user configuration.
DCP board components
As a dual-port board, it has two independent USB PD channels (Figure 2), each with its own dedicated analog front end (AFE). The AFE in the Microchip MCP19061 device is a mixed-signal, digitally controlled four-switch buck-boost power controller with integrated synchronous drivers and an I2C interface (Figure 3).
Figure 2: Two independently managed USB PD channels on the MCP19061-powered DCP board (Source: Microchip Technology Inc.)
Figure 3: Block diagram of the MCP19061 four-switch buck-boost device (Source: Microchip Technology Inc.)
Moreover, one of the channels features the Microchip MCP22350 device, a highly integrated, small-format USB Type-C PD 2.0 controller, whereas the other channel contains a Microchip MCP22301 device, which is a standalone USB Type-C PD port controller, supporting the USB PD 3.0 specification.
The MCP22350 acts as a companion PD controller to an external microcontroller, system-on-chip or USB hub. The MCP22301 is an integrated PD device with the functionality of the SAMD20 microcontroller, a low-power, 32-bit Arm Cortex-M0+ with an added MCP22350 PD media access control and physical layer.
Each channel also has its own UCS4002 USB Type-C port protector, guarding from faults but also protecting the integrity of the charging process and the data transfer (Figure 4).
Traditionally a USB Type-C connector embeds the D+/D– data lines (USB2), Rx/Tx for USB3.x or USB4, configuration channel (CC) lines for charge mode control, sideband-use (SBU) lines for optional functions, and ground (GND). The UCS4002 protects the CC and D+/D– lines for short-to-battery. It also offers battery short-to-GND (SG_SENS) protection for charging ports.
Integrated switching VCONN FETs (VCONN is a dedicated power supply pin in the USB Type-C connector) provide overvoltage, undervoltage, back-voltage, and overcurrent protection through the VCONN voltage. The board’s input rail includes a PMOS switch for reverse polarity protection and a CLC EMI filter. There are also features such as a VDD fuse and thermal shutdown, enabled by a dedicated temperature sensor, the MCP9700, which monitors the board’s temperature.
Figure 4: Block diagram of the UCS4002 USB port protector device (Source: Microchip Technology Inc.)
The UCS4002 also provides fault-reporting configurability via the FCONFIG pin, allowing users to configure the FAULT# pin behavior. The CC, D+/D –, and SG_SENS pins are electrostatic-discharge-protected to meet the IEC 61000-4-2 and ISO 10605 standards.
The DCP board includes an auxiliary supply based on the MCP16331 integrated step-down switch-mode regulator providing a 5-V voltage and an MCP1825 LDO linear regulator providing a 3.3-V auxiliary voltage.
Board operation
The MCP19061 DCP board shows how the MCP19061 device operates in a four-switch buck-boost topology for the purpose of supplying USB loads and charging them with their required voltage within a permitted range, regardless of the input voltage value. It is configured to independently regulate the amount of output voltage and current for each USB channel (their individual charging profile) while simultaneously communicating with the USB-C-connected loads using the USB PD stack protocols.
All operational parameters are programmable using the two integrated Microchip USB PD controllers, through a dynamic reconfiguration and customization of charging operations, power conversion, and other system parameters. The demo shows how to enable the USB PD programmable power supply fast-charging capability for advanced charging technology that can modify the voltage and current in real time for maximum power outputs based on the device’s charging status.
The MCP19061 device works in conjunction with both current- and voltage-sense control loops to monitor and regulate the load voltage and current. Moreover, the board automatically detects the presence or removal of a USB PD–compliant load.
When a USB PD–compliant load is connected to the USB-C Port 1 (on the PCB right side; this is the higher one), the USB communication starts and the MCP19061 DCP board displays the charging profiles under the Port 1 window.
If another USB PD load is connected to the USB-C Port 2, the Port 2 window gets populated the same way.
The MCP19061 PWM controller
The MCP19061 is a highly integrated, mixed-signal four-switch buck-boost controller that operates from 4.5 V to 36 V and can withstand up to 42 V non-operating. Various enhancements were added to the MCP19061 to provide USB PD compatibility with minimum external components for improved calibration, accuracy, and flexibility. It features a digital PWM controller with a serial communication bus for external programmability and reporting. The modulator regulates the power flow by controlling the length of the on and off periods of the signal, or pulse widths.
The operation of the MCP19061 enables efficient power conversion with the capability to operate in buck (step-down), boost (step-up), and buck-boost topologies for various voltage levels that are lower, higher, or the same as the input voltage. It provides excellent precision and efficiency in power conversions for embedded systems while minimizing power losses. Its features include adjustable switching frequencies, integrated MOSFET drivers, and advanced fault protection. The operating parameters, protection levels, and fault-handling procedures are supervised by a proprietary state machine stored in its nonvolatile memory, which also stores the running parameters.
Internal digital registers handle the customization of the operating parameters, the startup and shutdown profiles, the protection levels, and the fault-handling procedures. To set the output current and voltage, an integrated high-accuracy reference voltage is used. Internal input and output dividers facilitate the design while maintaining high accuracy. A high-accuracy current-sense amplifier enables precise current regulation and measurement.
The MCP19061 contains three internal LDOs: a 5-V LDO (VDD) powers internal analog circuits and gate drivers and provides 5 V externally; a 4-V LDO (AVDD) powers the internal analog circuitry; and a 1.8-V LDO supplies the internal logic circuitry.
The MCP19061 is packaged in a 32-lead, 5 × 5-mm VQFN, allowing system designers to customize application-specific features without costly board real estate and additional component costs. A 1-MHz I2C serial bus enables the communication between the MCP19061 and the system controller.
The MCP19061 can be programmed externally. For further evaluation and testing, Microchip provides an MCP19061 dedicated evaluation board, the EV82S16A.
Can we just agree that nixies are cool?
I've wanted to experiment with them for a while, but I always thought that building a clock is just boring, so instead I'm making a nixie display for my Geiger counter!
India’s Semicon Programme: From Design to Packaging
Union Minister of State for Electronics and Information Technology Shri Jitin Prasada outlined India’s achievements under the Semicon India Programme. The Indian Government launched the programme to develop a complete ecosystem spanning design, fabrication, assembly, testing, and packaging.
The government has approved 10 units worth Rs. 1.6 lakh crore to set up silicon fabs, silicon carbide fabs, advanced packaging, and memory packaging facilities, among others. These are expected to cater to the chip requirements of multiple sectors such as consumer appliances, industrial electronics, automobiles, telecommunications, aerospace, and power electronics. The minister also recalled how some of the approved projects are using indigenous technology for the assembly, testing, and packaging of semiconductor chips.
Additionally, the Production Linked Incentive (PLI) scheme for large-scale electronics manufacturing of mobile phones and certain specified components attracted an investment of Rs 14,065 crore up to October 2025.
Design Development
On the design front, the government launched the Design Linked Incentive (DLI) scheme, which has supported 23 companies (24 designs) in designing chips and SoCs for products in satellite communication, drones, surveillance cameras, Internet of Things (IoT) devices, LED drivers, AI devices, telecom equipment, smart meters, etc. Additionally, to assist with infrastructure, the government provided free design tool (EDA) access to 94 startups, enabling 47 lakh hours of design tool usage.
Developing a Skilled Workforce
Realising the importance of a skilled workforce in the semiconductor manufacturing process, the government has also launched several programmes and collaborations. The All India Council for Technical Education (AICTE) has launched various courses to provide technical training to students.
The government’s Chips to Start-up (C2S) Programme encourages India’s young engineers by providing the latest design tools (EDA) to 397 universities and start-ups.
A Skilled Manpower Advanced Research and Training (SMART) Lab has also been set up at NIELIT Calicut with the aim of training 1 lakh engineers nationwide. More than 62,000 engineers have already been trained.
ISM has also partnered with Lam Research to conduct a large-scale training programme in nanofabrication and process-engineering skills, which will further augment the skilled workforce for ATMP and advanced packaging. The programme aims to produce 60,000 trained engineers over the next 10 years.
Finally, the FutureSkills PRIME programme is a collaborative initiative of MeitY and the National Association of Software and Service Companies (NASSCOM) aimed at making India a cutting-edge digital-talent nation.
AC to DC Converter design on KiCad
I created my first-ever design in KiCad and enjoyed the process. Any suggestions are welcome. I also need some project ideas to do in KiCad so as to start my career in PD design in VLSI. Final-year B.Tech ECE student.




