Feed aggregator

RENA joins UK-funded consortium to strengthen national semiconductor metrology capabilities

Semiconductor today - 4 hours 9 min ago
RENA Technologies GmbH of Gütenbach, Germany (which supplies production machines for wet chemical surface preparation) is a key industrial partner in a new €1.3m (£1.2m) Government-funded project led by the National Physical Laboratory (NPL) – the UK’s National Metrology Institute (NMI) – and supported by the Department for Science, Innovation and Technology (DSIT). The initiative aims to establish critical new metrology capabilities to strengthen the UK’s semiconductor innovation infrastructure and accelerate the development and adoption of next-generation semiconductor materials and processes...

Why memory swizzling is a hidden tax on AI compute

EDN Network - 4 hours 18 min ago

Walk into any modern AI lab, data center, or autonomous vehicle development environment, and you’ll hear engineers talk endlessly about FLOPS, TOPS, sparsity, quantization, and model scaling laws. Those metrics dominate headlines and product datasheets. If you spend time with the people actually building or optimizing these systems, a different truth emerges: Raw arithmetic capability is not what governs real-world performance.

What matters most is how efficiently data moves. And for most of today’s AI accelerators, data movement is tangled up with something rarely discussed outside compiler and hardware circles: memory swizzling.

Memory swizzling is one of the biggest unseen taxes paid by modern AI systems. It doesn’t enhance algorithmic processing efficiency. It doesn’t improve accuracy. It doesn’t lower energy consumption. It doesn’t produce any new insight. Rather, it exists solely to compensate for architectural limitations inherited from decades-old design choices. And as AI models grow larger and more irregular, the cost of this tax is growing.

This article looks at why swizzling exists, how we got here, what it costs us, and how a fundamentally different architectural philosophy, specifically, a register-centric model, removes the need for swizzling entirely.

The problem nobody talks about: Data isn’t stored the way hardware needs it

In any AI tutorial, tensors are presented as ordered mathematical objects that sit neatly in memory in perfect layouts. These layouts are intuitive for programmers, and they fit nicely into high-level frameworks like PyTorch or TensorFlow.

The hardware doesn’t see the world this way.

Modern accelerators—GPUs, TPUs, and NPUs—are built around parallel compute units that expect specific shapes of data: tiles of fixed size, strict alignment boundaries, predictable stride patterns, and arrangements that map into memory banks without conflicts.

Unfortunately, real-world tensors never arrive in those formats. Before the processing even begins, data must be reshaped, re-tiled, re-ordered, or re-packed into the format the hardware expects. That reshaping is called memory swizzling.

You may think of it this way: The algorithm thinks in terms of matrices and tensors; the computing hardware thinks in terms of tiles, lanes, and banks. Swizzling is the translation layer—a translation that costs time and energy.
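As a concrete, if toy, illustration of that translation layer, here is what a tile-repacking swizzle looks like in NumPy. The 8×8 matrix and 4×4 tile size are illustrative assumptions, not figures from the article:

```python
import numpy as np

# A row-major 8x8 matrix, as a framework would hand it over.
a = np.arange(64, dtype=np.float32).reshape(8, 8)

# Repack it into a 2x2 grid of contiguous 4x4 tiles -- the kind of
# tile-major layout a matrix engine expects. Result shape: (2, 2, 4, 4).
tiles = a.reshape(2, 4, 2, 4).transpose(0, 2, 1, 3).copy()

# Each tile is one 4x4 block of the original matrix.
assert np.array_equal(tiles[0, 0], a[:4, :4])
assert np.array_equal(tiles[1, 1], a[4:, 4:])

# The copy() is the swizzle: it pays a full read-rearrange-write pass
# so that each tile becomes contiguous in memory.
```

The values never change; only their physical arrangement does, and that rearrangement is pure overhead.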

Why hierarchical memory forces us to swizzle

Virtually every accelerator today uses a hierarchical memory stack whose layers, from the top down, are: registers; shared or scratchpad memory; L1, L2, and sometimes even L3 cache; high-bandwidth memory (HBM); and, at the bottom of the stack, external dynamic random-access memory (DRAM).

Each level has a different size, latency, bandwidth, and access energy and, rather importantly, different alignment constraints. This is a legacy of CPU-style architecture, where caches hide memory latency. See Figure 1 and Table 1.

Figure 1 Capacity and bandwidth attributes of a typical hierarchical memory stack in current hardware processors. Source: VSORA

Table 1 Capacity, latency, bandwidth, and access energy dissipation of a typical hierarchical memory stack in current hardware processors. Source: VSORA

GPUs inherited this model, then added single-instruction multiple-thread (SIMT) execution on top. That makes them phenomenally powerful—but also extremely sensitive to how data is laid out. If neighboring threads in a warp don’t access neighboring memory locations, performance drops dramatically. If tile boundaries don’t line up, tensor cores stall. If shared memory bank conflicts occur, everything waits.
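The bank-conflict sensitivity can be sketched with a toy model. The 32-bank layout mirrors typical GPU shared memory, but the function and access patterns below are illustrative, not any vendor’s documentation:

```python
# Toy model of banked shared memory: word address w lives in bank
# w % num_banks, and a warp's access slows down by the worst bank's
# collision count. 32 banks matches common GPU shared memory.
def conflict_degree(word_addresses, num_banks=32):
    counts = {}
    for w in word_addresses:
        b = w % num_banks
        counts[b] = counts.get(b, 0) + 1
    return max(counts.values())

# Unit stride: 32 threads hit 32 different banks -> conflict-free.
assert conflict_degree(range(32)) == 1
# Stride 32 (walking a column of a 32-wide tile): every thread hits
# bank 0 -> a 32-way conflict, i.e., a 32x slowdown on that access.
assert conflict_degree(range(0, 32 * 32, 32)) == 32
# Padding each row to 33 words is the classic swizzle that fixes it.
assert conflict_degree(range(0, 33 * 32, 33)) == 1
```

The padding trick in the last line is exactly the kind of layout gymnastics the article is describing: extra memory spent purely to satisfy the hardware.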

TPUs suffer from similar constraints, just with different mechanics. Their systolic arrays operate like tightly choreographed conveyor belts. Data must arrive in the right order and at the right time. If weights are not arranged in block-major format, the systolic fabric can’t operate efficiently.

NPU-based accelerators—from smartphone chips to automotive systems—face the same issues: multi-bank SRAMs, fixed vector widths, and 2D locality requirements for vision workloads. Without swizzling, data arrives “misaligned” for the compute engine, and performance nosedives.

In all these cases, swizzling is not an optimization—it’s a survival mechanism.

The hidden costs of swizzling

Swizzling takes time, sometimes a lot

In real workloads, swizzling often consumes 20–60% of the total runtime. That’s not a typo. In a convolutional neural network, half the time may be spent doing NHWC ↔ NCHW conversions; that is, converting between two different ways of laying out 4D tensors in memory. In a transformer, vast amounts of time are wasted on reshaping Q/K/V tensors, splitting heads, repacking tiles for GEMMs, and reorganizing outputs.
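A minimal NumPy sketch of the NHWC ↔ NCHW conversion mentioned above; the batch and image sizes are arbitrary toy values:

```python
import numpy as np

# A batch of 2 RGB images, 4x4 pixels, in NHWC order (the layout
# frameworks favor).
nhwc = np.random.rand(2, 4, 4, 3).astype(np.float32)

# Repack to NCHW (the layout many GPU kernels want). transpose() only
# changes strides; ascontiguousarray() forces the physical move, and
# that read-rearrange-write traffic is the swizzling tax.
nchw = np.ascontiguousarray(nhwc.transpose(0, 3, 1, 2))

assert nchw.shape == (2, 3, 4, 4)
# Same values, different physical order:
assert np.array_equal(nchw[0, 1], nhwc[0, :, :, 1])
```

At toy sizes this is instant; at production tensor sizes, repeated per layer, it becomes the 20–60% runtime share quoted above.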

Swizzling burns energy and energy is the real limiter

A single MAC consumes roughly a quarter of a picojoule. Moving a value from DRAM can cost 500 picojoules. In other words, fetching data from DRAM dissipates roughly three orders of magnitude more energy than performing a basic multiply-accumulate operation.

Swizzling requires reading large blocks of data, rearranging them, and writing them back. And this often happens multiple times per layer. When 80% of your energy budget goes to moving data rather than computing on it, swizzling becomes impossible to ignore.
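Using the per-operation figures quoted above, the arithmetic behind this claim can be checked in a few lines. The workload sizes are hypothetical:

```python
# Per-operation energy figures quoted above: ~0.25 pJ per MAC,
# ~500 pJ per DRAM access. Workload sizes below are hypothetical.
MAC_PJ, DRAM_PJ = 0.25, 500.0

# One DRAM fetch costs as much energy as ~2,000 multiply-accumulates.
assert DRAM_PJ / MAC_PJ == 2000.0

# A toy layer: 1M MACs, with 100k values read from DRAM and written
# back once each by a swizzle pass.
compute_pj = 1_000_000 * MAC_PJ
movement_pj = 100_000 * 2 * DRAM_PJ
share = movement_pj / (compute_pj + movement_pj)
assert share > 0.99  # data movement dominates the energy budget
```

Even with ten times more arithmetic per byte moved, data movement would still dominate, which is why the 80% figure is plausible.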

Swizzling inflates memory usage

Most swizzling requires temporary buffers: packed tiles, staging buffers, and reshaped tensors. These extra memory footprints can push models over the limits of L2, L3, or even HBM, forcing even more data movement.

Swizzling makes software harder and less portable

Ask a CUDA engineer what keeps them up at night. Ask a TPU compiler engineer why XLA is thousands of pages deep in layout inference code. Ask anyone who writes NPU kernels for mobile why they dread channel permutations.

It’s swizzling. The software must carry enormous complexity because the hardware demands very specific layouts. And every new model architecture—CNNs, LSTMs, transformers, and diffusion models—adds new layout patterns that must be supported.

The result is an ecosystem glued together by layout heuristics, tensor transformations, and performance-sensitive memory choreography.

How major architectures became dependent on swizzling

  1. Nvidia GPUs

Tensor cores require specific tile-major layouts. Shared memory is banked; avoiding conflicts requires swizzling. Warps must coalesce memory accesses; otherwise, efficiency tanks. Even cuBLAS and cuDNN, the most optimized GPU libraries on Earth, are filled with internal swizzling kernels.

  2. Google TPUs

TPUs rely on systolic arrays. The flow of data through these arrays must be perfectly ordered. Weights and activations are constantly rearranged to align with the systolic fabric. Much of XLA exists simply to manage data layout.

  3. AMD CDNA, ARM Ethos, Apple ANE, and Qualcomm AI Engine

Every one of these architectures performs swizzling: Morton tiling, interleaving, channel stacking, and so on. It’s a universal pattern. Every architecture that uses hierarchical memory inherits the need for swizzling.

A different philosophy: Eliminating swizzling at the root

Now imagine stepping back and rethinking AI hardware from first principles. Instead of accepting today’s complex memory hierarchies as unavoidable—the layers of caches, shared-memory blocks, banked SRAMs, and alignment rules—imagine an architecture built on a far simpler premise.

What if there were no memory hierarchy at all? What if, instead, the entire system revolved around a vast, flat expanse of registers? What if the compiler, not the hardware, orchestrated every data movement with deterministic precision? And what if all the usual anxieties—alignment, bank conflicts, tiling strategies, and coalescing rules—simply disappeared because they no longer mattered?

This is the philosophy behind a register-centric architecture. Rather than pushing data up and down a ladder of memory levels, data simply resides in the registers where computation occurs. The architecture is organized not around the movement of data, but around its availability.

That means:

  • No caches to warm up or miss
  • No warps to schedule
  • No bank conflicts to avoid
  • No tile sizes to match
  • No tensor layouts to respect
  • No sensitivity to shapes or strides, and therefore no swizzling at all

In such a system, the compiler always knows exactly where each value lives, and exactly where it needs to be next. It doesn’t speculate, prefetch, tile, or rely on heuristics. It doesn’t cross its fingers hoping the hardware behaves. Instead, data placement becomes a solvable, predictable problem.

The result is a machine where throughput remains stable, latency becomes predictable, and energy consumption collapses because unnecessary data motion has been engineered out of the loop. It’s a system where performance is no longer dominated by memory gymnastics—and where computing, the actual math, finally takes center stage.

The future of AI: Why a register-centric architecture matters

As AI systems evolve, the tidy world of uniform tensors and perfectly rectangular compute tiles is steadily falling away. Modern models are no longer predictable stacks of dense layers marching in lockstep. Instead, they expand in every direction: They ingest multimodal inputs, incorporate sparse and irregular structures, reason adaptively, and operate across ever-longer sequences. They must also respond in real time for safety-critical applications, and they must do so within tight energy budgets—from cars to edge devices.

In other words, the assumptions that shaped GPU and TPU architectures—the expectation of regularity, dense grids, and neat tiling—are eroding. Future workloads are simply not shaped the way the hardware wants them to be.

A register-centric architecture offers a fundamentally different path. Because it operates directly on data where it lives, rather than forcing that data into tile-friendly formats, it sidesteps the entire machinery of memory swizzling. It does not depend on fixed tensor shapes.

It doesn’t stumble when access patterns become irregular or dynamic. It avoids the costly dance of rearranging data just to satisfy the compute units. And as models grow more heterogeneous and more sophisticated, such an architecture scales with their complexity instead of fighting against it.

This is more than an incremental improvement. It represents a shift in how we think about AI compute. By eliminating unnecessary data movement—the single largest bottleneck and energy sink in modern accelerators—a register-centric approach aligns hardware with the messy, evolving reality of AI itself.

Memory swizzling is the quiet tax that every hierarchical-memory accelerator pays. It is fundamental to how GPUs, TPUs, NPUs, and nearly all AI chips operate. It’s also a growing liability. Swizzling introduces latency, burns energy, bloats memory usage, and complicates software—all while contributing nothing to the actual math.

A register-centric architecture eliminates swizzling at the root by removing the hierarchy that makes it necessary. It replaces guesswork and heuristics with deterministic dataflow. It prioritizes locality without requiring rearrangement. It lets the algorithm drive the hardware, not vice versa.

As AI workloads become more irregular, dynamic, and power-sensitive, architectures that keep data stationary and predictable—rather than endlessly reshuffling it—will define the next generation of compute.

Swizzling was a necessary patch for the last era of hardware. It should not define the next one.

Lauro Rizzatti is a business advisor to VSORA, a technology company offering silicon semiconductor solutions that redefine performance. He is a noted chip design verification consultant and industry expert on hardware emulation.


Related Content

The post Why memory swizzling is a hidden tax on AI compute appeared first on EDN.

NHanced supporting mixed-material heterogeneous hybrid bonding production with copper or nickel bonds

Semiconductor today - 5 hours 17 min ago
NHanced Semiconductors Inc of Batavia, IL (the first US-based pure-play advanced packaging foundry) says that it uniquely supports mixed-material hybrid bonding with either copper or nickel bonds. Its new Besi bonding system further expands its advanced packaging yield and throughput...

NEHU’s indigenous chip against Red Spider Mite for tea gardens

ELE Times - 5 hours 53 min ago

The North-Eastern Hill University (NEHU) in Meghalaya has developed an innovative indigenous semiconductor chip aimed at repelling the Red Spider Mite, one of the most destructive pests affecting tea gardens across the Northeast and other tea-growing regions of India.

This tech-driven and eco-friendly innovation was developed entirely at the Department of Electronics and Communication Engineering, NEHU, through the collaborative research efforts of Pankaj Sarkar, Sushanta Kabir Dutta, Sangeeta Das, and Bhaiswajyoti Lahon.

The chip’s fabrication was undertaken at the Semiconductor Laboratory, Mohali, a premier Government of India facility for semiconductor manufacturing.

Semiconductor technology is increasingly relevant to the agricultural sector through sensors, drones, edge computing, IoT, and artificial intelligence. These technologies have modernised agricultural infrastructure, enabling precision farming and predictive analysis to boost yields.

The post NEHU’s indigenous chip against Red Spider Mite for tea gardens appeared first on ELE Times.

Renesas Launches R-Car Gen 5 Platform for Multi-Domain SDVs

ELE Times - 7 hours 8 min ago

Renesas Electronics Corporation is expanding its software-defined vehicle (SDV) solution offerings centered around the fifth-generation (Gen 5) R-Car family. The latest device in the Gen 5 family, the R-Car X5H, is the industry’s first multi-domain automotive system-on-chip (SoC) manufactured with advanced 3nm process technology. It is capable of simultaneously running vehicle functions across advanced driver assistance systems (ADAS), in-vehicle infotainment (IVI), and gateway systems.


Renesas has begun sampling Gen 5 silicon and now offers full evaluation boards and the R-Car Open Access (RoX) Whitebox Software Development Kit (SDK) as part of the next phase of development. Renesas is also driving deeper collaboration with customers and partners to accelerate adoption. At CES 2026, Renesas will showcase AI-powered multi-domain demonstrations of the R-Car X5H in action.


The R-Car X5H leverages one of the most advanced process nodes in the industry to offer the highest level of integration, performance and power efficiency, with up to 35 percent lower power consumption than previous 5nm solutions. As AI becomes integral to next-generation SDVs, the SoC delivers powerful central compute targeting multiple automotive domains, with the flexibility to scale AI performance using chiplet extensions. It delivers up to 400 TOPS of AI performance, with chiplets boosting acceleration by four times or more. It also features 4 TFLOPS equivalent of GPU power for high-end graphics and over 1,000k DMIPS powered by 32 Arm Cortex-A720AE CPU cores and six Cortex-R52 lockstep cores with ASIL D support. Leveraging mixed criticality technology, the SoC executes advanced features in multiple domains without compromising safety.


Accelerating Automotive Innovation with an Open, Scalable RoX Whitebox SDK

To accelerate time-to-market, Renesas now offers the RoX Whitebox Software Development Kit (SDK) for the R-Car X5H, an open platform built on Linux, Android, and the XEN hypervisor. Additional support for partner OSes and solutions is available, including AUTOSAR, EB corbos Linux, QNX, Red Hat, and SafeRTOS. Developers can jumpstart development out of the box using the SDK to build ADAS, L3/L4 autonomy, intelligent cockpit, and gateway systems. An integrated stack of AI and ADAS software enables real-time perception and sensor fusion, while generative AI and Large Language Models (LLMs) enable intelligent human-machine interaction for next-generation AI cockpits. The SDK integrates production-grade application software stacks from leading partners such as Candera, DSPConcepts, Nullmax, SmartEye, STRADVISION, and ThunderSoft, supporting end-to-end development of modern automotive software architectures and faster time to market.


“Since introducing our most advanced R-Car device last year, we have been steadfast in developing market-ready solutions, including delivering silicon samples to customers earlier this year,” said Vivek Bhan, Senior Vice President and General Manager of High-Performance Computing at Renesas. “In collaboration with OEMs, Tier-1s and partners, we are rapidly rolling out a complete development system that powers the next generation of software-defined vehicles. These intelligent compute platforms deliver a smarter, safer and more connected driving experience and are built to scale with future AI mobility demands.”


“Integrating Renesas’ R-Car X5 generation series into our high-performance compute portfolio is a natural next step that builds on our existing collaboration,” said Christian Koepp, Senior Vice President Compute Performance at Bosch’s Cross-Domain Computing Solutions Division. “At CES 2026, we look forward to showcasing this powerful solution with Renesas X5H SoC, demonstrating its fusion capabilities across multiple vehicle domains, including video perception for advanced driver assistance systems.”


“Innovative system-on-chip technology, such as Renesas’ R-Car X5H, is paving the way for ZF’s software-defined vehicle strategy,” said Dr. Christian Brenneke, Head of ZF’s Electronics & ADAS division. “Combining Renesas’ R-Car X5H with our ADAS software solutions enables us to offer full-stack ADAS capabilities with high computing power and scalability. The joint platform combines radar localization and HD mapping to provide accurate perception and positioning for reliable ADAS performance. At CES 2026, we’ll showcase our joint ADAS solution.”


First Fusion Demo on R-Car X5H with Partner Solutions at CES 2026

The new multi-domain demo upscales from R-Car Gen 4 to the next-generation R-Car X5H on the RoX platform, integrating ADAS and IVI stacks, RTOS, and edge AI functionality on Linux and Android with XEN hypervisor virtualization. Supporting input from eight high-resolution cameras and up to eight displays with resolutions reaching 8K2K, the platform delivers immersive visualization and robust sensor integration for next-generation SDVs. Combined with the RoX Whitebox SDK and production-grade partner software stacks, the platform is engineered for real-world deployment covering multiple automotive domains.


Availability

Renesas is shipping R-Car X5H silicon samples and evaluation boards, along with the RoX Whitebox SDK, to select customers and partners.

The post Renesas Launches R-Car Gen 5 Platform for Multi-Domain SDVs appeared first on ELE Times.

Why frugal engineering is critical for advanced materials in 2026

ELE Times - 9 hours 1 min ago

by Vijay Bolloju, Director R&D, iVP Semiconductor

Widespread electrification of everything is pushing the boundaries of Power Electronics systems. The demand for high power densities and lower weight in systems necessitates the use of novel materials.

Representational Image

The newer generation of power semiconductor devices can operate at higher temperatures. Operating at higher temperatures can increase power densities and reduce the power-device cost of the system. At the same time, it poses reliability concerns due to dielectric breakdown, deformation, and increased leakage currents caused by ionic contamination of the moulding compounds. Packaging materials capable of reliably operating at higher temperatures are needed to exploit these devices’ capabilities to the fullest.

It is also evident from recent trends that the operating voltages of systems such as EVs, data centres, and telecom are on the rise. Higher operating voltages warrant a higher degree of compliance for the safety of users.

The cost breakdown of high-power electronic systems shows that more than half of the cost comes from non-semiconductor materials: plastics used for packaging, thermal interface materials (TIMs), sealing compounds, heat dissipators such as heat sinks, cooling liquids, substrates, connectors, and so on.

Substrates play a major role in the thermal performance, structural stability, and reliability of these systems. FR4 PCBs have very poor thermal conductivity (0.24 W/m-K) and are commonly used for low-power systems. FR4 also has a low Tg (~130 °C), which limits the operating temperature range for the power semiconductors. These substrates are not recommended for high-power applications.

Aluminium metal-core PCBs (MCPCBs) are also widely used for building electronic circuits. These substrates have relatively higher thermal conductivity (2 W/m-K) and a higher Tg, offering better mechanical stability and thermal performance. Though multi-layer MCPCBs are available, the most common MCPCBs are single-layer due to cost considerations, which limits how compact the systems can be made.

Ceramic substrates such as alumina (Al2O3) and aluminium nitride (AlN) have excellent thermal conductivity and mechanical stability. Alumina has 100X higher thermal conductivity (24 W/m-K) and aluminium nitride 1000X higher (240 W/m-K) than FR4 material. They also render superior reliability and high-temperature operation capability, making them perfectly suited for high-power systems. Like MCPCBs, they are typically single-layer due to cost considerations.

The selection of substrate materials should be based on cost-performance criteria. Substrate cost increases in this order: FR4 PCBs, MCPCBs, and ceramic substrates. But power-semiconductor cost decreases in the same order, thanks to the improving thermal conductivity. The reliability of the system also depends on the substrate choice, with ceramics offering the best and FR4 the least. So, a sensible trade-off should be considered to make a suitable choice.
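The thermal side of this trade-off can be made concrete with the slab conduction formula R_th = t / (k·A). The conductivities are the figures quoted above; the 1 mm thick, 10 mm × 10 mm geometry is an illustrative assumption:

```python
# Conduction resistance of a substrate slab: R_th = t / (k * A).
# Conductivities (W/m-K) are the figures quoted above; the geometry
# (1 mm thick, 10 mm x 10 mm) is an illustrative assumption.
t, area = 1e-3, 10e-3 * 10e-3  # m, m^2

def r_th(k):
    return t / (k * area)  # K/W

fr4, mcpcb, alumina, aln = 0.24, 2.0, 24.0, 240.0

# Resistance scales as 1/k, so the ratios match the 100X and 1000X above.
assert round(r_th(fr4) / r_th(alumina)) == 100
assert round(r_th(fr4) / r_th(aln)) == 1000
# Ordering mirrors the cost-performance ranking in the text.
assert r_th(fr4) > r_th(mcpcb) > r_th(alumina) > r_th(aln)
```

At this geometry FR4 presents roughly 42 K/W of conduction resistance versus about 0.04 K/W for AlN, which is why the better substrate lets smaller or fewer power devices carry the same load.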

Thermal interface materials (TIMs) also have a profound effect on system performance, reliability, and cost, yet they are often neglected. They can markedly enhance the thermal performance of the system and even reduce the number of power devices needed to implement a design. TIMs also provide dielectric insulation, so an ideal TIM has both high thermal conductivity and high dielectric strength. Choosing a TIM that meets the system requirements can help reduce overall system cost and size.

Choosing proper substrate materials, TIM, and heat dissipator can reduce the system cost and size considerably and lead to frugal designs.

A holistic approach to design from the selection of power device technologies, substrates, TIM, and power dissipators may result in high-performance, reliable, and lower-cost systems.

Currently, the Indian materials ecosystem is underdeveloped and needs to be revamped to serve the power electronics industry and achieve higher performance metrics. The plastics, substrates, TIMs, and other materials can be locally developed using advances in materials such as nano-materials, carbon compounds, engineering plastics, and composites. India has a mature ceramics industry serving the energy sector, the medical industry, and others; these technologies can be used to make substrate materials for power electronics applications. Metallization of ceramic substrates to print the circuits is also an essential skill set to be developed.

High thermal conductivity composite materials, metal foam forming, and phase change materials can elevate the thermal performance of the systems. If the system can be cooled using advanced materials without the need for a liquid cooling system, it can considerably reduce the system cost and improve the reliability of the system.

All the materials described above that can improve system performance and reliability while reducing cost (Frugal innovations) can be developed and manufactured locally. A concerted and collaborative effort is all it needs.

The post Why frugal engineering is critical for advanced materials in 2026 appeared first on ELE Times.

Ignoring the regulator’s reference

EDN Network - Tue, 12/16/2025 - 15:00

DAC control (via PWM or other source) of regulators is a popular design topic here in editor Aalyia Shaukat’s Design Ideas (DIs) corner. There’s even a special aspect of this subject that frequently provokes enthusiastic and controversial (even contentious) exchanges of opinion.

It’s the many and varied possibilities for integrating the regulator’s internal voltage reference into the design. The discussion tends to be extra energetic (and the resulting circuitry complex) when the design includes generating output voltages lower than the regulator’s internal reference.

Wow the engineering world with your unique design: Design Ideas Submission Guide

What can be done to make the discussion less complicated (and heated)?

An old rule of thumb suggests that when one facet of a problem makes the solution complex, sometimes a simple (and better) solution can be found by just ignoring that facet. So, I decided, just for fun, to give it a try with the regulator reference problem. Figure 1 shows the result.

Figure 1 DAC control of a regulator while ignoring its internal voltage reference where Vo = Vomax*(Vdac/Vdacmax). *±0.1%

Figure 1’s simple theory of operation revolves around the A1 differential amplifier.

Vo = Vomax(Vdac/Vdacmax)
If Vdacmax >= Vomax then R1a = R5/((Vdacmax/Vomax) – 1), omit R1b
If Vomax >= Vdacmax then R1b = R2/((Vomax/Vdacmax) – 1), omit R1a

A1 subtracts a suitably attenuated version of the control input signal (Vdac) from U1’s output voltage (Vo) and integrates the difference with the R3C3 feedback pair. The resulting negative feedback supplied to U1’s Vsense pin is independent of the Vsense voltage and is therefore independent of U1’s internal reference.
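The design equations above reduce to a one-branch calculation. In this sketch the fixed resistor values (R2 and R5 at 10 kΩ) and the DAC/output ranges are hypothetical examples, not values from the Design Idea:

```python
# Sketch of the attenuator design equations from Figure 1.
# R2, R5 and the voltage ranges below are hypothetical examples.
def attenuator(vdac_max, vo_max, r2=10e3, r5=10e3):
    """Return the required R1a or R1b in ohms, per the equations above.

    (Equal full-scales would need no attenuator at all, so the
    degenerate vdac_max == vo_max case is not handled here.)
    """
    if vdac_max >= vo_max:
        return {"R1a": r5 / (vdac_max / vo_max - 1)}  # omit R1b
    return {"R1b": r2 / (vo_max / vdac_max - 1)}      # omit R1a

# Example: a 3.3 V full-scale DAC controlling a 12 V max output.
r = attenuator(vdac_max=3.3, vo_max=12.0)
assert abs(r["R1b"] - 3793.1) < 0.1  # about 3.79 kOhm
```

Either way, the result is a single resistor that scales Vdac against Vo so that A1 integrates their difference to zero, giving Vo = Vomax(Vdac/Vdacmax).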

With the contribution of accuracy (and inaccuracy) from U1’s internal reference thus removed, the problem of integrating it into the design is therefore likewise removed. 

It turns out the potential for really good precision is actually improved by ignoring the regulator reference, because internal references are seldom better than 1% anyway.

With the Figure 1 circuit, accuracy is ultimately limited only by the DAC and very high precision DACs can be assembled at reasonable cost. For an example see, “A nice, simple, and reasonably accurate PWM-driven 16-bit DAC.”

Another nice feature is that Figure 1 leaks no pesky bias current into the feedback network. This bias is typically scores of microamps and can prevent the output from getting any closer than tens of millivolts to a true zero when the output load is light. No such problem exists here, unless picoamps count (hint: they don’t).

And did I mention it’s simple? 

Oh yeah. About R6, depending on the voltage supplied to A1’s pin 8 and the absmax rating of U1’s Vsense pin, the possibility of an overvoltage might exist. If so, adjust the R4R6 ratio to prevent that. Otherwise, omit R6.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Ignoring the regulator’s reference appeared first on EDN.

Expanding power delivery in systems with USB PD 3.1

EDN Network - Tue, 12/16/2025 - 15:00

The Universal Serial Bus (USB) started out as a data interface, but it didn’t take long before it progressed to powering devices. Initially, its maximum output was only 2.5 W; now, it can deliver up to 240 W over USB Type-C cables and connectors carrying power, data, and video. This revision is known as Extended Power Range (EPR), part of USB Power Delivery Specification 3.1 (USB PD 3.1), introduced by the USB Implementers Forum. EPR uses higher voltage levels (28 V, 36 V, and 48 V), which at 5 A deliver 140 W, 180 W, and 240 W, respectively.

USB PD 3.1 has an adjustable voltage supply mode, allowing for intermediate voltages between 9 V and the highest fixed voltage of the charger. This allows for greater flexibility in meeting the power needs of individual devices. USB PD 3.1 is backward-compatible with previous USB versions, including legacy operation at 15 W (5 V/3 A) and the Standard Power Range mode of up to 100 W (20 V/5 A).
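The EPR power levels follow directly from the fixed voltages and the 5 A cable limit; a quick sanity check:

```python
# EPR power levels are the fixed EPR voltages times the 5 A cable limit.
CABLE_MAX_A = 5.0
epr_watts = [v * CABLE_MAX_A for v in (28, 36, 48)]
assert epr_watts == [140.0, 180.0, 240.0]

# For comparison, the legacy and Standard Power Range points:
assert 5 * 3 == 15     # 5 V / 3 A legacy
assert 20 * 5 == 100   # 20 V / 5 A SPR maximum
```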

The ability to negotiate power for each device is an important strength of this specification. For example, a device consumes only the power it needs, which varies depending on the application. This applies to peripherals, where a power management process allows each device to take only the power it requires.

The USB PD 3.1 specification found a place in a wide range of applications, including laptops, gaming stations, monitors, industrial machinery and tools, small robots and drones, e-bikes, and more.

Microchip USB PD demo board

Microchip provides a USB PD dual-charging-port (DCP) demonstration application, supporting the USB PD 3.1 specification. The MCP19061 USB PD DCP reference board (Figure 1) is pre-built to show the use of this technology in real-life applications. The board is fully assembled, programmed, and tested to evaluate and demonstrate digitally controlled smart charging applications for different USB PD loads, and it allows each connected device to request the best power level for its own operation.

Figure 1: MCP19061 USB DCP board (Source: Microchip Technology Inc.)

The board shows an example charging circuit with robust protections. It highlights charge allocation between the two ports as well as dynamically reconfigurable charge profile availability (voltage and current) for a given load. This power-balancing feature between ports provides better control over the charging process, in addition to delivering the right amount of power to each device.

The board provides output voltages from 3 V to 21 V and output currents from 0.5 A to 3 A. Its input voltage range is 6 V to 18 V, with 12 V being the recommended value.

The board comes with firmware designed to operate with a graphical user interface (GUI) and contains headers for in-circuit serial programming and I2C communication. An included USB-to-serial bridging board (such as the BB62Z76A MCP2221A USB breakout board) with the GUI allows different configurations to be quickly tested with real-world load devices charging on the two ports. The DCP board GUI requires a PC with Microsoft Windows operating system 7–11 and a USB 2.0 port. The GUI then displays parameters, board status, and faults, and enables user configuration.

DCP board components

With two charging ports, the board implements two independent USB PD channels (Figure 2), each with its own dedicated analog front end (AFE). The AFE in the Microchip MCP19061 device is a mixed-signal, digitally controlled four-switch buck-boost power controller with integrated synchronous drivers and an I2C interface (Figure 3).

Figure 2: Two independently managed USB PD channels on the MCP19061-powered DCP board (Source: Microchip Technology Inc.)

Figure 3: Block diagram of the MCP19061 four-switch buck-boost device (Source: Microchip Technology Inc.)

Moreover, one of the channels features the Microchip MCP22350 device, a highly integrated, small-format USB Type-C PD 2.0 controller, whereas the other channel contains a Microchip MCP22301 device, which is a standalone USB Type-C PD port controller, supporting the USB PD 3.0 specification.

The MCP22350 acts as a companion PD controller to an external microcontroller, system-on-chip, or USB hub. The MCP22301 is an integrated PD device that combines the functionality of a SAMD20 microcontroller (a low-power, 32-bit Arm Cortex-M0+) with an MCP22350 PD media access controller and physical layer.

Each channel also has its own UCS4002 USB Type-C port protector, guarding against faults and protecting the integrity of both the charging process and the data transfer (Figure 4).

A USB Type-C connector carries the D+/D– data lines (USB 2.0), Rx/Tx pairs for USB 3.x or USB4, configuration channel (CC) lines for charge-mode control, sideband-use (SBU) lines for optional functions, and ground (GND). The UCS4002 protects the CC and D+/D– lines against short-to-battery faults. It also offers battery short-to-ground (SG_SENS) protection for charging ports.

Integrated switching VCONN FETs (VCONN is a dedicated power supply pin in the USB Type-C connector) provide overvoltage, undervoltage, back-voltage, and overcurrent protection through the VCONN voltage. The board’s input rail includes a PMOS switch for reverse polarity protection and a CLC EMI filter. There are also features such as a VDD fuse and thermal shutdown, enabled by a dedicated temperature sensor, the MCP9700, which monitors the board’s temperature.

Figure 4: Block diagram of the UCS4002 USB port protector device (Source: Microchip Technology Inc.)

The UCS4002 also provides fault-reporting configurability via the FCONFIG pin, allowing users to configure the FAULT# pin behavior. The CC, D+/D–, and SG_SENS pins are electrostatic-discharge-protected to meet the IEC 61000-4-2 and ISO 10605 standards.

The DCP board includes an auxiliary supply based on the MCP16331 integrated step-down switch-mode regulator, providing a 5-V rail, and an MCP1825 LDO linear regulator, providing a 3.3-V auxiliary rail.

Board operation

The MCP19061 DCP board shows how the MCP19061 device operates in a four-switch buck-boost topology to supply USB loads and charge them at their required voltage, within a permitted range, regardless of the input voltage. It is configured to independently regulate the output voltage and current for each USB channel (their individual charging profiles) while simultaneously communicating with the USB-C-connected loads using the USB PD stack protocols.

All operational parameters are programmable using the two integrated Microchip USB PD controllers, through a dynamic reconfiguration and customization of charging operations, power conversion, and other system parameters. The demo shows how to enable the USB PD programmable power supply fast-charging capability for advanced charging technology that can modify the voltage and current in real time for maximum power outputs based on the device’s charging status.
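As a rough illustration of the negotiation the demo performs, the sketch below picks a supply profile from a list of advertised (voltage, maximum current) capabilities. The function name, the tuple encoding, and the example 45-W capability list are illustrative inventions for this sketch — they are not Microchip's API or the actual 32-bit PDO bit format defined by the USB PD specification:

```python
def pick_pdo(pdos, needed_v, needed_a):
    """Return the 1-based position of the first advertised
    (voltage, max_current) pair that meets the load's request —
    a simplified stand-in for USB PD source-capability negotiation."""
    for index, (volts, amps) in enumerate(pdos, start=1):
        if volts == needed_v and amps >= needed_a:
            return index
    return None  # no advertised profile satisfies the request

# A source advertising 5 V/3 A, 9 V/3 A, 15 V/3 A, 20 V/2.25 A (a common 45 W profile)
caps = [(5.0, 3.0), (9.0, 3.0), (15.0, 3.0), (20.0, 2.25)]
print(pick_pdo(caps, 15.0, 2.0))  # the 15 V profile, at position 3
print(pick_pdo(caps, 20.0, 3.0))  # None: 3 A at 20 V exceeds what is offered
```

A real sink would additionally handle programmable power supply (PPS) ranges, which advertise adjustable voltage/current windows rather than fixed points.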

The MCP19061 device works in conjunction with both current- and voltage-sense control loops to monitor and regulate the load voltage and current. Moreover, the board automatically detects the presence or removal of a USB PD–compliant load.

When a USB PD–compliant load is connected to USB-C Port 1 (on the right side of the PCB; the upper connector), USB communication starts and the GUI displays the charging profiles under the Port 1 window.

If another USB PD load is connected to USB-C Port 2, the Port 2 window is populated in the same way.

The MCP19061 PWM controller

The MCP19061 is a highly integrated, mixed-signal four-switch buck-boost controller that operates from 4.5 V to 36 V and can withstand up to 42 V non-operating. Various enhancements were added to the MCP19061 to provide USB PD compatibility with minimum external components for improved calibration, accuracy, and flexibility. It features a digital PWM controller with a serial communication bus for external programmability and reporting. The modulator regulates the power flow by controlling the length of the on and off periods of the signal, or pulse widths.

The MCP19061 enables efficient power conversion in buck (step-down), boost (step-up), and buck-boost modes, for output voltages lower than, higher than, or equal to the input voltage. It provides excellent precision and efficiency in power conversion for embedded systems while minimizing power losses. Its features include adjustable switching frequencies, integrated MOSFET drivers, and advanced fault protection. The operating parameters, protection levels, and fault-handling procedures are supervised by a proprietary state machine stored in its nonvolatile memory, which also stores the running parameters.
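The mode selection can be illustrated with the ideal (lossless, continuous-conduction) duty-cycle relationships D = Vout/Vin for buck operation and D = 1 − Vin/Vout for boost operation. The function and the 5% transition margin below are a simplified textbook sketch, not the MCP19061's actual control law:

```python
def buck_boost_mode(v_in: float, v_out: float, margin: float = 0.05):
    """Pick an operating mode and ideal duty cycle for a four-switch
    buck-boost stage (lossless, continuous-conduction approximation)."""
    if v_out < v_in * (1 - margin):          # output well below input: step down
        return "buck", v_out / v_in          # D = Vout / Vin
    if v_out > v_in * (1 + margin):          # output well above input: step up
        return "boost", 1 - v_in / v_out     # D = 1 - Vin / Vout
    return "buck-boost", 0.5                 # transition region, Vout close to Vin

print(buck_boost_mode(12.0, 5.0))   # buck mode, D = 5/12
print(buck_boost_mode(12.0, 20.0))  # boost mode, D = 1 - 12/20
```

Real controllers blend the two modes near Vout ≈ Vin to avoid discontinuities, which is why a transition band is shown rather than a hard switchover point.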

Internal digital registers handle the customization of the operating parameters, the startup and shutdown profiles, the protection levels, and the fault-handling procedures. To set the output current and voltage, an integrated high-accuracy reference voltage is used. Internal input and output dividers facilitate the design while maintaining high accuracy. A high-accuracy current-sense amplifier enables precise current regulation and measurement.

The MCP19061 contains three internal LDOs: a 5-V LDO (VDD) powers internal analog circuits and gate drivers and provides 5 V externally; a 4-V LDO (AVDD) powers the internal analog circuitry; and a 1.8-V LDO supplies the internal logic circuitry.

The MCP19061 is packaged in a 32-lead, 5 × 5-mm VQFN, allowing system designers to customize application-specific features without additional board real estate or component cost. A 1-MHz I2C serial bus enables communication between the MCP19061 and the system controller.

The MCP19061 can be programmed externally. For further evaluation and testing, Microchip provides an MCP19061 dedicated evaluation board, the EV82S16A.

The post Expanding power delivery in systems with USB PD 3.1 appeared first on EDN.

Can we just agree that nixies are cool?

Reddit:Electronics - Tue, 12/16/2025 - 12:54

I've wanted to experiment with them for a while, but I always thought that building a clock is just boring, so instead I'm making a nixie display for my Geiger counter!

submitted by /u/Thick_Swordfish6666

India’s Semicon Programme: From Design to Packaging

ELE Times - Tue, 12/16/2025 - 12:18

Union Minister of State for Electronics and Information Technology Shri Jitin Prasada outlined India's achievements under the Semicon India Programme. The Indian government launched the programme to develop a complete ecosystem spanning design, fabrication, assembly, testing, and packaging.

The government has approved 10 units worth Rs 1.6 lakh crore, covering a silicon fab, a silicon carbide fab, advanced packaging, and memory packaging, among others. These are expected to meet the chip requirements of sectors such as consumer appliances, industrial electronics, automobiles, telecommunications, aerospace, and power electronics. The minister also noted that some of the approved projects are using indigenous technology for the assembly, testing, and packaging of semiconductor chips.

Additionally, the Production Linked Incentive (PLI) scheme for large-scale electronics manufacturing of mobile phones and certain specified components attracted an investment of Rs 14,065 crore up to October 2025.

Design Development

On the design front, the government launched the Design Linked Incentive (DLI) scheme, which has supported 23 companies (24 designs) in designing chips and SoCs for products in satellite communication, drones, surveillance cameras, Internet of Things (IoT) devices, LED drivers, AI devices, telecom equipment, smart meters, and more. Additionally, to assist with infrastructure, the government provided free electronic design automation (EDA) tool access to 94 startups, enabling 47 lakh hours of design tool usage.

Developing a Skilled Workforce

Recognising the importance of a skilled workforce in semiconductor manufacturing, the government has also launched several programmes and collaborations to build a skilled workforce for India. The All India Council for Technical Education (AICTE) has launched various courses to provide technical training to students.

The government's Chips to Start-up (C2S) Programme encourages India's young engineers by providing the latest design tools (EDA) to 397 universities and start-ups.

A Skilled Manpower Advanced Research and Training (SMART) Lab has also been set up at NIELIT Calicut with the aim of training 1 lakh engineers nationwide. More than 62,000 engineers have already been trained.

ISM has also partnered with Lam Research to conduct a large-scale training programme in nanofabrication and process-engineering skills, further augmenting the skilled workforce for ATMP and advanced packaging. The programme aims to train 60,000 engineers over the next 10 years.

Additionally, the FutureSkills PRIME programme is a collaborative initiative of MeitY and the National Association of Software and Service Companies (NASSCOM) aimed at making India a cutting-edge digital talent nation.

The post India’s Semicon Programme: From Design to Packaging appeared first on ELE Times.

AC to DC Converter design on KiCad

Reddit:Electronics - Tue, 12/16/2025 - 12:16

I created my first ever design on KiCad and enjoyed the process. Any suggestions are welcome. Also, I need some projects to make on KiCad so as to start my career in PD design in VLSI. Final-year B.Tech ECE student.

submitted by /u/armtech_897

Troubleshooting often involves conflicting symptoms and scenarios

EDN Network - Tue, 12/16/2025 - 11:54

I've always regarded debugging and troubleshooting as the most challenging of all hands-on engineering skills. Neither is formally taught; they are usually learned through hands-on experience (often the hard way), and almost every case is different. And there is a long list of reasons why debugging and troubleshooting are often so difficult.

In some cases, there’s the “aha” moment when the problem is clearly identified and knocked down, but in many other cases, you are “pretty sure” you’ve got the problem but not completely so.

Note that I distinguish between debugging and troubleshooting. The former is when you are working on a breadboard or prototype that is not working and perhaps has never fully worked; it’s in the design phase. The latter is when a tested, solid product with some track record and field exposure misbehaves or fails in use. Each has its own starting points and constraints, but the terms are used interchangeably by many people.

Every engineer or technician has his or her own horror story of an especially challenging situation. It’s especially frustrating when there is no direct, consistent one-to-one link between observed symptoms and root cause(s). There are multiple cause/effect scenarios:

  • Clarity: The single-problem, single-effect situation—generally, the easiest to deal with.
  • Causality: A chain of problems, where one problem (often not directly visible) triggers a second, more visible one.
  • Correlation: Two apparent problems with one common cause—or maybe the observed symptoms are unrelated? It's also easy to assume that correlation implies causality, but that is often not the case.
  • Coincidence: Two apparent problems that appear linked but really have no link at all.
  • Confusion: A problem with contradictory explanations, where the explanation addresses one aspect but does not explain the others.
  • Inconsistency: The problem is intermittent, with no consistent set of circumstances that cause it to occur.

My recent dilemma

Whatever the cause(s) of faults, the most frustrating situation for engineers is when the problem is presumably fixed but no clear cause (or causes) is found. This happened to me recently with my home heating system, which heats water for domestic use and for radiator heating. It has one small pump sending heated water to a storage tank and a second small pump sending it to radiators; the two pumps never run at the same time.

One morning, I saw that we lost heat and hot water, so I checked the system (just four years old) and saw that the service-panel circuit breaker with a dedicated line had tripped.

A tripped breaker is generally bad news. My first thought was that perhaps there had been some AC-line glitch during the night, but all other sensitive systems in the house—PCs, network interfaces, and plug-in digital clocks—were fine. Perhaps some solar flare or cosmic particles had targeted just this one AC feed? Very unlikely. I reset the breaker and the system ran for about an hour, then the breaker tripped again.

I called the service team that had installed the system; they came over and they, too, were mystified. The small diagnostic panel display on the system said all was fine. They noted that my thermostat was a 50-year-old mechanical unit, similar to the classic 1953 round Honeywell unit designed by Henry Dreyfuss and now in the permanent collection of the Cooper Hewitt, Smithsonian Design Museum in New York (Figure 1). These two-wire units, with their bimetallic strip and glass-enclosed mercury-wetted switch, are extremely reliable; millions are still in use after many decades.

 

Figure 1 You have to start somewhere: The first step was to take out a possible but unlikely source of the problem. So, the mercury-wetted, bimetallic-strip thermostat (above), similar to the classic Honeywell unit, was replaced with a simple PRO1 T701 electronic equivalent (below). Source: Cooper Hewitt Museum

While failure of these units is rare, technicians suggested replacing it “just in case.” I said, sure, “why not?” and replaced it with a simple, non-programmable, non-connected electronic unit that emulates the functions of the mechanical/mercury one.

But we knew that was very unlikely to be the actual problem, and the repair technicians could not envision any scenario where a thermostat—on a 24-V AC loop with a contact closure that energizes a mechanical or solid-state relay to call for heat—could induce a circuit breaker to trip. Maybe the original thermostat's contacts were "chattering" excessively, inducing the motor to cycle on and off rapidly? Even so, that shouldn't trip a breaker.

Once again, the system ran for about an hour and then the breaker tripped. The techs spent some time adjusting the system's hot-water and heating-water pumps; each has a small knob that selects among various operating modes.

Long story short: the "fixed" system has been working fine for several weeks. But…and it's a big "but"…they never did actually find a reason for the circuit breaker tripping. Even if the pumps were not at their optimum settings, that should not cause an AC-line breaker to trip. And why would the system run for several years without a problem?

What does it all mean?

From an engineering perspective, that’s the most frustrating outcome. Now, even though the system is running, it still has me in that “somewhat worried” mental zone. A problem that should not have occurred did occur several times, but now it has gone away for no confirmed reason.

There’s not much that can be done to deal with non-reproducible problems such as this one. Do I need an AC-line monitor, as perhaps that’s the root cause? What sort of other long-term monitoring instrumentation is available for this heating system? How long would you have it “baby-sit” the system?

Perhaps there was an intermittent short circuit in the system’s internal AC wiring that caused the breaker to trip, and the act of opening the system enclosure and moving things around made the intermittent go away? We can only speculate.

Right now, I’m trying to put this frustrating dilemma out of my mind, but it’s not easy. Online troubleshooting guides are useless, as they have generic flowcharts asking, “Is the power on?” “Are the cables and connectors plugged in and solid?”

Perhaps I'll instead re-read the excellent book "Debugging: The 9 Indispensable Rules for Finding Even the Most Elusive Software and Hardware Problems" by David J. Agans (Figure 2), although my ability and desire to poke, probe, and swap parts of a home heating system are close to zero.

Figure 2 This book on systematic debugging of electronic designs and products (and software) has many structured and relevant tactics for both beginners and experienced professionals. Source: Digital Library—Association for Computing Machinery

Or perhaps the system just wanted some personal, hands-on attention after four years of faithful service alone in the basement.

Have you ever had a frustrating failure where you poked, pushed, checked, measured, swapped parts, and did more, with the problem eventually going away—yet you really have no idea what the problem was? How did you handle it? Did you accept it and move on or pursue the mystery further?

Related Content

The post Troubleshooting often involves conflicting symptoms and scenarios appeared first on EDN.

IDTechEx assesses status of 800V for EVs

Semiconductor today - Tue, 12/16/2025 - 11:48
With the transition to 800V electric vehicles (EVs) affecting the whole powertrain (including the power electronics), IDTechEx's report 'Power Electronics for Electric Vehicles 2026–2036: Technologies, Markets, and Forecasts' analyses this trend and uses it to forecast the adoption of the wide-bandgap semiconductors silicon carbide (SiC) and gallium nitride (GaN), as well as the entire power electronics market for EVs...

Lumentum appoints onsemi’s CFO Thad Trent to board

Semiconductor today - Tue, 12/16/2025 - 11:14
Lumentum Holdings Inc of San Jose, CA, USA (which designs and makes photonics products for optical networks and lasers for industrial and consumer markets) has appointed Thad Trent to its board of directors, expanding the board membership to nine members...

Caliber Launches Advanced 3-Phase Monitoring Relay for India’s Industries

ELE Times - Tue, 12/16/2025 - 09:19

Caliber Interconnect Solutions Private Limited, a global engineering and deep-tech solutions company, has introduced its latest Three Phase Monitoring Relay, to address the growing need for reliable electrical protection across India’s industrial, infrastructure, and renewable energy sectors.

As industries increasingly operate in environments affected by voltage fluctuations and power quality challenges, the relay is designed to deliver accurate and dependable monitoring. Built on True RMS (TRMS) measurement principles, it provides precise voltage assessment even under distorted or unstable electrical conditions—an essential requirement for modern manufacturing, HVAC, power distribution, and solar installations.
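The value of TRMS measurement can be illustrated numerically: on a distorted waveform (here, a flat-topped sine of the kind produced by heavily loaded supplies), a true-RMS computation and a sine-calibrated average-responding estimate disagree. This is a generic sketch of the two measurement methods, not the relay's implementation:

```python
import math

def true_rms(samples):
    """Root of the mean of the squares — valid for any waveform."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def avg_responding_rms(samples):
    """Average-responding estimate: rectified mean scaled by
    pi / (2*sqrt(2)) ≈ 1.1107, which is exact only for a pure sine."""
    rect_mean = sum(abs(s) for s in samples) / len(samples)
    return rect_mean * math.pi / (2 * math.sqrt(2))

n = 1000
sine = [325.0 * math.sin(2 * math.pi * i / n) for i in range(n)]  # ~230 V RMS mains
clipped = [max(min(s, 250.0), -250.0) for s in sine]              # flat-topped waveform

print(round(true_rms(sine), 1), round(avg_responding_rms(sine), 1))        # both near 230 V
print(round(true_rms(clipped), 1), round(avg_responding_rms(clipped), 1))  # estimates diverge
```

On the pure sine both methods agree; on the clipped waveform the average-responding estimate overstates the true RMS by several volts, which is exactly the error a TRMS front end avoids.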

Suresh Babu, Managing Director, Caliber Interconnects, said, “With rapid electrification and automation across Indian industry, electrical reliability has become a foundational requirement. This Three Phase Monitoring Relay is engineered to ensure consistent system protection and operational continuity in real-world conditions.”

The relay continuously monitors all three phases and detects critical electrical anomalies, including over-voltage and under-voltage with configurable thresholds, phase imbalance, phase loss, and phase sequence faults—issues that are among the most common causes of industrial equipment failure. Clear LED fault indications enable quick identification and troubleshooting at the panel level, supporting faster response by maintenance teams.
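The fault categories above can be sketched with the widely used NEMA-style voltage-unbalance formula (maximum deviation from the average line voltage, as a percentage of that average). The thresholds and classification order below are illustrative assumptions only; the real relay's limits are configurable:

```python
def voltage_unbalance_pct(va, vb, vc):
    """NEMA-style voltage unbalance: max deviation from the average
    line voltage, expressed as a percentage of that average."""
    avg = (va + vb + vc) / 3
    max_dev = max(abs(v - avg) for v in (va, vb, vc))
    return 100.0 * max_dev / avg

def classify(va, vb, vc, under=0.9, over=1.1, nominal=400.0, unbalance_limit=2.0):
    # Illustrative thresholds and priority order — not the relay's actual settings.
    if 0.0 in (va, vb, vc):
        return "phase loss"
    if min(va, vb, vc) < under * nominal:
        return "under-voltage"
    if max(va, vb, vc) > over * nominal:
        return "over-voltage"
    if voltage_unbalance_pct(va, vb, vc) > unbalance_limit:
        return "phase imbalance"
    return "ok"

print(classify(400.0, 398.0, 402.0))  # ok: 0.5% unbalance is within limits
print(classify(400.0, 400.0, 370.0))  # phase imbalance: ~5.1% unbalance
```

Even a modest 2% sustained unbalance is commonly treated as a derating or trip condition for three-phase motors, which is why it sits alongside the over/under-voltage checks.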

Manufactured using UL 94 V-0 grade flame-retardant material, the device emphasizes safety and durability. Its compact 19.1 mm footprint and fast response time of under two seconds make it suitable for space-constrained panels and contemporary industrial control systems.

Designed and manufactured in India, the Three Phase Monitoring Relay aligns with the country’s focus on electrical safety, energy efficiency, and resilient industrial infrastructure. Caliber Interconnects brings over two decades of experience in delivering high-reliability engineering and product solutions across semiconductors, automotive, railways, avionics, and medical sectors, serving customers in India, Singapore, the United States, Japan, Malaysia, and Israel. The company is ISO 9001, AS9100D, and ISO 27001 certified.

The Three Phase Monitoring Relay is suitable for a wide range of applications including industrial machinery, HVAC systems, motors, generators, pumps, compressors, solar energy systems, and electrical control panels. It is available through Caliber Interconnects’ offices in Bengaluru and Coimbatore, supporting deployment across India.

The post Caliber Launches Advanced 3-Phase Monitoring Relay for India’s Industries appeared first on ELE Times.
