News from the World of Micro- and Nanoelectronics

Can the SDV Revolution Happen Without SoC Standardization?

ELE Times - Thu, 02/12/2026 - 12:40

Speaking at the Auto EV Tech Vision Summit 2025, Yogesh Devangere, who heads the Technical Center at Marelli India, turned attention to a layer of the Software-Defined Vehicle (SDV) revolution that often escapes the spotlight: the silicon itself. The transition from distributed electronic control units (ECUs) to centralized computing is not just a software story—it is a System-on-Chip (SoC) story.

While much of the industry conversation revolves around features, over-the-air updates, AI assistants, and digital cockpits, Devangere argued that none of it is possible without a fundamental architectural shift inside the vehicle. If SDVs represent the future of mobility, then SoCs are the engines quietly driving that future.

From 16-Bit Controllers to Heterogeneous Superchips

Automotive electronics have evolved dramatically over the past two decades. What began as simple 16-bit microcontrollers has now transformed into complex, heterogeneous SoCs integrating multiple CPU cores, GPUs, neural processing units, digital signal processors, hardware security modules, and high-speed connectivity interfaces—all within a single chipset.

“These SoCs are what enable the SDV journey,” Devangere explained, describing them as high-performance computing platforms that can consolidate multiple vehicle domains into centralized architectures. Unlike traditional ECUs designed for single-purpose tasks, modern SoCs are built to manage diverse functions simultaneously—from ADAS image processing and AI model deployment to infotainment rendering, telematics, powertrain control, and network management. This marks a structural shift in the automotive industry.

Centralized Computing Is the Real Transformation

The move toward SDVs, in a way, is a move toward centralized computing. Simply stated, instead of dozens of independent ECUs scattered across the vehicle, OEMs are increasingly experimenting with domain controller architectures or centralized controllers combined with zonal controllers. In both cases, the SoC becomes the computational heart of the system, and this consolidation enables:

  • Higher processing power
  • Cross-domain feature integration
  • Over-the-air (OTA) updates
  • AI-driven functionality
  • Flexible software deployment across operating systems such as Linux, Android, and QNX

A key enabler in this architecture is the hypervisor layer, which abstracts hardware from software and allows multiple operating systems to run independently on shared silicon. This flexibility is essential in a transition era where AUTOSAR (AUTomotive Open System ARchitecture) and non-AUTOSAR stacks coexist. AUTOSAR is a global software standard for automotive electronic control units (ECUs). It defines how automotive software should be structured, organized, and communicated, so that different suppliers and OEMs can build compatible systems.

But while the architectural promise is compelling, Devangere made it clear that implementation is far from straightforward.

The Architecture Is Not Standardized

One of the most critical challenges he highlighted is the absence of hardware-level standardization. “Every OEM is implementing SDV architecture in their own way,” he noted. Some opt for multiple domain controllers; others experiment with centralized controllers and zonal approaches. The result is a fragmented ecosystem.

Unlike the smartphone world—where Android runs on broadly standardized hardware platforms—automotive SoCs lack a unified framework. There is currently no hardware consortium defining a common architecture. While open-source software efforts such as Eclipse aim to harmonize parts of the software stack, the hardware layer remains highly individualized. The consequence is complexity. Tier-1 suppliers cannot rely on long lifecycle platforms, as SoCs evolve rapidly. What might be viable today could become obsolete within a few years.

In an industry accustomed to decade-long product cycles, that volatility is disruptive.

Complexity vs. Time-to-Market

If architectural fragmentation were not enough, development timelines are simultaneously shrinking. Designing with SoCs is inherently complex. A single SoC program often involves coordination among five to nine suppliers. Hardware validation must account for electromagnetic compatibility, thermal performance, and interface stability across multiple cores and peripherals. Software integration spans multi-core configurations, multiple operating systems, and intricate stack dependencies.

Yet market expectations continue to demand faster launches. “You cannot go back to longer development cycles,” Devangere observed. The pressure to innovate collides with the technical realities of high-complexity chip integration.

Power, Heat, and the Hidden Engineering Burden

Beyond software flexibility and AI capability lies a more fundamental engineering constraint: energy. High-performance SoCs generate significant heat and demand careful power management—critical in electric vehicles where battery efficiency is paramount. Many current architectures still rely on companion microcontrollers for power and network management, while the SoC handles high-compute workloads.

Balancing performance with energy efficiency, ensuring timing determinism across multiple simultaneous functions, and maintaining safety compliance remain non-trivial challenges. As vehicles consolidate ADAS, infotainment, telematics, and control systems onto shared silicon, resource management becomes as important as raw processing capability.

Partnerships Over Isolation

Given the scale of complexity, Devangere emphasized collaboration as the only viable path forward. SoC development and integration are rarely the work of a single organization. Semiconductor suppliers, Tier-1 system integrators, software stack providers, and OEMs must align early in the architecture phase.

Some level of standardization—particularly at the hardware architecture level—could significantly accelerate development cycles. Without it, the industry risks “multiple horses running in different directions,” as one audience member aptly put it during the discussion.

For now, that standardization remains aspirational.

The Real Work of the SDV Era

The excitement surrounding software-defined vehicles often focuses on user-facing features—AI assistants, personalized interfaces, downloadable upgrades. Devangere’s message was more grounded. Behind every seamless update, every AI-enabled feature, and every connected service lies a dense web of silicon complexity. Multi-core processing, heterogeneous architectures, thermal constraints, validation cycles, and fragmented standards form the invisible scaffolding of the SDV transformation.

The car may be becoming a computer on wheels. But building that computer—robust, safe, efficient, and scalable—remains one of the most demanding engineering challenges the automotive industry has ever faced.

And at the center of it all is the SoC.

The post Can the SDV Revolution Happen Without SoC Standardization? appeared first on ELE Times.

ElevateX 2026, Marking a New Chapter in Human Centric and Intelligent Automation

ELE Times - Thu, 02/12/2026 - 12:11

Teradyne Robotics today hosted ElevateX 2026 in Bengaluru – its flagship industry forum bringing together Universal Robots (UR) and Mobile Industrial Robots (MiR) to spotlight the next phase of human‑centric, collaborative, and intelligent automation shaping India’s manufacturing and intralogistics landscape.

Designed as a high‑impact platform for industry leadership and ecosystem engagement, ElevateX 2026 convened 25+ CEO/CXO leaders, technology experts, startups, and media, reinforcing how Indian enterprises are progressing from isolated automation pilots to scalable, business‑critical deployments.

Teradyne Robotics emphasized the rapidly expanding role of flexible and intelligent automation in enabling enterprises to scale confidently and safely. With industrial collaborative robots (cobots) and autonomous mobile robots (AMRs) becoming mainstream across sectors, the company underlined its commitment to driving advanced automation, skill development, and stronger industry‑partner ecosystems in India.

The event showcased several real‑world automation applications featuring cobots and AMRs across key sectors, including Automotive, F&B, FMCG, Education, and Logistics. These demos highlighted the ability of Universal Robots and MiR to help organizations scale quickly, redeploy easily, and improve throughput and workforce efficiency.

Showcasing high‑demand applications from palletizing and welding to material transport, machine tending, and training, the demonstrations reflected how Teradyne Robotics enables faster ROI, simpler deployment, and safe automation across high‑mix and high‑volume operations.

Speaking at the event, James Davidson, Chief Artificial Intelligence Officer, Teradyne Robotics, said, “Automation is entering a defining era – one where intelligence, flexibility, and human-centric design are no longer optional, but fundamental to how businesses innovate, scale, and compete. AI is transforming robots from tools that simply execute tasks into intelligent collaborators that can perceive, learn, and adapt in dynamic environments. In India, we are witnessing a decisive shift from experimentation to enterprise-wide adoption, and ElevateX 2026 reflects this momentum – bringing the ecosystem together to explore how collaborative and intelligent automation can become a strategic growth engine for both established enterprises and the next generation of startups.”

Poi Toong Tang, Vice President of Sales, Asia Pacific, Teradyne Robotics, added, “India is rapidly emerging as one of the most important and dynamic automation markets in Asia Pacific. Organizations today are not just looking to automate – they are looking to build operations that are flexible, resilient, and future-ready. The demand is for modular automation that delivers faster ROI and can evolve alongside business needs. Through Universal Robots and MiR, we are enabling end-to-end automation across production and intralogistics, helping Indian companies scale with confidence and compete on a global stage.”

Sougandh K.M., Business Director – South Asia, Teradyne Robotics, said, “India’s automation journey will be defined by collaboration across its ecosystem — by partners, system integrators, startups, and skilled talent working together to turn technology into real impact. At Teradyne Robotics, our belief is simple: automation should be for anyone and anywhere, and robots should enable people to do better work, not work like robots. Our focus is on automating tasks that are dull, dirty, and dangerous, while helping organizations improve productivity, safety, and quality. ElevateX 2026 is about lowering barriers to adoption and building long-term capability in India, making automation practical, scalable, and accessible, and positioning Teradyne Robotics as a trusted partner in every stage of that growth journey.”

Customer Success Story

A key highlight of ElevateX 2026 was the spotlight on customer success, and Origin stood out. The fast‑growing U.S. construction tech startup shared how partnering with Universal Robots is driving measurable impact through improved productivity, stronger safety, and consistently high‑quality project outcomes powered by collaborative automation.

Yogesh Ghaturle, the Co-founder and CEO of Origin, said, “Our goal is to bring true autonomy to the construction site, transforming how the world builds. Executing this at scale requires a technology stack where every component operates with absolute predictability. Universal Robots provides the robust, operational backbone we need. With their cobots handling the mechanical precision, we are free to focus on deploying our intelligent systems in the real world.” 

The post ElevateX 2026, Marking a New Chapter in Human Centric and Intelligent Automation appeared first on ELE Times.

The Architecture of Edge Computing Hardware: Why Latency, Power and Data Movement Decide Everything

ELE Times - Thu, 02/12/2026 - 11:59

Courtesy: Ambient Scientific

Most explanations of edge computing hardware talk about devices instead of architecture. They list sensors, gateways, servers and maybe a chipset or two. That’s useful for beginners, but it does nothing for someone trying to understand how edge systems actually work or why certain designs succeed while others bottleneck instantly.

If you want the real story, you have to treat edge hardware as a layered system shaped by constraints: latency, power, operating environment and data movement. Once you look at it through that lens, the category stops feeling abstract and starts behaving like a real engineering discipline.

Let’s break it down properly.

What edge hardware really is when you strip away the buzzwords

Edge computing hardware is the set of physical computing components that execute workloads near the source of data. This includes sensors, microcontrollers, SoCs, accelerators, memory subsystems, communication interfaces and local storage. It is fundamentally different from cloud hardware because it is built around constraints rather than abundance.

Edge hardware is designed to do three things well:

  1. Ingest data from sensors with minimal delay
  2. Process that data locally to make fast decisions
  3. Operate within tight limits for power, bandwidth, thermal capacity and physical space

If those constraints do not matter, you are not doing edge computing. You are doing distributed cloud.

This is the part most explanations skip. They treat hardware as a list of devices rather than a system shaped by physics and environment.

The layers that actually exist inside edge machines

The edge stack has four practical layers. Ignore any description that does not acknowledge these.

  1. Sensor layer: Where raw signals are produced. This layer cares about sampling rate, noise, precision, analogue front ends and environmental conditions.
  2. Local compute layer: Usually MCUs, DSP blocks, NPUs, embedded SoCs or low-power accelerators. This is where signal processing, feature extraction and machine learning inference happen.
  3. Edge aggregation layer: Gateways or industrial nodes that handle larger workloads, integrate multiple endpoints or coordinate local networks.
  4. Backhaul layer: Not cloud. Just whatever communication fabric moves selective data upward when needed.

These layers exist because edge workloads follow a predictable flow: sense, process, decide, transmit. The architecture of the hardware reflects that flow, not the other way around.

Why latency is the first thing that breaks and the hardest thing to fix

Cloud hardware optimises for throughput. Edge hardware optimises for reaction time.

Latency in an edge system comes from:

  1. Sensor sampling delays
  2. Front-end processing
  3. Memory fetches
  4. Compute execution
  5. Writeback steps
  6. Communication overhead
  7. Any DRAM round-trip
  8. Any operating system scheduling jitter

If you want low latency, you design hardware that avoids round-trips to slow memory, minimises driver overhead, keeps compute close to the sensor path and treats the model as a streaming operator rather than a batch job.
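To make the stack-up concrete, here is a toy latency budget in Python. Every per-stage number is an invented placeholder, not a measurement of any real device; the sketch only illustrates that worst-case latency is the sum of the stages listed above, and that a single class of off-chip memory round-trips can dominate everything else.

```python
# Toy latency budget for one edge inference pass.
# All per-stage delays are illustrative assumptions in microseconds.
dram_bound_pipeline = {
    "sensor_sampling": 50.0,
    "front_end_processing": 20.0,
    "weight_fetches_dram": 400.0,  # off-chip round-trips dominate
    "compute": 80.0,
    "writeback_and_io": 10.0,
}

sram_local_pipeline = {
    "sensor_sampling": 50.0,
    "front_end_processing": 20.0,
    "weight_fetches_sram": 15.0,   # weights kept on-chip
    "compute": 80.0,
    "writeback_and_io": 10.0,
}

def total_latency_us(stages: dict) -> float:
    # Worst-case reaction time is the sum of every stage in the path.
    return sum(stages.values())

print(f"DRAM-bound pipeline: {total_latency_us(dram_bound_pipeline):.0f} us")
print(f"SRAM-local pipeline: {total_latency_us(sram_local_pipeline):.0f} us")
```

With these placeholder numbers, moving the weight fetches on-chip cuts the budget from 560 µs to 175 µs without touching the compute stage at all.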

This is why general-purpose CPUs almost always fail at the edge. Their strengths do not map to the constraints that matter.

Power budgets at the edge are not suggestions; they are physics

Cloud hardware runs at hundreds of watts. Edge hardware often gets a few milliwatts, sometimes even microwatts.

Power is consumed by:

  1. Sensor activation
  2. Memory access
  3. Data movement
  4. Compute operations
  5. Radio transmissions

Here is a simple table with the numbers that actually matter.

Operation                               Approx. energy cost
One 32-bit memory access from DRAM      High tens to hundreds of pJ
One 32-bit memory access from SRAM      Low single-digit pJ
One analogue in-memory MAC              Under 1 pJ effective
One radio transmission                  Orders of magnitude higher than compute

These numbers already explain why hardware design for the edge is more about architecture than brute force performance. If most of your power budget disappears into memory fetches, no accelerator can save you.
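A back-of-envelope calculation makes the point concrete. The sketch below uses order-of-magnitude costs consistent with the table, applied to a hypothetical model of two million MACs with one operand fetch per MAC; all workload numbers are assumptions chosen for illustration.

```python
# Energy split for one inference, using order-of-magnitude costs
# consistent with the table above. Workload figures are assumptions.
PJ_PER_DRAM_ACCESS = 100.0  # "high tens to hundreds of pJ"
PJ_PER_SRAM_ACCESS = 2.0    # "low single-digit pJ"
PJ_PER_MAC = 0.8            # analogue in-memory MAC, "under 1 pJ"

N_MACS = 2_000_000          # hypothetical small model
N_FETCHES = N_MACS          # assume one operand fetch per MAC

def inference_energy_uj(pj_per_fetch: float) -> float:
    # Total energy in microjoules: compute plus data movement.
    return (N_MACS * PJ_PER_MAC + N_FETCHES * pj_per_fetch) / 1e6

print(f"DRAM-resident operands: {inference_energy_uj(PJ_PER_DRAM_ACCESS):.1f} uJ")
print(f"SRAM-resident operands: {inference_energy_uj(PJ_PER_SRAM_ACCESS):.1f} uJ")
```

In the DRAM-resident case, roughly 99 percent of the energy goes to fetches rather than arithmetic, which is exactly the failure mode described above.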

Data movement: the quiet bottleneck that ruins most designs

Everyone talks about computing. Almost no one talks about the cost of moving data through a system.

In an edge device, the actual compute is cheap. Moving data to the compute is expensive.

Data movement kills performance in three ways:

  1. It introduces latency
  2. It drains power
  3. It reduces compute utilisation

Many AI accelerators underperform at the edge because they rely heavily on DRAM. Every trip to external memory cancels out the efficiency gains of parallel compute units. When edge deployments fail, this is usually the root cause.

This is why edge hardware architecture must prioritise:

  1. Locality of reference
  2. Memory hierarchy tuning
  3. Low-latency paths
  4. SRAM-centric design
  5. Streaming operation
  6. Compute in memory or near memory

You cannot hide a bad memory architecture under a large TOPS number.

Architectural illustration: why locality changes everything

To make this less abstract, it helps to look at a concrete architectural pattern that is already being applied in real edge-focused silicon. This is not a universal blueprint for edge hardware, and it is not meant to suggest a single “right” way to build edge systems. Rather, it illustrates how some architectures, including those developed by companies like Ambient Scientific, reorganise computation around locality by keeping operands and weights close to where processing happens. The common goal across these designs is to reduce repeated memory transfers, which directly improves latency, power efficiency, and determinism under edge constraints.

Figure: Example of a memory-centric compute architecture, similar to approaches used in modern edge-focused AI processors, where operands and weights are kept local to reduce data movement and meet tight latency and power constraints.

How real edge pipelines behave, instead of how diagrams pretend they behave

Edge hardware architecture exists to serve the data pipeline, not the other way around. Most workloads at the edge look like this:

  1. The sensor produces raw data
  2. Front end converts signals (ADC, filters, transforms)
  3. Feature extraction or lightweight DSP
  4. Neural inference or rule-based decision
  5. Local output or higher-level aggregation

If your hardware does not align with this flow, you will fight the system forever. Cloud hardware is optimised for batch inputs. Edge hardware is optimised for streaming signals. Those are different worlds.
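As a thought experiment, that five-step flow can be mimicked with chained streaming stages. The sketch below is a toy Python illustration with made-up signal values and thresholds; its only purpose is to show each stage consuming one sample at a time, the way edge pipelines do, instead of waiting for a batch.

```python
# Toy streaming pipeline mirroring the five-step flow above.
# Signal, filter constants and threshold are illustrative assumptions.
import math

def sensor():                        # 1. raw signal source
    for n in range(1000):
        yield math.sin(0.05 * n) + 0.1 * ((n * 37) % 7 - 3)

def front_end(samples):              # 2. convert/filter (simple smoothing)
    state = 0.0
    for x in samples:
        state = 0.9 * state + 0.1 * x
        yield state

def features(samples, window=16):    # 3. lightweight feature extraction
    buf = []
    for x in samples:
        buf.append(x)
        if len(buf) == window:
            yield sum(abs(v) for v in buf) / window  # mean magnitude
            buf.clear()

def decide(feats, threshold=0.5):    # 4. rule-based decision
    for f in feats:
        yield "alert" if f > threshold else "ok"

# 5. local output: consume the stream without materializing a batch
alerts = sum(v == "alert" for v in decide(features(front_end(sensor()))))
print(f"windows flagged: {alerts}")
```

Nothing in the chain ever holds more than one window of samples, which is the property that lets the same logic run in a few kilobytes of on-chip memory.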

This is why classification, detection and anomaly models behave differently on edge systems compared to cloud accelerators.

The trade-offs nobody escapes, no matter how good the hardware looks on paper

Every edge system must balance four things:

  1. Compute throughput
  2. Memory bandwidth and locality
  3. I/O latency
  4. Power envelope

There is no perfect hardware. Only hardware that is tuned to the workload.

Examples:

  1. A vibration monitoring node needs sustained streaming performance and sub-millisecond reaction windows
  2. A smart camera needs ISP pipelines, dedicated vision blocks and sustained processing under thermal pressure
  3. A bio signal monitor needs to be always in operation with strict microamp budgets
  4. A smart city air node needs moderate computing but high reliability in unpredictable conditions

None of these requirements match the hardware philosophy of cloud chips.

Where modern edge architectures are headed, whether vendors like it or not

Modern edge workloads increasingly depend on local intelligence rather than cloud inference. That shifts the architecture of edge hardware toward designs that bring compute closer to the sensor and reduce memory movement.

Compute-in-memory approaches, mixed-signal compute blocks, and tightly integrated SoCs are emerging because they solve edge constraints more effectively than scaled-down cloud accelerators.

You don’t have to name products to make the point. The architecture speaks for itself.

How to evaluate edge hardware like an engineer, not like a brochure reader

Forget the marketing lines. Focus on these questions:

  1. How many memory copies does a single inference require?
  2. Does the model fit entirely in local memory?
  3. What is the worst-case latency under continuous load?
  4. How deterministic is the timing under real sensor input?
  5. How often does the device need to activate the radio?
  6. How much of the power budget goes to moving data?
  7. Can the hardware operate at environmental extremes?
  8. Does the hardware pipeline align with the sensor topology?

These questions filter out 90 per cent of devices that call themselves edge capable.

The bottom line: if you don’t understand latency, power and data movement, you don’t understand edge hardware

Edge computing hardware is built under pressure. It does not have the luxury of unlimited power, infinite memory or cool air. It has to deliver real-time computation in the physical world where timing, reliability and efficiency matter more than large compute numbers.

If you understand latency, power and data movement, you understand edge hardware. Everything else is an implementation detail.

The post The Architecture of Edge Computing Hardware: Why Latency, Power and Data Movement Decide Everything appeared first on ELE Times.

Edge AI in a DRAM shortage: Doing more with less

EDN Network - Thu, 02/12/2026 - 11:59

Memory is having a difficult year. As manufacturers prioritize DDR5 and high-bandwidth memory (HBM) for data centers and large-scale AI workloads, availability has tightened and costs have risen sharply: up to 3–4x compared to Q3 2025 levels, and market signals suggest the peak has not yet been reached.

Even hyperscalers—typically at the frontline—are reportedly receiving only about 70% of their allocated volumes, and analysts expect tight conditions to persist well into 2026 and possibly even 2027.

The strain isn’t evenly distributed, with the steepest price hikes and longest lead times concentrated in higher-capacity modules. Those components sit directly in the path of cloud infrastructure demand, and their pricing reflects it. On the other hand, lower-capacity modules (1-2 GB) have remained accessible and far more stable.

This trend is now influencing how teams think about system design. AI workloads built around large memory footprints now run into procurement challenges; systems engineered to operate within modest memory baselines avoid both the price spikes and the uncertainty. The outcome is important: in a shortage, architecture built for efficiency gives teams more strategic freedom compared to architectures built for abundance.

The most effective solution: DRAM-less AI accelerator

In a constrained memory market, the most robust solution is also the simplest: remove the dependency on external DRAM entirely. Take the case of Hailo-8 and Hailo-8L AI accelerators. By keeping the full inference pipeline on-chip, Hailo-8/8L eliminate the most expensive and supply-constrained component in the system.

In practical terms, avoiding DRAM can reduce the bill of materials by up to $100 per device, while also improving power efficiency, latency, and system reliability. Not every AI application can avoid DRAM, though.

Generative AI workloads inherently require more memory, and systems that run them will continue to rely on external DRAM. But even in this case, memory constraints strongly favor moving inference closer to the edge.

Running generative AI on the edge allows teams to work with smaller, domain-specific models rather than large, general-purpose ones designed for the cloud. Smaller models translate directly into smaller DRAM requirements, reducing cost, easing procurement, and improving power efficiency. This is where edge-focused accelerators come into play, enabling efficient generative AI inference while keeping memory footprints as lean as possible.

Privacy and latency have long shaped the case for running intelligence on the device. In 2025, another factor cemented it: the expectation that generative AI simply be there. Users now rely on transcription, summarization, audio cleanup, translation, and basic reasoning, often with no tolerance for startup delays or network dependency.

Recent cloud outages from AWS, Azure and Cloudflare underscored how fragile cloud-only assumptions can be. When the networks faced disruptions, everyday features across consumer apps and enterprise workflows failed. Even brief interruptions highlighted how a single infrastructure dependency can take down tools that users now rely on dozens of times a day.

As AI moves deeper into everyday workflows and users expect agentic AI capabilities to be available instantly, a hybrid approach proves far more resilient. Keep frequently used intelligence local, either on the device or in a nearby gateway, while using the cloud for heavier or less frequent tasks. And crucially, when models are small enough to operate within 1-2 GB of memory, that hybrid approach becomes far easier to implement using memory configurations that are still readily sourced.

Small models change the equation

Until recently, generative AI required the memory and compute scale of the cloud. A new class of small language models (SLMs) and compact vision language models (VLMs) now deliver strong instruction following, reliable tool use, and competitive benchmark performance at a fraction of the parameters.

Releases like IBM’s Granite 4.0 Nano line demonstrate how far efficient architectures have come. These models show that some generative AI tasks and applications no longer need massive, expensive system memory—they need well-defined domains, optimized inference paths, and efficient pre- and post-processing.
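A quick sizing sketch shows why such models land inside the memory tiers discussed here. The arithmetic below assumes weight-dominated memory use plus a flat allowance for KV cache and activations; the model sizes, quantization levels, and the 256 MB allowance are illustrative assumptions, not vendor specifications.

```python
# Rough DRAM footprint of a quantized small language model:
# footprint ~= weights + flat allowance for KV cache and activations.
# Model sizes and the 256 MB allowance are illustrative assumptions.
def footprint_gb(params_billion: float, bits_per_weight: int,
                 overhead_mb: float = 256.0) -> float:
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return (weight_bytes + overhead_mb * 1e6) / 1e9

for params, bits in [(1.0, 8), (1.0, 4), (3.0, 4)]:
    print(f"{params:.0f}B params @ {bits}-bit: ~{footprint_gb(params, bits):.2f} GB")
```

Even the 3B-parameter case stays under 2 GB once quantized to 4 bits, which is what keeps these models inside the 1-2 GB memory class that remains easy to source.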

For hardware teams, this evolution has many practical benefits. Smaller models reduce the “memory tax” that has been baked into AI design for years. When an entire intelligence pipeline can operate in 1-2 GB of DRAM, several constraints loosen simultaneously:

  • Costs fall as systems avoid the inflated pricing of high-capacity DRAM.
  • Supply-chain risk drops as lower-capacity memory chips remain easier to procure.
  • Power consumption improves because smaller models with hardware-assisted offload (NPU or AI accelerator) run cooler and more efficiently.
  • System reliability increases as local inference keeps essential features online even during network outages.

An AI architecture designed for efficiency rather than abundance fits squarely within the ethos of edge computing. Many high-value agentic AI tasks—summarizing a conversation, describing an image, or translating speech—do not require massive models. In narrow domains, compact models can deliver faster, more private and consistent results because they operate with fewer unknowns.

The path forward

If the DRAM shortage proves anything, it’s that the most resilient AI systems are the ones designed around constraints, not excess. Teams are re-evaluating assumptions about model size, memory baselines, and what “good enough” looks like for common tasks. They’re recognizing that domain-specific intelligence often performs better than brute-force scale—especially in environments that demand consistency, privacy, and low power draw.

Edge AI fits naturally within this moment. Its memory profile lines up with the DRAM capacities that remain accessible, and its deployment model brings stability to the tasks users rely on most. As supply tightness continues, organizations that invest in leaner model design and hybrid deployment strategies will be better positioned to deliver stable, responsive AI without absorbing high memory costs.

Avi Baum is chief technology officer (CTO) and co-founder of Hailo.

Special Section: AI Design

The post Edge AI in a DRAM shortage: Doing more with less appeared first on EDN.

Govt Bets Big on Chips: India Semiconductor Mission 2.0 Gets ₹1,000 Crore Funding

ELE Times - Thu, 02/12/2026 - 11:38

In a significant push for the nation’s tech ambitions, the Government of India has earmarked Rs. 1,000 crores for the India Semiconductor Mission (ISM) 2.0 in the Union Budget 2026-27.

The new funding aims to supercharge domestic production, with investments slated for semiconductor manufacturing equipment, local IP development, and supply chain fortification both within India and on the international stage.

This upgraded version of the ISM will focus on industry-driven research and the refinement of training centres to enhance technology advancement, thereby fostering a skilled workforce for the future growth of the industry.

With India aiming for self-reliance through boosting domestic manufacturing in multiple sectors, the need for semiconductor manufacturing has exponentially increased.

Recently, Qualcomm taped out its most advanced 2nm chips, led by Indian engineering teams. This is a major boost to Indian semiconductor aspirations.

The first phase of the ISM was supported by a Rs. 76,000 crores incentive scheme, with ten projects worth Rs. 1.60 lakh crores approved by December 2025, covering the entire manufacturing spectrum from fabrication units to assembly, packaging, and testing infrastructure.

By: Shreya Bansal, Sub-editor

The post Govt Bets Big on Chips: India Semiconductor Mission 2.0 Gets ₹1,000 Crore Funding appeared first on ELE Times.

UK–Bulgaria collaboration developing Green Silicon Carbide wafer factory

Semiconductor today - Thu, 02/12/2026 - 11:35
Under the UK–Bulgaria Strategic Partnership, the UK Science and Technology Network (STN) and the Department for Business and Trade (DBT) have connected UK expertise with Bulgaria’s ambitions under the EU Chips Act 2023 and its fast-growing auto electronics sector. The Science and Technology Network has served as a bridge between government, academia and industry in the UK and Bulgaria – strengthening mutual understanding and unlocking opportunities for collaboration...

7 Segment Display Decoder

Reddit:Electronics - Thu, 02/12/2026 - 08:57
7 Segment Display Decoder

Here’s a decoder I made in my class! It takes the binary inputs from the four switches and uses a seven-segment display to turn them into decimal numbers. Made with a 7447 CMOS IC.

I know it’s very disorganized and I could certainly get better at saving space. I’m still new to building circuits, but I still think it’s really cool!

submitted by /u/Logical_Gate1010
[link] [comments]

Microchip and Hyundai Collaborate, Exploring 10BASE-T1S SPE for Future Automotive Connectivity 

ELE Times - Thu, 02/12/2026 - 08:24
Microchip Technology announced a collaboration with Hyundai Motor Group (HMG) to explore the adoption of advanced in-vehicle network solutions based on 10BASE-T1S Single Pair Ethernet (SPE) technology. The effort is intended to support the development of more efficient, reliable and scalable vehicle architectures that meet the evolving demands of future mobility.

The rapid advancement of Advanced Driver-Assistance Systems (ADAS) and connected vehicle features is driving the need for robust, high-performance in-vehicle networks. SPE serves as a foundational technology for modern automotive architectures, enabling seamless connectivity across systems. By reducing the need for bridging between multiple standard and proprietary communication buses, SPE significantly simplifies wiring, lowers system costs and streamlines network integration.

As part of this collaboration, Hyundai Motor Group is working with Microchip to integrate Microchip’s 10BASE-T1S solutions into its future vehicle platforms, particularly in high-growth areas such as electric vehicles, autonomous driving, and smart mobility. The collaboration also includes access to Microchip’s technical support and early product samples to help accelerate time to market and optimize system performance.

“As the automotive industry transitions to software-defined vehicles, the need for high-performance and scalable in-vehicle networks has never been greater. Our comprehensive portfolio of Single Pair Ethernet hardware and software solutions enables customers to reduce cost, risk, and time to market,” said Matthias Kaestner, corporate vice president of Microchip’s automotive, data center and networking business. “Our collaboration with Hyundai will support the development of next-generation in-vehicle network solutions that address the mobility needs of tomorrow.”

“Partnering with Microchip enables us to leverage their Ethernet expertise to support next-gen vehicle connectivity,” said Hyundai Motor Group. “HMG will accelerate the adoption of 10BASE-T1S technology and enable the next generation of intelligent, connected vehicles.”

10BASE-T1S technology supports multi-drop Ethernet communication over a single twisted pair, extending in-vehicle networking to the edge—connecting devices, actuators and sensors with greater efficiency and cost-effectiveness.

The post Microchip and Hyundai Collaborate, Exploring 10BASE-T1S SPE for Future Automotive Connectivity  appeared first on ELE Times.

Wolfspeed accelerates AI-powered manufacturing and operations with Snowflake

Semiconductor today - Wed, 02/11/2026 - 22:28
Wolfspeed Inc of Durham, NC, USA — which makes silicon carbide (SiC) materials and power semiconductor devices — is expanding its use of US-based Snowflake Inc to accelerate manufacturing efficiency and operational excellence as it scales production to meet growing market demand...

My first proper inverter bridge with CM200 IGBT bricks

Reddit:Electronics - Wed, 02/11/2026 - 21:52
My first proper inverter bridge with CM200 IGBT bricks

Thinking of using it for either an induction heater or a dual resonant solid state tesla coil, but next up will be having to deal with annoying gate drive stuff first.

submitted by /u/ieatgrass0
[link] [comments]

Self-oscillating sawtooth generator spans 5 decades of frequencies

EDN Network - Wed, 02/11/2026 - 15:00

There are many ways of generating analog sawtooth waveforms with oscillating circuits. Here’s a method that employs a single supply voltage rail to produce a buffered signal whose frequency can be varied over a range from 10Hz to 1MHz (Figure 1).

Figure 1 The sawtooth output waveform is the signal “saw” available at the output of op amp U1a. Its frequency is set by the value of resistor R6, which can vary from 120 Ω to 12 MΩ.

Wow the engineering world with your unique design: Design Ideas Submission Guide

U3, powered through R5, uses Q2 and R6 to create a constant current source. U3 enforces a constant voltage Vref of 1.2 V between its V+ and FB pins. Q2 is a high-beta NPN transistor that passes virtually all of R6’s current Vref/R6 through its collector to charge C3 with a constant current, producing the linear ramp portion of this ground-referenced sawtooth.

Op-amp U1 buffers this signal and applies it to one input of comparator U2a. The voltage on the comparator’s other input causes its output to transition low when the sawtooth rises to 1 V. U2a, R1, Q1, R8, C1, and U2b produce a 100 ns one-shot signal at the output of U2b, which drives the gate of M1 high to rapidly discharge C3 to ground.

The frequency of the waveform is 1.2/(R6 × C3) Hz. With U3’s Vref available in tolerances as low as 0.2% and a 0.1% tolerance for R6, the circuit’s overall accuracy is generally limited by a 1%-at-best C3 combined with the parasitic capacitances of M1.
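As a quick design aid, the frequency formula can be inverted to choose R6 for a target frequency. The short sketch below assumes C3 = 10 nF, a value implied by the article’s R6/frequency pairs (120 Ω gives 1 MHz) rather than stated explicitly.

```python
# Pick R6 for a target sawtooth frequency from f = 1.2 / (R6 * C3).
VREF = 1.2   # volts, U3's reference
C3 = 10e-9   # farads; inferred from the stated R6/frequency pairs

def r6_for_frequency(f_hz: float) -> float:
    # Rearranged: R6 = Vref / (f * C3)
    return VREF / (f_hz * C3)

for f in (10, 100, 1e3, 10e3, 100e3, 1e6):
    print(f"{f:>9,.0f} Hz -> R6 = {r6_for_frequency(f):>12,.0f} ohm")
```

The computed values reproduce the R6 choices shown in Figures 2 through 7.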

Waveforms at several different frequencies are seen in Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, and Figure 7.

Figure 2 10 Hz sawtooth for an R6 of 12 MΩ.

Figure 3 100 Hz sawtooth for an R6 of 1.2 MΩ.

Figure 4 1 kHz sawtooth for an R6 of 120 kΩ.

Figure 5 10 kHz sawtooth for an R6 of 12 kΩ.

Figure 6 100 kHz sawtooth for an R6 of 1.2 kΩ.

Figure 7 1 MHz sawtooth for an R6 of 120 Ω.

Figures 3 and 4 show near-ideal sawtooth waveforms. But Figure 2, with its 12 MΩ R6, shows that even when “off,” M1 has a non-infinite drain-source resistance, which contributes to the non-linearity of the ramp. It’s also worth noting that although U3’s FB pin typically pulls less than 100 nA, that’s the current that the 12 MΩ R6 is intended to source, so waveform frequency accuracy for this value of resistor is problematic.

Figures 5, 6, and 7 show progressive increases in the effects of the 100 ns discharge time for C3 and of the finite recovery time of the op amp when its output saturates near the ground rail.

These circuits do not require any matched-value components. Accuracies are improved by the use of precision versions of R4, R6, R7, and U3, but the circuit’s operation does not necessitate these.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

Related Content

The post Self-oscillating sawtooth generator spans 5 decades of frequencies appeared first on EDN.

Full circle current loops: 4mA-20mA to 0mA-20mA

EDN Network - Wed, 02/11/2026 - 15:00

A topic that has recently drawn a lot of interest (!) and no fewer than four separate design articles (!!) here in Design Ideas is the conversion of 0 to 20mA current sources into industrial-standard 4mA to 20mA current loop signals. Here’s the list—so far—in reverse chronological order. Apologies if (as is quite possible) I’ve missed one—or N.

With so much energy already devoted to that one side of this well-tossed coin, it seemed only fair to pay a little attention to the flip side of the conversion function coin. Figure 1 shows the result. Its (fairly) simple circuit performs a precision conversion from 4-20mA to 0-20mA.  Here’s how it works.

Figure 1 The flip side of the current conversion coin: Iout = (Iin × R1 – 1.24 V)/R2 = 1.25 × (Iin – 4 mA).

Wow the engineering world with your unique design: Design Ideas Submission Guide

The core of the circuit is the voltage Vin = Iin × R1 = 1.24 V to 6.20 V developed by the 4-20mA input working into R1 and sensed by the Vref input of Z1. The principle in play is discussed in Figure 1 of “Precision programmable current sink.”

The resulting Z1 cathode current is (Iin × R1 – Vref)/R2 = 0 to 20 mA as Iin increases from 4 mA to 20 mA. Or it would be if not for the phenomenon of Vref modulation by Z1 cathode voltage. The D1, Q2 cascode pair greatly attenuates this effect by holding Z1’s cathode voltage near zero and constant. It also extends Z1’s cathode voltage limit from an inadequate 7 V to the 30 V capability of Q2. Of course, a different choice for Q2 could extend it further.  But if 30 V will do, the >1000 typical beta of the 5089 is good for accuracy.
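For anyone who wants to sanity-check the transfer function, here is a short numerical sketch. The article does not list R1 and R2, so the values below are back-calculated from the stated identity Iout = 1.25 × (Iin – 4 mA) and should be treated as assumptions.

```python
# Numeric check of Iout = (Iin*R1 - Vref)/R2 against 1.25*(Iin - 4 mA).
VREF = 1.24        # volts, TLV431 reference
R1 = VREF / 4e-3   # 310 ohm: zeros the output at Iin = 4 mA (assumed)
R2 = R1 / 1.25     # 248 ohm: sets the 1.25 slope (assumed)

def i_out(i_in: float) -> float:
    return (i_in * R1 - VREF) / R2

for ma in (4, 8, 12, 16, 20):
    print(f"Iin = {ma:2d} mA -> Iout = {i_out(ma * 1e-3) * 1e3:5.2f} mA")
```

Mapping the 16 mA input span onto a 20 mA output span necessarily requires a gain above unity, which is where the 1.25 factor comes from.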

Current booster Q1 extends Z1’s 15 mA max current limit while also reducing thermal effects. The net result holds Z1’s maximum power dissipation to single-digit milliwatts.

With 0.1% precision R1 and R2 and the ±0.5% tolerance TLV431B, better than 1% accuracy can be achieved with the untrimmed Figure 1 circuit. If this level of precision is still inadequate, manual post-assembly trim can be added with just two extra parts, as shown in Figure 2. Calibration is achieved with one pass.

  1. Set input current to 4.00 mA
  2. Adjust R4 for output current of ~50 µA.  Note this is only 0.25% of full-scale, so don’t worry about hitting it exactly. You probably won’t.
  3. Set input current to 20 mA
  4. Adjust R5 for an output current of 20 mA

Figure 2 R4 and R5 trims allow post-assembly precision optimization.

Input max overhead voltage is 8 V, output overhead is 9 V. Worst case (resistor limited) fault current with 24 V supply = 80 mA.

Readers may notice a capacitor labeled “Ca” in Figures 1 and 2. This is the “Ashu capacitance” that Design Idea (DI) contributor and current source circuitry expert Ashutosh Sapre discovered to be essential for frequency stability of the cascode topology. Thanks, Ashu!

And a closing note. Since the output scale factor is set by and inversely proportional to R2, if any full-scale other than 20 mA is desired, it’s easily achieved by an appropriate choice for R2.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content

The post Full circle current loops: 4mA-20mA to 0mA-20mA appeared first on EDN.

HexaTech launches 3”-diameter aluminium nitride substrate

Semiconductor today - Wed, 02/11/2026 - 13:56
HexaTech Inc of Morrisville, NC, USA (a subsidiary of Stanley Electric Co Ltd of Tokyo, Japan) has announced the immediate production release of its new 3-inch (76.2mm)-diameter single-crystal aluminium nitride (AlN) substrate product...

Are non-magnetic connectors in your future?

EDN Network - Wed, 02/11/2026 - 13:55

Many years ago, I overheard an engineer, with whom I had some project contact, make a casual remark about an RF connector situation, asking “what’s the big deal, it’s just a connector?” That statement was enough to make me wonder about his overall professional judgment.

Connectors may look simple but they are not, of course, as they must combine electrical requirements with mechanical issues and incorporate suitable materials for both body and contact. The materials and platings of their contacts are especially intricate as they blend metallurgical chemistry with other factors such as manufacturability, flexibility, resilience, and resistance objectives.

In recent years, there’s been an added demand on connectors: the need to be non-magnetic. Technically, this means the connector’s materials exhibit extremely low magnetic susceptibility, as they neither generate magnetic fields nor interact with external ones in any significant way.

Note that the term “magnetic connector” is also used for a connector/cable that relies on a magnetic force to both make and maintain a connection. In this arrangement, the plug and the socket have corresponding magnets or magnetic faces to make a self-aligning connection. They are designed for quick, easy, and, often, “break-away” disconnection to protect ports from wear and damage. But the magnetic/non-magnetic connectors here are not these.

Is it easy to visually distinguish a magnetic connector from a non-magnetic one? Maybe, maybe not. Some non-magnetic connectors have a different surface sheen or glow compared to conventional connectors, while others have different color (Figure 1). Of course, some magnetic ones also have a different color depending on the finish, so it’s not a certainty. Fortunately, magnetism is easy enough to test.

Figure 1 These two RF connectors are non-magnetic; other than their color, they look like magnetic connectors. So, color alone is not a definitive indicator. Source: Rosenberger Group

Even minute amounts of magnetic “interference” can have significant consequences in high-frequency or magnetically sensitive systems. Therefore, the objective of non-magnetic component design is to make these parts “magnetically invisible,” so they don’t distort the surrounding field or interfere with nearby sensors or measurement instruments.

This is especially crucial in environments where magnetic fields play an active role, such as MRI systems, particle accelerators, and quantum computers:

  • In MRI systems, magnetic components can distort the magnetic field lines, leading to degraded system performance, measurement inaccuracies, and artifacts in imaging results. In contrast, non-magnetic components minimize these disturbances by maintaining field uniformity.
  • In precision RF and microwave metrology, magnetic components can bias sensor readings or create unpredictable phase errors. For example, a magnetic connector near a current probe could influence the magnetic coupling, altering the measured waveform.
  • In systems ranging from scanning electron microscopes, where magnetic fields steer and focus the electron beam, to supercolliders, where superconducting magnets keep particles centered as they are accelerated, the magnetic field must be precisely shaped and controlled.
  • In the “hot” field of quantum computing, the qubits—the quantum bits that carry computational information—are extremely sensitive to external magnetic fields. Even minor magnetic impurities in nearby materials can cause decoherence, leading to computational errors or reduced qubit lifetime.

Non-magnetic connectors provide low-loss signal transmission and maintain stable performance across temperature cycles—without contributing to unwanted magnetic noise. In these cryogenic systems, even small amounts of magnetic interaction could invalidate experimental results.

A non-magnetic connector will typically have a low magnetic susceptibility of less than 10⁻⁵ (think back to Electromagnetics 101: susceptibility is a dimensionless ratio) and a magnetic field strength of less than 0.1 milligauss. That’s at least one to two orders of magnitude less than standard connectors.

Making the non-magnetic connector

It may seem that all that’s required to make a non-magnetic connector is to use non-magnetic material such as copper. If only it were that easy, as non-magnetic materials have very different mechanical and electrical attributes, which affect connector performance and consistency.

A connector has three elements: the body, usually made of nylon or an engineered plastic and not a magnetic consideration; the contact or terminal pin, usually phosphor bronze, beryllium copper, or brass; and the surface plating(s), which can be copper, nickel, gold, tin, silver, palladium, or other metal.

The plating is the largest challenge, as it’s critical to long-term performance of the contact surfaces. The magnetic metals that are the concern here are iron, cobalt, and nickel, notes the Samtec video “Exploring Non-Magnetic Interconnects” (Figure 2).

Figure 2 Trouble zone in the periodic table: these three elements are the source of most of the magnetic problems. Solid-state physics analysis explains why this is so. Source: Samtec Inc.

The simple solution would be to avoid using these metals and instead use brass or aluminum for connector bodies with silver or gold plating. However, that’s often undesirable for performance reasons.

There are other options. For example, Samtec uses a nickel-phosphorus electrodeposited coating that works as a barrier layer between the copper-alloy base metal and subsequent outer layers. This barrier is needed to prevent migration of the copper to the surface-layer gold or tin of the connector pins, which would degrade the performance of that layer.

But wait—isn’t nickel one of the troublesome metals? Yes, but that’s where metallurgists bring some technical “magic” to the story. By adding phosphorus to the nickel, the ferromagnetism associated with high-purity nickel is reduced. This is because the added phosphorus interrupts the nickel’s atomic dipoles, causing the metal to become non-magnetic.

This is not the only option for going non-magnetic. Palladium provides a non-magnetic layer but is a costly alternative to nickel. Associated fasteners can be made of austenitic stainless steel (grades 304 or 316), which is non-magnetic due to its unique crystalline structure.

Other possibilities are eliminating the nickel completely, though this requires thicker copper and gold layers to slow the migration; use of a copper/tin/zinc alloy (Cu/Sn/Zn) called Tri-M3 as a barrier layer; or use of nickel-tungsten (Ni/W—tradename Xtalics). The goal is to reduce the grain size to the nanoscale and so disrupt the alignment of the magnetic domains.

There are several ways to devise and fabricate non-magnetic connectors. All require pure materials, deep-physics insight, metallurgical expertise, and precise control of the production process. Assessing the non-magnetic characteristics involves sophisticated instrumentation to measure the magnetic permeability of the materials and connectors.

Each vendor has its own approach and a set of trade-offs regarding connector performance. Designers have many connector parameters to consider with respect to performance, solderability, number of mating cycles, supply-chain risk, and more.

The good news is that the increasing need for such connectors means they are no longer available only from one or two specialty suppliers. Nearly every manufacturer of RF connectors also offers non-magnetic versions, so users have many options for their connector needs and bill of materials.

What’s the price difference between magnetic and non-magnetic connectors? A quick, unscientific sampling showed that the non-magnetic ones were two to three times the price of their magnetic counterparts. It’s trivial to say that cost is a secondary concern in the applications where they are needed, but that is likely true.

Have you ever used non-magnetic connectors? Was the need for them identified in advance, or was it recognized after regular connectors were used, with problems identified and then linked to the magnetic connectors?

Certainly, the next time someone says, “it’s just a connector,” you can offer them firm evidence that’s not the case at all.

Related Content

The post Are non-magnetic connectors in your future? appeared first on EDN.

Microchip Extends its Edge AI Solutions for Development of Production-ready Applications using its MCUs & MPUs

ELE Times - Wed, 02/11/2026 - 13:14

A major next step for artificial intelligence (AI) and machine learning (ML) innovation is moving ML models from the cloud to the edge for real-time inferencing and decision-making applications in today’s industrial, automotive, data center and consumer Internet of Things (IoT) networks. Microchip Technology has extended its edge AI offering with full-stack solutions that streamline the development of production-ready applications using its microcontrollers (MCUs) and microprocessors (MPUs) – the devices that are located closest to the many sensors at the edge that gather sensor data, control motors, trigger alarms and actuators, and more.

Microchip’s products are long-time embedded-design workhorses, and the new solutions turn its MCUs and MPUs into complete platforms for bringing secure, efficient and scalable intelligence to the edge. The company has rapidly built and expanded its growing, full-stack portfolio of silicon, software and tools that solve edge AI performance, power consumption and security challenges while simplifying implementation.

“AI at the edge is no longer experimental—it’s expected, because of its many advantages over cloud implementations,” said Mark Reiten, corporate vice president of Microchip’s Edge AI business unit. “We created our Edge AI business unit to combine our MCUs, MPUs and FPGAs with optimised ML models plus model acceleration and robust development tools. Now, the addition of the first in our planned family of application solutions accelerates the design of secure and efficient intelligent systems that are ready to deploy in demanding markets.”

Microchip’s new full-stack application solutions for its MCUs and MPUs encompass pre-trained and deployable models as well as application code that can be modified, enhanced and applied to different environments. This can be done either through Microchip’s embedded software and ML development tools or those from Microchip partners. The new solutions include:

  • Detection and classification of dangerous electrical arc faults using AI-based signal analysis
  • Condition monitoring and equipment health assessment for predictive maintenance
  • Facial recognition with liveness detection supporting secure, on-device identity verification
  • Keyword spotting for consumer, industrial and automotive command-and-control interfaces

Development Tools for AI at the Edge

Engineers can leverage familiar Microchip development platforms to rapidly prototype and deploy AI models, reducing complexity and accelerating design cycles. The company’s MPLAB X Integrated Development Environment (IDE) with its MPLAB Harmony software framework and MPLAB ML Development Suite plug-in provides a unified and scalable approach for supporting embedded AI model integration through optimised libraries. Developers can, for example, start with simple proof-of-concept tasks on 8-bit MCUs and move them to production-ready high-performance applications on Microchip’s 16- or 32-bit MCUs.

For its FPGAs, Microchip’s VectorBlox Accelerator SDK 2.0 AI/ML inference platform accelerates vision, Human-Machine Interface (HMI), sensor analytics and other computationally intensive workloads at the edge while also enabling training, simulation and model optimisation within a consistent workflow.

Other support includes training and enablement tools like the company’s motor control reference design featuring its dsPIC DSCs for data extraction in a real-time edge AI data pipeline, and others for load disaggregation in smart e-metering, object detection and counting, and motion surveillance. Microchip also helps solve edge AI challenges through complementary components that are required for product design and development. These include PCIe® devices that connect embedded compute at the edge and high-density power modules that enable edge AI in industrial automation and data centre applications.

The analyst firm IoT Analytics stated in its October 2025 market report that embedding edge AI capabilities directly into MCUs is among the top four industry trends, enabling AI-driven applications “…that reduce latency, enhance data privacy, and lower dependency on cloud infrastructure.” Microchip’s AI initiative reinforces this trend with its MCU and MPU platforms, as well as its FPGAs. Edge AI ecosystems increasingly require support for both software AI accelerators and integrated hardware acceleration on multiple devices across a range of memory configurations.

The post Microchip Extends its Edge AI Solutions for Development of Production-ready Applications using its MCUs & MPUs appeared first on ELE Times.

NUBURU completes first tranche of preferred equity restructuring

Semiconductor today - Wed, 02/11/2026 - 11:56
NUBURU Inc of Centennial, CO, USA — a developer of high-performance blue-laser technology and an emerging integrated defense & security platform provider — has completed the first tranche of a preferred equity restructuring transaction that materially simplifies its capital structure and reduces legacy balance-sheet overhang...

The Grid as Strategy: Powering India’s 2047 Transformation

ELE Times - Wed, 02/11/2026 - 11:44

by Varun Bhatia, Vice President – Projects and Learning Solutions, Electronics Sector Skills Council of India. 

As India approaches its centenary in 2047, the idea of a Viksit Bharat has shifted decisively from aspiration to obligation. A 30 trillion-dollar economy, globally competitive manufacturing, integrated logistics, and digital universality are no longer distant goals. They are policy commitments.

Yet beneath every ambition lies a foundational truth. Development runs on dependable power. No country has crossed into developed-nation status on unreliable electricity. In India’s case, the transmission grid is not a supporting actor in this transformation. It is the stage itself.

The Grid That Holds the Nation Together

This transition from access to assurance has been enabled by a quiet but extraordinary expansion of India’s transmission network. India’s national power transmission system has crossed 5 lakh circuit kilometers, supported by 1,407 GVA of transformation capacity. Since 2014, the network has grown by 71.6 percent, with the addition of 2.09 lakh circuit kilometers of transmission lines and 876 GVA of transformation capacity.

Integration at this scale has reshaped the energy landscape. The inter-regional power transfer capacity now stands at 1,20,340 megawatts, enabling electricity to move seamlessly across regions. This has successfully realized the vision of One Nation, One Grid, One Frequency and created one of the largest synchronized grids in the world.

This architecture is not merely technical. It is economic infrastructure. It allows energy to flow from resource-rich states to industrial corridors without friction, strengthening productivity, investment confidence, and national competitiveness.

From Electrification to Excellence

India’s first power-sector revolution was about access, and that mission is largely complete. Saubhagya connected 2.86 crore households, while DDUGJY achieved universal village electrification by 2018. These were historic milestones.

However, access is only the starting point. Developed economies operate on a higher standard where power is always available, always stable, and always scalable. In a Viksit Bharat, outages must be exceptions rather than expectations. Voltage fluctuations cannot be built into business models. An industrial unit in rural Assam must receive the same quality of supply as one operating in an export hub in Southeast Asia. Reliability has now become the true benchmark of progress.

Rural India: From Load Centre to Growth Partner

The impact of a strong transmission backbone is most visible in rural India. Average rural power supply has increased from 12.5 hours per day in 2014 to 22.6 hours in FY 2025. This improvement has fundamentally altered the economic potential of villages and small towns. Reliability is being reinforced by systemic reforms. Under the Revamped Distribution Sector Scheme, grid modernization has reduced national AT&C losses to 15.37 percent, improving the financial sustainability of electricity supply.

Digital tools are accelerating this shift. More than 4.76 crore smart meters have been installed nationwide, bringing transparency, efficiency, and real-time control to energy consumption. Targeted interventions continue to close the remaining gaps. The PM-JANMAN initiative is electrifying remote habitations of Particularly Vulnerable Tribal Groups, while PM-KUSUM is reshaping agricultural power by enabling reliable daytime electricity through solarization. With states tendering over 20 gigawatts of feeder-level solar capacity, farmers are increasingly becoming urjadatas, contributing power back to the grid.

Reliable transmission makes this participation possible. The tower standing in a farmer’s field is no longer just infrastructure. It is a direct connection to the national economy. With assured round-the-clock power, industries no longer need to cluster around congested urban centers. Cold chains, food processing units, automated MSMEs, and digital services can operate efficiently in Tier-2 and Tier-3 towns. This decentralization creates local employment, strengthens regional economies, and reduces migration pressures. In this model, rural India is no longer a subsidized consumer of power. It becomes a productive contributor to national growth.

Green Ambitions Need Grid Muscle

A Viksit Bharat must also be a sustainable Bharat. India’s commitment to achieving 500 gigawatts of non-fossil fuel capacity by 2030 reflects both climate responsibility and strategic foresight. Renewable energy, however, is geographically dispersed. Solar potential lies in deserts, wind along coastlines, and hydro resources in mountainous regions. Without a strong transmission backbone, clean energy remains stranded. The expanded grid, supported by investments under the Green Energy Corridor program, has become the central enabler of renewable integration. Strengthened inter-regional links ensure that clean power generated in remote areas can reach demand centers efficiently. This capability allows India to pursue growth without compromising its environmental commitments.

Resilience as National Security

Recent global energy shocks and climate-induced disruptions have reinforced one reality. Energy security is inseparable from national security. The grid of a developed India must therefore be resilient, intelligent, and adaptive. Smart Grids capable of self-healing, predictive maintenance, and advanced demand-response management are no longer optional. They are essential. Equally important is social resilience. Right-of-Way challenges require a partnership-driven approach. Landowners must be treated as stakeholders in national progress, with fair compensation and transparent processes that build trust and cooperation.

The Backbone of a Developed India

As India moves steadily toward 2047, development will be measured not only by economic output or industrial capacity, but by the consistency and quality of its power supply. Every kilometer of transmission line laid becomes a conduit for productivity. Every additional GVA of capacity strengthens energy security. The quiet hum of high-voltage lines signals a nation growing with confidence. Connecting Bharat is no longer about lighting homes. It is about powering aspirations, enabling enterprise, and securing India’s place as a self-reliant global force.

The transmission grid is not merely supporting the vision of Viksit Bharat. It is sustaining it.

The post The Grid as Strategy: Powering India’s 2047 Transformation appeared first on ELE Times.

Renesas licenses EPC’s low-voltage eGaN technology to complement its 650V+ portfolio

Semiconductor today - Wed, 02/11/2026 - 10:47
Efficient Power Conversion Corp (EPC) of El Segundo, CA, USA — which makes enhancement-mode gallium nitride on silicon (eGaN) power field-effect transistors (FETs) and integrated circuits for power management applications — has announced a comprehensive licensing agreement with Renesas Electronics Corp of Tokyo, Japan, a global supplier of semiconductor solutions and high-voltage GaN transistors...

Engineering the Future of High-Voltage Battery Management: Rohit Bhan on BMIC Innovation

ELE Times - Wed, 02/11/2026 - 09:58

ELE Times conducts an exclusive interview with Rohit Bhan, Senior Staff Electrical Engineer at Renesas Electronics America, discussing how advanced sensing, 120 V power conversion, ±5 mV precision ADCs, and ASIL D fault-handling capabilities are driving safer, more efficient, and scalable battery systems across industrial, mobility, and energy-storage applications.

Rohit Bhan has spent two decades advancing mixed-signal and system-level semiconductor design, with a specialization in AMS/DMS verification and battery-management architectures. Over the past year, he has expanded this foundation through significant contributions to high-voltage BMIC development, helping to push Renesas’ next generation of power-management solutions into new levels of accuracy, safety, and integration.

Rohit is highly regarded within Renesas and industry-wide for his ability to bridge detailed analog modeling, digital verification, and real-world application requirements. His recent work includes developing ±5 mV high-accuracy ADCs for precise cell monitoring, implementing an on-chip buck converter that reduces board complexity, and architecting 18-bit current-sensing solutions that enable more advanced state-of-charge and state-of-health analytics. He has also integrated microcontroller-driven safety logic into verification environments—supporting ASIL D-level fault detection and autonomous response—while contributing to Renesas’ first BMIC design.

Rohit’s expertise spans behavioral modeling, reusable verification environments, multi-cell chip operation, and stackable architectures for even higher cell counts. His end-to-end perspective—ranging from system definition and testbench development to customer engagement and product innovation—has made him a key contributor to Renesas’ battery-management roadmap. As the industry moves toward higher voltages, smarter analytics, and tighter functional-safety requirements, his work is helping shape the next wave of intelligent, reliable, and scalable BMIC platforms.

Here are the excerpts from the interaction:

ELE Times: Rohit, you recently helped deliver a multi-cell BMIC architecture capable of operating at high voltage. What were the most significant engineering hurdles in moving to a new process technology for the first time, and what does that enable for future high-voltage applications?

ROHIT BHAN: From a design perspective, key challenges included managing high-stress device margins (such as parasitic bipolar effects and field-plate optimization), defining robust protection strategies for elevated operating conditions, integrating higher-energy power domains, maintaining analog accuracy across very large common-mode ranges, and working through evolving process design kit maturity. From a verification standpoint, this required extensive coverage of extreme transient conditions (including electrical overstress, surge, and load-dump-like events), which drove expanded corner matrices, mixed-signal simulation complexity, and tight correlation between silicon measurements and models to close the accuracy loop and ensure specified performance.

Looking forward, these advances enable future high-energy applications with increased monitoring and protection headroom, simpler system-level implementations, and improved measurement integrity. A mature high-stress-capable process combined with robust analog and IP libraries provides a scalable foundation for derivative products (such as variants with different channel densities or feature sets) and for modular or isolated architectures that support higher aggregate operating ranges—while preserving a common verification, validation, and qualification framework.

ELE Times: Among your 2025 accomplishments, your team achieved ±5 mV accuracy in cell-voltage measurement. Why is this level of precision so critical for cell balancing, battery longevity, and safety—especially in EV, industrial, and energy-storage use cases?

RB: If our measurement error is ±20 mV, the BMIC can “think” a cell is high when it isn’t, or miss a genuinely high cell; the result is oscillatory balancing and residual imbalance that never collapses. Tightening to ±5 mV allows thresholds and hysteresis to be set small enough that balancing actions converge to a narrow spread instead of dithering. Over hundreds of cycles, a persistently imbalanced cell becomes the pack limiter (early full/empty flags, rising impedance). Keeping the maximum cell delta small via ±5 mV metrology lowers the risk of one cell aging faster and dragging usable capacity and power down. In addition, early detection of abnormal dV/dt under load or rest hinges on accurate voltage plateaus and inflection points—errors here mask the onset of dangerous behavior.
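To make the dithering argument concrete, here is a minimal C sketch of threshold-plus-hysteresis balancing under a measurement error band. The thresholds and numbers are illustrative assumptions chosen to match the ±5 mV discussion above, not values from any Renesas register map or datasheet.

```c
/* Minimal sketch of balancing hysteresis vs. measurement error.
 * All thresholds and numbers are illustrative assumptions. */
#include <stdbool.h>
#include <stdio.h>

#define MEAS_ERR_MV        5.0  /* assumed +/-5 mV metrology          */
#define BAL_ON_THRESH_MV  15.0  /* start balancing above this delta   */
#define BAL_OFF_THRESH_MV  4.0  /* stop balancing below this delta    */

/* New balancing state for one cell, given its measured delta to the
 * pack-minimum cell. The hysteresis window (ON - OFF) must exceed
 * the worst-case peak-to-peak error (2 x MEAS_ERR_MV), or readings
 * alone can toggle the bleed FET cycle after cycle. */
static bool balance_step(double delta_mv, bool balancing)
{
    if (!balancing && delta_mv > BAL_ON_THRESH_MV)  return true;
    if (balancing  && delta_mv < BAL_OFF_THRESH_MV) return false;
    return balancing;  /* hold state inside the hysteresis window */
}

int main(void)
{
    /* With +/-20 mV error a true 12 mV delta can read anywhere in
     * [-8, 32] mV and dither; with +/-5 mV it reads in [7, 17] mV
     * and the decision stays stable. */
    bool on = false;
    double reads[] = { 17.0, 7.0, 16.0 };  /* +/-5 mV extremes */
    for (int i = 0; i < 3; i++) {
        on = balance_step(reads[i], on);
        printf("read %.1f mV -> balancing %s\n", reads[i], on ? "on" : "off");
    }
    return 0;
}
```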

ELE Times: An on-chip buck converter is a major milestone in integration. How did you approach embedding such a high-voltage converter into the BMIC, and what advantages does this bring to OEMs in terms of board simplification, thermal performance, and cost?

RB: There are multiple steps involved in making this decision. It starts with finding the right process and devices, partitioning the power tree into clean voltage domains, and engineering isolation, spacing, and ESD for HV switching nodes. Finally, we close the control-loop details (gate drive, peak-current trims, offsets), verify at the system level, and correlate models against silicon early in the execution phase.

For OEMs, this translates into simpler boards with fewer external components, easier routing, and a smaller overall footprint, while eliminating the need for external high-stress pre-regulators feeding the battery monitor, since the pack-level domain is managed on die. By internalizing the high-energy conversion and using cleaner harnessing and creepage strategies, elevated-potential nodes are no longer distributed across the board, significantly simplifying creepage and clearance planning at the power-management boundary. The result is fewer late-stage compliance surprises and integrated high-energy domains that are aligned with process-level reliability reviews, reducing the risk of re-layout driven by spacing or derating constraints. 

ELE Times: You also worked on an 18-bit ADC for current sensing. How does this resolution improve state-of-charge and state-of-health algorithms, and what new analytics or predictive-maintenance features become possible as a result?

RB: Regarding the native 18‑bit resolution and long integration window: the coulomb‑counter (CC) ADC integrates for ~250 ms per cycle, with selectable input ranges of ±50/±100/±200 mV across the sense shunt; results land in CCR[L/M/H] and raise a completion IRQ. This is the basis for low‑noise charge‑throughput measurement and synchronized analytics. Error and linearity can be explicitly budgeted: the electrical‑characteristics table shows 18‑bit CC resolution, INL of ~27 LSB, and range‑dependent µV‑level error (e.g., ±25 µV in the ±50 mV range), plus a programmable dead‑zone threshold for direction detection—so the math can be made deterministic. For cross‑domain sync, a firmware/RTL option lets the CC “integration complete” event trigger the voltage ADC sequencer, tightly aligning V and I snapshots for impedance/OCV‑coupled analytics.

Two main functionalities that depend on this accuracy are State of Charge (SOC) and State of Health (SOH). First, for SOC accuracy, here is where the extra bits show up (a minimal code sketch follows the list):

  1. Lower quantization and drift in coulomb counting: with 18‑bit integration over 250 ms, the charge quantization step is orders of magnitude smaller than typical load perturbations. Combined with the range‑dependent ±25–100 µV error bands, this reduces cycle‑to‑cycle SOC drift and tightens coulombic‑efficiency computation—especially at low currents (standby, tail‑charge), where coarse ADCs mis‑estimate.
  2. Cleaner “merge” of model‑based and measurement‑based SOC: the synchronized CC→voltage trigger lets you fuse dQ/dV features with the integrated current over the same window, improving EKF/UKF observability when OCV slopes flatten near the top of charge. Practically: fewer recalibration waypoints and tighter SOC confidence bounds across temperature.
  3. Robust direction detection at very small currents: the dead‑zone and direction bits (e.g., cc_dir) are asserted when CC codes exceed a programmable threshold, so you can reliably switch charge/discharge logic around near‑zero crossings without chattering. That matters for taper‑charge and micro‑leak checks.
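As referenced above, a minimal C sketch of this coulomb-counting flow—code-to-current conversion, dead-zone direction detection, and charge accumulation—might look as follows. The CCR result, 18-bit resolution, ±50 mV range, ~250 ms window, and programmable dead-zone come from the answer above; the LSB scaling, shunt value, and specific dead-zone width are hypothetical.

```c
/* Hedged coulomb-counting sketch: scaling constants and the shunt
 * value are illustrative assumptions, not the production map. */
#include <stdint.h>
#include <stdio.h>

#define CC_FS_MV       50.0       /* assumed +/-50 mV input range   */
#define CC_FS_CODE     (1 << 17)  /* 18-bit signed full scale       */
#define SHUNT_MOHM     1.0        /* assumed 1 mOhm sense shunt     */
#define WINDOW_S       0.25       /* ~250 ms integration window     */
#define DEADZONE_CODES 8          /* programmable threshold (assumed) */

typedef enum { DIR_IDLE, DIR_CHARGE, DIR_DISCHARGE } cc_dir_t;

/* Convert a raw signed CC code to average current in amps. */
static double cc_code_to_amps(int32_t code)
{
    double shunt_mv = (double)code * CC_FS_MV / CC_FS_CODE;
    return shunt_mv / SHUNT_MOHM;  /* mV / mOhm = A */
}

/* One integration window: classify direction, accumulate charge. */
static cc_dir_t cc_update(int32_t code, double *charge_c)
{
    if (code > -DEADZONE_CODES && code < DEADZONE_CODES)
        return DIR_IDLE;           /* near zero: no chattering */
    *charge_c += cc_code_to_amps(code) * WINDOW_S;
    return (code > 0) ? DIR_CHARGE : DIR_DISCHARGE;
}

int main(void)
{
    double q = 0.0;                /* accumulated charge, coulombs */
    int32_t samples[] = { 5240, 5198, -3, 6, -4810 };
    for (unsigned i = 0; i < 5; i++) {
        cc_dir_t d = cc_update(samples[i], &q);
        printf("code %6d -> dir %d, Q = %.4f C\n", samples[i], d, q);
    }
    return 0;
}
```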

For SOH and predictive maintenance, this resolution enables capacity‑fade trending with confidence—specifically (a small numeric sketch follows the list):

  • Cycle‑level coulombic efficiency becomes statistically meaningful, not noise‑dominated—letting you detect early deviations from the fleet baseline.
  • Impedance‑based health scoring (per cell and stack): enabling impedance mode in CC (aligned with voltage sampling) gives snapshots each conversion period; tracking ΔR growth vs. temperature and SOC identifies aging cells and connector/cable degradation proactively.
  • Micro‑leakage & parasitic load detection: with µV‑level CC error windows and long integration, you can flag slow, persistent current draw (sleep paths, corrosion) that would be invisible to 12–14‑bit chains—preventing “vanishing capacity” events in ESS and industrial packs.
  • Adaptive balancing + charge policy: fusing accurate dQ with cell ΔV allows balancing decisions based on energy imbalance, not just voltage spread. That reduces balancing energy, speeds convergence, and lowers thermal stress on weak cells.
  • Early anomaly signatures: the combination of high‑resolution CC and triggered voltage sequences yields load‑signature libraries (step response, ripple statistics) that expose incipient IR jumps or contact resistance growth—feeding an anomaly detector before safety limits trip.
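As a small numeric illustration of the first bullet, the sketch below computes per-cycle coulombic efficiency and flags a drop against a baseline. The baseline value, alarm margin, and sample charges are invented for illustration.

```c
/* Cycle-level coulombic-efficiency trending: with uV-class current
 * error, eta per cycle is stable enough that a small drop against a
 * learned baseline can flag an aging cell. Values are illustrative. */
#include <stdbool.h>
#include <stdio.h>

#define ALARM_MARGIN 0.002  /* flag if eta drops >0.2% below baseline */

/* eta = charge delivered on discharge / charge accepted on charge */
static double coulombic_eff(double q_out_c, double q_in_c)
{
    return q_out_c / q_in_c;
}

static bool eta_anomaly(double eta, double baseline)
{
    return (baseline - eta) > ALARM_MARGIN;
}

int main(void)
{
    double q_in[]  = { 7205.0, 7203.0, 7206.0, 7204.0 };
    double q_out[] = { 7198.0, 7196.0, 7199.0, 7168.0 };
    double baseline = 0.9990;  /* e.g. rolling or fleet average */
    for (int i = 0; i < 4; i++) {
        double eta = coulombic_eff(q_out[i], q_in[i]);
        printf("cycle %d: eta = %.5f%s\n", i, eta,
               eta_anomaly(eta, baseline) ? "  <-- deviation" : "");
    }
    return 0;
}
```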

ELE Times: Even with high-accuracy ADCs, on-chip buck converters, and advanced fault-response logic, the chip is designed to minimize quiescent current without compromising monitoring capability. What design strategies or architectural decisions enabled such low power consumption?

RB: We achieved very low standby power through four key strategies. First, we defined true power states that completely shut down high-consumption circuitry, such as switching regulators, charge pumps, high-speed clocks, and data converters. Second, wake-up behavior is fully event-driven rather than periodically active. Third, the always-on control logic is designed for ultra-low leakage operation. Finally, voltage references and regulators are aggressively gated, so precision analog blocks are only enabled when they are actively needed. Deeper low-power modes further reduce consumption by selectively disabling additional domains, enabling progressively lower leakage states for long-term storage or shipping scenarios.
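A rough C sketch of this layered power-state idea—progressively gating domains and waking only on events—is shown below. The state names, domain masks, and transitions are assumptions for illustration, not the device’s actual mode map.

```c
/* Illustrative power-state ladder: deeper states gate off more
 * domains; transitions are event-driven rather than polled. */
#include <stdint.h>
#include <stdio.h>

typedef enum { ST_NORMAL, ST_STANDBY, ST_DEEPSLEEP, ST_SHIP } pstate_t;

/* One bit per gateable domain (illustrative). */
#define DOM_BUCK   (1u << 0)  /* switching regulator      */
#define DOM_CP     (1u << 1)  /* charge pump              */
#define DOM_HSCLK  (1u << 2)  /* high-speed clock         */
#define DOM_ADC    (1u << 3)  /* precision converters     */
#define DOM_VREF   (1u << 4)  /* gated voltage references */

/* Domains left powered in each state; only an always-on,
 * low-leakage wake block remains in the deepest states. */
static uint32_t domains_on(pstate_t s)
{
    switch (s) {
    case ST_NORMAL:    return DOM_BUCK | DOM_CP | DOM_HSCLK |
                              DOM_ADC  | DOM_VREF;
    case ST_STANDBY:   return DOM_VREF;  /* fast re-arm            */
    case ST_DEEPSLEEP: return 0;         /* wake comparators only  */
    case ST_SHIP:      return 0;         /* deepest, for storage   */
    }
    return 0;
}

typedef enum { EV_HOST_CMD, EV_FAULT_WAKE, EV_SHIP_REQ } event_t;

/* Nothing polls; hardware events drive every transition. */
static pstate_t on_event(pstate_t s, event_t ev)
{
    switch (ev) {
    case EV_HOST_CMD:   return ST_NORMAL;
    case EV_FAULT_WAKE: return ST_NORMAL;  /* wake to diagnose */
    case EV_SHIP_REQ:   return ST_SHIP;
    }
    return s;
}

int main(void)
{
    pstate_t s = ST_DEEPSLEEP;
    printf("deepsleep domains: 0x%02x\n", (unsigned)domains_on(s));
    s = on_event(s, EV_HOST_CMD);
    printf("after host wake:   0x%02x\n", (unsigned)domains_on(s));
    return 0;
}
```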

ELE Times: You’ve emphasized the role of embedded microcontrollers in both chip functionality and verification. Can you explain how MCU-driven fault handling—covering short circuits, overcurrent, open-wire detection, and more—elevates functional safety toward ASIL D compliance?

RB: In our current chip, safety is layered so hazards are stopped in hardware while an embedded MCU and state machines deliver the diagnostics and control that raise integrity toward ASIL D. Fast analog protection shuts off the high‑side FETs on short‑circuit/overcurrent and keeps low‑frequency comparators active even in low‑power modes, while event‑driven wake and staged regulator control ensure deterministic, traceable transitions to safe states.

The MCU/FSM layer logs faults, latches status, applies masks, and cross‑checks control vs. feedback, with counters providing bounded detection latency and reliable classification—including near‑zero current direction via a programmable dead‑zone. Communication paths use optional CRC to guard commands/telemetry, and a dedicated runaway mechanism forces NORMAL→SHIP if software misbehaves, guaranteeing a known safe state. Together, these mechanisms deliver immediate hazard removal, high diagnostic coverage of single‑point/latent faults, auditable evidence, and controlled recovery—providing the system‑level building blocks needed to argue ISO 26262 compliance up to ASIL D.
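The split between instant hardware trips, MCU-side latching, and a runaway mechanism that forces a safe state could be sketched as follows. The fault encodings, mask handling, and the watchdog limit are hypothetical stand-ins for the behaviors described above, not the actual register interface.

```c
/* Layered-safety sketch: hardware trips instantly; the MCU layer
 * latches and classifies; a runaway watchdog forces a known safe
 * state (NORMAL -> SHIP) if firmware stalls. Names are assumed. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FLT_SCD  (1u << 0)  /* short-circuit discharge */
#define FLT_OC   (1u << 1)  /* overcurrent             */
#define FLT_OW   (1u << 2)  /* open wire               */

static uint32_t fault_latch;    /* sticky until explicitly cleared */
static uint32_t runaway_count;  /* incremented by a hardware tick  */
#define RUNAWAY_LIMIT 1000u

/* Hardware path is conceptually analog and instant: FETs are
 * already off. The MCU layer records, masks, and classifies. */
static void mcu_log_fault(uint32_t hw_flags, uint32_t mask)
{
    fault_latch |= (hw_flags & ~mask);
}

/* Healthy firmware calls this periodically; a stalled loop stops
 * doing so and the counter eventually trips. */
static void kick_watchdog(void)
{
    runaway_count = 0;
}

static bool runaway_tripped(void)
{
    return ++runaway_count > RUNAWAY_LIMIT;
}

int main(void)
{
    kick_watchdog();  /* normal firmware heartbeat */
    mcu_log_fault(FLT_OC | FLT_OW, /*mask=*/FLT_OW);
    printf("latched faults: 0x%02x\n", (unsigned)fault_latch);

    /* Simulate firmware hanging: the watchdog is never kicked. */
    while (!runaway_tripped())
        ;
    printf("runaway watchdog: forcing NORMAL -> SHIP\n");
    return 0;
}
```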

ELE Times: Stackable BMICs are becoming a major focus for high-cell-count systems. What challenges arise when daisy-chaining devices for applications like e-bikes, industrial storage, or large EV packs, and how is your team addressing communication, synchronization, and safety requirements?

RB: Stacking BMICs for high‑cell‑count packs introduces tough problems—EMI and large common‑mode swings on long harnesses, chain length/topology limits, tighter protocol timing at higher baud rates, coherent cross‑device sampling, and ASIL D‑level diagnostics plus safe‑state behavior under hot‑plug and sleep/wake. We address these with hardened links (transformer for tens of meters, capacitive for short hops), controlled slew and comparator front‑ends, ring/loop redundancy, and ASIL D‑capable comm bridges that add autonomous wake; end‑to‑end integrity uses 16/32‑bit CRC, timeouts, overflow guards, and memory CRC. For synchronization, we enforce true simultaneous sampling, global triggers, and evaluate PTP‑style timing, using translator ICs to coordinate mixed chains.
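The answer mentions 16/32-bit CRCs for end-to-end integrity without naming a polynomial; as one plausible instance, here is a CRC-16/CCITT frame check in C. The frame layout (address, command, payload, appended CRC) is an assumption for illustration.

```c
/* End-to-end frame integrity for a daisy-chained stack: the sender
 * appends a CRC-16; the receiver recomputes it over the frame.
 * Polynomial choice and frame layout are illustrative. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint16_t crc16_ccitt(const uint8_t *p, size_t n)
{
    uint16_t crc = 0xFFFF;  /* CRC-16/CCITT-FALSE, poly 0x1021 */
    while (n--) {
        crc ^= (uint16_t)(*p++) << 8;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* frame = [device addr][command][payload...][crc hi][crc lo] */
static bool frame_ok(const uint8_t *f, size_t len)
{
    if (len < 3) return false;
    uint16_t rx = ((uint16_t)f[len - 2] << 8) | f[len - 1];
    return crc16_ccitt(f, len - 2) == rx;
}

int main(void)
{
    uint8_t frame[6] = { 0x02, 0x51, 0xAB, 0xCD, 0, 0 };
    uint16_t crc = crc16_ccitt(frame, 4);
    frame[4] = (uint8_t)(crc >> 8);
    frame[5] = (uint8_t)(crc & 0xFF);
    printf("frame valid: %d\n", frame_ok(frame, 6));
    frame[2] ^= 0x10;  /* inject a single-bit error */
    printf("after corruption: %d\n", frame_ok(frame, 6));
    return 0;
}
```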

ELE Times: You have deep experience building behavioral models using wreal and Verilog-AMS. How does robust modeling influence system definition, mixed-mode verification, and ultimately silicon success for high-voltage BMICs?

RB: Robust wreal/Verilog‑AMS modeling is a force multiplier across mixed‑signal development. It clarifies system definition (pin‑accurate behavioral blocks with explicit supplies, bias ranges, and built‑in checks), accelerates mixed‑mode verification (SV/UVM testbenches that reuse the same stimuli in DMS and AMS, with proxy/bridge handshakes for analog ramp/settling), and de‑risks silicon by catching integration and safety issues early (SOA/EMC assumptions, open‑wire/CRC paths, power‑state transitions) while keeping simulations fast enough for coverage.

Concretely, pin‑accurate DMS/RNM models and standardized generators enforce the right interfaces and bias/input status flags (“supplyOK”, “biasOK”), reducing schematic/model drift. SV testbenches drive identical sequences into RNM and AMS configurations for one‑bench reuse, so timing‑critical behaviors are verified deterministically. RNM delivers order‑of‑magnitude speed‑ups (e.g., ~60× seen in internal comparisons) to reach coverage across modes. Model‑vs‑schematic flows quantify correlation (minutes vs. hours) and expose regressions when analog blocks change. Embedding these practices in our methodology and testbenches translates into earlier bug discovery, tighter spec alignment, and first‑time‑right outcomes.

ELE Times: Your work spans diverse categories—from power tools and drones to renewable-energy systems and electric mobility. How do application-specific requirements shape decisions around cell balancing, current sensing, and protection features?

RB: Across segments, application realities drive our choices. Power tools and drones favor compact BOMs and fast transients, so 50 mA internal balancing with brief dwell and measurement settling, tight short‑circuit latency, and coulomb‑counter averaging for SoC work well. E‑bikes/LEVs typically stay at 50 mA but require separate charge vs. discharge thresholds (regen vs. propulsion), longer discharge‑overcurrent (DOC) windows, and microsecond‑class short‑circuit‑discharge (SCD) cutoffs to satisfy controller safety timing. Industrial/renewables often need scheduled balancing and external FET paths beyond 50 mA, plus deep diagnostics (averaging, CRC, open‑wire) across daisy‑chained stacks, while EV/high‑voltage packs push toward ASIL D architectures with pack monitors, redundant current channels, contactor drivers, and ring communications. Current sensing is chosen to match the environment—low‑side for cost‑sensitive packs, HV differential with isolation checks in EV/ESS—while an 18‑bit ΔΣ coulomb counter and near‑zero dead‑zone logic preserve direction fidelity. Protection consistently blends fast analog comparators for immediate energy removal with MCU‑logged recovery and robust comms (CRC, watchdogs), so each market gets the right balance of performance, safety, and serviceability.
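One way to picture this segmentation is as a per-application protection profile selected at initialization. The following C sketch uses order-of-magnitude values loosely echoing the answer above; none are qualified datasheet limits.

```c
/* Per-segment protection/balancing profiles. Field values are
 * illustrative picks drawn from the discussion, not real limits. */
#include <stdio.h>

typedef struct {
    const char *segment;
    int  balance_ma;        /* bleed current (internal or ext. FET) */
    int  chg_oc_thresh_ma;  /* charge overcurrent threshold         */
    int  dis_oc_thresh_ma;  /* discharge overcurrent threshold      */
    int  doc_window_ms;     /* discharge-overcurrent qual window    */
    int  scd_cutoff_us;     /* short-circuit cutoff latency         */
} bms_profile_t;

static const bms_profile_t profiles[] = {
    /* tools/drones: fast transients, compact BOM */
    { "power-tool", 50,   8000, 30000,  5,  2 },
    /* e-bike/LEV: separate regen vs. propulsion thresholds */
    { "e-bike",     50,   6000, 25000, 20,  5 },
    /* industrial/ESS: external FET balancing beyond 50 mA */
    { "industrial", 500, 20000, 60000, 50, 10 },
};

int main(void)
{
    for (unsigned i = 0; i < 3; i++) {
        const bms_profile_t *p = &profiles[i];
        printf("%-10s bal %4d mA, DOC %2d ms, SCD %2d us\n",
               p->segment, p->balance_ma, p->doc_window_ms,
               p->scd_cutoff_us);
    }
    return 0;
}
```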

ELE Times: As battery management and gauges (BMG) evolve toward higher voltages, embedded intelligence, and greater integration, what do you see as the next major leap in BMIC design? Where are the biggest opportunities for innovation over the next five years?

RB: This is an exciting topic. Based on our roadmaps and the work we have been doing, the next major leap in BMIC design is a shift from “cell‑monitor ICs” to a smart, safety‑qualified pack platform—a Battery Junction Box–centric architecture with edge intelligence, open high‑speed wired communications, and deep diagnostics that run in drive and park. Here’s where I believe the biggest opportunities lie over the next five years:

  • Pack‑centric integration: the Smart Battery Junction Box
  • Communications: from proprietary chains to open, ring‑capable PHY
  • Metrology: precision sensing + edge analytics
  • Functional safety that persists in sleep/park
  • Power: HV buck integration becomes table stakes
  • Balancing: thermal‑aware schedulers and scalable currents
  • Cybersecurity & configuration integrity for packs
  • Verification‑driven design: models that shorten the loop

The post Engineering the Future of High-Voltage Battery Management: Rohit Bhan on BMIC Innovation appeared first on ELE Times.

Anritsu Launches New RF Hardware Option, Supporting 6G FR3 

ELE Times - Wed, 02/11/2026 - 09:23

Anritsu Corporation released a new RF hardware option for its Radio Communication Test Station MT8000A to support the key FR3 (Frequency Range 3) frequency band for next‑gen 6G mobile systems. With this release, the MT8000A platform now supports evolving communications technologies, covering R&D through to final commercial deployment of 4G/5G and future 6G/FR3 devices.

Anritsu will showcase the new solution in its booth at MWC Barcelona 2026 (Mobile World Congress), the world’s largest mobile communications exhibition, held in Barcelona, Spain, from March 2 to 5, 2026.

Since 6G is expected to deliver ultra-high speed, ultra-low latency, and ultra-high safety and reliability far surpassing 5G, international standardisation efforts are accelerating worldwide toward commercial 6G release.

The key high‑capacity data transmission and wide-coverage features of 6G require the FR3 frequency band (7.125 to 24.25 GHz). The Lower FR3 range up to 16 GHz, which extends upward from FR1 (ending at 7.125 GHz), is already on the agenda for discussion at the 2027 World Radiocommunication Conference (WRC-27).

Leveraging its long experience in wireless measurement, Anritsu’s MT8000A test platform leads the industry with this highly scalable new RF hardware option supporting the Lower FR3 band, covering both current and next‑generation technologies. Future 6G functions will be supported by seamless software upgrades, helping speed the development and release of new 6G devices.

Development Background

The FR3 frequency band is increasingly important in achieving practical 6G devices, meaning current 4G/5G test instruments (supporting FR1 and FR2) require hardware upgrades.

Additionally, dedicated FR3 RF tests are required because FR3 and conventional FR1/FR2 bands have different RF-related connectivity and communication quality features.

Furthermore, FR3 test instruments will be essential for both 6G protocol tests to validate network connectivity, and for functional tests to comprehensively evaluate service/application performance.

These factors are driving demand for a highly expandable, multifunctional, and high‑performance test platform like the MT8000A, covering both existing 4G/5G devices and next‑generation multimode 4G/5G/6G devices.

Product Overview and Features

Radio Communication Test Station MT8000A

The current MT8000A test platform supports a wide range of 3GPP-based applications, including RF, protocol, and functional tests for developing 4G/5G devices.

By adding this new industry-leading RF hardware option supporting the 6G/Lower FR3 bands, Anritsu’s MT8000A platform assures long‑term, cost-effective use for developing future 6G/FR3 devices.

Anritsu’s continuing support for future 6G/FR3 test functions using MT8000A software upgrades will advance the evolution of next‑generation communications and help achieve a useful, safe, and stable network‑connected society.

The post Anritsu Launches New RF Hardware Option, Supporting 6G FR3  appeared first on ELE Times.
