Feed aggregator

Filtronic unveiling GaN-based Cerus V solid-state power amplifier

Semiconductor today - Thu, 09/25/2025 - 12:14
Filtronic plc of Sedgefield and Leeds, UK — which designs and manufactures RF and millimeter-wave (mmWave) transmit & receive components and subsystems — is launching the latest development in solid-state power amplifiers (SSPAs) operating in the V-band. The Cerus V is being showcased alongside its Cerus W solid-state power amplifiers at European Microwave Week (EuMW 2025) in Utrecht, the Netherlands (21–26 September)...

India Can Lead in Semiconductor Innovation, If We Skill Right

ELE Times - Thu, 09/25/2025 - 12:07

Author: Mr. Saleem Ahmed, Officiating Head, ESSCI

When you hold your smartphone, drive your car, or even switch on your smart TV, there’s an invisible heartbeat inside: semiconductors. These tiny chips power satellites in orbit, fighter jets in the skies, and the AI algorithms reshaping our lives. For years, India has depended on importing these critical components, an Achilles’ heel for a country that aspires to be a global tech leader. But the tide is turning. With billion-dollar investments, government incentives, and, most importantly, a young pool of engineers, India is poised to script a new chapter: becoming a semiconductor innovation powerhouse.

Yet, there’s a catch. Money can build fabs and policy can set direction, but without the right skills, the vision will remain half-written. The future of India’s semiconductor journey will not just be about silicon; it will be about skills.

The Opportunity Before India

The global semiconductor industry is racing towards a trillion-dollar valuation by 2030. India’s own market is expected to touch $64 billion by 2026, nearly three times its 2019 size of $22.7 billion, according to Counterpoint Research and the India Electronics & Semiconductor Association (IESA). At the same time, the Electronics Sector Skills Council of India (ESSCI) projects the industry will employ 1.70 lakh professionals by 2025 and create another 1.03 lakh jobs by 2030. These are not routine jobs; they’re high-paying, future-facing roles that will place young Indians at the cutting edge of global innovation.

India already has an edge: it is home to nearly 20% of the world’s chip design engineers. Leading companies have set up design and R&D centers in Bengaluru, Hyderabad, and Noida. The recent unveiling of a 3nm semiconductor chip designed in India showcased the sheer technical capability of our engineers and the strategic importance of Indian design centers to the global industry.

By August 2025, the Union Cabinet of India had approved a total of ten semiconductor projects under the India Semiconductor Mission (ISM), amounting to cumulative investments of approximately ₹1.6 lakh crore (around $18.2 billion) across six states. These projects cover a range of technologies and partners, including advanced fabs, packaging, and silicon carbide-based compound semiconductors.

Why Skilling is the Decisive Factor

Despite this progress, the semiconductor value chain is highly talent-intensive. The design and IP creation phase, where the most economic value resides, requires deep expertise in VLSI (Very-Large-Scale Integration), electronic design automation (EDA) tools, and system-on-chip (SoC) architecture.

Currently, India produces tens of thousands of electronics graduates every year, but only a fraction are industry-ready. The gap lies in practical exposure, specialized training, and familiarity with real-world tools. For India to move beyond being a service center and emerge as a global innovation leader, a sharper focus on semiconductor design and VLSI roles is essential.

Equally critical is the manufacturing side of semiconductors, which demands not just technical know-how but also safety and operational excellence. Specialized programs such as Industrial Safety for Semiconductor Manufacturing – Hazchem and Electrical Safety in Semiconductor Facilities are vital to prepare a workforce capable of managing the highly sensitive and hazardous environments of fabrication plants. By strengthening both design expertise and manufacturing readiness, India can build a holistic talent pipeline for the semiconductor ecosystem.

Role of ESSCI: Building the Skills Bridge

This is where ESSCI plays a transformative role. Recognizing the talent gap, ESSCI has developed 32 NSQF-aligned qualifications spanning the entire semiconductor ecosystem, from chip design to advanced packaging, cleanroom operations to safety protocols.

The courses are designed to cater to:

  • Engineering graduates looking to specialize,
  • Diploma and ITI students preparing to enter the workforce, and
  • Professionals seeking to upskill or switch domains.

Some of the most critical roles where demand is already soaring include:

  • VLSI Design Engineers – Designing advanced digital and analog circuits at nanoscale.
  • Physical Design Engineers – Specializing in floor-planning, power optimization, and timing.
  • Verification Engineers – Ensuring chips are error-free before fabrication.
  • Analog & Mixed-Signal Designers – Vital for sensors, RF communication, and power management.
  • Wafer Processing and Packaging Engineers – Especially relevant as fabs emerge in India.

Each of these roles commands global relevance and premium salaries, but only if the workforce is trained to world-class standards. The full range of programs is available on ESSCI’s website, offering aspirants a structured path to join the semiconductor workforce. Beyond curriculum design, ESSCI collaborates with industry leaders, ensuring that Indian talent is benchmarked against international standards.

Why This Matters for Young Engineers

For India’s youth, the semiconductor wave represents more than jobs; it represents a chance to lead global technological change. Whether in AI, electric vehicles, 5G, space technology, or IoT, semiconductors are at the heart of every emerging sector.

A career in this industry offers:

  • High-paying roles with global exposure,
  • Opportunities to work on frontier technologies, and
  • The satisfaction of contributing to national self-reliance and global leadership.

Conclusion:

India’s semiconductor journey has moved from dream to execution. Fabs are being built, policies are in place, and global players are betting big on India. But without a deeply skilled workforce, the dream of becoming a global semiconductor hub risks falling short.

The responsibility now lies with all stakeholders: universities, industry, government, and skill councils like ESSCI must align efforts and ensure our engineers are not just employable, but world-class innovators.

For young Indians, the message is simple: this is your moment. Equip yourself with the right skills, embrace the semiconductor revolution, and help India design not just chips, but its future.

Because if we skill right, India won’t just participate in semiconductor innovation; it will lead it.

The post India Can Lead in Semiconductor Innovation, If We Skill Right appeared first on ELE Times.

Open-source 16-channel BCI board I made. Called Meower :3

Reddit:Electronics - Thu, 09/25/2025 - 11:19

Hi :3

Some time ago I was trying to help friends get a BCI board for their project, but plans changed, and I ended up making a fully custom board based on the ADS1299 (two of them, 16 channels) and an ESP32-C3. I hope they will use it one day; we just decided to post it :3

The board is open source; I designed the entire PCB myself, as well as the firmware, the BrainFlow integration, and a Python testing GUI (I have no idea how to add more pictures here :3). You can order it from JLCPCB (project files are provided) if you want, and it will be relatively cheap, and crazy cheap if you order like 10 or 20, since the price goes down super fast with quantity. On the ESP side I’ve implemented a sinc3 equalizer (7-tap FIR), DC removal, and notch filters (50/60 and 100/120 Hz). You can toggle them independently in real time. The DC filter also has several cutoff frequencies you can choose from on the go. If you change the sampling frequency, the filters adapt, of course (I built LUTs inside for rates up to 4000 Hz).
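For anyone curious, here is a minimal Python sketch of a comparable DC-removal plus mains-notch chain, using SciPy rather than the fixed-point code that runs on the ESP32; the sampling rate, filter order, and Q below are illustrative stand-ins, not the Meower firmware’s actual coefficients:

    import numpy as np
    from scipy.signal import butter, iirnotch, lfilter

    FS = 500.0  # Hz; stand-in for one of the board's selectable sampling rates

    def dc_removal(x, cutoff_hz=0.5):
        """First-order high-pass to strip the slow electrode DC offset."""
        b, a = butter(1, cutoff_hz / (FS / 2), btype="highpass")
        return lfilter(b, a, x)

    def mains_notch(x, f0_hz, q=30.0):
        """IIR notch at f0_hz, e.g. 50/60 Hz mains and their second harmonics."""
        b, a = iirnotch(f0_hz / (FS / 2), q)
        return lfilter(b, a, x)

    # Synthetic channel: 10 uV "EEG" tone + DC offset + 5 uV of 50 Hz mains pickup
    t = np.arange(0, 2.0, 1.0 / FS)
    raw = 10e-6 * np.sin(2 * np.pi * 10 * t) + 1e-3 + 5e-6 * np.sin(2 * np.pi * 50 * t)

    clean = dc_removal(raw)
    for f0 in (50.0, 100.0):  # mains fundamental and second harmonic
        clean = mains_notch(clean, f0)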

I tried to make sure the board runs as fast and as stably as possible. I did a lot of optimization here and there (embedded coders, feel free to trash me; I’ll only be happy), and the board can run all filters on all 16 channels and sustain 4000 Hz at maximum, all of it over Wi-Fi and UDP.

So, I have no idea if the ADS1299 is dead already, or maybe no one needs it, but if you’re interested you can check the git repo or ask here. It took me a ton of time to make, and I wasn’t really checking what other people were doing. We looked at freeEEG, then OpenBCI, then I thought maybe I could just make 16 channels myself, and since then I went into silent mode, getting crushed under piles of datasheets and design guidelines.

I just want to share the board, and I’m not sure how to stay within this subreddit’s guidelines; I hope it’s OK. However it goes, check the git repo or message me; I’ll be happy to yap about signal processing and PCB design and share more details if anyone is interested. https://github.com/nikki-uwu/Meower

submitted by /u/Meow-Corp

Reinventing In-Vehicle Connectivity: Driving the Future with Automotive Ethernet

ELE Times - Thu, 09/25/2025 - 09:55

Modern cars are fast becoming connected, software-driven systems with all the trappings of a mobile data center. Automotive Ethernet forms the core of this transformation, providing the speed, scalability, and security required to build next-generation vehicles. From zonal architectures to software-defined platforms, Ethernet enables a driving experience that is safer, smarter, and more connected.

Shaping Vehicle Architecture

The industry is shifting towards zonal architectures, with centralized processors managing multiple zones within a vehicle. Benefits include reduced complexity, lower cost, and greater efficiency. By 2030, half of new vehicles are expected to incorporate zonal systems, underscoring Ethernet’s role as the backbone of automotive design.

Why Automotive Ethernet Is Important

With over a decade of leadership and hundreds of millions of ports already deployed, Ethernet has become the de facto standard for in-vehicle networking, for the following reasons:

  • high-speed, secure data transmission
  • low power
  • standards-based reliability
  • scalability for future applications

The ecosystem is continuously growing, from PHY technology that extends twisted-pair connections to switches and MCUs that enable advanced networking.

Unlocking Next-Gen Capabilities

Automotive Ethernet is the data backbone fostering innovation through development in the following areas:

  • Assisted driving & safety: ADAS and LIDAR-powered systems for intelligent navigation.
  • Cloud connectivity: Fast, secure data transfer between vehicles and the cloud.
  • Software-defined platforms: Supporting OTA updates and feature rollouts.
  • Predictive maintenance: Real-time diagnostics and personalization.

Open Standards for Interoperability

Automotive connectivity depends heavily on open standards that guarantee interoperability from platform to platform. Collaborations with global standards bodies such as IEEE are further enhancing Ethernet to provide:

  • Higher data rates
  • Lower power consumption
  • Lower noise emissions

Ethernet technologies, well-proven in enterprise data center applications, are now being modified for automotive applications to deliver the same level of security and performance.

Looking Ahead

Ethernet will continue to provide the data backbone propelling the transformation of automobiles into sophisticated, software-defined platforms. Vehicles will become safer, more intelligent, and constantly better thanks to ongoing technological advancements.

Broadcom is assisting automakers all over the world in embracing this change and influencing the direction of connected mobility with its extensive experience and leadership in Automotive Ethernet.

(This article has been adapted and modified from content on Broadcom.)

The post Reinventing In-Vehicle Connectivity: Driving the Future with Automotive Ethernet appeared first on ELE Times.

Vishay Intertechnology Releases New 1 A and 2 A Gen 7 1200 V FRED Pt Hyperfast Rectifiers in SlimSMA HV (DO-221AC) Package

ELE Times - Thu, 09/25/2025 - 09:01

Reducing Switching Losses and Increasing Efficiency, Devices Combine Low Qrr Down to 105 nC With VF Down to 1.45 V and Low Junction Capacitance and Recovery Time

Vishay Intertechnology, Inc. expanded its Gen 7 platform of 1200 V FRED Pt Hyperfast rectifiers with four new devices in the eSMP series SlimSMA HV (DO-221AC) package. Optimized for industrial and automotive applications, the 1 A and 2 A rectifiers not only offer the best trade-off between reverse recovery charge (Qrr) and forward voltage drop for devices in their class, but also provide the lowest junction capacitance and recovery time.

The newly released Vishay Semiconductors rectifiers include the VS-E7JX0112-M3 and VS-E7JX0212-M3, and the AEC-Q101-qualified VS-E7JX0112HM3 and VS-E7JX0212HM3. To reduce switching losses and increase efficiency, the devices combine fast recovery times down to 45 ns with Qrr down to 105 nC typical, forward voltage drop down to 1.45 V, and junction capacitance down to 2.5 pF. The robust rectifiers offer non-repetitive peak surge current up to 21 A in a compact package measuring 2.6 mm by 5.2 mm with a low 0.95 mm profile, compared to 2.3 mm for the competing SMA package with a similar footprint. Combined with a minimum 3.2 mm creepage distance and molding compound with a comparative tracking index (CTI) ≥ 600 (Material Group I), the devices reduce component counts and lower BOM costs based on IEC 60664-1 requirements for high voltage applications.

The VS-E7JX0112-M3, VS-E7JX0112HM3, VS-E7JX0212-M3, and VS-E7JX0212HM3 will serve as clamp, snubber, and freewheeling diodes in flyback auxiliary power supplies and high frequency rectifiers for bootstrap driver functionality, while providing desaturation protection for the latest fast switching IGBTs and high voltage Si / SiC MOSFETs. Typical applications for the devices include industrial drives and tools, on-board chargers and motors for electric vehicles (EV), energy generation and storage systems, and Ćuk converters and industrial LED SEPIC circuitry.

The rectifiers feature a planar structure and platinum doped lifetime control that guarantee system reliability and robustness without compromising on performance, while their optimized stored charge and low recovery current minimize switching losses and reduce power dissipation. RoHS-compliant and halogen-free, the devices feature a Moisture Sensitivity Level of 1 in accordance with J-STD-020 and offer high temperature operation to +175 °C.

The post Vishay Intertechnology Releases New 1 A and 2 A Gen 7 1200 V FRED Pt Hyperfast Rectifiers in SlimSMA HV (DO-221AC) Package appeared first on ELE Times.

MSI Adopts Anritsu’s High-Speed Interface Evaluation Solution

ELE Times - Thu, 09/25/2025 - 08:22

Micro-Star International Co., Ltd. (MSI), a leading global vendor of computer hardware, has selected Anritsu’s signal integrity evaluation solutions, the Signal Quality Analyzer-R MP1900A and the Vector Network Analyzer MS46524B, for high-speed digital interfaces, including the PCI Express (PCIe) standard.

The rapid proliferation of AI and IoT technologies has driven a significant increase in data transmission volumes. As a result, high-speed digital interfaces such as PCIe, USB, and DisplayPort, widely used in servers, data centers, and consumer products, are required to support even greater speeds and capacities. Faced with this, MSI has risen to the challenge of adapting to next-generation interface standards in its server and motherboard development, ensuring digital signal quality and improving development efficiency.

To achieve this, MSI selected Anritsu’s MP1900A and MS46524B. Utilizing both products, MSI established a comprehensive environment for evaluating the digital signal quality in terms of the time domain (such as the bit error rate), as well as the frequency domain (such as S-parameters). This enabled MSI to verify the signal quality of its high-speed digital interfaces and devices when installed in systems, while also enhancing its development efficiency.

Anritsu will continue to strongly support customers in solving challenges and driving technological innovation in the high-speed digital market through the provision of cutting-edge measurement technologies and comprehensive testing solutions.

The post MSI Adopts Anritsu’s High-Speed Interface Evaluation Solution appeared first on ELE Times.

Paradox of Developing Test Gear: An Electronics Chicken-or-the-Egg Problem

AAC - Thu, 09/25/2025 - 02:00
The paradox of test equipment design is that it must be more advanced than the devices it tests. We spoke with Tektronix in person to gain insights on this unique technical challenge.

A new Ajax Systems laboratory at Igor Sikorsky Kyiv Polytechnic Institute!

News - Wed, 09/24/2025 - 22:18

On 24 September 2025, the Mechanical Design & Robotics Lab, created by Ajax Systems in cooperation with Igor Sikorsky Kyiv Polytechnic Institute, was opened.

The KPI Mechanical Design & Robotics Laboratory is:

Cree LED and ADJ enter settlement and limited license agreement

Semiconductor today - Wed, 09/24/2025 - 20:53
Cree LED Inc of Durham, NC, USA (a Penguin Solutions brand) and ADJ Products LLC have reached a mutually beneficial settlement resolving a patent infringement dispute involving Cree LED’s patents related to LED components commonly used in LED displays. As part of the settlement, Cree LED granted ADJ a limited license to certain patents covering LED components...

Designing UWB Antennas for Angle of Arrival Applications

AAC - Wed, 09/24/2025 - 20:00
Learn what makes Ultra-Wideband (UWB) unique, what makes it useful, and how to address the design challenges it presents.

Purpose-built AI inference architecture: Reengineering compute design

EDN Network - Wed, 09/24/2025 - 17:30

Over the past several years, the lion’s share of artificial intelligence (AI) investment has poured into training infrastructure—massive clusters designed to crunch through oceans of data, where speed and energy efficiency take a back seat to sheer computational scale.

Training systems can afford to be slow and power-hungry; if it takes an extra day or even a week to complete a model, the result still justifies the cost. Inference, by contrast, plays an entirely different game. It sits closer to the user, where latency, energy efficiency, and cost-per-query reign supreme.

And now, the market’s center of gravity is shifting. While tech giants like Amazon, Google, Meta, and Microsoft are expected to spend more than $300 billion on AI infrastructure this year—still largely on training—analysts forecast explosive growth on the inference side. Gartner, for example, projects a 42% compound annual growth rate for AI inference in data centers over the next few years.

This next wave isn’t about building smarter models; it’s about unlocking value from the ones we’ve already trained.

Figure 1 In the training versus inference equation, while training is about brute force at any cost, inference is about precision. Source: VSORA

Training builds, inference performs

At its core, the difference between training and inference comes down to cost, latency, and efficiency.

Training happens far from the end user and can run for days, weeks or even months. Inference, by contrast, sits directly in the path of user interaction. That proximity imposes a hard constraint: ultra-low latency. Every query must return an answer in milliseconds, not minutes, or the experience breaks.

Throughput is the second dimension. Inference isn’t about eventually finishing one massive job—it’s about instantly serving millions or billions of tiny ones. The challenge is extracting the highest possible number of queries per second from a fixed pool of compute.

Then comes power. Every watt consumed by inference workloads directly hits operating costs, and those costs are becoming staggering. Google, for example, has projected a future data center that would draw three gigawatts of power—roughly the output of a nuclear reactor.

That’s why efficiency has become the defining metric of inference accelerators. If a data center can deliver the same compute with half the power, it can either cut energy costs dramatically or double its AI capacity without expanding its power infrastructure.

This marks a fundamental shift: where training chased raw performance at any cost, inference will reward architectures that deliver more answers faster and with far less energy.

Training was about brute force at any cost. On the other hand, inference is about precision.

GPUs are fast—but starved

GPUs have become the workhorses of modern computing, celebrated for their staggering parallelism and raw speed. But beneath their blazing throughput lies a silent bottleneck that no amount of cores can hide—they are perpetually starved for data.

To understand why, it helps to revisit the foundations of digital circuit design.

Every digital system is built from two essential building blocks: computational logic and memory. The logic executes operations—from primitive Boolean functions to advanced digital signal processing (DSP) and multi-dimensional matrix calculations. The memory stores everything the logic consumes or produces—input data, intermediate results, and outputs.

The theoretical throughput of a circuit, measured in operations per second (OPS), scales with its clock frequency and degree of parallelism. Double either and you double throughput—on paper. In practice, there’s a third gatekeeper: the speed of data movement. If data arrives every clock cycle, the logic runs at full throttle. If data arrives late, the logic stalls, wasting cycles.
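A back-of-the-envelope roofline-style model makes this three-way dependency concrete. The numbers below are purely illustrative, not measurements of any real device:

    # Roofline-style estimate: achievable throughput is capped either by
    # compute (clock x parallelism) or by how fast memory can feed operands.
    def achievable_ops(clock_hz, ops_per_cycle, mem_bw_bytes_per_s, bytes_per_op):
        peak_compute = clock_hz * ops_per_cycle           # OPS, on paper
        memory_limit = mem_bw_bytes_per_s / bytes_per_op  # OPS the memory can feed
        return min(peak_compute, memory_limit)

    # Hypothetical accelerator: 1 GHz clock, 4096 parallel MACs,
    # 2 TB/s of memory bandwidth, 2 bytes fetched per operation.
    peak = achievable_ops(1e9, 4096, 2e12, 2)
    print(f"achievable: {peak / 1e12:.1f} TOPS "
          f"(compute peak: {1e9 * 4096 / 1e12:.1f} TOPS)")
    # -> achievable: 1.0 TOPS (compute peak: 4.1 TOPS); memory-bound, the logic stalls.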

Registers are the only storage elements fast enough to keep up: single-cycle, address-free, and directly indexed. But they are also the most silicon-expensive, which makes building large register banks economically impossible.

This cost constraint gave rise to the memory hierarchy, which spans from the bottom up:

  • Massive, slow, cheap storage (HDDs, SSDs, tapes)
  • Moderate-speed, moderate-cost DRAM and its many variants
  • Tiny, ultra-fast, ultra-expensive SRAM and caches

All of these, unlike registers, require addressing and multiple cycles per access. And moving data across them burns vastly more energy than the computation itself.

Despite their staggering parallelism, GPUs are perpetually starved for data. Their thousands of cores can blaze through computations, but only if fed on time. The real bottleneck isn’t compute. It’s memory because data must traverse a slow, energy-hungry hierarchy before reaching the logic, and every stall wastes cycles. Registers are fast enough to keep up but too costly to scale, while larger memories are too slow.

This imbalance is the GPU’s true Achilles’ heel, and fixing it will require rethinking computer architecture from the ground up.

Toward a purpose-built inference architecture

Trying to repurpose a GPU—an architecture originally centered on massively parallel training workloads—to serve as a high-performance inference engine is a dead end. Training and inference operate under fundamentally different constraints. Training tolerates long runtimes, low compute utilization, and massive power consumption. Inference demands sub-millisecond latency, throughput efficiency approaching 100%, and energy frugality at scale.

Instead of bending a training-centric design out of shape, we must start with a clean sheet and apply a new set of rules tailored to inference from the ground up.

Rule #1—Replace caches with massive register files

Traditional GPUs rely on multi-level caches (L1/L2/L3) to hide memory latency in highly parallel workloads. Inference workloads are small, bursty, and demand predictable latency. Caches introduce uncertainty (hits versus misses), contention, and energy overhead.

A purpose-built inference architecture should discard caches entirely and instead use huge, directly addressed register-like memory arrays with index-based access instead of address-based lookup. This allows deterministic access latency and constant-time delivery of operands. Aim for tens or even hundreds of millions of bits of on-chip register storage, positioned physically close to the compute cores to fully saturate their pipelines (Figure 2).

Figure 2 Here is a comparison of memory hierarchy in traditional processing architectures (left) versus an inference-driven register-like, tightly-coupled memory architecture (right). Source: VSORA

Rule #2—Provide extreme memory bandwidth

Inference cores are only as fast as the data feeding them. Stalls caused by memory bottlenecks are the single biggest cause of underutilized compute in AI accelerators today. GPUs partially mask this with massive over-provisioning of threads, which adds latency and energy cost—both unacceptable in inference.

The architecture must guarantee multi-terabyte-per-second bandwidth between registers and cores, sustaining continuous operand delivery without buffering delays. This requires wide, parallel datapaths and banked memory structures co-located with compute to enable every core to run at full throttle, every cycle.

Rule #3—Execute matrices natively in hardware

Most modern AI workloads are built from matrix multiplications, yet GPUs break these down into scalar or vector ops stitched together by compilers. This incurs instruction overhead, excess memory traffic, and scheduling complexity.

Inference cores should treat matrices as first-class hardware objects with dedicated matrix execution units that can perform multiply–accumulate across entire tiles in a single instruction. This eliminates scalar orchestration overhead, slashes instruction counts and maximizes both performance and energy efficiency per operation.
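As a software analogy (a sketch, not any vendor’s ISA), the snippet below computes a matrix product tile by tile; in a matrix-native execution unit, each innermost tile multiply-accumulate would be a single instruction rather than a nest of scalar loops:

    import numpy as np

    TILE = 16  # tile edge; a matrix unit would consume one TILE x TILE tile per instruction

    def tiled_matmul(a, b):
        """C = A @ B computed tile by tile, mimicking matrix-native execution."""
        m, k = a.shape
        k2, n = b.shape
        assert k == k2, "inner dimensions must match"
        assert m % TILE == 0 and n % TILE == 0 and k % TILE == 0
        c = np.zeros((m, n), dtype=a.dtype)
        for i in range(0, m, TILE):
            for j in range(0, n, TILE):
                for p in range(0, k, TILE):
                    # One "instruction": multiply-accumulate an entire tile pair.
                    c[i:i+TILE, j:j+TILE] += a[i:i+TILE, p:p+TILE] @ b[p:p+TILE, j:j+TILE]
        return c

    a = np.random.rand(64, 64).astype(np.float32)
    b = np.random.rand(64, 64).astype(np.float32)
    assert np.allclose(tiled_matmul(a, b), a @ b, atol=1e-4)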

Rule #4—Expand the instruction set beyond tensors

AI is rapidly evolving beyond basic tensor algebra. Many new architectures—for instance, transformers with sparse attention, hybrid symbolic-neural models, or signal-processing-enhanced models—need richer functional primitives than today’s narrow tensor op sets can offer.

Equip the ISA with a broad library of DSP-style operators; for example, convolutions, FFTs, filtering, non-linear transforms, and conditional logic. This empowers developers to build innovative new model types without waiting for hardware revisions, enabling rapid architectural experimentation on a stable silicon base.

Rule #5—Orchestrate cores via a smart, reconfigurable NoC

Inference workloads are highly structured but vary layer by layer: some are dense, others sparse; some are compute-bound, others bandwidth-bound. A static interconnect leaves many cores idle depending on the model phase.

Deploy a dynamic network-on-chip (NoC) that can reconfigure on the fly, allowing the algorithm itself to control dataflow. This enables adaptive clustering of cores, localized register sharing, and fine-grained scheduling of sparse layers. The result is maximized utilization and minimal data-movement energy, tuned dynamically to each workload phase.

Rule #6—Build a compiler that hides complexity

A radically novel architecture risks becoming unusable if programmers must hand-tune for it. To drive adoption, complexity must be hidden behind clean software abstractions.

Provide a smart compiler and runtime stack that automatically maps high-level models to the underlying architecture. It should handle data placement, register allocation, NoC reconfiguration, and operator scheduling automatically, exposing only high-level graph APIs to developers. This ensures users see performance, not complexity, making the architecture accessible to mainstream AI developers.

Reengineering the inference future

Training celebrated brute-force performance. Inference will reward architectures that are data-centric, energy-aware, and precision-engineered for massive real-time throughput.

These design rules, pioneered by semiconductor design outfits like VSORA in their development of efficient AI inference solutions, represent an engineering breakthrough—a highly scalable architecture that redefines inference speed and efficiency, from the world’s largest data centers to edge intelligence powering Level 3–5 autonomy.

Lauro Rizzatti is a business advisor to VSORA, an innovative startup offering silicon IP solutions and silicon chips, and a noted verification consultant and industry expert on hardware emulation.


The post Purpose-built AI inference architecture: Reengineering compute design appeared first on EDN.

DIY flexible e-ink smartwatch

Reddit:Electronics - Wed, 09/24/2025 - 15:36

Been working on a flex-PCB smartwatch. It finally works! It uses an nRF52840; components are on rigid/flex PCBs spread around the wrist. It has heart rate, blood oxygen, GPS, a 9-axis IMU, and a screen backlight.

Working on putting it all into a flexible case, but kind of like the "bare" look.

submitted by /u/Zestyclose-Bar8108

Power Tips #145: EIS applications for EV batteries

EDN Network - Wed, 09/24/2025 - 15:08

Rechargeable batteries are the primary components in EVs, mobile devices, and energy storage systems. The batteries’ working conditions, including state of health (SOH), state of charge (SOC), and temperature, are essential to reliably and efficiently operate devices or equipment. Predicting battery SOH and SOC is becoming a priority in order to increase their performance and safety.

Physically, you can represent the batteries as an electrical circuit model, as shown in Figure 1. The resistors (Rs) and capacitors (Cs) in the model have good correlations with battery states. Electrochemical impedance spectroscopy (EIS) technologies are crucial to characterize the elements of the model in order to obtain the batteries’ working conditions.

Figure 1 The equivalent circuit of a battery showing Rs and Cs that have a good correlation with battery states. Source: Texas Instruments

Rs and Cs change when the batteries are in different states, leading to impedance changes. With EIS techniques, applying AC signals to the batteries and measuring their voltage and current response enables calculations of the impedance data of the batteries in the frequency domains. By analyzing the impedance data, you can know the battery’s SOC, internal temperature, and battery life. EV manufacturers are now researching how to apply EIS techniques to a battery management system (BMS).

Nyquist tool

Applying an AC voltage to a circuit excites an AC current. Equation 1 calculates the impedance, which varies with frequency if the circuit is not a purely resistive load:

Z(f) = V(f) / I(f)   (Equation 1)

Figure 2 illustrates Ohm’s law for an AC voltage. You can plot the impedance by applying many frequencies. Typically, a battery is modeled as Rs and Cs in combination, as shown in Figure 1. Figure 3 illustrates the impedance plot using a Nyquist tool.

Figure 2 Ohm’s law in an AC circuit, impedance can be plotted by applying many frequencies. Source: Texas Instruments

Figure 3 The plot of impedance using the Nyquist tool. Source: Texas Instruments
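To make this concrete, the short sketch below evaluates the impedance of a simple battery-style model, a series resistance Rs plus a parallel R1-C1 branch, over a frequency sweep and draws the Nyquist curve; the component values are invented for illustration:

    import numpy as np
    import matplotlib.pyplot as plt

    RS, R1, C1 = 0.010, 0.020, 1.0            # ohm, ohm, farad (illustrative values)

    f = np.logspace(-2, 3, 400)               # 10 mHz to 1 kHz sweep
    w = 2 * np.pi * f
    z = RS + R1 / (1 + 1j * w * R1 * C1)      # Z(f) of the series R + (R || C) model

    plt.plot(z.real * 1e3, -z.imag * 1e3)     # Nyquist convention: -Im(Z) upward
    plt.xlabel("Re(Z) [mOhm]")
    plt.ylabel("-Im(Z) [mOhm]")
    plt.title("Nyquist plot of Rs + (R1 || C1)")
    plt.axis("equal")
    plt.show()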

Methods of excitation current generation

You can use the EIS technique for one cell, multiple cells, modules, or a pack. Performing an EIS measurement requires the application of AC current to the batteries. For different battery system voltages, there are four different methods to generate the excitation current. Let’s review them.

Method #1: Resistor dissipation at the cell level and module level

In Figure 4, the power switch (S1), power resistor (Rlimit), sense resistor (Rsense), and a controller produce the excitation source. The controller generates a sinusoidal pulse-width modulation (SPWM) signal for S1. One or several battery cells are connected in series with the excitation source. Turning on S1 draws current from the batteries through Rlimit, where the energy is dissipated as heat. When the voltage is high, the power dissipation is significant.

Figure 4 EIS with a resistor load where S1, Rsense, and the controller source produce the excitation circuit. Source: Texas Instruments

You can use this method at the cell level and small module level with low voltage, but it is not a practical solution for high-voltage batteries in EVs or hybrid EVs (HEVs) because the power dissipation is too great. 

Method #2: An isolated DC/DC converter at the pack level

In an EV powertrain, high-voltage batteries charge low-voltage batteries through an isolated DC/DC converter (as shown in Figure 5), which you can design to support bidirectional power flow. During EIS excitation, power transfers from high- to low-voltage batteries during the positive cycle; power is then reversed from the low- to high-voltage side during the negative cycle. This method uses existing hardware without adding extra costs. However, the excitation source is limited by the capacity of the low-voltage batteries. It is particularly challenging for 800V-12V battery systems.

Figure 5 EV power train with high-voltage batteries charging low-voltage batteries through an isolated DC/DC converter. Source: Texas Instruments 

Method #3: A non-isolated DC/DC converter in stack mode for the pack

This method uses a non-isolated DC/DC converter to generate excitation current between two battery modules. During EIS excitation, charge transfers from Vbat1 to Vbat2 during the positive cycle and back from Vbat2 to Vbat1 during the negative cycle. In Figure 6, two battery modules are connected in stack mode. Two active half-bridges are connected in series, and their switching nodes are connected through an inductor and a capacitor.

There are several advantages to this method: one is the use of low-voltage rating switches in a high-voltage system; the other is that the switches operate under zero-voltage switching (ZVS) conditions. Additionally, this method enables the production of a larger excitation current without adding stress.

Figure 6 A non-isolated DC/DC converter connecting two battery modules in stack mode. Source: Texas Instruments

Method #4: A non-isolated DC/DC converter in parallel mode for the pack

This method connects two battery modules in parallel mode, as shown in Figure 7. Two modules share a common ground. The charges are transferred to the inductor and capacitor from VBat1; then the charges stored in the inductor and capacitor are transferred to VBat2. The parallel mode and stack mode are swappable by properly reconfiguring the two modules to meet different charging stations or battery voltage systems, such as 400 V or 800 V.

Figure 7 A non-isolated DC/DC converter in parallel mode for the pack. Source: Texas Instruments

EIS measurement

Figure 8 divides the battery pack into two modules. S2 and S3 are battery-management ICs. The BQ79826 measures the voltage of every cell through an analog front end. Applying the AC current to battery modules builds up the AC voltage of each cell, which the BMICs then measure. A current measurement IC is used to measure the excitation current sensed by a current shunt. A communication bridge IC connects all BMICs through a daisy-chain communication bus. The BQ79826 uses the EIS engine to calculate the impedance, which is transmitted to a microcontroller for the Nyquist plot. The Controller Area Network (CAN) protocol provides communication while MCU1 controls the generation of excitation current.

Figure 8 Block diagram of an EIS measurement that divides the battery pack into two modules, each monitored by BMICs. Source: Texas Instruments

In a simulation of a non-isolated stacked active bridge (SAB) based excitation circuit, the conditions were VBat1 = VBat2 = 400 V, Fs = 100 kHz, and current amplitude = 5 A. Figure 9 shows the excitation current waveform from the simulation. The blue trace is the VBat1 current, while the green trace is the VBat2 current.

Figure 9 Excitation current generated by the stacked active bridge; the blue trace is the VBat1 current, the green trace is the VBat2 current. Source: Texas Instruments

The frequency synchronization between the controller of the excitation source and the BQ79826 is essential to minimize measurement errors. One solution is to take the SPWM signal generated by the BQ79826 as the reference of the excitation source (Figure 10). The excitation source and EIS engine of BQ79826 are automatically synchronized.

Figure 10 Block diagram for system timing synchronization of the excitation source and the BQ79826 in order to minimize measurement errors. Source: Texas Instruments

When building hardware to evaluate an EIS measurement, the excitation source should have high efficiency to minimize charge losses in the batteries. The total current harmonics should also be small in order to increase the signal-to-noise ratio (SNR). Figure 11 shows the efficiency measurement of the converter using a DC voltage and a DC load. With a higher excitation current, the power efficiency is higher because of the larger ZVS range. Above a 1-A amplitude of excitation current, efficiency is >95%; in the traditional load-resistor method, by contrast, all of the power is dissipated.

Figure 11 Efficiency measurement of a stacked active bridge-based power stage. Source: Texas Instruments

A fast Fourier transform (FFT) is a tool to evaluate the SNR of the excitation current. Placing six 18650 batteries in series for one module, with two modules connected to the stacked active bridges, demonstrates the quality of excitation current. In Figure 12, two tones of 10 Hz and 100 Hz are generated simultaneously to reduce the excitation time.

Figure 12 FFT of excitation current where two 10 Hz and 100 Hz tones are generated simultaneously. Source: Texas Instruments
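For readers who want to see the arithmetic behind such a measurement, here is a hedged Python sketch of multi-tone EIS post-processing: synthesize stand-in v(t) and i(t) records, then read the complex impedance at the 10 Hz and 100 Hz bins of their FFTs. The waveforms and impedance values are invented, not data from the TI setup:

    import numpy as np

    FS, T = 1000.0, 10.0                  # 1 kSa/s capture, 10 s record
    t = np.arange(0, T, 1 / FS)
    tones = (10.0, 100.0)                 # excitation tones, as in Figure 12

    # Stand-in "measured" waveforms: a 5 A multi-tone excitation current and a
    # voltage response whose per-tone magnitude/phase plays the role of Z(f).
    i_t = sum(5.0 * np.sin(2 * np.pi * f0 * t) for f0 in tones)
    v_t = (5.0 * 0.012 * np.sin(2 * np.pi * 10.0 * t - 0.30)
           + 5.0 * 0.008 * np.sin(2 * np.pi * 100.0 * t - 0.05))

    V, I = np.fft.rfft(v_t), np.fft.rfft(i_t)
    freqs = np.fft.rfftfreq(len(t), 1 / FS)

    for f0 in tones:
        k = np.argmin(np.abs(freqs - f0))  # FFT bin of the excitation tone
        z = V[k] / I[k]                    # complex impedance at f0
        print(f"{f0:5.1f} Hz: |Z| = {abs(z) * 1e3:.1f} mOhm, "
              f"phase = {np.angle(z, deg=True):+.1f} deg")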

Figure 13 is a Nyquist plot showing the impedances of different cells using two 200-V battery modules. At lower excitation frequencies, the difference between the measurements is small. There are more discrepancies at higher excitation frequencies (shown on the left side of the graphs), but the impedances within this range are not important.

Figure 13 Nyquist plot showing the impedances of different cells using two 200-V battery modules. Source: Texas Instruments

EIS technique

EIS is an evolutionary technique with applications for EV and HEV batteries. EIS techniques enable users to obtain real-time information about the SOC, SOH, and temperature during battery system operation.

Achieving good EIS results still requires resolving challenges such as developing accurate algorithms, utilizing reliable excitation systems, and minimizing noise sensitivity.

Sean Xu currently works as a system engineer on Texas Instruments’ Power Design Services team, developing power solutions using advanced technologies for automotive applications. Previously, he was a system and application engineer working on digital control solutions for enterprise, data center, and telecom power. He earned a Ph.D. from North Dakota State University and a Master’s degree from Beijing University of Technology.


The post Power Tips #145: EIS applications for EV batteries appeared first on EDN.

Power Integrations’ CFO Sandeep Nayyar leaving firm on 4 October

Semiconductor today - Wed, 09/24/2025 - 13:15
Power Integrations Inc of San Jose, CA, USA (which provides high-voltage integrated circuits for energy-efficient power conversion) says that Sandeep Nayyar (its chief financial officer since 2010) is leaving the firm (effective 4 October) to pursue a new opportunity...

Matter 1.4.2 Introduces Stronger Security for Smart Homes

ELE Times - Wed, 09/24/2025 - 13:07

As Smart Homes become more connected, they need correspondingly strong security. Matter 1.4.2, the latest release of the standard, adds a proactive layer of protection that aims to prevent attacks in the first place, giving devices and consumers in the modern Smart Home stronger defenses.

Matter Security Evolution

Since October 2022, the Matter standard has revolutionized Smart Home connectivity, allowing devices from different vendors to be interoperable and easy to set up. Security has been a core theme of Matter since its inception. Instead of password entry, Matter allows consumers to add devices by scanning QR codes or NFC tags, which triggers an automated verification and commissioning process.

What Matter 1.4.2 Improves in Security

  1. Certificate Revocation

Matter devices get a unique Device Attestation Certificate (DAC) to prevent cloning. In theory, an attacker could extract a DAC from a legitimate device to create clones. Matter 1.4.2 introduces a standard certificate-revocation mechanism whereby manufacturers can invalidate compromised DACs. In this way, cloned devices are flagged before any attack takes place, better protecting consumers (a conceptual sketch of this check appears after this list).

  2. Validating Vendor ID (VID)

Matter’s Multi-Admin capability allows consumers to control devices from multiple vendors at the same time. Previously, a malicious controller could misrepresent its vendor ID. Matter 1.4.2 prevents this by validating vendor identities, so that only trusted controllers can gain access to the Smart Home network.

  3. Access Restriction Lists (ARL)

With Access Restriction Lists, certified Matter Home Routers and Access Points (HRAP) can allow only authorized devices to alter sensitive network settings. This enforces the principle of least privilege, containing vulnerabilities and avoiding accidental or malicious disruption of the home network.
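Conceptually, the certificate-revocation and vendor-ID checks above reduce to a simple gate at commissioning time. The sketch below is a plain-Python illustration of that logic, not the Matter SDK’s actual API; every name and entry in it is hypothetical:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DeviceAttestationCert:
        issuer: str        # attestation authority that signed the DAC
        serial: int        # unique certificate serial number
        vendor_id: int     # VID carried by the certificate

    # Revocation data distributed by manufacturers (hypothetical entries).
    REVOKED = {
        ("Example-PAI-01", 0x1A2B3C),  # a DAC known to have been extracted and cloned
    }

    def commissioning_allowed(dac: DeviceAttestationCert, claimed_vid: int) -> bool:
        """Admit a device only if its DAC is not revoked and its VID matches
        the vendor it claims to represent (the 1.4.2 checks, in spirit)."""
        if (dac.issuer, dac.serial) in REVOKED:
            return False               # cloned or compromised certificate
        return dac.vendor_id == claimed_vid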

Proactive Smart Home Security

Security for Smart Homes is an ever-present challenge. Infineon engineers, among others, helped design these new protections in conjunction with the Matter Working Group, which works with researchers worldwide to identify and fix vulnerabilities before they can be exploited. Open standards and open-source software add further layers of transparency and safety.

Looking Ahead

Matter 1.4.2 demonstrates how quickly smart home security is developing. The standard makes sure users stay one step ahead of any attackers by foreseeing future dangers and incorporating proactive safeguards.

(This article has been adapted and modified from content on Infineon.)

The post Matter 1.4.2 Introduces Stronger Security for Smart Homes appeared first on ELE Times.

NUBURU evaluating acquisition for accelerated growth in defense applications of blue lasers

Semiconductor today - Wed, 09/24/2025 - 12:13
NUBURU Inc of Centennial, CO, USA — which was founded in 2015 and developed and previously manufactured high-power industrial blue lasers — has signed an agreement with an as-yet unnamed key strategic partner to establish a framework for evaluating a controlling interest acquisition aimed at developing and commercializing defense applications by integrating the blue laser technology...

At the Energy Managers Training Center

News - Wed, 09/24/2025 - 10:46

Igor Sikorsky Kyiv Polytechnic Institute continues to train a new generation of energy managers, teaching not only young people but also already experienced professionals.

GPS-Buddy RFID-Tag and ST’s LIS2DUX12: Let’s talk about the new era of tracking assets and safety monitoring

ELE Times - Wed, 09/24/2025 - 09:50

It’s becoming outrageously expensive not to track assets and monitor people’s safety, especially when a device like the GPS-Buddy RFID-Tag, from the company GPS-Buddy, can provide so much more information than just a location on a map. Equipped with an LIS2DUX12 accelerometer from ST, it can track much more data than geographical location, such as the operation of a motor, a person’s activities, and more, thanks to an implementation that leverages our sensors and our MEMS-Studio software.

It thus opens up the GPS-Buddy RFID Tag to numerous use cases and helps future-proof its design. The tracker can even collect and label training data to build new machine learning algorithms and leverage innovative AI capabilities. Put simply, GPS-Buddy is asset tracking for those who need to track much more than just assets.

GPS-Buddy RFID-Tag and the LIS2DUX12: solving the optimization problem

Why GPS asset trackers are not always popular

Too often, companies use assets that are hard to track. Many tools don’t use a battery or power, and even those that do may not have the space to accommodate a large tracker or spare power to route to a wireless radio. Additionally, some assets contain motors or generate strong vibrations that can disrupt regular trackers. The idea of asset tracking is far from new. Even tracking assets using GPS is not an original idea, and avid readers of the ST Blog will remember that we featured an ST solution featuring TomTom in 2018. Yet, many companies don’t opt for a GPS tracker because trackers tend to be large, costly, and power hungry.

How a small sensor can have enormous consequences

To overcome many of these challenges and make GPS asset tracking more ubiquitous, the GPS-Buddy RFID-Tag settled, in part, on the LIS2DUX12. Thanks to its 10,000 g shock survivability, it can withstand the most rugged environments. Additionally, its small package (2 mm x 2 mm) and low power consumption (between 3 µA in ultra-low-power mode and 9.3 µA in high-frequency, high-performance mode) mean that the tracker is small enough to fit on a wide range of tools, while its small battery can last for years in the field. Moreover, GPS-Buddy also offers a web-based dashboard, allowing companies to utilize their existing cloud infrastructure, thereby further reducing development and operating costs.

GPS-Buddy RFID-Tag and the LIS2DUX12: convincing skeptics

Why GPS asset trackers are not always practical

Another problem is that companies still struggle to find a practical application for asset tracking within their operations. In many instances, a smaller operation will simply absorb asset losses rather than invest in trackers. The problem is that the costs of misplacing assets are more consequential than many initially think. Indeed, while most focus only on replacement costs, there are other issues, such as rising insurance premiums, increased liabilities when the lost items are used nefariously, and damage to a brand’s reputation, to name a few. However, until more companies derive greater value from asset tracking, the technology will struggle to achieve meaningful adoption.

Split view of the GPS-Buddy RFID-Tag

How a smart sensor can expand the meaning of “tracking”

To help companies realize the benefits of asset tracking, the GPS-Buddy RFID-Tag does more than just asset tracking, thanks to the LIS2DUX12. For instance, besides sensing motion and vibration, the sensor’s Qvar feature, which connects two electrodes to measure quasi-electrostatic potential variations, can detect when a motor is on and an appliance is in use, rather than sitting idle. This means companies can now track the location of their assets and determine whether they are being utilized. It can help companies better monitor activities and optimize workflows, ensuring better working conditions and far more efficient operations.

Moreover, GPS-Buddy utilizes software tools like the ST MEMS Studio to process data and develop machine learning applications. Specifically, the GPS-Buddy RFID-Tag features four buttons: Drive (DR), Work (WK), Private (PR), and Pause (PA). By pushing one of these, the tag logs sensor data and uses those modes as labels. It thus becomes possible to train a model to better distinguish between driving and a work activity, for instance, and add to the list of what the GPS-Buddy RFID-Tag can monitor. Furthermore, this is possible without hiring data scientists or spending tens of thousands of dollars on training data, since the tag itself is responsible for data collection and labeling.

How does the tracker work with people?

The GPS-Buddy RFID-Tag is so versatile that the company is even using it for entirely new applications like fall detection. Indeed, thanks to the ST sensors and our machine learning capability, the Tag tracks not only assets but also people’s safety. Concretely, if a person wearing the GPS-Buddy RFID-Tag falls and stays immobile for an extended period, the device triggers an alarm and sends an emergency notification to supervisors. Called the “Man Down” alarm, it can make a tremendous difference in alerting medical personnel and save valuable time, thus helping ensure the well-being of the entire crew.
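As a rough illustration only, the core of such a fall-plus-immobility alarm can be sketched in a few lines; the thresholds, rates, and window lengths here are invented, not GPS-Buddy’s tuned algorithm:

    import numpy as np

    FS = 25                    # accelerometer output data rate, Hz (illustrative)
    IMPACT_G = 2.5             # magnitude spike suggesting a fall impact
    STILL_STD_G = 0.05         # variation threshold for "immobile"
    STILL_SECONDS = 30         # immobility duration required before alarming

    def man_down(accel_g: np.ndarray) -> bool:
        """accel_g: (N, 3) acceleration samples in g. True if a fall impact
        is followed by a sustained period of immobility."""
        mag = np.linalg.norm(accel_g, axis=1)
        impacts = np.flatnonzero(mag > IMPACT_G)
        if impacts.size == 0:
            return False
        start = impacts[-1] + FS                     # skip 1 s of settling
        window = mag[start:start + STILL_SECONDS * FS]
        if window.size < STILL_SECONDS * FS:
            return False                             # not enough data yet
        return window.std() < STILL_STD_G            # quiet signal: raise alarm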

A new era in asset tracking

GPS-Buddy shared how their collaboration with ST is shaping their product. For example, a device like the GPS-Buddy RFID-Tag requires significant optimization to fit all algorithms inside a small device that can last for years in the field. Yet, by starting with an implementation of ST’s sensor models, they can accelerate development and shorten their time to market. Our work together also leads to the implementation of new features. For instance, we are working together on a solution that can detect elevation to turn off the radio when a plane is taking off, thus saving battery life when an asset is airborne.

For end users, the GPS-Buddy RFID-Tag is a way to help companies function like a tech company without being a tech company. Indeed, a landscaping business, for instance, can now benefit from a data-driven approach and optimize its operations using AI, just like a tech company would, without having to hire an army of data scientists and developers. Put simply, the GPS-Buddy RFID-Tag is emblematic of a new era that is redefining what tracking assets and people mean and how it impacts operations.

The post GPS-Buddy RFID-Tag and ST’s LIS2DUX12: Let’s talk about the new era of tracking assets and safety monitoring appeared first on ELE Times.

6G Development: Turning 5G’s Roadblocks into Future Opportunities

ELE Times - Wed, 09/24/2025 - 09:20

The global rollout of 5G was expected to be transformative, providing blazing-fast speeds with incredibly low latencies and enabling visionary applications such as autonomous driving and telemedicine.

Despite real progress, reality has often fallen short of early expectations. Many industries have yet to create tangible value, while operators face the challenge of recovering massive investments in 5G infrastructure.

Now, with 6G research accelerating, the telecom industry has an unusual opportunity to address these shortfalls and, learning from the trials of 5G, set engineers and innovators on the path to a wireless future that is smarter, better, and more inclusive.

Key Lessons from 5G Adoption

  1. Too Many Promises, Not Enough Delivery

The hype around 5G fueled expectations of revolutionary new applications. Mobile broadband speeds improved, yet many flagship applications (think massive IoT or mission-critical ultra-reliable low-latency communications, URLLC) have not scaled as expected. Enterprises in particular have been hesitant, leaving operators holding significant investments while gaining returns only slowly.

  2. Deployment Complexity

Migrating from 4G to 5G proved more complicated than expected. Small-cell deployments for millimeter-wave (mmWave) frequencies, spectrum-allocation complications, and expensive upgrades across the network delayed commercial rollouts. While the 5G core gave operators and enterprises a lot of flexibility, it also increased operational costs and deployment time.

  3. Ecosystem Gaps

Like many technologies, 5G has been held back by its ecosystem. Smartphones were fast to adopt it, while industrial IoT and other enterprise-critical devices took far longer. Without a mature ecosystem of applications, services, and devices, industries could neither fully embrace 5G nor monetize it.

How 6G Aims to Deliver Where 5G Fell Short

Building on a Stronger Foundation

6G does not go back to the drawing board; it is a natural evolution of 5G capabilities. Enhancements to enhanced mobile broadband (eMBB), massive IoT, and URLLC will come alongside AI-native, machine-learning-driven networks, making systems smarter, more reliable, and better adapted to real-world applications such as AR, industrial automation, and connected healthcare.

Simplifying Network Complexity

One of the intended goals of 6G is to simplify network operations. A streamlined core architecture will make deployments faster, cheaper, and easier to manage, leaving operators and enterprises with fewer barriers.

Expanding the Ecosystem

Taking note of the slow ecosystem development in 5G, 6G R&D now places device and application readiness alongside infrastructure. This, in turn, means that industries will have the hardware and software required to harness the capabilities of 6G from day one, thus speeding up adoption in manufacturing, logistics, healthcare, and transportation.

Smarter Spectrum Utilization

6G will extend beyond mmWave operation into the upper mid-band (7–24 GHz) and terahertz ranges while improving spectrum efficiency. This should deliver faster data throughput, lower latency, and wider coverage, and ease the spectrum-sharing issues that made 5G underperform in densely populated urban areas.

Sustainability and Equity

Raw performance will not be the only yardstick; energy efficiency and global inclusiveness are design goals in their own right. By limiting power usage and operational footprints, 6G standards are geared towards sustainability. At the same time, they aim to extend reliable connectivity to underserved regions, narrowing the digital divide.

Testing’s Contribution to 6G Success

Testing that is precise, adaptable, and standards-based will be essential to the deployment of 6G. In addition to verifying 6G capabilities, tools must also guarantee 5G backward compatibility. More significantly, collaborations with skilled testing suppliers will contribute to ensuring seamless development cycles, adherence to regulations, and practical performance.

Conclusion:

In conclusion, 6G presents an opportunity to address 5G’s drawbacks by emphasising sustainability, ecosystem preparedness, and pragmatism. The upcoming ten years provide engineers, operators, and businesses the chance to transform wireless communication into something quicker, easier, and more significant for communities and industries around the globe.

(This article has been adapted and modified from content on Anritsu.)

The post 6G Development: Turning 5G’s Roadblocks into Future Opportunities appeared first on ELE Times.
