Microelectronics world news
Designing energy-efficient AI chips: Why power must be an early consideration

AI’s demand for compute is rapidly outpacing current power infrastructure. According to Goldman Sachs Global Institute, upcoming server designs will push this even further, requiring enough electricity to power over 1,000 homes in a space the size of a filing cabinet.
As workloads continue to scale, energy efficiency is now as critical as raw performance. For engineers developing AI silicon, the central challenge is no longer just about accelerating models, but maximizing performance for every watt consumed.
A shift in design philosophy
The escalation of AI workloads is forcing a paradigm shift in chip development. Energy optimization must be addressed from the earliest design phases, influencing decisions throughout concept, architecture, and production. Considering thermal behavior, memory traffic, architectural tradeoffs, and workload characteristics as part of a single power-aware design flow enables the development of systems that scale efficiently without breaching data center or edge-device energy limits.
Traditionally, design teams have primarily focused on timing and performance, only addressing energy consumption at the end of the process. Today, that strategy is outdated.
Synopsys customer surveys across numerous design projects show that addressing power at the architectural stage can yield 30-50% savings, whereas waiting until implementation typically achieves only marginal improvements. Early exploration enables decisions about architecture, memory hierarchy, and workload mapping before they become fixed, allowing trade-offs that balance throughput, area, and efficiency.
Architecture analysis as a power tool
Before RTL is finalized, a comprehensive power analysis flow helps reveal where energy is being spent and what trade-offs exist between voltage, frequency, and performance. Architectural modeling enables rapid evaluation of techniques—such as dynamic voltage and frequency scaling (DVFS), power gating to shut down inactive circuits, and optimizing data flow within the network-on-chip (NoC)—and supports smarter, more energy-efficient design choices.
Transaction-level simulation allows teams to measure expected workloads and predict the impact of configuration changes. This early insight informs hardware-software partitioning, interface sizing, and memory placement, all critical factors in the chip’s overall efficiency.
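As a rough illustration of why architectural knobs like DVFS pay off, dynamic switching power scales approximately as P ≈ C·V²·f, so lowering supply voltage and clock frequency together yields superlinear savings. A minimal sketch, with all component values hypothetical:

```python
# Back-of-the-envelope dynamic power model: P_dyn ~= C_eff * V^2 * f.
# All parameter values are hypothetical, chosen only to illustrate scaling.

def dynamic_power(c_eff_farads: float, v_volts: float, f_hertz: float) -> float:
    """Estimate the switching power of a digital block in watts."""
    return c_eff_farads * v_volts**2 * f_hertz

# Nominal operating point: 1 nF effective capacitance, 0.9 V, 1 GHz.
p_nominal = dynamic_power(1e-9, 0.9, 1e9)

# DVFS low-power point: drop voltage and frequency together.
p_scaled = dynamic_power(1e-9, 0.7, 0.6e9)

print(f"nominal: {p_nominal:.3f} W, scaled: {p_scaled:.3f} W")
print(f"power reduction: {1 - p_scaled / p_nominal:.0%}")
```

Because voltage enters quadratically, a 22% voltage drop combined with a 40% frequency drop cuts dynamic power by roughly 64% in this toy model, which is the kind of trade-off transaction-level exploration is meant to expose early.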
Data movement: The hidden power sink
Computation isn’t the only factor driving energy use. In many AI chips, data movement consumes more power than the arithmetic itself. Each transfer between memory hierarchies or across chiplets adds significant overhead. This is the essence of the so-called memory wall: compute capability has outpaced memory bandwidth.
To close that gap, designers can reduce unnecessary transfers by introducing compute-in-memory or analog approaches, choosing high-bandwidth memory (HBM) interfaces, or adopting sparse algorithms that minimize data flow. The earlier the data paths are analyzed, the greater the potential savings, because late-stage fixes rarely recover wasted energy caused by poor partitioning.
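The scale of the data-movement problem can be shown with a toy per-layer energy budget. The per-operation energies below are hypothetical placeholders, loosely in the spirit of frequently cited estimates that an off-chip DRAM access costs orders of magnitude more energy than an arithmetic operation; only the ratios matter:

```python
# Toy energy budget for one layer: compute vs. data movement.
# Per-op energies (picojoules) are illustrative assumptions, not
# measurements; real values vary widely by process node and design.
ENERGY_PJ = {
    "mac_int8": 0.2,        # one 8-bit multiply-accumulate
    "sram_read_32b": 5.0,   # on-chip SRAM access
    "dram_read_32b": 640.0, # off-chip DRAM access
}

def layer_energy_uj(macs: int, sram_accesses: int, dram_accesses: int) -> float:
    """Total layer energy in microjoules."""
    pj = (macs * ENERGY_PJ["mac_int8"]
          + sram_accesses * ENERGY_PJ["sram_read_32b"]
          + dram_accesses * ENERGY_PJ["dram_read_32b"])
    return pj / 1e6  # picojoules -> microjoules

# Same MAC count, different data-path partitioning: trading extra SRAM
# traffic for far fewer DRAM accesses dominates the energy outcome.
naive = layer_energy_uj(macs=1_000_000, sram_accesses=200_000, dram_accesses=100_000)
tiled = layer_energy_uj(macs=1_000_000, sram_accesses=400_000, dram_accesses=5_000)
print(f"naive: {naive:.1f} uJ, tiled: {tiled:.1f} uJ")
```

In this sketch the arithmetic itself is a rounding error next to DRAM traffic, which is exactly why partitioning decisions made before RTL lock-in recover energy that late-stage fixes cannot.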
The growing thermal challenge
As designs move toward multi-die and chiplet architectures, thermal density has become a first-order constraint. Packing several dies into one package creates concentrated heat zones that are difficult to manage later in the flow. Effective thermal planning, therefore, starts with system partitioning: examining how compute blocks are distributed and how heat will flow through the stack or interposer.
By modeling various configurations early, before layout or floor planning, engineers can avoid thermally stressed regions and plan for cooling strategies that support consistent performance under load.
Optimizing the real workload
Unlike traditional semiconductors, AI chips are rarely general-purpose. Whether a device runs edge inference, data center training, or specialized analytics, its efficiency depends on how closely the hardware matches the target workload. Simulation, emulation, and prototyping before tapeout make it possible to test representative use cases and fine-tune hardware parameters accordingly.
Profiling multiple operating modes, from idle to sustained training, exposes inefficiencies that might otherwise remain hidden until silicon returns from the fab. And it helps ensure the design can maintain high utilization and consistent energy performance across all conditions.
Extending efficiency beyond tapeout
Energy monitoring and management must persist even after chips are manufactured. Variability, aging, and environmental factors can shift operating characteristics over time. Integrating on-chip telemetry and control using silicon lifecycle management (SLM) solutions allows engineers to track power behavior in the field and apply adjustments to sustain optimal performance per watt throughout the product’s lifecycle.
The next breakthroughs in AI hardware will come not just from faster chips, but from smarter engineering that treats power as a foundational design dimension, not an afterthought. For today’s AI hardware, efficiency is performance.
Godwin Maben is a Synopsys Fellow.
Special Section: AI Design
- The AI design world in 2026: What you need to know
- AI workloads demand smarter SoC interconnect design
- AI’s insatiable appetite for memory
- The AI-tuned DRAM solutions for edge AI workloads
- Designing edge AI for industrial applications
- Round pegs, square holes: Why GPGPUs are an architectural mismatch for modern LLMs
- Bridging the gap: Being an AI developer in a firmware world
- Why Power Delivery Is Becoming the Limiting Factor for AI
- Silicon coupled with open development platforms drives context-aware edge AI
The post Designing energy-efficient AI chips: Why power must be an early consideration appeared first on EDN.
Vishay Intertechnology launches New Commercial and Automotive Grade Power Inductors
Vishay Intertechnology, Inc. introduced four new power inductors in the 2.0 mm by 1.6 mm by 1.2 mm 0806 and 3.2 mm by 2.5 mm by 1.2 mm 1210 case sizes. The commercial IHLL-0806AZ-1Z and IHLL-1210AB-1Z and Automotive Grade IHLP-0806AB-5A and IHLP-1210ABEZ-5A achieve the same performance as the next-smallest competing inductor in 11 % (1210) and 64 % (0806) smaller footprints, while offering higher operating temperatures, a wider range of inductance values, and lower DCR for increased efficiency.
The IHLL-0806AZ-1Z and IHLL-1210AB-1Z offer inductance values from 0.24 µH to 4.70 µH and typical DCR down to 6.6 mΩ; their terminals are plated on the bottom only, enabling a smaller land pattern for more compact board spacing. The terminals of the IHLP-0806AB-5A and IHLP-1210ABEZ-5A are plated on the bottom and sides, allowing the formation of a solder fillet that adds mounting strength under severe mechanical shock while simplifying solder joint inspection. These AEC-Q200 qualified devices provide reliable performance up to +165 °C, 10 °C higher than the closest competing composite inductor, and typical DCR down to 15.0 mΩ.
Delivering improved performance over ferrite-based technologies, all four devices feature a robust powdered iron body that completely encapsulates their windings — eliminating air gaps and magnetically shielding against crosstalk to nearby components — while their soft saturation curve provides stability across the entire operating temperature and rated current ranges. Packaged in a 100 % lead (Pb)-free shielded, composite construction that reduces buzz to ultra-low levels, the inductors offer high resistance to thermal shock, moisture, and mechanical shock, and handle high transient current spikes without saturation.
RoHS-compliant, halogen-free, and Vishay Green, the Vishay Dale devices released today are designed for DC/DC converters, noise suppression, and filtering in a wide range of applications. The IHLP-0806AB-5A and IHLP-1210ABEZ-5A are ideal for automotive infotainment, navigation, and braking systems; ADAS, LiDAR, and sensors; and engine control units. The IHLL-0806AZ-1Z and IHLL-1210AB-1Z are intended for CPUs, SSD modules, and data networking and storage systems; industrial and home automation systems; TVs, soundbars, and audio and gaming systems; battery-powered consumer healthcare devices; medical devices; telecom equipment; and precision instrumentation.
Device Specification Table:
| Series | IHLL-0806AZ-1Z | IHLP-0806AB-5A | IHLL-1210AB-1Z | IHLP-1210ABEZ-5A |
| --- | --- | --- | --- | --- |
| Inductance @ 100 kHz (μH) | 0.24 to 4.70 | 0.22 to 0.47 | 0.24 to 4.70 | 0.47 to 4.70 |
| DCR typ. @ 25 °C (mΩ) | 16.0 to 240.0 | 15.0 to 21.0 | 6.6 to 115.0 | 18.0 to 150.0 |
| DCR max. @ 25 °C (mΩ) | 20.0 to 288.0 | 18.0 to 25.0 | 10.0 to 135.0 | 22.0 to 180.0 |
| Heat rating current typ. (A)(¹) | 1.3 to 6.3 | 4.6 to 5.8 | 2.3 to 9.2 | 1.8 to 5.1 |
| Saturation current typ. (A)(²) | 1.5 to 6.5 | 4.5 to 5.1 | 2.5 to 9.0 | 2.0 to 6.5 |
| Saturation current typ. (A)(³) | 1.8 to 7.2 | 5.4 to 7.5 | 2.9 to 11.5 | 2.5 to 8.2 |
| Case size | 0806 | 0806 | 1210 | 1210 |
| Temperature range (°C) | -55 to +125 | -55 to +165 | -55 to +125 | -55 to +165 |
| AEC-Q200 | No | Yes | No | Yes |
(¹) DC (A) that will cause an approximate ΔT of 40 °C
(²) DC (A) that will cause L0 to drop approximately 20 %
(³) DC (A) that will cause L0 to drop approximately 30 %
The post Vishay Intertechnology launches New Commercial and Automotive Grade Power Inductors appeared first on ELE Times.
Loom Solar Introduces Revolutionary, Scalable CAML BESS Solution up to 1 MWh to Replace Diesel Generators for C&I Sector
Loom Solar, one of India’s leading solar manufacturing companies, announced the launch of its scalable 125 kW/261 kWh CAML Battery Energy Storage System (BESS), expandable up to 1 MWh, a next-generation solution designed to deliver uninterrupted, seamless power to the Commercial and Industrial (C&I) sector, significantly reducing production losses caused by power outages.
Unlike conventional diesel generator-based systems, which typically involve switch-over downtimes ranging from 30 seconds to 3 minutes, Loom Solar’s scalable 125kW/261kWh BESS ensures instantaneous power availability, eliminating operational disruptions in critical industrial processes. The system is engineered for a cleaner, quieter, and safer microgrid application that addresses low-voltage situations and power cuts while delivering continuous power for over two hours, with deep-discharge capability, making it a reliable alternative for businesses that demand high uptime and operational efficiency.
With a lifecycle of up to 6,000 charge–discharge cycles, the scalable 125kW/261kWh BESS offers long-term durability and superior economic value.
Developed through Loom Solar’s strong focus on in-house research and development, and validated through rigorous product testing facilities, the solution reflects the company’s commitment to innovation and reliability. The system is IoT-enabled and compatible with connected energy ecosystems, allowing real-time monitoring, intelligent energy management, and seamless integration with renewable power sources such as solar.
Commenting on the launch, Amod Anand, Co-Founder and Director, Loom Solar, said, “The scalable 125kW/261kWh BESS is a solution-led product designed specifically for India’s C&I sector, where even a few seconds of downtime can translate into significant losses. Our focus has been to replace reactive power backup with intelligent, seamless energy continuity. This solution not only ensures uninterrupted operations but also helps businesses optimise energy costs and move closer to energy independence through renewable integration.”
With this launch, Loom Solar strengthens its position as a key enabler of India’s energy transition, offering integrated solar and energy storage solutions that support energy security, sustainability, and long-term resilience for businesses.
The post Loom Solar Introduces Revolutionary, Scalable CAML BESS Solution up to 1 MWh to Replace Diesel Generators for C&I Sector appeared first on ELE Times.
Had to replace flooring under my bench, forced a cleaning and sorting that was desperately needed. I added an isolation transformer for the test equipment. First time placing my component sorting containers on their side to avoid digging out the one I...
My test bench is combined with music production, for no reason other than convenience.
Been meaning to sort these for about five years...
Submitted by /u/One-Cardiologist-462
Quick rant - Circuits West in Colorado just went out of business
Argh. I'm just here to complain. Circuits West in Longmont, Colorado closed their doors on Monday. I realize the responses I'll get are "Use JLC or PCBWay," and yes, those are great options, but I do quick-turn (usually 2-day) fabs, and on top of that it's CNY. Argh. Just annoying. Can't do anything about it. Guess it's Advanced Circuits (APCT, AdvancedPCB) as a single-source in-Colorado fab shop :( I don't have an image; I'm posting their logo.
smolBrain - my own version of slimeVR trackers based on nRF52 chip series. Just want to share my project, maybe people find notes there interesting.
Hi hi :3 Upfront - with a huge help from the SlimeVR devs and community I was able to make a final version of my SlimeVR smolBrain trackers. So thanks a lot to them for the help <3 Why share here, you may ask? It looks like there are a lot of supa smart people who may give feedback on whatever I made, especially for low power devices. That was the first time for me working with low power devices, and since I'm not exactly the best hardware engineer I had to learn a lot. Leakage here, sleep mode there, Iq currents for every device on the board and so on. Was pretty fun. But also - I tried to add to the schematic and readme a ton of measurements of the board and the reasons why I used components or what they do. Very often that is something I really want to have on other people's work, like dev notes, and it is not always there. So I decided to make it myself :3 Whether the description and notes are good or not I do not know; there is a chance I still have some problematic parts or inconsistencies, but I tried to make this board as small and as good as I can, following all PCB routing rules. So I believe if you have never done something like this it can be a very interesting insight or an overview of the behaviour of almost all components on a ready-to-use board. It is open source as usual :3, feel free to check out my git project page if you feel like it.
A box full of old capacitors
I love old capacitors, colour shining happiness \m/
Supra launches to secure US supply of gallium, scandium and other critical minerals
Nimy ships high-grade gallium ore from Western Australia to M2i in USA
Classic constant current cascode

An important figure of merit for all precision constant current sources is their active impedance. Which is to say, just how “constant” is their output held against changes in applied voltage? Frequent and expert Design Idea (DI) commentator Ashutosh Sapre (Ashu) was kind enough to measure this parameter for a design of mine and share his results. The circuit, applied as a 4 to 20mA current mirror, is shown in Figure 1 and discussed in “Combine two TL431 regulators to make versatile current mirror.”
Figure 1 A 4 to 20mA current mirror with poor active impedance.
Said Ashutosh: “I tried the fig.2 circuit for 4-20mA mirroring, with R1 and R2 of 100E, and using a TL431 (2.5V). It worked quite well. One issue I found was that the output impedance (di/dv) was quite low; there was a change of 40uA over a supply swing of 20V (if I remember correctly), not linear with supply voltage change. It is possibly due to the 2.5V reference voltage modulation with cathode voltage swing.
It could be compensated for, but some error will remain due to the non-linearity.”
Wow the engineering world with your unique design: Design Ideas Submission Guide
His observation and analysis were both absolutely correct. Table 6.6 in the TL431 datasheet reveals a maximum reference-voltage error of up to 2 mV per volt of cathode-to-anode voltage swing, consistent with the mediocre 20V/40µA = 500k active impedance he observed.
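The quoted figure is simple arithmetic; a quick check using only numbers stated above (the 100 Ω sense resistor value comes from Ashutosh's description):

```python
# Active impedance implied by Ashutosh's measurement:
# ~40 uA of output change over a ~20 V supply swing.
delta_v = 20.0            # volts of cathode voltage swing
delta_i = 40e-6           # amps of observed output current change
z_active = delta_v / delta_i
print(f"measured active impedance ~ {z_active / 1e3:.0f} kohm")

# Worst-case bound from the datasheet's 2 mV/V reference modulation,
# taken across the 100-ohm sense resistor mentioned in the text.
dvref_max = 2e-3 * delta_v   # up to 40 mV of reference shift
di_max = dvref_max / 100.0   # up to 400 uA of current error
print(f"datasheet worst-case error: {di_max * 1e6:.0f} uA")
```

The observed 40 µA sits comfortably inside the 400 µA worst-case bound, consistent with the conclusion that reference modulation explains the measurement.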
Fortunately, a simple and effective remedy is available and waiting in the pages of the common cookbook of current mirror circuits: the cascode. Figure 2 shows how it can be added (as D1 + Q2) to Figure 1.
Figure 2 D1/Q2 cascode reduces reference modulation error, improving active impedance by orders of magnitude.
The effect of the added parts is to isolate Z1’s cathode/anode voltage from voltage variation at the I2 node, thus holding the cathode/reference differential near zero and constant to within millivolts.
The resultant orders of magnitude reduction of reference modulation should produce a proportional increase in active impedance.
Thanks, Ashu! Another example of the magic of editor Aalyia Shaukat’s DI kitchen collaboration in action!
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Combine two TL431 regulators to make versatile current mirror
- Active current mirror
- A current mirror reduces Early effect
- Active two-way current mirror
- Another silly simple precision 0/20mA to 4/20mA converter
The post Classic constant current cascode appeared first on EDN.
OIF interoperability demo at OFC highlights 800ZR, 400ZR, Multi-span Optics, CEI-448G, CEI-224G, Co-Packaging, CMIS, EEI
EPC launches its first seventh-generation eGaN power transistor
Infineon strengthens its leading position in sensors acquiring non-optical analogue/mixed-signal sensor portfolio from ams OSRAM
Infineon Technologies AG is expanding its sensor business with the acquisition of the non-optical analogue/mixed-signal sensor portfolio from ams OSRAM Group. The two companies have entered into an agreement for a purchase price of €570 million on a debt-free and cash-free basis. With the planned investment, Infineon will strengthen its position as a leader in sensors for automotive and industrial markets through a complementary portfolio and expand its product range in medical applications. The acquired business is expected to generate around €230 million in revenue in calendar year 2026 and will support Infineon’s profitable growth. The transaction will be accretive to earnings-per-share immediately upon closing, with future synergies enabling substantial additional value creation. As part of the transaction, around 230 employees with expertise in research and development (R&D) and business management will join Infineon. The agreement includes a multi-year supply agreement with ams OSRAM.
“The acquired business is a perfect strategic fit for Infineon and complements our strong offering in the analogue and sensor space. We will be able to provide our customers with even more comprehensive system solutions,” says Jochen Hanebeck, CEO of Infineon. “I am convinced that this is an outstanding technological, commercial and cultural match, generating growth opportunities in our current target markets as well as in emerging areas like humanoid robotics.”
The overall transaction is structured as a fabless asset deal covering sensor products, R&D capabilities, intellectual property and test & lab equipment. The transaction is subject to customary closing conditions, including regulatory approvals, and is expected to close in the second quarter of calendar year 2026. Infineon will fund the acquisition with additional debt, as part of its general corporate financing plans.
Sensors are the link between the physical and the digital world, as they detect and convert signals such as movement, sound, light waves, temperature and even heartbeat and strain into processible data. They are at the core of a wide array of applications like software-defined vehicles, health trackers, and physical AI applications such as humanoid robots. The market potential of the sensor and radio frequency markets is projected to exceed $20 billion by 2027.
The acquired Mixed Signal Products business will add leading medical imaging and sensor interfaces to the portfolio of Infineon, including X-ray solutions and sensors used for valve control, building control technology and metering. The Positioning & Temperature Sensors assets will strengthen Infineon’s high-precision position, capacitive and temperature sensing for automotive, industrial and medical applications, such as chassis position sensing and hands-on detection in vehicles, angle and position sensing for robotics and glucose monitoring.
The acquisition fully supports Infineon’s strategy to grow its sensor business. Infineon established its Sensor Units & Radio Frequency (SURF) unit within its Power & Sensor Systems (PSS) division in January 2025. This aligns with the strategy to offer customers comprehensive system solutions through a powerful, interlinked portfolio in “analogue & sensors”, “power” and “control & connectivity”.
The post Infineon strengthens its leading position in sensors acquiring non-optical analogue/mixed-signal sensor portfolio from ams OSRAM appeared first on ELE Times.
Silicon coupled with open development platforms drives context-aware edge AI

Edge AI reached an inflection point in 2025. What had long been demonstrated in controlled pilots—local inference, reduced latency, and improved system autonomy—began to transition into scalable, production-ready deployments across industrial and embedded markets. This shift has exposed a deeper architectural reality: many existing silicon platforms and development environments are poorly matched to the demands of modern, context-aware edge AI.
As AI workloads move from centralized cloud infrastructure to distributed edge devices, design priorities have fundamentally changed. Edge systems must execute increasingly complex models under strict constraints on power, thermal envelope, cost, and real-time determinism. Addressing these requirements demands both a new class of AI-native silicon and a development platform that is open, extensible, and aligned with modern machine learning workflows.
Why legacy architectures are no longer sufficient
Conventional microprocessors and application processors were not designed for sustained AI workloads at the edge. While they can support inference through software or add-on accelerators, their architectures typically lack three essential characteristics required for modern Edge AI:
- Dedicated AI acceleration capable of efficiently executing convolutional, transformer-based, and multimodal workloads.
- Deterministic real-time processing for latency-sensitive industrial and embedded applications.
- Energy efficiency at scale, enabling always-on intelligence without excessive thermal or power budgets.
As edge AI applications expand beyond simple classification toward sensor fusion, contextual reasoning, and on-device generative inference, these limitations become more pronounced. The result is a growing gap between what software frameworks can express and what deployed hardware can efficiently execute.
Edge AI design as a full value chain
Successful edge AI deployment requires a system-level view spanning the entire design value chain:
Data collection and preprocessing
Industrial edge systems, for example, operate in noisy, variable environments. Training data must reflect real-world conditions such as lighting changes, mechanical vibration, sensor drift, and interference.
Hardware-accelerated execution
Today’s edge designs rely on heterogeneous compute architectures: AI-native NPUs handle dense matrix and tensor operations, while CPUs, GPUs, DSPs, and real-time cores manage control logic, signal processing, and exception handling.
Model training, adaptation, and optimization
Although training is often performed off-device, edge deployment constraints must be considered early. Transfer learning and hybrid model architectures are commonly used to balance accuracy, explainability, and compute efficiency. Hardware-aware compilation enables models to be transformed to match accelerator capabilities while maintaining deterministic performance characteristics.
The role of open development platforms
Historically, edge AI development has been fragmented across proprietary toolchains, closed runtimes, and framework-specific optimizations. This fragmentation has slowed adoption and increased development risk, particularly as model architectures evolve rapidly.
An open development platform addresses these fragmentation challenges, which are driven by:
- Framework diversity: Edge developers increasingly rely on PyTorch, ONNX, JAX, TensorFlow, and emerging toolchains. Supporting this diversity requires compiler infrastructures that are framework-agnostic.
- Rapid model evolution: The rise of transformers and large language models (LLMs) has introduced new operator patterns that closed toolchains struggle to support efficiently.
- Long product lifecycles: Industrial and embedded devices often remain in service for a decade or more, requiring platforms that can adapt to new models without hardware redesign.
Additionally, open compiler and runtime infrastructures based on standards such as MLIR and RISC-V enable a separation between model expression and hardware execution. This decoupling allows silicon to evolve while preserving software investment.

Figure 1 Synaptics’ open edge AI development platform features Astra SoCs, the Torq compiler, and the industry’s first deployment of Google’s Coral NPU. Source: Synaptics
Context-aware AI and the move toward multimodal inference
A defining trend of edge AI in 2025 was the transition from single-sensor inference toward context-aware, multimodal systems. Rather than processing isolated data streams, edge devices increasingly combine vision, audio, motion, and environmental inputs to build a richer understanding of their surroundings.
This shift places new demands on edge platforms, which must now support:
- Heterogeneous data types and operators
- Efficient execution of attention mechanisms and transformer-based models
- Low-latency fusion of multiple sensor streams

Figure 2 The Grinn OneBox AI-enabled industrial single-board computer (SBC), designed for embedded edge AI applications, leverages a Grinn AstraSOM compute module and the Synaptics SL1680 processor. Source: Grinn Global
Designing for scalability and future workloads
One of the key architectural challenges in edge AI is scalability—not only across product tiers, but across time. AI-native silicon must scale from low-power endpoints to higher-performance systems while maintaining software compatibility.
This is typically achieved through:
- Modular accelerator architectures that scale performance without changing programming models.
- Heterogeneous compute integration, allowing workloads to migrate between NPUs, CPUs, and GPUs as needed.
- Standardized toolchains that preserve model portability across devices.
For designers, this approach reduces risk by allowing a single software stack to span multiple products and generations.
Testing, validation, and long-term reliability
Edge AI systems operate continuously and often autonomously. Validation must extend beyond functional correctness to include:
- Worst-case latency and power analysis
- Thermal stability under sustained workloads
- Behavior under degraded or unexpected inputs
Monitoring and logging capabilities at the edge enable post-deployment diagnostics and iterative model improvement. As models become more complex, explainability and auditability will become increasingly important, particularly in regulated environments.
Looking ahead
In 2026, AI is expected to move further into mainstream embedded system design. The focus is shifting from proving feasibility to optimizing performance, reliability, and lifecycle cost. This transition highlights the importance of aligning silicon architecture, software openness, and system-level design practices.
A new class of AI-native silicon, coupled with an open and extensible development platform, provides a foundation for this next phase. For system designers, the challenge—and opportunity—is to treat edge AI not as an add-on feature, but as a core architectural element spanning the entire design value chain.
Neeta Shenoy is VP of marketing at Synaptics.
The post Silicon coupled with open development platforms drives context-aware edge AI appeared first on EDN.
The Rare Earths Catch-22: Why It Exists and How It Can Be Fixed
Speaking at the Auto EV Tech Vision Summit 2025, Bhaktha Keshavachara, CEO, Chara Technologies, highlights the rare-earth challenges facing the world today and the potential policies that can resolve them.
As the world strides towards more sustainable solutions, the technologies we use become more rare-earth dependent, from batteries to motors and the magnets used in those motors. Coupled with this, a simultaneous energy transition is taking shape: we are gradually moving towards meeting our energy goals with electrons rather than hydrocarbons, especially in transportation. This necessitates locating supply chains in stable regions or becoming wholly self-sufficient in the raw materials, the rare earths: the 17 elements set apart in the periodic table, as Bhaktha Keshavachara, CEO, Chara Technologies, puts it.
With rare earths, the global catch-22 stems from two specific problems:
- They are expensive to buy
- They are hazardous to extract
Since these materials are critical for a future dominated by electric technologies like EVs and e-buses, it becomes imperative to locate supply chains in stable regions, become self-sufficient in their production, or find alternatives. Let’s see what Bhaktha had to say about it.
Start Mining or Find Alternatives
“We have to start mining and extraction,” Bhaktha reiterates as he presents his first solution to the rare-earth catch-22. He goes on to recount the strategies adopted by nations globally, including the US, which has notably reopened its mines in California for rare-earth minerals. He also underlines the ongoing global efforts to find alternative materials that can replace rare-earth magnets without using rare earths: NIRON in the US, which is experimenting with iron nitride magnets, and Europe’s efforts towards an alternative in potassium-strontium magnets.
The problem with rare-earth mining is the hazardous nature of the process that leaves populations and people cancer-ridden for a long time. “If you see pictures on the net of the west coast of China, actually in central China, there are like cancer villages,” Bhaktha recounts.
Alternative Motor Technologies or Materials
Further, he suggests using alternative motor technologies to reduce the rare-earth content of the overall product. In the same vein, he refers to various motor types, including electrically excited synchronous motors (EESM), induction motors (IM), and synchronous reluctance motors (SynRM). He also touches upon light rare-earth materials, calling for greater use of them as opposed to the heavy rare-earth materials over which China holds a stronghold, as he mentioned in his address.
India’s Situation
Talking about India’s situation, Bhaktha says, “We have rare earths, but not all the 17 rare earths, but still we can do with whatever we have, and potentially we can import ore which has dysprosium and other rare earth materials.” He also recounts past events in which global price fluctuations anchored by China led two big companies in India to drop magnet-manufacturing projects when they suddenly became unviable in business terms.
In the same vein, he cites the example of the US government, which has stepped in to guarantee minimum prices for magnets irrespective of global market fluctuations, both to support the industry and to enable localisation of the technology and materials.
National efforts, Global Repercussions
In the midst of all these challenges, Bhaktha reaffirms his determination to face the storm head-on, calling upon the industry to innovate for the better. He says, “I think if we do the innovation and take the leadership role in prioritizing this, we not only have a huge opportunity to do something new in India, but there is a huge opportunity to export to the rest of the world because the rare-earth problem is a global problem.”
The post The Rare Earths Catch-22: Why It Exists and How It Can Be Fixed appeared first on ELE Times.
New Power Module Enhances AI Data Centre Power Density and Efficiency
Increasing AI and high-performance computing workloads demand power solutions that combine efficiency, reliability and scalability. Integrated power modules help streamline design, reduce energy use and deliver the stable performance required by advanced data centres. Microchip Technology has announced the launch of the MCPF1525 Power Module, a highly integrated device with a 16V Vin buck converter that can deliver 25A per module, stackable up to 200A. The MCPF1525 enables higher power delivery within the same rack space and combines programmable PMBus and I2C controls. The device is designed to power the latest generation of PCIe switches and high-performance compute MPU applications needed for AI deployments.
The MCPF1525 is packaged in an innovative vertical construction that maximises board space efficiency and can offer up to a 40% board area reduction when compared to other solutions. The compact power module is approximately 6.8 mm x 7.65 mm x 3.82 mm, making it an optimal solution for space-constrained AI servers.
For increased reliability, the MCPF1525 includes multiple diagnostic functions reported over PMBus, including over-temperature, over-current and over-voltage protection to minimise undetected faults. With a thermally enhanced package, the device is engineered to work within an operating junction temperature range of -40°C to +125°C. An on-board embedded EEPROM allows users to program the default power-up configuration.
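PMBus telemetry such as temperature, current and voltage readings is conventionally encoded in the standard LINEAR11 fixed-point format (a 5-bit signed exponent and an 11-bit signed mantissa in one 16-bit word). As a generic illustration of how host firmware decodes such readings — this is a PMBus-standard sketch, not Microchip-specific code, and the bus transaction itself is omitted:

```python
def decode_linear11(raw: int) -> float:
    """Decode a 16-bit PMBus LINEAR11 word: value = mantissa * 2**exponent."""
    exponent = (raw >> 11) & 0x1F   # bits 15..11, 5-bit two's complement
    if exponent > 0x0F:
        exponent -= 0x20
    mantissa = raw & 0x7FF          # bits 10..0, 11-bit two's complement
    if mantissa > 0x3FF:
        mantissa -= 0x800
    return mantissa * 2.0 ** exponent

# Example: exponent -2, mantissa 100 encodes 25.0 (e.g. 25 degrees C)
print(decode_linear11(0xF064))  # 25.0
```

The same decoder applies to any LINEAR11-formatted PMBus reading, regardless of which command returned it.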
“By leveraging Microchip’s comprehensive solutions, including PCIe Switchtec technology, FPGAs, MPUs and Flashtec NVMe controllers, the MCPF1525 power module can help customers achieve the system efficiency, reliability and scalability required for high-performance data centre and industrial computing applications,” said Rudy Jaramillo, vice president of Microchip’s analogue power and interface division. “Seamless integration across Microchip’s portfolio simplifies development and lowers risk, helping designers accelerate time-to-market.”
The MCPF1525 features a customised integrated inductor for low conducted and radiated noise, enhancing signal integrity, data accuracy and reliability of high-speed computing, helping reduce repeated data transmissions that waste valuable system power and time.
The post New Power Module Enhances AI Data Centre Power Density and Efficiency appeared first on ELE Times.
EDN announces Product of the Year Awards

EDN has announced the winners of the annual Electronic Products Product of the Year Awards in the January/February digital magazine. For the awards’ 50th year, EDN editors looked at over 100 products across 13 component categories to select the best new components. These categories include analog/mixed-signal ICs, development kits, digital ICs, electromechanical devices, interconnects, IoT platforms, modules, optoelectronics, passives, power, RF/microwave, sensors, and test and measurement.
These award-winning products demonstrate a significant advancement in a technology or its application, an exceptionally innovative design, a substantial achievement in price/performance, improvements in design performance, and/or the potential for new product designs and opportunities. This year, the awards have two ties, in the categories of power and sensors.
Also in the January/February issue, we look at some of the most advanced electronic components launched at the Consumer Electronics Show (CES). This year’s show highlighted the rise of AI across applications from automotive to smart glasses. Chipmakers are placing big bets on edge AI as a key growth area along with robotics, IoT, and automotive.
A few new AI chip advances announced at CES include Ambarella Inc.’s CV7 edge AI vision system-on-chip, optimized for a wide range of AI perception applications, and Ambiq Micro’s industry-first ultra-low-power neural processing unit built on its Subthreshold Power Optimized Technology platform and designed for real-time, always-on AI at the edge.
Though chiplets hold big promises in delivering more compute capacity and I/O bandwidth, design complexity has been a challenge. Cadence Design Systems Inc. and its IP partners may have made this a bit easier with pre-validated chiplets, targeting physical AI, data center, and high-performance-computing applications. At CES, Cadence announced a partner ecosystem to deliver pre-validated chiplet solutions, based on the Cadence physical AI chiplet platform. The new chiplet spec-to-packaged parts ecosystem is designed to reduce engineering complexity and accelerate time to market for developing chiplets while reducing risk.
We also spotlight the top 10 edge AI chips with an updated ranking, curated by AspenCore’s resident AI expert, EE Times senior reporter Sally Ward-Foxton. As highlighted by several CES product launches, more and more AI chips are being designed for every application niche as edge devices become AI-enabled. These devices range from handling multimodal large language models in edge devices to those designed for vision processing and minimizing power consumption for always-on applications.
Giordana Francesca Brescia, contributing writer for Embedded.com, looks at microcontrollers with on-chip AI and how they are transforming embedded hardware into intelligent nodes capable of analyzing and generating information. In addition to hardware innovations, she also covers software development and key areas of application such as biomedical and industrial automation.
We also spotlight several emerging trends in 2026, from 800-VDC power architectures in AI factories and battery energy storage systems (BESSes) to advances in autonomous farming and power devices for satellites.
The wide adoption of AI models has led to a redesign of data center infrastructure, according to contributing writer Stefano Lovati. Traditional data centers are being replaced with AI factories to meet the computational capacity and power requirements needed by today’s machine-learning and generative AI workloads.
However, a single AI factory can integrate several thousand GPUs, reaching power consumption levels in the megawatt range, Lovati said. This has led to the design of an 800-VDC power architecture, which is designed to support the multi-megawatt power demand required by the compute racks of next-generation AI factories.
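The motivation for a higher bus voltage is simple Ohm's-law arithmetic: for a fixed power draw, current scales as I = P/V, and conduction losses as I²R. A quick sketch (the 1 MW load and the 48 VDC comparison point are illustrative assumptions, not figures from the article):

```python
def bus_current(power_w: float, voltage_v: float) -> float:
    """DC bus current needed to deliver a given power: I = P / V."""
    return power_w / voltage_v

# Hypothetical 1 MW row of compute racks
i_48 = bus_current(1e6, 48)     # legacy 48 VDC bus: ~20,833 A
i_800 = bus_current(1e6, 800)   # 800 VDC bus: 1,250 A

# For the same distribution resistance, conduction loss scales with I^2,
# so the 800 V bus dissipates (48/800)^2 = 0.36% of the 48 V loss.
loss_ratio = (i_800 / i_48) ** 2
print(round(loss_ratio, 4))  # 0.0036
```

The lower current also shrinks busbar cross-sections, which is part of why 800 VDC is attractive at multi-megawatt scale.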
Lovati also discusses how wide-bandgap semiconductors such as silicon carbide and gallium nitride can deliver performance and efficiency benefits when implementing an 800-VDC architecture.
The adoption of BESSes is primarily being driven by the need to improve efficiency and stability in power distribution networks. BESSes can balance supply and demand by storing energy from both renewable sources and the conventional power grid, Lovati said. This helps stabilize power grids and optimize power uses.
Lovati covers emerging trends in BESSes, including advances in battery technologies, hybrid energy storage systems—integrating batteries with alternative energy storage technologies such as supercapacitors or flywheels—and AI-based solutions for optimization. Some of the alternatives to lithium-ion discussed include flow batteries and sodium-ion and aluminum-ion batteries.
We also look at the challenges of selecting the right power supply components for satellites. Not only do they need to be rugged and small, but they must also be configurable for customization.
The configurability of power supplies is an important factor for meeting a variety of space mission specifications, according to Amit Gole, marketing product manager for the high-reliability and RF business unit at Microchip Technology.
Voltage levels in the electrical power bus are generally standardized to certain values; however, the voltage of the solar array is not always standardized, Gole said, which calls for a redesign of all of the converters in the power subsystems, depending on the nature of the mission.
Because this redesign can result in cost and development time increases, it is important to provide DC/DC converters and low-dropout regulators across the power architecture that have standard specifications while providing the flexibility for customization depending on the system and load voltages, he said.
Gole said functions such as paralleling, synchronization, and series connection are of key importance for power supplies when considering the specifications of different space missions.
We also look at the latest advances in smart farming. With technological innovations required to improve the agricultural industry and to meet the growing global food demands, smart farming has emerged to support farming operations thanks to the latest advancements in robotics, sensor technology, and communication technology, according to Liam Critchley, contributing writer for EE Times.
One of the key trends in smart farming is the use of drones, which help optimize a variety of farming operations. These include monitoring the health of the crops and soil and communicating updates to the farmer and active operations such as planting seeds and field-spraying operations. Drones leverage technologies such as advanced sensors, communication, IoT technologies and, in some cases, AI.
Critchley said one of the biggest developing areas is the integration of AI and machine learning. While some drones have these features, many smart drones will soon use AI to identify various pests and diseases autonomously, eliminating the need for human intervention.
Cover image: Adobe Stock
The post EDN announces Product of the Year Awards appeared first on EDN.
Cree LED unveils OptiLamp LEDs with active intelligence in every pixel
[OC] repairing the pads on an ASIC
Howdy, first time poster here. I’m a professional ASIC repairman; I love my work and just like showing it off sometimes. Trace repairs are my favorite. This is only a small part of a much larger repair (the full post got zero attention anyway, lol), but feel free to AMA. All the surrounding SMDs are 0201-sized.



