News from the world of micro- and nanoelectronics

Silicon Photonics: The Lightspeed Revolution That Will Transform AI Computing

ELE Times - Thu, 02/05/2026 - 12:28

Courtesy: Lam Research

Lam Research is setting the agenda for the wafer fabrication equipment industry’s approach to a silicon photonics revolution, driving the breakthroughs in Speciality Technologies that will enable sustainable AI scaling through precision optical manufacturing.

The artificial intelligence boom has created an energy crisis that threatens to consume more electricity than entire nations. As data centres race to keep pace with AI’s insatiable appetite for computational power, technology leaders like Lam are shaping a fundamental shift that could redefine how we think about high-performance computing. One solution lies in replacing the electrical interconnects that have powered computing for decades with something far more efficient: light.

AI’s Energy Crisis: Why Power Demand Is Surging in Data Centres

Goldman Sachs projects a 160% increase in data centre power demand by 2030, reaching 945 terawatt-hours annually — equivalent to Japan’s entire electricity consumption.

The problem runs deeper than software inefficiency. According to Bloomberg, AI training facilities house hundreds of thousands of NVIDIA H100 chips, each drawing 700 watts, nearly eight times the power consumption of a large TV. Combined with cooling systems, some hyperscale facilities now require as much power as 30,000 homes, driving tech companies to seriously consider dedicated nuclear plants.
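Treating the article's figures as inputs, a quick back-of-envelope check shows how they hang together; the average household draw and the PUE overhead factor below are illustrative assumptions, not data from the article:

```python
# Back-of-envelope check of the facility figures above. The household
# draw and PUE are illustrative assumptions, not data from the article.
GPU_POWER_W = 700     # H100 board power cited above
HOMES = 30_000        # facility power equivalence cited above
AVG_HOME_KW = 1.2     # assumed average continuous draw per home
PUE = 1.3             # assumed cooling/distribution overhead factor

facility_mw = HOMES * AVG_HOME_KW / 1000   # ~36 MW for the whole site
it_load_mw = facility_mw / PUE             # ~28 MW left for the IT load
gpus = it_load_mw * 1e6 / GPU_POWER_W      # ~40,000 H100-class GPUs

print(f"Facility: {facility_mw:.0f} MW, IT load: {it_load_mw:.0f} MW")
print(f"Supports roughly {gpus:,.0f} H100-class GPUs")
```

Under these assumptions, a 30,000-home facility hosts roughly 40,000 H100-class GPUs, so the "hundreds of thousands of chips" cited for training clusters span several such sites.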

Figure source: Nvidia. Apart from average estimates, rack power data are based on Nvidia specifications; 2025 and later are estimates. AI server racks refer to GPU racks; general-purpose racks refer to CPU racks.

The Paradigm Shift

Meeting this challenge requires a fundamental change in how chips are designed and connected. Silicon photonics—using light to transmit data—has the potential to provide dramatic improvements in speed and efficiency over traditional electrical interconnects. Precision optical manufacturing makes this shift possible, enabling scalable processes that can support the next era of energy-efficient, high-performance computing.

Silicon photonics represents a fundamental reimagining of how data moves within computing systems. Instead of pushing electrons through copper wires, this technology uses photons—particles of light—to carry information through silicon waveguides that function like nanoscale fibre optic cables, integrated directly onto chips.

The efficiency gains are dramatic. Optical interconnects consume just 0.05 to 0.2 picojoules per bit of data transmitted, compared to the much higher energy requirements of electrical interconnects over similar distances. As transmission distances increase, even within a single package, the energy advantage of photonics becomes overwhelming.
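To see what those picojoules mean at system scale, the hedged comparison below assumes an electrical link energy of a few pJ/bit and an aggregate accelerator bandwidth of 100 Tb/s; only the optical figure comes from the article:

```python
# Interconnect power at accelerator-scale bandwidth. The optical figure
# is from the article; the electrical figure and the bandwidth are
# assumed order-of-magnitude values for illustration.
OPTICAL_PJ_PER_BIT = 0.2     # upper end of the article's 0.05-0.2 range
ELECTRICAL_PJ_PER_BIT = 5.0  # assumed, for a long copper SerDes link
BANDWIDTH_TBPS = 100         # assumed aggregate I/O per accelerator

bits_per_second = BANDWIDTH_TBPS * 1e12
optical_w = bits_per_second * OPTICAL_PJ_PER_BIT * 1e-12       # 20 W
electrical_w = bits_per_second * ELECTRICAL_PJ_PER_BIT * 1e-12 # 500 W

print(f"Optical I/O:    {optical_w:.0f} W")
print(f"Electrical I/O: {electrical_w:.0f} W for the same traffic")
```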

TSMC has published several research papers on silicon photonics since late 2023 and has announced public partnerships with NVIDIA to integrate optical interconnect architectures into next-generation AI computing products. Lam is leading the industry's transition to silicon photonics: as a technology leader with deep expertise in precision manufacturing, we are defining the roadmap for silicon photonics production, working closely with leading foundries and fabless companies to address the unique challenges presented by optical interconnects.

According to Yole Group, the silicon photonics market is expected to grow from $95 million in 2023 to more than $863 million in 2029, a roughly 45% compound annual growth rate that reflects the technology's expected rapid commercial adoption.

The Limits of Copper: Why Traditional Interconnects Can’t Scale With AI

At the heart of this energy crisis lies a fundamental bottleneck that has been building for years. While computing performance has advanced at breakneck speed, the infrastructure connecting these powerful processors has not kept pace. Hardware floating-point operations (FLOPS) have improved 60,000-fold over the past two decades, but DRAM bandwidth has increased only 100-fold, and interconnect bandwidth just 30-fold over the same period.
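Converting those improvement factors into annual growth rates makes the imbalance explicit:

```python
# The "memory wall" in numbers: the article's 20-year improvement
# factors, converted to compound annual growth rates.
YEARS = 20
flops_x, dram_x, link_x = 60_000, 100, 30   # factors quoted above

cagr = lambda factor: factor ** (1 / YEARS) - 1
print(f"Compute (FLOPS): {cagr(flops_x):.1%}/yr")   # ~73%
print(f"DRAM bandwidth:  {cagr(dram_x):.1%}/yr")    # ~26%
print(f"Interconnect:    {cagr(link_x):.1%}/yr")    # ~19%

# Bandwidth available per FLOP has shrunk by 60,000/30 = 2,000x,
# which is why processors stall waiting for data.
print(f"Compute vs. interconnect gap: {flops_x / link_x:,.0f}x")
```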

This creates what engineers call the “memory wall,” a constraint where data cannot move between processors and memory fast enough to fully use the available computing power. In AI applications, where massive datasets must flow seamlessly between graphics processors, high-bandwidth memory, and other components, these interconnect limitations become critical performance bottlenecks.

The solution that worked for previous generations—simply shrinking copper interconnects and packing them more densely—is reaching physical limits. As these copper traces become smaller and more numerous, they consume more power, generate more heat, and introduce signal integrity issues that become increasingly difficult to manage. Each voltage conversion in a data centre’s power delivery system introduces inefficiencies, and copper interconnects compound these losses throughout the system.

Modern AI architectures require what engineers call “high access speeds within the stack.” Chips become thinner, interconnects evolve from through-silicon vias (TSVs) to hybrid bonding, and memory modules must connect directly to graphics processors at unprecedented speeds. But when that high-speed memory connection has to traverse copper tracks on a circuit board to reach another processor, much of the bandwidth advantage disappears.

Silicon Photonics Meets AI: Co-Packaged Optics for Next-Gen Performance

Silicon photonics is not entirely new; it has powered telecommunications networks for years through pluggable transceivers that connect data centre racks. These proven systems use silicon photonic dies combined with separate lasers and micro-lens technologies packaged into modules that can be easily replaced if they fail.

But AI’s demands are pushing photonics into uncharted territory. Instead of simply connecting separate systems, the technology must now integrate directly with processors, memory, and other components in what engineers call “co-packaged optics.” This approach promises to bring optical interconnects closer to the actual computation, maximising bandwidth while minimising energy consumption.

The challenge is reliability. Pluggable transceivers can simply be swapped out if they fail, but co-packaged optical systems integrate directly with expensive graphics processors and high-bandwidth memory, so a failed optical component cannot be replaced in isolation; the repair becomes exponentially more complex and costly. Early implementations from major chip developers are still in pilot phases, carefully assessing long-term reliability before full-scale deployment.

Accelerating Adoption: How Industry Timelines Are Moving Faster Than Expected

Capabilities that industry roadmaps once projected for 2035 are already being demonstrated by leading manufacturers. The combination of urgent market need, massive investment, and three decades of accumulated photonics research has created what amounts to a perfect storm for commercialisation.

The implications extend far beyond data centres. As optical interconnects become more cost-effective and established, they have the potential to revolutionise everything from autonomous vehicles to edge computing devices. The same technology that enables sustainable AI scaling could ultimately transform how electronic systems communicate across virtually every application.

Source: Yole Group

The Future of Computing: Optical Interconnects for Sustainable AI Growth

The question is no longer whether silicon photonics works, but how quickly it can be implemented and scaled. With leading manufacturers already investing billions and pilot systems entering data centres, the light-speed future of computing is no longer a distant possibility. Companies like Lam, through our customer-centric approach and advanced manufacturing solutions, enable this transformation by providing the precision tools that make commercial silicon photonics possible.

Silicon photonics represents a fundamental technology shift that could determine which companies lead the next phase of the digital revolution. Just as the introduction of copper interconnects enabled previous generations of performance scaling, optical interconnects have the potential to break through the barriers that threaten to constrain AI development.

For an industry grappling with the sustainability challenges of exponential AI growth, silicon photonics offers a path forward that doesn’t require choosing between performance and environmental responsibility. By replacing electrical inefficiency with optical precision, this technology could enable the continued advancement of AI while dramatically reducing its environmental footprint.

The revolution is just beginning, but one thing is clear: the future of high-performance computing is increasingly bright, and Lam is at the centre of it.

The post Silicon Photonics: The Lightspeed Revolution That Will Transform AI Computing appeared first on ELE Times.

AI-Augmented Test Automation at Enterprise Scale

ELE Times - Thu, 02/05/2026 - 12:00

Courtesy: Keysight Technologies

Enterprise test automation does not break because teams lack tools.

It breaks when browser-level automation is asked to validate systems far beyond the browser.

At enterprise scale, software quality depends on the ability to test entire user journeys across the full technology stack, from web and APIs to desktop, packaged applications, and highly graphical systems, without fragmenting tooling or multiplying maintenance effort.

This distinction explains why Keysight Technologies was positioned as a Leader in the 2025 Gartner Magic Quadrant for AI-Augmented Software Testing Tools, recognised for both Ability to Execute and Completeness of Vision.

Gartner defines AI-augmented software testing tools as solutions that enable increasingly autonomous, context-aware testing across the full software development lifecycle. In practice, that definition only matters if it holds up in complex, regulated enterprises.

One notable deployment is American Electric Power (AEP).

Why Browser-Only Automation Hits a Ceiling at Enterprise Scale

Most enterprises already use Selenium successfully for its intended purpose.

Browser automation works well when:

  • The system under test is web-based
  • Interactions are DOM-driven
  • The scope is limited to UI flows

Problems emerge when enterprises attempt to extend browser-centric automation to validate full end-to-end systems that include:

  • Highly graphical or non-DOM interfaces
  • Desktop or packaged applications
  • Field mobility tools and operational systems
  • Integrated workflows spanning UI, APIs, and backend logic

At that point, teams are forced to stitch together multiple tools, frameworks, and scripts. The result is not resilience; it is complexity, fragmentation, and rising maintenance cost.

The issue is not Selenium.

The issue is using a single-layer tool to validate multi-layer systems.

What Gartner Means by AI-Augmented Software Testing

According to Gartner, the market is moving toward platforms that combine and extend automation capabilities, rather than replacing them.

Modern AI-augmented testing platforms are expected to:

  • Orchestrate testing across UI, API, and visual layers
  • Combine browser automation with image-based and model-based techniques
  • Abstract complexity so teams test behaviour, not implementation details
  • Reduce maintenance through models, self-healing, and intelligent exploration (a minimal self-healing sketch follows this list)
  • Scale across cloud, on-premises, and air-gapped environments
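
As one concrete illustration of the self-healing idea referenced above, the sketch below retries a prioritised list of locators when a UI change breaks the preferred one. It uses the standard Selenium Python API; the element names, locators, and URL are hypothetical, and commercial platforms derive the fallbacks from an application model rather than a hand-written list:

```python
# A minimal sketch of self-healing element lookup, assuming a
# Selenium-based Python test suite. Names, locators, and URL are
# hypothetical placeholders.
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

FALLBACK_LOCATORS = {
    "submit_order": [
        (By.ID, "submit-btn"),                              # preferred
        (By.CSS_SELECTOR, "button[data-action='submit']"),  # fallback 1
        (By.XPATH, "//button[normalize-space()='Submit order']"),
    ],
}

def find_self_healing(driver, element_name):
    """Try each known locator in order, 'healing' the test when the UI changes."""
    for strategy, value in FALLBACK_LOCATORS[element_name]:
        try:
            return driver.find_element(strategy, value)
        except NoSuchElementException:
            continue  # this locator broke; fall through to the next one
    raise NoSuchElementException(f"no locator matched for '{element_name}'")

driver = webdriver.Chrome()
driver.get("https://example.com/orders")   # placeholder URL
find_self_healing(driver, "submit_order").click()
driver.quit()
```

A model-driven platform generalises this pattern: the application model supplies the alternative locators automatically, which is what keeps maintenance cost flat as coverage grows.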

This is not an argument against existing tools.

It is recognition that enterprise testing requires a unifying layer above them.

Enterprise Reality: Complexity, Scale, and Risk at AEP

AEP operates one of the largest electricity transmission networks in the United States, serving 5.5 million customers across 11 states. Its software landscape includes:

  • Customer-facing web applications
  • Financial and billing systems
  • Highly graphical, map-based field mobility applications

Before modernising its testing approach, AEP faced a common enterprise constraint:

  • Browser automation covered part of the estate
  • Critical operational systems remained difficult to validate
  • Manual testing persisted in high-risk workflows
  • Defects continued to escape into production

The challenge was not adopting another tool.

It was testing the full system end-to-end, consistently, and at scale.

How AEP Scaled Full-Stack, AI-Driven Testing

AEP began where confidence was lowest.

Rather than extending browser automation incrementally, the team selected a highly graphical, map-based field mobility application: a system that sat outside the reach of traditional browser-only approaches.

Using AI-driven, model-based testing, the application was automated end-to-end, validating behaviour across visual interfaces, workflows, and integrated systems.

That success changed internal perception.

As AEP’s Lead Automation Developer and Architect explained, proving that even their most complex system could be tested reliably shifted the conversation from “Can we automate this?” to “How broadly can we apply this approach?”

The key was not replacing existing automation, but extending it into a unified, full-stack testing strategy.

Measured Results: Time, Defects, and Revenue Impact

Once deployed across teams, the outcomes were measurable:

  • 75% reduction in test execution time
  • 65% reduction in development cycle time
  • 82 defects identified and fixed before production
  • 1,400+ automated scenarios executed
  • 925,000 exploratory testing scenarios discovered using AI
  • 55 applications tested across the organisation
  • $1.2 million in annual savings through reduced rework and maintenance

In one instance, AI-driven exploratory testing uncovered 17 critical financial defects that had escaped prior validation approaches. Resolving those issues resulted in a $170,000 revenue increase within 30 days.

This is not broader coverage for its own sake.

It is risk reduction and business impact.

Empowering Teams Beyond Test Engineers

Another enterprise constraint is who can contribute to quality.

At AEP, non-technical users were able to create tests by interacting with models and workflows rather than code. This reduced dependency on specialist automation engineers and allowed quality ownership to scale with the organisation.

Gartner highlights this abstraction as critical: enterprises need testing platforms that extend participation without increasing fragility.

What Enterprise Leaders Should Look for in AI Testing Platforms

The strategic question is not whether a tool supports Selenium.

The question is whether the platform can:

  • Combine browser automation with visual, API, and model-based testing
  • Validate entire user journeys, not isolated layers
  • Reduce maintenance while expanding coverage
  • Operate across the full enterprise application stack
  • Scale trust before scaling usage

AEP’s experience illustrates Gartner’s broader market view: AI-augmented testing succeeds when it unifies existing capabilities and extends them, rather than forcing enterprises to choose between tools.

The Strategic Takeaway

Enterprise software quality now depends on full-stack validation, not single-layer automation.

Selenium remains valuable. But enterprise testing requires a platform that goes beyond the browser, orchestrates multiple techniques, and scales across real-world complexity.

Independent analyst research defines the direction. Real enterprise outcomes prove what works. AEP’s results show what becomes possible when AI-augmented testing is treated as a strategic, unifying capability. Not a collection of disconnected tools.

The post AI-Augmented Test Automation at Enterprise Scale appeared first on ELE Times.

Murata Launches New Tech Guide to Enhance Power Stability in AI-driven Data Centres

ELE Times - Thu, 02/05/2026 - 11:40

Murata Manufacturing Co., Ltd. has launched a new technology guide entitled: ‘Optimising Power Delivery Networks for AI Servers in Next-Generation Data Centres.’ Available on the company’s website, the guide introduces specific power delivery network optimisation solutions for AI servers that enhance power stability and reduce power losses across the data centre infrastructure.

The guide addresses the rapid advancement and adoption of AI, a trend driving the continuous rollout of new data centres worldwide. As the industry moves toward higher voltage operations and increased equipment density, the resulting increase in overall power consumption has made stable power delivery a critical business issue for data centre operators. Consequently, the guide focuses on power circuit design for data centres, providing a detailed overview of market trends, evolving technologies in power delivery, and the key challenges the sector currently faces.

To assist engineers and designers, the guide is structured to provide a market overview that breaks down power consumption and technology trends within power lines. It further addresses market challenges and solutions by examining key considerations in power-line design and exploring how the evolution of power placement architectures can facilitate power stabilisation and loss reduction.
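As an illustration of the kind of power-line design consideration such a guide addresses, a common first-pass calculation is the PDN target impedance that capacitor selection and placement must meet. The sketch below uses assumed rail numbers, not values taken from Murata's guide:

```python
# First-pass PDN target impedance, the usual starting point for
# capacitor selection and placement. All values below are illustrative
# assumptions, not figures from Murata's guide.
V_RAIL = 0.8          # core rail of a hypothetical AI accelerator (V)
RIPPLE_PCT = 0.03     # allowed ripple: 3% of the rail
I_MAX = 500.0         # worst-case load current (A)
TRANSIENT_FRAC = 0.5  # fraction of I_MAX that can step at once

z_target_ohm = (V_RAIL * RIPPLE_PCT) / (I_MAX * TRANSIENT_FRAC)
print(f"Target impedance: {z_target_ohm * 1e6:.0f} uOhm")   # 96 uOhm

# The PDN must stay below this impedance from DC up to the frequency
# where on-die capacitance takes over; MLCCs, polymer capacitors and
# silicon capacitors each cover a different slice of that band.
```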

Murata supports these architectural improvements with a broad product lineup that addresses advanced and evolving power delivery methods, including multilayer ceramic capacitors (MLCC), silicon capacitors, polymer aluminium electrolytic capacitors, inductors, chip ferrite beads, and thermistors. Furthermore, the company provides comprehensive design-stage support, using advanced analysis technologies to assist with component placement and selection. Backed by a robust global supply and support network, Murata continues to deliver tangible value by solving power-related challenges in data centres.

You can download the full technology guide here: Optimising Power Delivery Networks for AI Servers in Next-Generation Data Centres 

The post Murata Launches New Tech Guide to Enhance Power Stability in AI-driven Data Centres appeared first on ELE Times.

Designing energy-efficient AI chips: Why power must be an early consideration

EDN Network - Thu, 02/05/2026 - 09:54

AI’s demand for compute is rapidly outpacing current power infrastructure. According to Goldman Sachs Global Institute, upcoming server designs will push this even further, requiring enough electricity to power over 1,000 homes in a space the size of a filing cabinet.

As workloads continue to scale, energy efficiency is now as critical as raw performance. For engineers developing AI silicon, the central challenge is no longer just about accelerating models, but maximizing performance for every watt consumed.

A shift in design philosophy

The escalation of AI workloads is forcing a paradigm shift in chip development. Energy optimization must be addressed from the earliest design phases, influencing decisions throughout concept, architecture, and production. Considering thermal behavior, memory traffic, architectural tradeoffs, and workload characteristics as part of a single power-aware design flow enables the development of systems that scale efficiently without breaching data center or edge-device energy limits.

Traditionally, design teams have primarily focused on timing and performance, only addressing energy consumption at the end of the process. Today, that strategy is outdated.

Synopsys customer surveys across numerous design projects show that addressing power at the architectural stage can yield 30-50% savings, whereas waiting until implementation typically achieves only marginal improvements. Early exploration enables decisions about architecture, memory hierarchy, and workload mapping before they become fixed, allowing trade-offs that balance throughput, area, and efficiency.

Architecture analysis as a power tool

Before RTL is finalized, a comprehensive power analysis flow helps reveal where energy is being spent and what trade-offs exist between voltage, frequency, and performance. Architectural modeling enables rapid evaluation of techniques—such as dynamic voltage and frequency scaling (DVFS), power gating to shut down inactive circuits, and optimizing data flow within the network-on-chip (NoC)—and supports smarter, more energy-efficient design choices.
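The payoff of evaluating DVFS at this stage falls out of the classic switching-power model P = α·C·V²·f: because voltage enters squared, lowering it alongside frequency saves energy superlinearly. A minimal sketch with assumed, illustrative parameters:

```python
# Classic switching-power model behind the DVFS trade-off: P = a*C*V^2*f.
# All parameter values are illustrative assumptions, not process data.
def dynamic_power(c_eff, v, f, activity=0.2):
    """Dynamic CMOS power (W) from effective capacitance, volts, hertz."""
    return activity * c_eff * v**2 * f

C_EFF = 2e-9  # assumed effective switched capacitance (farads)

nominal = dynamic_power(C_EFF, 0.90, 2.0e9)  # 0.90 V at 2.0 GHz -> ~0.65 W
scaled = dynamic_power(C_EFF, 0.70, 1.5e9)   # 0.70 V at 1.5 GHz -> ~0.29 W

print(f"Nominal: {nominal:.2f} W, scaled: {scaled:.2f} W")
print(f"Power saving: {1 - scaled / nominal:.0%} for a 25% frequency cut")
```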

Transaction-level simulation allows teams to measure expected workloads and predict the impact of configuration changes. This early insight informs hardware-software partitioning, interface sizing, and memory placement, all critical factors in the chip’s overall efficiency.

Data movement: The hidden power sink

Computation isn’t the only factor driving energy use. In many AI chips, data movement consumes more power than the arithmetic itself. Each transfer between memory hierarchies or across chiplets adds significant overhead. This is the essence of the so-called memory wall: compute capability has outpaced memory bandwidth.

To close that gap, designers can reduce unnecessary transfers by introducing compute-in-memory or analog approaches, choosing high-bandwidth memory (HBM) interfaces, or adopting sparse algorithms that minimize data flow. The earlier the data paths are analyzed, the greater the potential savings, because late-stage fixes rarely recover wasted energy caused by poor partitioning.
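Rough per-operation energy figures, in the spirit of Mark Horowitz's widely cited ISSCC 2014 numbers (45 nm; treat them as order-of-magnitude assumptions rather than current-node data), show why partitioning dominates the budget:

```python
# Rough per-operation energies (order-of-magnitude, after Horowitz,
# ISSCC 2014, 45 nm). Exact values vary by node and implementation.
ENERGY_PJ = {
    "32-bit float multiply":           3.7,
    "32-bit read, small on-chip SRAM": 5.0,
    "32-bit read, off-chip DRAM":      640.0,
}
for op, pj in ENERGY_PJ.items():
    print(f"{op:32s} ~{pj:6.1f} pJ")

# One off-chip fetch costs as much as ~170 multiplies, so avoiding the
# transfer (compute-in-memory, HBM, sparsity) beats speeding up math.
print(f"DRAM fetch vs multiply: ~{640.0 / 3.7:.0f}x")
```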

The growing thermal challenge

As designs move toward multi-die and chiplet architectures, thermal density has become a first-order constraint. Packing several dies into one package creates concentrated heat zones that are difficult to manage later in the flow. Effective thermal planning, therefore, starts with system partitioning: examining how compute blocks are distributed and how heat will flow through the stack or interposer.

By modeling various configurations early, before layout or floor planning, engineers can avoid thermally stressed regions and plan for cooling strategies that support consistent performance under load.

Optimizing the real workload

Unlike traditional semiconductors, AI chips are rarely general-purpose. Whether a device runs edge inference, data center training, or specialized analytics, its efficiency depends on how closely the hardware matches the target workload. Simulation, emulation, and prototyping before tapeout make it possible to test representative use cases and fine-tune hardware parameters accordingly.

Profiling multiple operating modes, from idle to sustained training, exposes inefficiencies that might otherwise remain hidden until silicon returns from the fab. And it helps ensure the design can maintain high utilization and consistent energy performance across all conditions.

Extending efficiency beyond tapeout

Energy monitoring and management must persist even after chips are manufactured. Variability, aging, and environmental factors can shift operating characteristics over time. Integrating on-chip telemetry and control using silicon lifecycle management (SLM) solutions allows engineers to track power behavior in the field and apply adjustments to sustain optimal performance per watt throughout the product’s lifecycle.

The next breakthroughs in AI hardware will come not just from faster chips, but from smarter engineering that treats power as a foundational design dimension, not an afterthought. For today’s AI hardware, efficiency is performance.

Godwin Maben is a Synopsys Fellow.

Special Section: AI Design

The post Designing energy-efficient AI chips: Why power must be an early consideration appeared first on EDN.

Vishay Intertechnology launches New Commercial and Automotive Grade Power Inductors

ELE Times - Thu, 02/05/2026 - 09:05

Vishay Intertechnology, Inc. introduced four new power inductors in the 2.0 mm by 1.6 mm by 1.2 mm 0806 and 3.2 mm by 2.5 mm by 1.2 mm 1210 case sizes. The commercial IHLL-0806AZ-1Z and IHLL-1210AB-1Z and Automotive Grade IHLP-0806AB-5A and IHLP-1210ABEZ-5A achieve the same performance as the next-smallest competing inductors in footprints that are 11% (1210) and 64% (0806) smaller, while offering higher operating temperatures, a wider range of inductance values, and lower DCR for increased efficiency.

The IHLL-0806AZ-1Z and IHLL-1210AB-1Z offer inductance values from 0.24 µH to 4.70 µH and typical DCR down to 6.6 mΩ; their terminals are plated on the bottom only, enabling a smaller land pattern for more compact board spacing. The terminals of the IHLP-0806AB-5A and IHLP-1210ABEZ-5A are plated on the bottom and sides, allowing the formation of a solder fillet that adds mounting strength against severe mechanical shock while simplifying solder joint inspection. These AEC-Q200 qualified devices provide reliable performance up to +165 °C, which is 10 °C higher than the closest competing composite inductor, and typical DCR down to 15.0 mΩ.

Delivering improved performance over ferrite-based technologies, all four devices feature a robust powdered iron body that completely encapsulates their windings — eliminating air gaps and magnetically shielding against crosstalk to nearby components — while their soft saturation curve provides stability across the entire operating temperature and rated current ranges. Packaged in a 100 % lead (Pb)-free shielded, composite construction that reduces buzz to ultra-low levels, the inductors offer high resistance to thermal shock, moisture, and mechanical shock, and handle high transient current spikes without saturation.

RoHS-compliant, halogen-free, and Vishay Green, the Vishay Dale devices released today are designed for DC/DC converters, noise suppression, and filtering in a wide range of applications. The IHLP-0806AB-5A and IHLP-1210ABEZ-5A are ideal for automotive infotainment, navigation, and braking systems; ADAS, LiDAR, and sensors; and engine control units. The IHLL-0806AZ-1Z and IHLL-1210AB-1Z are intended for CPUs, SSD modules, and data networking and storage systems; industrial and home automation systems; TVs, soundbars, and audio and gaming systems; battery-powered consumer healthcare devices; medical devices; telecom equipment; and precision instrumentation.

Device Specification Table:

| Series | IHLL-0806AZ-1Z | IHLP-0806AB-5A | IHLL-1210AB-1Z | IHLP-1210ABEZ-5A |
|---|---|---|---|---|
| Inductance @ 100 kHz (µH) | 0.24 to 4.70 | 0.22 to 0.47 | 0.24 to 4.70 | 0.47 to 4.70 |
| DCR typ. @ 25 °C (mΩ) | 16.0 to 240.0 | 15.0 to 21.0 | 6.6 to 115.0 | 18.0 to 150.0 |
| DCR max. @ 25 °C (mΩ) | 20.0 to 288.0 | 18.0 to 25.0 | 10.0 to 135.0 | 22.0 to 180.0 |
| Heat rating current typ. (A)(¹) | 1.3 to 6.3 | 4.6 to 5.8 | 2.3 to 9.2 | 1.8 to 5.1 |
| Saturation current typ. (A)(²) | 1.5 to 6.5 | 4.5 to 5.1 | 2.5 to 9.0 | 2.0 to 6.5 |
| Saturation current typ. (A)(³) | 1.8 to 7.2 | 5.4 to 7.5 | 2.9 to 11.5 | 2.5 to 8.2 |
| Case size | 0806 | 0806 | 1210 | 1210 |
| Temperature range (°C) | -55 to +125 | -55 to +165 | -55 to +125 | -55 to +165 |
| AEC-Q200 | No | Yes | No | Yes |

(¹) DC (A) that will cause an approximate ΔT of 40 °C
(²) DC (A) that will cause L0 to drop approximately 20 %
(³) DC (A) that will cause L0 to drop approximately 30 %
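
A quick way to use the table: DCR sets the conduction loss, P = I²·R. The sketch below pairs the low end of the IHLP-1210ABEZ-5A's typical DCR range with the top of its heat-rating current range; that pairing is typical within an inductor series but is an assumption here, not a statement about a specific part number:

```python
# Conduction loss from the table above: P = I^2 * R.
# Pairing low DCR with high rated current is assumed, not confirmed
# for a specific IHLP-1210ABEZ-5A part number.
DCR_TYP_OHM = 0.018   # 18.0 mOhm, low end of the typical DCR range
I_HEAT_A = 5.1        # top of the typical heat-rating current range

p_loss_w = I_HEAT_A**2 * DCR_TYP_OHM
print(f"Conduction loss at {I_HEAT_A} A: {p_loss_w:.2f} W")   # ~0.47 W
```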

The post Vishay Intertechnology launches New Commercial and Automotive Grade Power Inductors appeared first on ELE Times.

Loom Solar Introduces Revolutionary, Scalable CAML BESS Solution up to 1 MWh to Replace Diesel Generators for C&I Sector

ELE Times - Thu, 02/05/2026 - 08:25

Loom Solar, one of India's leading solar manufacturing companies, announced the launch of its 125 kW/261 kWh CAML Battery Energy Storage System (BESS), scalable up to 1 MWh, a next-generation solution designed to deliver uninterrupted, seamless power to the Commercial and Industrial (C&I) sector, significantly reducing production losses caused by power outages.

Unlike conventional diesel generator-based systems, which typically involve switch-over downtimes ranging from 30 seconds to 3 minutes, Loom Solar’s scalable 125kW/261kWh BESS ensures instantaneous power availability, eliminating operational disruptions in critical industrial processes. The system is engineered for a cleaner, quieter, and safer microgrid application that addresses low-voltage situations and power cuts while delivering continuous power for over two hours, with deep-discharge capability, making it a reliable alternative for businesses that demand high uptime and operational efficiency.

With a lifecycle of up to 6,000 charge–discharge cycles, the scalable 125kW/261kWh BESS offers long-term durability and superior economic value.
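The headline figures are easy to sanity-check; the throughput estimate below ignores depth-of-discharge limits and degradation, so it is an upper bound rather than a Loom specification:

```python
# Sanity check of the headline numbers. Degradation and depth-of-
# discharge limits are ignored, so throughput is an upper bound.
POWER_KW = 125
ENERGY_KWH = 261
CYCLES = 6_000

runtime_h = ENERGY_KWH / POWER_KW             # ~2.09 h: "over two hours"
throughput_mwh = CYCLES * ENERGY_KWH / 1000   # ~1,566 MWh over the life

print(f"Full-power runtime:  {runtime_h:.2f} h")
print(f"Lifetime throughput: {throughput_mwh:,.0f} MWh")
```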

Developed through Loom Solar’s strong focus on in-house research and development, and validated through rigorous product testing facilities, the solution reflects the company’s commitment to innovation and reliability. The system is IoT-enabled and compatible with connected energy ecosystems, allowing real-time monitoring, intelligent energy management, and seamless integration with renewable power sources such as solar.

Commenting on the launch, Amod Anand, Co-Founder and Director, Loom Solar, said, “The scalable 125kW/261kWh BESS is a solution-led product designed specifically for India’s C&I sector, where even a few seconds of downtime can translate into significant losses. Our focus has been to replace reactive power backup with intelligent, seamless energy continuity. This solution not only ensures uninterrupted operations but also helps businesses optimise energy costs and move closer to energy independence through renewable integration.”

With this launch, Loom Solar strengthens its position as a key enabler of India’s energy transition, offering integrated solar and energy storage solutions that support energy security, sustainability, and long-term resilience for businesses.

The post Loom Solar Introduces Revolutionary, Scalable CAML BESS Solution up to 1 MWh to Replace Diesel Generators for C&I Sector appeared first on ELE Times.

Quick rant - Circuits West in Colorado just went out of business

Reddit:Electronics - Thu, 02/05/2026 - 01:12

Argh. I'm just here to complain. Circuits West in Longmont Colorado closed their doors on Monday. I realize the responses I'll get are "Use JLC or PCB Way" and yes, those are great options, but I do quick-turn (usually 2-day) fabs and on top of that it's CNY. Argh. Just annoying. Can't do anything about it. Guess it's Advanced Circuits (APCT, AdvancedPCB) as a single-source in-Colorado fab shop :(

I don't have an image; I'm posting their logo.

submitted by /u/xtcdenver

smolBrain - my own version of slimeVR trackers based on nRF52 chip series. Just want to share my project, maybe people find notes there interesting.

Reddit:Electronics - Thu, 02/05/2026 - 00:44

Hi hi :3

Upfront - with a huge help from SlimeVR devs and community I was able to make a final version of my SlimeVR smolBrain trackers. So thanks a lot for the help to them <3

Why share here you may ask? It looks like there are a lot of supa smart people who may give feedback on whatever I made, especially for low power devices. That was the first time for me working with low power devices and since I'm not exactly the best hardware engineer I had to learn a lot. Leakage here, sleep mode there, Iq currents for every device on the board and so on. Was pretty fun. But also - I tried to add to the schematic and readme a ton of measurements of the board and reasons why I used components or what they do. Very often it is something I really want to have on other people's works, like dev notes, and it is not always there. So I decided to make it myself :3

Whether the description and notes are good, I do not know; there is a chance I still have some problematic parts or inconsistencies, but I tried to make this board as small and as good as I can, following all PCB routing rules. So I believe if you have never done something like this, it can be a very interesting insight or an overview of the behaviour of almost all components on a ready-to-use board.

What you will find inside:
- schematic with a ton of notes, almost for every component
- real measurements of current consumption in normal and deep sleep modes (using a Nordic Power Profiler Kit 2)
- power efficiency measurements
- analysis of power supply voltage ripples after the DC-DC and LDO
- IMU performance using raw ICM-45686 data, to verify whether it matches datasheet values
- some basic knowledge for routing. I know it is not everything, and for small devices like this it sometimes does not matter, but as I said I was trying to keep an eye on the stack and on where and how I route
- information on DC-DC behaviour at 100% mode, which causes 500 uA current spikes out of nowhere... I mean I did not know, I do now :3
- transition times for the active divider and why to use it if you have a current leak anyway

It is open source as usual :3, feel free to check out my git project page if you feel like it.

submitted by /u/Meow-Corp

A box full of old capacitors

Reddit:Electronics - Wed, 02/04/2026 - 22:47

I love old capacitors, colour shining happiness \m/

submitted by /u/TosTapanE-7

Supra launches to secure US supply of gallium, scandium and other critical minerals

Semiconductor today - Wed, 02/04/2026 - 22:24
Amid mounting concerns about US critical mineral and rare-earth element supply chains, Supra Elemental Recovery Inc has launched as a spinout from the University of Texas at Austin, focused on selectively recovering high-purity critical minerals from waste streams. The firm is initially targeting elements such as gallium (Ga) and scandium (Sc)...

Nimy ships high-grade gallium ore from Western Australia to M2i in USA

Semiconductor today - Wed, 02/04/2026 - 22:13
Mining exploration company Nimy Resources Ltd of Perth, Western Australia has shipped its first high-grade gallium ore consignment from its Block 3 gallium deposit at the Mons Project in Western Australia to the USA under the collaboration agreement with US-listed company M2i Global, which specializes in the development and execution of a complete global value supply chain for critical minerals...

Classic constant current cascode

EDN Network - Wed, 02/04/2026 - 15:00

An important figure of merit for all precision constant current sources is their active impedance.  Which is to say, just how “constant” is their output held against changes in applied voltage?  Frequent and expert Design Idea (DI) commentator Ashutosh Sapre (Ashu) was kind enough to measure this parameter for a design of mine and share his results. The circuit, applied as a 4 to 20mA current mirror, is shown in Figure 1 and discussed in “Combine two TL431 regulators to make versatile current mirror.”

Figure 1 A 4 to 20mA current mirror with poor active impedance.

Said Ashutosh: “I tried the fig. 2 circuit for 4-20mA mirroring, with R1 and R2 of 100E, and using a TL431 (2.5V). It worked quite well. One issue I found was that the output impedance (dv/di) was quite low; there was a change of 40uA over a supply swing of 20V (if I remember correctly), not linear with supply voltage change. It is possibly due to the 2.5V reference voltage modulation with cathode voltage swing.

It could be compensated for, but some error will remain due to the non-linearity.”

Wow the engineering world with your unique design: Design Ideas Submission Guide

His observation and analysis were both absolutely correct. Table 6.6 in the TL431 datasheet reveals a maximum reference-voltage error of up to 2 mV per volt of cathode-to-anode voltage swing, consistent with the mediocre 20V/40µA = 500k active impedance he observed.
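The arithmetic behind that figure, restated as a short script (the measured values come from Ashutosh's test; the span-error line is an added observation, not from his note):

```python
# Active impedance implied by the measurement quoted above.
DELTA_V = 20.0      # supply swing during the test (V)
DELTA_I = 40e-6     # observed output-current change (A)
FULL_SCALE = 20e-3  # 20 mA loop full scale

z_active_kohm = DELTA_V / DELTA_I / 1e3
print(f"Active impedance: {z_active_kohm:.0f} kOhm")    # 500 kOhm
print(f"Error over swing: {DELTA_I / FULL_SCALE:.2%}")  # 0.20% of span
```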

Fortunately, a simple and effective remedy is available and waiting in the pages of the common cookbook of current mirror circuits: the cascode. Figure 2 shows how it can be added (as D1 + Q2) to Figure 1.

Figure 2 D1/Q2 cascode reduces reference modulation error, improving active impedance by orders of magnitude.

The effect of the added parts is to isolate Z1’s cathode/anode voltage from voltage variation at the I2 node, thus holding the cathode/reference differential near zero and constant to within millivolts.

The resultant orders of magnitude reduction of reference modulation should produce a proportional increase in active impedance.

Thanks, Ashu!  Another example of the magic of editor Aalyia Shaukat’s DI kitchen collaboration in action!

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Classic constant current cascode appeared first on EDN.

OIF interoperability demo at OFC highlights 800ZR, 400ZR, Multi-span Optics, CEI-448G, CEI-224G, Co-Packaging, CMIS, EEI

Semiconductor today - Wed, 02/04/2026 - 12:21
At the Optical Fiber Communication Conference & Exposition (OFC 2026) at the Los Angeles Convention Center (15–19 March), the Optical Internetworking Forum (OIF) is presenting a live, multi-vendor interoperability demonstration...

EPC launches its first seventh-generation eGaN power transistor

Semiconductor today - Wed, 02/04/2026 - 12:08
Efficient Power Conversion Corp (EPC) of El Segundo, CA, USA — which makes enhancement-mode gallium nitride on silicon (eGaN) power field-effect transistors (FETs) and integrated circuits for power management applications — has started volume production of the EPC2366, the first of its seventh-generation (Gen 7) eGaN family of power transistors...

Infineon strengthens its leading position in sensors by acquiring non-optical analogue/mixed-signal sensor portfolio from ams OSRAM

ELE Times - Wed, 02/04/2026 - 12:07

Infineon Technologies AG is expanding its sensor business with the acquisition of the non-optical analogue/mixed-signal sensor portfolio from ams OSRAM Group. The two companies have entered into an agreement for a purchase price of €570 million on a debt-free and cash-free basis. With the planned investment, Infineon will strengthen its position as a leader in sensors for automotive and industrial markets through a complementary portfolio and expand its product range in medical applications. The acquired business is expected to generate around €230 million in revenue in calendar year 2026 and will support Infineon’s profitable growth. The transaction will be accretive to earnings-per-share immediately upon closing, with future synergies enabling substantial additional value creation. As part of the transaction, around 230 employees with expertise in research and development (R&D) and business management will join Infineon. The agreement includes a multi-year supply agreement with ams OSRAM.

“The acquired business is a perfect strategic fit for Infineon and complements our strong offering in the analogue and sensor space. We will be able to provide our customers with even more comprehensive system solutions,” says Jochen Hanebeck, CEO of Infineon. “I am convinced that this is an outstanding technological, commercial and cultural match, generating growth opportunities in our current target markets as well as in emerging areas like humanoid robotics.”

The overall transaction is structured as a fabless asset deal covering sensor products, R&D capabilities, intellectual property and test & lab equipment. The transaction is subject to customary closing conditions, including regulatory approvals, and is expected to close in the second quarter of calendar year 2026. Infineon will fund the acquisition with additional debt, as part of its general corporate financing plans.

Sensors are the link between the physical and the digital world, as they detect and convert signals such as movement, sound, light waves, temperature and even heartbeat and strain into processible data. They are at the core of a wide array of applications like software-defined vehicles, health trackers, and physical AI applications such as humanoid robots. The market potential of the sensor and radio frequency markets is projected to exceed $20 billion by 2027.

The acquired Mixed Signal Products business will add leading medical imaging and sensor interfaces to the portfolio of Infineon, including X-ray solutions and sensors used for valve control, building control technology and metering. The Positioning & Temperature Sensors assets will strengthen Infineon’s high-precision position, capacitive and temperature sensing for automotive, industrial and medical applications, such as chassis position sensing and hands-on detection in vehicles, angle and position sensing for robotics and glucose monitoring.

The acquisition fully supports Infineon’s strategy to grow its sensor business. Infineon established its Sensor Units & Radio Frequency (SURF) unit within its Power & Sensor Systems (PSS) division in January 2025. This aligns with the strategy to offer customers comprehensive system solutions through a powerful, interlinked portfolio in “analogue & sensors”, “power” and “control & connectivity”.

The post Infineon strengthens its leading position in sensors acquiring non-optical analogue/mixed-signal sensor portfolio from ams OSRAM appeared first on ELE Times.

Silicon coupled with open development platforms drives context-aware edge AI

EDN Network - Wed, 02/04/2026 - 10:12

Edge AI reached an inflection point in 2025. What had long been demonstrated in controlled pilots—local inference, reduced latency, and improved system autonomy—began to transition into scalable, production-ready deployments across industrial and embedded markets. This shift has exposed a deeper architectural reality: many existing silicon platforms and development environments are poorly matched to the demands of modern, context-aware edge AI.

As AI workloads move from centralized cloud infrastructure to distributed edge devices, design priorities have fundamentally changed. Edge systems must execute increasingly complex models under strict constraints on power, thermal envelope, cost, and real-time determinism. Addressing these requirements demands both a new class of AI-native silicon and a development platform that is open, extensible, and aligned with modern machine learning workflows.

Why legacy architectures are no longer sufficient

Conventional microprocessors and application processors were not designed for sustained AI workloads at the edge. While they can support inference through software or add-on accelerators, their architectures typically lack three essential characteristics required for modern Edge AI:

  1. Dedicated AI acceleration capable of efficiently executing convolutional, transformer-based, and multimodal workloads.
  2. Deterministic real-time processing for latency-sensitive industrial and embedded applications.
  3. Energy efficiency at scale, enabling always-on intelligence without excessive thermal or power budgets.

As edge AI applications expand beyond simple classification toward sensor fusion, contextual reasoning, and on-device generative inference, these limitations become more pronounced. The result is a growing gap between what software frameworks can express and what deployed hardware can efficiently execute.

Edge AI design as a full value chain

Successful edge AI deployment requires a system-level view spanning the entire design value chain:

Data collection and preprocessing

Industrial edge systems, for example, operate in noisy, variable environments. Training data must reflect real-world conditions such as lighting changes, mechanical vibration, sensor drift, and interference.

Hardware-accelerated execution

Today’s edge designs rely on heterogeneous compute architectures: AI-native NPUs handle dense matrix and tensor operations, while CPUs, GPUs, DSPs, and real-time cores manage control logic, signal processing, and exception handling.

Model training, adaptation, and optimization

Although training is often performed off-device, edge deployment constraints must be considered early. Transfer learning and hybrid model architectures are commonly used to balance accuracy, explainability, and compute efficiency. Hardware-aware compilation enables models to be transformed to match accelerator capabilities while maintaining deterministic performance characteristics.

The role of open development platform

Historically, edge AI development has been fragmented across proprietary toolchains, closed runtimes, and framework-specific optimizations. This fragmentation has slowed adoption and increased development risk, particularly as model architectures evolve rapidly.

An open development platform addresses fragmentation challenges with:

  • Framework diversity: Edge developers increasingly rely on PyTorch, ONNX, JAX, TensorFlow, and emerging toolchains. Supporting this diversity requires compiler infrastructures that are framework-agnostic.
  • Rapid model evolution: The rise of transformers and large language models (LLMs) has introduced new operator patterns that closed toolchains struggle to support efficiently.
  • Long product lifecycles: Industrial and embedded devices often remain in service for a decade or more, requiring platforms that can adapt to new models without hardware redesign.

Additionally, open compiler and runtime infrastructures based on standards such as MLIR and RISC-V enable a separation between model expression and hardware execution. This decoupling allows silicon to evolve while preserving software investment.
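A concrete instance of that separation: a model authored in PyTorch can be exported to ONNX, a framework-neutral graph that a vendor's MLIR-based compiler can then lower onto its NPU. The model, shapes, and file name below are placeholders, not a Synaptics-specific flow:

```python
# Minimal sketch of decoupling model expression from hardware execution:
# export a PyTorch model to ONNX, which a hardware-specific compiler can
# then consume. The model and input shape are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(            # stand-in for a real edge vision model
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
).eval()

dummy_input = torch.randn(1, 3, 224, 224)   # NCHW image tensor
torch.onnx.export(
    model, dummy_input, "edge_model.onnx",
    input_names=["image"], output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}},    # keep batch size flexible
)
# The .onnx file, not the PyTorch code, is what the vendor toolchain
# lowers to the NPU, so silicon can evolve without invalidating the
# training-side software investment.
```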

Figure 1 Synaptics’ open edge AI development platform features Astra SoCs, the Torq compiler, and the industry’s first deployment of Google’s Coral NPU. Source: Synaptics

Context-aware AI and the move toward multimodal inference

A defining trend of edge AI in 2025 was the transition from single-sensor inference toward context-aware, multimodal systems. Rather than processing isolated data streams, edge devices increasingly combine vision, audio, motion, and environmental inputs to build a richer understanding of their surroundings.

This shift places new demands on edge platforms which must now support:

  • Heterogeneous data types and operators
  • Efficient execution of attention mechanisms and transformer-based models
  • Low-latency fusion of multiple sensor streams

Figure 2 The Grinn OneBox AI-enabled industrial single-board computer (SBC), designed for embedded edge AI applications, leverages a Grinn AstraSOM compute module and the Synaptics SL1680 processor. Source: Grinn Global

Designing for scalability and future workloads

One of the key architectural challenges in edge AI is scalability—not only across product tiers, but across time. AI-native silicon must scale from low-power endpoints to higher-performance systems while maintaining software compatibility.

This is typically achieved through:

  • Modular accelerator architectures that scale performance without changing programming models.
  • Heterogeneous compute integration, allowing workloads to migrate between NPUs, CPUs, and GPUs as needed.
  • Standardized toolchains that preserve model portability across devices.

For designers, this approach reduces risk by allowing a single software stack to span multiple products and generations.

Testing, validation, and long-term reliability

Edge AI systems operate continuously and often autonomously. Validation must extend beyond functional correctness to include:

  • Worst-case latency and power analysis
  • Thermal stability under sustained workloads
  • Behavior under degraded or unexpected inputs

Monitoring and logging capabilities at the edge enable post-deployment diagnostics and iterative model improvement. As models become more complex, explainability and auditability will become increasingly important, particularly in regulated environments.

Looking ahead

In 2026, AI is expected to move further into mainstream embedded system design. The focus is shifting from proving feasibility to optimizing performance, reliability, and lifecycle cost. This transition highlights the importance of aligning silicon architecture, software openness, and system-level design practices.

A new class of AI-native silicon, coupled with an open and extensible development platform, provides a foundation for this next phase. For system designers, the challenge—and opportunity—is to treat edge AI not as an add-on feature, but as a core architectural element spanning the entire design value chain.

Neeta Shenoy is VP of marketing at Synaptics.

Special Section: AI Design

The post Silicon coupled with open development platforms drives context-aware edge AI appeared first on EDN.

The Rare Earths Catch-22: Why It Exists and How It Can Be Fixed

ELE Times - Wed, 02/04/2026 - 09:09

Speaking at the Auto EV Tech Vision Summit 2025, Bhaktha Keshavachara, CEO, Chara Technologies, highlights the rare-earth challenges the world faces today and the policies that could resolve them!

As the world strides towards more sustainable solutions, the technologies we use become more rare-earth dependent, from batteries to motors and the magnets inside them. Coupled with this, a simultaneous energy transition is taking shape: especially in transportation, we are gradually moving towards meeting our energy needs with electrons rather than hydrocarbons. This necessitates locating supply chains in stable regions, or becoming wholly self-sustainable in the raw materials at stake, the rare earths, the 17 elements grouped separately in the periodic table, as Bhaktha Keshavachara, CEO, Chara Technologies, puts it!

With rare earths, the global catch-22 stems from two specific problems:

  1. It is expensive to buy 
  2. It is hazardous to extract   

Since these materials are critical for our future, a future dominated by electric technologies like EVs and e-buses, it becomes imperative for us to locate them in stable regions, become self-sufficient in their production, or find alternatives altogether. Let's see what Bhaktha had to say about it!

Start Mining or Find Alternatives

“We have to start mining and extraction,” Bhaktha reiterates as he presents his first solution to the rare-earth catch-22. He recounts the strategies adopted by nations globally, including the US, which has, interestingly, reopened its California mines for rare-earth minerals. He also underlines the ongoing global efforts to build equivalent magnets without using rare earths: NIRON in the US is experimenting with iron nitride magnets, while Europe is pursuing an alternative in potassium-strontium magnets.

The problem with rare-earth mining is the hazardous nature of the process that leaves populations and people cancer-ridden for a long time. “If you see pictures on the net of the west coast of China, actually in central China, there are like cancer villages,” Bhaktha recounts. 

Alternative Motor Technologies or Materials

Further, he suggests using alternative motor technologies to reduce the rare-earth content of the overall product, referring to motor types such as electrically excited synchronous motors (EESM), induction motors (IM), and synchronous reluctance motors (SynRM). He also touches upon light rare-earth materials, calling for greater use of them as opposed to the heavy rare-earth materials over which, as he mentioned in his address, China holds a stronghold.

India’s Situation 

Talking about India's situation, Bhaktha says, “We have rare earths, but not all the 17 rare earths, but still we can do with whatever we have, and potentially we can import ore which has dysprosium and other rare earth materials.” He also recounts past events in which global price fluctuations anchored by China led two big companies in India to drop magnet-manufacturing projects after they suddenly became unviable in business terms.

In the same vein, he cites the example of the US government, which has stepped in to guarantee minimum prices for magnets irrespective of global market fluctuations, in order to support the industry and enable localisation of the technology and materials.

National efforts, Global Repercussions  

In the midst of all these challenges, Bhaktha reaffirms his determination to face the storm head-on, calling upon the industry to innovate for the better. He says, “I think if we do the innovation and take the leadership role in prioritizing this, we not only have a huge opportunity to do something new in India, but there is a huge opportunity to export to the rest of the world because the rare-earth problem is a global problem.”

The post The Rare Earths Catch-22: Why It Exists and How It Can Be Fixed appeared first on ELE Times.

New Power Module Enhances AI Data Centre Power Density and Efficiency

ELE Times - Wed, 02/04/2026 - 08:13

Increasing AI and high-performance computing workloads demand power solutions that combine efficiency, reliability and scalability. Integrated power modules help streamline design, reduce energy use and deliver the stable performance required for advanced data centres. Microchip Technology announces the launch of the MCPF1525 Power Module, a highly integrated device with a 16V Vin buck converter that can deliver 25A per module, stackable up to 200A. The MCPF1525 enables higher power delivery within the same rack space and combines this with programmable PMBus and I2C controls. The device is designed to power the latest generation of PCIe switches and high-performance compute MPU applications needed for AI deployments.

The MCPF1525 is packaged in an innovative vertical construction that maximises board space efficiency and can offer up to a 40% board area reduction when compared to other solutions. The compact power module is approximately 6.8 mm x 7.65 mm x 3.82 mm, making it an optimal solution for space-constrained AI servers.

For increased reliability, the MCPF1525 includes multiple diagnostic functions reported over PMBus, including over-temperature, over-current and over-voltage protection to minimise undetected faults. With a thermally enhanced package, the device is engineered to work within an operating junction temperature range of -40°C to +125°C. An on-board embedded EEPROM allows users to program the default power-up configuration.
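As a sketch of what that diagnostic access could look like from a Linux host, the snippet below reads two telemetry values using standard PMBus command codes and LINEAR11 decoding; the bus number and device address are placeholders, and the module's actual register map should be taken from the MCPF1525 datasheet:

```python
# Hedged sketch of reading PMBus telemetry from a Linux host using the
# standard PMBus command set (READ_IOUT 0x8C, READ_TEMPERATURE_1 0x8D).
# Bus number and device address are placeholders, not MCPF1525 values.
from smbus2 import SMBus

ADDR = 0x40                 # hypothetical PMBus/I2C address
READ_IOUT = 0x8C
READ_TEMPERATURE_1 = 0x8D

def decode_linear11(raw):
    """Decode a PMBus LINEAR11 word: 5-bit signed exponent, 11-bit mantissa."""
    exponent = (raw >> 11) & 0x1F
    mantissa = raw & 0x7FF
    if exponent > 15:       # sign-extend the exponent
        exponent -= 32
    if mantissa > 1023:     # sign-extend the mantissa
        mantissa -= 2048
    return mantissa * 2.0 ** exponent

with SMBus(1) as bus:       # I2C bus 1, a common default on Linux SBCs
    iout = decode_linear11(bus.read_word_data(ADDR, READ_IOUT))
    temp = decode_linear11(bus.read_word_data(ADDR, READ_TEMPERATURE_1))
    print(f"Output current: {iout:.1f} A, temperature: {temp:.1f} °C")
```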

“By leveraging Microchip’s comprehensive solutions, including PCIe Switchtec technology, FPGAs, MPUs and Flashtec NVMe controllers, the MCPF1525 power module can help customers achieve the system efficiency, reliability and scalability required for high-performance data centre and industrial computing applications,” said Rudy Jaramillo, vice president of Microchip’s analogue power and interface division. “Seamless integration across Microchip’s portfolio simplifies development and lowers risk, helping designers accelerate time-to-market.”

The MCPF1525 features a customised integrated inductor for low conducted and radiated noise, enhancing signal integrity, data accuracy and reliability of high-speed computing, helping reduce repeated data transmissions that waste valuable system power and time.

The post New Power Module Enhances AI Data Centre Power Density and Efficiency appeared first on ELE Times.
