Feed aggregator

🖼️ You are invited to the exhibition "Urban Mosaic" by Larysa Pukhanova

News - Thu, 04/09/2026 - 18:32

🖼 The State Polytechnic Museum at Igor Sikorsky Kyiv Polytechnic Institute has opened the exhibition "Urban Mosaic" by Larysa Pukhanova. This is a personal exhibition by the renowned Kyiv artist, whose work shows a unique artistic language, refined taste, and a recognizable style. Her art is sensual, captivating, and alive.

Advancing AI performance with HBM4, SPHBM4 DRAM solutions

EDN Network - Thu, 04/09/2026 - 18:15

Over the past two decades, the raw compute capability of processors used in high‑performance computing (HPC) and artificial intelligence (AI) systems has increased at an extraordinary pace. Figure 1 illustrates this trend: XPU floating‑point performance has scaled by more than 90,000×, while DRAM bandwidth and interconnect bandwidth have improved by only about 30× over the same period.

Figure 1 The above chart highlights increases in XPU performance and interconnect bandwidth over 20 years.

This growing disparity between compute capability and data movement—often described as the memory wall and the I/O wall—has become one of the most significant constraints on achievable system performance.

For system designers, this imbalance translates directly into underutilized compute resources, rising power consumption, and increasing architectural complexity. As a result, memory bandwidth and packaging technologies have become just as critical to AI performance scaling as transistor density or core count.

HBM as a foundation for modern AI architectures

To address these bandwidth challenges, HPC and AI systems have increasingly adopted disaggregated architectures built around chiplets. While LPDDR and DDR memories continue to play important roles, high bandwidth memory (HBM) has emerged as the highest‑bandwidth DRAM solution available and a key enabler for modern accelerators.

HBM devices consist of a buffer (or base) die at the bottom and multiple 3D‑stacked DRAM layers above it. The buffer die uses very fine‑pitch micro‑bumps, allowing the memory stack to be co‑packaged with an ASIC using advanced packaging technologies such as silicon interposers or silicon bridges. Supported by rigorous standardization through the JEDEC HBM task group, HBM has become one of the most successful and widely adopted examples of chiplet‑based integration in production systems.

Figure 2 shows a representative side view of an HBM DRAM stack connected to an ASIC through a silicon interposer.

Figure 2 Here is how an HBM DRAM stack is connected to an ASIC through a silicon interposer. Source: Eliyan

A widely deployed example of HBM in practice is Nvidia’s B100 Blackwell accelerator, shown in Figure 3. The package contains two large, reticle‑sized XPU dies connected to one another through high‑bandwidth links, with HBM devices placed along the top and bottom edges of each die. Each XPU die integrates four HBM stacks—two on each long edge—resulting in a total of eight HBM devices per package.

Figure 3 Nvidia’s B100 Blackwell accelerator uses two XPUs connected to eight HBMs in a single package. Source: Nvidia

Using typical HBM3 specifications available at the time the JEDEC standard was adopted, each HBM3 device could employ an 8‑high stack of 16-Gb DRAM layers, providing 16 GB of capacity per stack. With a data rate of 6.4 Gb/s and 1,024 I/Os, each HBM3 device delivers approximately 0.8 TB/s of bandwidth. Across eight devices, this configuration provides 128 GB of total memory capacity and roughly 6.6 TB/s of aggregate bandwidth.
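The per-device and package figures above follow directly from the stack and interface parameters. A quick sketch of the arithmetic, using only the HBM3 example values quoted in the text:

```python
# HBM3 example configuration from the text (per-device figures).
ios = 1024          # data I/Os per HBM3 device
rate_gbps = 6.4     # per-pin data rate, Gb/s
layers = 8          # 8-high stack
layer_gbit = 16     # 16-Gb DRAM layers

# Per-device capacity (GB) and bandwidth (TB/s).
capacity_GB = layers * layer_gbit / 8           # 16 GB
bandwidth_TBs = ios * rate_gbps / 8 / 1000      # ~0.82 TB/s

# Package totals across eight devices.
devices = 8
total_capacity_GB = devices * capacity_GB       # 128 GB
total_bandwidth_TBs = devices * bandwidth_TBs   # ~6.55 TB/s ("roughly 6.6")

print(capacity_GB, round(bandwidth_TBs, 2),
      total_capacity_GB, round(total_bandwidth_TBs, 2))
```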

HBM4: Scaling bandwidth and capacity

To continue scaling memory performance alongside compute, JEDEC recently published JESD270‑4, the HBM4 standard. HBM4 introduces a number of architectural improvements over HBM3 that directly address the growing bandwidth and capacity requirements of AI workloads.

One of the most significant changes in HBM4 is a doubling of the channel count, increasing the number of I/Os from 1,024 to 2,048. In parallel, supported data rates have increased into the 6–8 Gb/s range and beyond. Memory density has also scaled, with 24 Gb and 32 Gb DRAM layers specified, along with support for 12‑high and 16‑high stacks. Reliability, availability, and serviceability (RAS) features—including directed refresh management (DRFM)—have also been enhanced.

Taken together, these advances enable substantial improvements in bandwidth, power efficiency, and capacity relative to HBM3. As an illustrative example, an HBM4 device using a 16‑high stack of 32 Gb layers provides 64 GB of capacity per device, as shown in Figure 4.

Figure 4 Eight HBM4 devices in an example package, achieving increased total capacity and bandwidth. Source: Eliyan

With 2,048 I/Os operating at 8 Gb/s, such a device can deliver up to 2 TB/s of bandwidth. In a package containing eight HBM4 devices, total memory capacity scales to 512 GB—four times that of the earlier HBM3 example—while aggregate bandwidth exceeds 16 TB/s, a 2.5× increase.
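The generation-over-generation gains can be made explicit with a small sketch comparing the two example configurations described above (eight devices per package in both cases):

```python
def device_stats(ios, rate_gbps, layers, layer_gbit):
    """Return (capacity in GB, bandwidth in TB/s) for one HBM device."""
    capacity_GB = layers * layer_gbit / 8
    bandwidth_TBs = ios * rate_gbps / 8 / 1000
    return capacity_GB, bandwidth_TBs

hbm3 = device_stats(1024, 6.4, 8, 16)    # (16 GB, ~0.82 TB/s)
hbm4 = device_stats(2048, 8.0, 16, 32)   # (64 GB, ~2.05 TB/s)

devices = 8
print("capacity ratio:", hbm4[0] / hbm3[0])             # 4x (512 GB vs 128 GB)
print("bandwidth ratio:", round(hbm4[1] / hbm3[1], 2))  # 2.5x (~16.4 vs ~6.6 TB/s)
```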

Custom HBM and the role of the base die

As HBM4 adoption accelerates, some system designers are exploring the development of custom HBM solutions optimized for specific applications. A key enabler of this trend is the evolution of the HBM base die.

In earlier HBM generations, the base die was typically manufactured using a DRAM‑optimized process, well suited for capacitor structures but less optimal for high‑speed logic. With HBM4, most suppliers are transitioning to standard advanced logic processes for the base die. This shift aligns more closely with the processes already familiar to SoC designers and opens the door to customization opportunities.

Whether using standard or custom HBM4 devices, these solutions continue to rely on advanced packaging and silicon substrates—such as interposers or bridges—to accommodate the large number of fine‑pitch connections between the memory and the ASIC.

SPHBM4: Bringing HBM‑class bandwidth to organic packaging

Despite its performance advantages, traditional HBM integration requires advanced packaging, which can increase cost and complexity. Many system designers, particularly those focused on volume production and reliability, prefer standard organic substrates. To address this gap, JEDEC has announced that it is nearing completion of a new standard for Standard Package High Bandwidth Memory (SPHBM4).

SPHBM4 devices use the same DRAM core dies as HBM4 and provide equivalent aggregate bandwidth, but they introduce a new interface base die designed for attachment to standard organic substrates. Figure 5 illustrates a side view of an SPHBM4 DRAM mounted directly on an organic package substrate, alongside an ASIC. The ASIC itself may also reside on the organic substrate, or it may remain on advanced packaging such as a silicon bridge for multi‑XPU integration.

Figure 5 Side view of an SPHBM4 DRAM-to-ASIC connection, with the SPHBM4 DRAM attached directly to the organic package substrate. Source: Eliyan

To achieve HBM4‑class throughput with fewer pins, SPHBM4 employs higher interface frequencies and serialization. While HBM4 defines 2,048 data signals, SPHBM4 is expected to use 512 data signals with 4:1 serialization, enabling the relaxed bump pitch required for organic substrates.
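To see why 512 signals with 4:1 serialization can preserve HBM4's aggregate throughput, each pin must run faster by the serialization factor. A rough sketch of that trade-off — note the 32 Gb/s per-pin figure below is derived arithmetic, not a number quoted from the standard:

```python
# HBM4 parallel interface vs. the serialized SPHBM4 interface described above.
hbm4_signals = 2048
hbm4_rate_gbps = 8.0
aggregate_gbps = hbm4_signals * hbm4_rate_gbps       # 16,384 Gb/s ~ 2 TB/s

sphbm4_signals = 512
serialization = 4
# To match aggregate bandwidth, each SPHBM4 pin runs `serialization`x faster.
sphbm4_rate_gbps = aggregate_gbps / sphbm4_signals   # 32 Gb/s per pin (derived)
assert sphbm4_rate_gbps == hbm4_rate_gbps * serialization

print(aggregate_gbps / 8 / 1000, "TB/s at", sphbm4_rate_gbps, "Gb/s per pin")
```

The fourfold reduction in signal count is what allows the relaxed bump pitch needed for organic substrates.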

Because SPHBM4 uses the same DRAM stacks as HBM4, per‑stack capacity remains unchanged. However, organic substrate routing supports longer channel lengths between the SoC and the memory, which can enable new system‑level trade‑offs. In particular, longer routing distances and angled trace routing can allow more memory stacks to be placed around a given die.

Figure 6 illustrates this effect. When HBM devices are mounted on silicon substrates, they must be placed immediately adjacent to the XPU, limiting the number of stacks to two per 25-mm die edge. With SPHBM4 on an organic substrate, three memory devices can be connected along the same edge, increasing both memory capacity and bandwidth by approximately 50%.

Figure 6 Twelve SPHBM4 devices in an example package boost capacity and total bandwidth. Source: Eliyan

Even when a silicon substrate is still used beneath the XPU—for example, to support high‑bandwidth XPU‑to‑XPU links—the overall interposer size can be significantly reduced when memory devices are moved to the organic package. This reduction can translate into meaningful benefits in system cost, manufacturability, and test complexity.

Looking ahead

AI workloads continue to push the limits of memory bandwidth, capacity, and packaging technology. JEDEC’s HBM4 standard represents a major step forward in addressing these demands, while the emerging SPHBM4 standard expands the design space by enabling HBM‑class performance on standard organic substrates.

For system architects, these technologies offer new flexibility in balancing performance, cost, and integration complexity. As memory and packaging increasingly shape overall system capability, early consideration of options such as HBM4, custom HBM, and SPHBM4 will be essential to fully unlocking the next generation of AI and HPC performance.

Kevin Donnelly is VP of strategic marketing at Eliyan.

Related Content

The post Advancing AI performance with HBM4, SPHBM4 DRAM solutions appeared first on EDN.

HRL’s T3L 40nm GaN-on-SiC technology achieves Manufacturing Readiness Level 6

Semiconductor Today - Thu, 04/09/2026 - 16:02
HRL Laboratories LLC of Malibu, CA, USA (a corporate R&D lab co-owned by The Boeing Company and General Motors) says that its T3L 40nm gallium nitride (GaN) on silicon carbide (SiC) technology achieved Manufacturing Readiness Level (MRL) 6 through the US Office of the Under Secretary of War. The firm considers the milestone to represent a significant step in the maturation of its RF GaN manufacturing technology for defense and high-performance commercial applications...

Rethinking machine vision in industrial automation

EDN Network - Thu, 04/09/2026 - 16:00

Machine vision has always played a critical role in ensuring safe, efficient, and reliable operation in many industrial settings. However, as vision-enabled machines become more numerous and the type and volume of data they can collect expand, challenges are forcing system makers to look at new approaches to efficiently acquire, process, and utilize visual data.

If we look at the current challenges, they span the spectrum in terms of improving operational efficiency, accuracy, and reliability.

Data overload and processing efficiency that limits throughput are major issues as industries move toward more advanced, faster automation, tasking vision systems with capturing and analyzing vast amounts of data. Traditional vision systems often struggle with the sheer volume of images they capture, much of which can be redundant. The requirement now is not just about capturing high-resolution images but doing so in a way that first and foremost accelerates throughput (in part by minimizing irrelevant data) while maximizing the precision and relevance of the information captured.

Real-time processing is becoming increasingly important, especially in environments where machines need to make instantaneous decisions, such as in quality control or defect detection on production lines. This requires more efficient processing methods and data reduction techniques.

High-speed and high-precision demands increase as production lines get faster. High-speed processing, low latency, and the ability to capture minute changes in a scene in real time are critical. Traditional frame-based systems struggle with motion blur and data overload when capturing fast-moving objects. For example, in applications such as high-speed counting, even the slightest delay in image acquisition and processing can lead to errors.

Sustainability is a growing priority, as many industrial systems operate in environments where power efficiency is key. Vision systems need to operate for extended periods without consuming significant amounts of energy. Traditional image-processing systems, especially those that capture entire frames at a fixed rate, can be power-intensive and require sophisticated cooling or energy management.

Complex lighting and environmental conditions are common in many settings, including extreme brightness, low light, or dynamic lighting scenarios. Vision systems need to cope with high-dynamic-range requirements to capture high-quality images without losing detail in either the darkest or brightest areas. Conventional frame-based systems have struggled in such conditions, leading to the need for more adaptable and sensitive vision technologies.

Predictive maintenance and condition monitoring are growing needs. Vision systems must not only react to issues but also help to predict potential problems before they occur. Predictive maintenance requires vision systems that can monitor machine vibrations, detect wear and tear, and identify early signs of equipment failure.

These challenges point to a more fundamental limitation: Traditional frame-based vision was designed for image capture and human viewing, not for machines that must detect, interpret, and react to changes in real time. As industrial systems move toward higher levels of automation and autonomy, vision is becoming a core component of the perception pipeline.

This shift is driving demand for sensing approaches that reduce latency, limit unnecessary data, and enable faster, more reliable decisions across applications such as monitoring, inspection, counting, and control.

Event-based vision addresses these challenges

Event-based vision, inspired by the human eye and brain, is increasingly used in industrial machine vision to address these challenges. By mimicking biological vision, this technology utilizes efficient sensing and collection techniques that capture changes within a specific scene. This reduces processing requirements compared with traditional frame-based methods while revealing details that conventional systems miss, opening new possibilities for precision and performance in industrial applications.

Event-based vision is particularly suited for industrial automation, IoT, automotive, and edge applications that demand high performance, low power consumption, and operation in challenging lighting conditions. The technology offers significant advantages in speed, power efficiency, dynamic range, and low latency, driving use cases such as high-speed counting, preventive maintenance, and inspection.

From frame-based imaging to event-based perception

In conventional video systems, entire images (i.e., the light intensity at each pixel) are recorded at fixed intervals, known as the frame rate. Standard movies are recorded at 24 fps, with some videos using higher frame rates like 60 fps (16.7-ms intervals). While effective for representing the “real world” on a screen, this method oversamples unchanged parts of an image, especially at high frame rates, while undersampling the most dynamic areas. As a result, critical motion information can be missed between frames.

In contrast, the human eye samples changes up to 1,000× per second without focusing on static backgrounds at such high frequencies. Event-based sensing offers a biologically inspired solution to this under- and oversampling. Unlike traditional cameras, event sensors don’t use a uniform acquisition rate (frame rate) for all pixels. Instead, each pixel defines its sampling points by reacting to changes in the amount of light it detects. Information about contrast changes is encoded in “events”—data packets containing the pixel’s coordinates and the precise time of the event.

Figure 1: Frame-based vs. event-based sensing—discrete frame sampling vs. continuous motion capture (Source: Prophesee)

Prophesee’s patented event-based sensors, for instance, allow each pixel to activate intelligently based on detected contrast changes. This enables continuous acquisition of essential motion information at the pixel level. The pixels operate asynchronously (unlike traditional CMOS cameras) and at much higher speeds, as they don’t need to wait for a complete frame before reading data.

The advantages of event sensors include high-speed operation (equivalent to 10,000 fps), extremely efficient power consumption (down to the microwatt range), low latency, reduced data processing requirements (10× to 10,000× less than frame-based systems), and high dynamic range (up to 140 dB).

Because only changes are transmitted, event-based data streams are inherently sparse and temporally precise, allowing downstream processing systems—including AI-based processing—to focus on what matters: motion, variation, and anomalies rather than static background information. These attributes make event-based vision systems suited for a wide range of applications and products.
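As a concrete illustration of the event encoding described above — each event carrying pixel coordinates, a polarity, and a timestamp — a minimal software model of one contrast-sensitive pixel might look like the sketch below. The threshold value and function names are illustrative, not taken from any vendor's API:

```python
import math
from dataclasses import dataclass

@dataclass
class Event:
    x: int          # pixel column
    y: int          # pixel row
    polarity: int   # +1 brighter, -1 darker
    t_us: int       # timestamp in microseconds

def pixel_events(x, y, samples, threshold=0.2):
    """Emit an event each time log intensity moves by `threshold`
    from the level at the last event (per-pixel, asynchronous)."""
    events = []
    ref = math.log(samples[0][1])
    for t_us, intensity in samples[1:]:
        delta = math.log(intensity) - ref
        while abs(delta) >= threshold:
            pol = 1 if delta > 0 else -1
            events.append(Event(x, y, pol, t_us))
            ref += pol * threshold
            delta = math.log(intensity) - ref
    return events

# A static pixel produces no events; a brightening pixel emits a sparse burst.
static = pixel_events(3, 7, [(t, 100.0) for t in range(0, 1000, 100)])
ramp = pixel_events(3, 7, [(0, 100.0), (500, 150.0), (1000, 300.0)])
print(len(static), len(ramp))  # 0 events vs. a handful of +1 events
```

The unchanged pixel contributes nothing to the data stream, which is exactly the sparsity property the text describes.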

This technology is being commercialized more widely, such as in Prophesee’s Metavision, which has evolved over the past decade to deliver high performance through integrated hardware and software solutions.

Real-time industrial automation with event-based vision

Event-based vision excels in a variety of industrial automation applications. Typical use cases (see Figure 2) range from object tracking and high-speed counting to predictive maintenance and quality control.

Figure 2: Applications of event-based vision in industrial automation (Source: Prophesee)

Safety: object tracking

Event-based vision systems excel at tracking moving objects, leveraging their low data rate and sparse information capabilities. This approach allows for precise object tracking with minimal computational resources, eliminating traditional “blind spots” between frame acquisitions. Additionally, event sensors offer native segmentation, focusing solely on movement and disregarding static backgrounds for improved tracking accuracy and efficiency. Event-based vision enhances safety by monitoring worker and machine interactions in real time, even in complex lighting, without capturing images.

Productivity: high-speed counting

Real-time vision systems powered by event-based sensing enable objects to be counted at unprecedented speeds with high accuracy and minimal motion blur. Sensors independently trigger each pixel as objects pass through the field of view, achieving a throughput of over 1,000 objects per second and an accuracy of more than 99.5%, ensuring rapid and precise counting in high-speed environments.

Predictive maintenance: vibration monitoring

Event-based vision enables continuous, remote vibration monitoring with pixel-level precision. By tracking the temporal evolution of each pixel in the scene, the sensors record each event’s coordinates, polarity of change, and exact timestamp. This data provides valuable insights into vibration patterns across frequencies from 1 Hz to the kilohertz range, aiding in predictive maintenance.
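One way to turn such an event stream into a vibration estimate is to bin the signed events over time and take the dominant spectral peak: the signed event rate tracks the derivative of log intensity, so its spectrum peaks at the vibration frequency. A simplified sketch on synthetic data (NumPy assumed; all parameters illustrative):

```python
import numpy as np

def dominant_frequency(event_t_s, event_pol, duration_s, bin_s=1e-3):
    """Estimate dominant vibration frequency (Hz) from event timestamps
    (seconds) and polarities (+1/-1) via an FFT of the signed event rate."""
    n_bins = int(duration_s / bin_s)
    signed = np.zeros(n_bins)
    idx = np.minimum((np.asarray(event_t_s) / bin_s).astype(int), n_bins - 1)
    np.add.at(signed, idx, event_pol)      # signed events per time bin
    spectrum = np.abs(np.fft.rfft(signed))
    spectrum[0] = 0.0                      # drop the DC component
    freqs = np.fft.rfftfreq(n_bins, d=bin_s)
    return freqs[int(np.argmax(spectrum))]

# Synthetic 50-Hz vibration, sampled at 10 kHz: emit an event whenever
# log intensity moves one contrast threshold from the last event level.
t_axis = np.arange(0.0, 1.0, 1e-4)
log_i = 0.5 * np.sin(2 * np.pi * 50.0 * t_axis)
threshold, ref = 0.05, 0.0
ts, pols = [], []
for t, v in zip(t_axis, log_i):
    while v - ref >= threshold:
        ts.append(t); pols.append(+1); ref += threshold
    while ref - v >= threshold:
        ts.append(t); pols.append(-1); ref -= threshold

print(dominant_frequency(ts, pols, 1.0))  # ~50.0 Hz
```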

Figure 3: Event-based vibration monitoring in industrial systems; frame-based imaging shown for reference (Source: Prophesee)

Quality: particle/object size monitoring

In high-speed production environments, event-based sensing allows for real-time control, counting, and measurement of particle or object sizes on conveyors or channels. The sensors capture instantaneous quality statistics, ensuring accurate process control at speeds of up to 500,000 pixels per second with a counting precision of 99%, optimizing quality assurance in production lines.

Figure 4: High-speed event-based particle counting and size monitoring; frame-based image shown for reference (Source: Prophesee)

Quality control

Event-based vision systems help lower reject rates with real-time feedback and advanced processing down to a 5-µs time resolution and blur-free asynchronous event output. One specific use case is in the automatic detection and classification of the finest imperfections in manufacturing materials—for example, in automotive parts to perform paint defect inspection, scratch detection, and planarity testing (see Figure 5).

Figure 5: Event-based surface contamination and defect detection in industrial production (Source: Prophesee)

As event-based vision continues to evolve and address diverse market needs, it is establishing itself as a new industry standard. Over the past several years, the technology has expanded to serve a wide array of applications.

Thousands of product developers are now adopting event-based vision for sophisticated camera and perception systems, supported by open-source technology and a growing inventors’ community. These advancements are transforming how machines perceive, process, and react to visual information in real time, bringing greater precision, efficiency, and intelligence to industrial automation operations.

Thibaut Willeman is head of business development and go-to-market at Prophesee, where he works on the market development of event-based vision systems for industrial automation, robotics, and defense applications. He previously held strategy and innovation roles at companies such as Boston Consulting Group, working on growth strategy, product strategy, and innovation initiatives for industrial and technology companies. He holds an engineering degree and a master’s degree in innovation and technology management.

The post Rethinking machine vision in industrial automation appeared first on EDN.

Humidifiers and such: How much “smart” is too much?

EDN Network - Thu, 04/09/2026 - 15:00

This engineer’s new humidifier is—he kids you not—Wi-Fi enabled, therefore “smart”. What upsides does such a product deliver? And at what tradeoffs?

Within one of last month’s writeups, I mentioned that my wife and I had recently acquired two DREO 4-liter-capacity ionizing humidifiers. That purchase led to my interest in hygrometers (humidity measurement devices) such as the TP-Link Tapo T315, which ended up supplanting the bad data I’d previously relied upon from my furnaces’ touchscreen thermostats.

Ionizing advancements

The baseline DREO HM311:

relies on front panel buttons for user control purposes. It works well, and I enjoy the dynamic bubbling-water “light show” projected through the center mist tube, particularly visible at night:

The ionizing design approach is also interesting; just make sure to remember to keep ‘em clean:

Its slightly more expensive “smart” sibling, the HM311S, adds Wi-Fi support, thereby making it controllable (and more broadly manageable) via a mated smartphone or other mobile device:

or even, courtesy of its integrated Amazon Alexa and Google Assistant support, your voice:

And the tri-color mist tube (which I’d been calling a “pillar” until I revisited the user manual just now) is a handy visual reference to the current measured humidity level (I’ve yet to see blue):

Light Color    Humidity Level
Yellow         ≤30%
Green          31-60%
Blue           ≥61%

Binary impermanence

Believe it or not, the HM311S is even the beneficiary of periodic firmware updates, such as the one that I was prompted to install as part of initial out-of-box setup:

Another update, I noticed, was available as I re-accessed the device via my smartphone two-plus months later, just prior to writing these words:

And yes, the humidifier’s status and settings are even accessible over the Internet; note the cellular-only connection in the following screenshot (per the reported 436 hours of use to date, this was an Amazon Warehouse-sourced, apparently previously-used unit, even though it arrived in seemingly brand-new condition):

Weighing pros and cons

Nifty. But also potentially (more than) a bit scary. First off, what’s the realistic benefit (if any) of remote status monitoring from my mobile device? It’s not like I have a robot sitting at home in my absence that can alternatively grab a water pitcher, fill it and transfer its contents to the humidifier if it empties, after all. Not yet, at least:

More generally, is it convenient to turn the humidifier on and off (and raise and lower its output intensity) from the couch, using either the aforementioned smartphone or my voice? Sure. But on the other hand, I could also always use the exercise. And what do I give up in exchange for all this supposed connectivity “goodness”?

For one thing, I’m sharing WAN IP address, device usage and ambient analytics data with the manufacturer. For a humble DREO humidifier, maybe this degree of reveal isn’t such a big deal. But what about my Google Nest Wifi mesh network, similarly managed via the cloud? Or my Blink security camera setup, which leverages cloud services not only for monitoring and control purposes but also to store recordings (at least currently; stay tuned for next week’s teardown)?

And what happens if those cloud services, not only from DREO (or its Amazon Alexa partner), Google or Blink but any other similar supplier, get hacked? Sure, it’s annoying to have someone remotely switching on and off your humidifier out of your control. That time someone used my then-firewall-exposed IPP port to spit pages (and pages and pages) of gibberish out of my laser printer was a bit more annoying. But that’s not what I’m talking about when I say “scary”.

The hackers now know who I am from my account profile and can easily determine my location via an online search using my name. Since they know my WAN IP address, they can now attempt to hack me. They also know my Wi-Fi network credentials, which makes it even easier to get inside my LAN if, since they now know my location, they’re motivated to pull up and park on the street outside. They know my account username and password, which theoretically should be unique to this particular cloud service but—get real—is undoubtedly reused elsewhere. And for a paid cloud service, they also now know my credit card and/or bank account info. Fun times!

Is such elementary convenience worth the potential consequences? If you’re a consumer, it’s a question you should be asking yourself pre-purchase…although you’re likely to be unaware of the possible downsides. Therefore, if you’re a manufacturer, it’s a question you should be asking on behalf of your potential customers during the initial development process…although you’ve also got marketing breathing down your neck for new features, and your competitors may have already unveiled similar capabilities, so you’re also under late-to-market pressure, so…🤷‍♂️

When, if ever, is a product too “smart”? Or taking the thought to the other end of the extremist spectrum, should products be “smart” at all, at least for the mass market? As always, I welcome your thoughts in the comments!

Brian Dipert is the associate editor, as well as a contributing editor, at EDN Magazine.

Related Content

The post Humidifiers and such: How much “smart” is too much? appeared first on EDN.

Directed Energy Systems: Where Capability Ends and Control Begins

ELE Times - Thu, 04/09/2026 - 12:50

by Sukhendu Deb Roy, Industry Consultant

Key Takeaways
  • The economics of warfare have flipped, with cost asymmetry emerging as a primary battlefield dynamic
  • Directed energy systems shift defence from inventory-driven models to energy-driven ones
  • Future defence architectures will be AI-orchestrated, integrated, and multi-domain
  • Semiconductor capability is central to defence sovereignty
Introduction: The Shift in Modern Warfare

Modern warfare is undergoing a structural and economic shift—one that is redefining how conflicts are fought and sustained. Across theatres, adversaries are increasingly deploying low-cost, high-volume threats designed not just to penetrate defences, but to exhaust them. This is not merely a tactical evolution; it is an economic strategy aimed directly at the cost structure of defence systems rather than their technical limits.

In response, Directed Energy Weapons (DEW), particularly high-energy laser (HEL) systems, are emerging as a compelling alternative. By reducing the cost per engagement to near-zero and removing dependence on finite ammunition, they signal a transition toward energy-based warfare—where power availability replaces inventory as the primary constraint.

Operational systems today, typically in the 100–300 kW class, are already capable of countering drones, small boats, and select aerial threats. However, their performance remains constrained by power density, beam quality, and thermal dissipation limits.

Figure 1. Emerging multi-layered defence architectures integrating kinetic and directed energy systems through AI-driven command and control.

The Problem: Capability Without Control

This advantage, however, is not absolute. Real-world deployments continue to reveal persistent constraints—thermal limits, atmospheric attenuation, beam dwell time, and power scalability challenges. These are not isolated engineering challenges; they are systemic constraints.

More importantly, they reveal a deeper dependency: the effectiveness of directed energy systems is inseparable from the ecosystem that supports them. Performance is not defined solely by the platform, but by the electronics, semiconductors, and supply chains beneath it.

This creates a structural risk. A nation may deploy advanced directed energy systems, yet remain dependent on external control at the component and semiconductor level.

The future of defence, therefore, will not be determined by the deployment of advanced platforms alone, but by the ability to secure control over the enabling ecosystem that makes those platforms viable at scale.

Figure 2. Directed energy systems deliver visible capability, but remain dependent on underlying electronics and semiconductor ecosystems—creating hidden vulnerabilities in control.

The Economic War of Attrition

At the heart of this transformation lies a fundamental imbalance shaping modern conflict. Defenders are increasingly forced to deploy high-value interceptors against low-cost threats, creating an unsustainable economic equation. Systems such as surface-to-air missiles or kinetic interceptors become prohibitively expensive when faced with saturation attacks.

This imbalance is not incidental—it is being deliberately operationalized through drone swarm attacks and loitering munitions designed to overwhelm defences through sheer volume rather than technological sophistication. The objective is clear: to stretch defensive resources to their limits and exploit the cost asymmetry inherent in traditional systems.

Directed energy systems fundamentally alter this equation. By shifting from consumable munitions to energy-based engagement, they dramatically reduce marginal costs and enable sustained operation without the constraints of inventory—as long as sufficient power is available.

This represents more than a technological evolution. It is a financial reset in how defence is structured and sustained. This is the defining shift from inventory-based warfare to energy-based warfare.

Figure 3. Cost asymmetry in modern warfare—low-cost threats forcing disproportionately expensive kinetic responses, driving unsustainable defence economics.

Without such a transition, the long-term economics of defence operations risk becoming untenable in the face of increasingly scalable, low-cost threats.

The Illusion of Sovereignty

The visible success of a directed energy intercept can be compelling. It signals speed, precision, and technological sophistication—creating the impression of true strategic independence. But that impression can be deceptive.

Beneath every such system lies a tightly integrated ecosystem of power electronics, thermal systems, optical assemblies, RF components, and semiconductors. If these critical elements are externally sourced, control has not been achieved—it has merely shifted out of view. Dependence is not eliminated; it is reconfigured.

In practice, this dependence surfaces through export controls, defence supply chain choke points, firmware constraints, and restricted access to advanced semiconductor nodes. Under normal conditions, these limitations may remain hidden. Under geopolitical stress, they translate directly into operational risk.

Capability alone does not ensure sovereignty.

Control does.

Where Control Actually Resides

To understand where control truly resides, directed energy systems must be viewed not as standalone platforms, but as layered architectures.

At the surface lies the platform layer—the visible capability, including laser systems deployed on land, sea, or air platforms. Beneath this sits the system layer, where command-and-control frameworks, targeting systems, and sensor fusion enable coordinated operation.

Deeper still is the engineering layer, which determines real-world performance. This includes power electronics that stabilize output, thermal systems that govern endurance, and optical and beam control mechanisms that ensure precision.

At the foundation lies the control layer—the least visible, yet most decisive. This layer encompasses semiconductors, advanced materials, packaging, and the broader supply chain that sustains the system.

It is this lowest layer that anchors performance, scalability, and resilience. Any external dependence here propagates upward, constraining every layer above and limiting true autonomy.

Sovereignty, in this context, is not a function of the platform—it is a function of control at the component and semiconductor level.

These constraints are not theoretical—they are engineered into the system itself.

Figure 4. Directed energy performance is constrained by tightly coupled power, thermal, and semiconductor systems—highlighting the central role of control-layer technologies such as GaN-based switching.

The Real Bottlenecks

The challenges facing directed energy systems are physical, not conceptual.

  • Thermal constraints limit sustained firing duration
  • Advanced power electronics define efficiency
  • Atmospheric conditions degrade beam propagation
  • Beam dwell time limits effectiveness against fast-moving targets
  • AI-enabled defence systems must operate at machine speed

Figure 5. Directed energy constraints are interdependent—thermal, power, and control limitations must be solved as an integrated system, not in isolation.

These constraints do not exist in isolation—they reinforce and amplify one another. Addressing a single limitation, whether in thermal management or power delivery, does not translate into real operational capability on its own. What is required is coordinated industrial depth across multiple domains, from materials science and semiconductor design to power systems and real-time computation.

A directed energy system is only as effective as the ecosystem that sustains it.
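A back-of-envelope illustration of why the thermal constraint dominates: at realistic wall-plug efficiencies most input power becomes heat, and sustained firing is bounded by how much of it the cooling loop can absorb. Every figure in this sketch (beam power, efficiency, coolant mass, allowed temperature rise) is an assumption for illustration only:

```python
def sustained_fire_seconds(output_kw, wall_plug_eff, coolant_kg,
                           specific_heat=4186.0, allowed_rise_c=40.0):
    """Toy 'thermal magazine' model: seconds of continuous firing before a
    water-coolant reservoir absorbs its allowed temperature rise.
    Ignores active heat rejection, so this is a lower-bound sketch."""
    waste_heat_w = output_kw * 1e3 * (1.0 / wall_plug_eff - 1.0)
    capacity_j = coolant_kg * specific_heat * allowed_rise_c
    return capacity_j / waste_heat_w

# Assumed 100-kW beam at 30% efficiency with 200 kg of coolant:
t = sustained_fire_seconds(100, 0.30, 200)
print(f"{t:.0f} s")  # a couple of minutes of continuous fire
```

The point of the toy model is structural: doubling beam power without touching efficiency or cooling roughly halves the firing window, which is why the constraints must be solved together.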

From Weapons to Systems

Directed energy is no longer a standalone capability. It is steadily becoming part of integrated, AI-orchestrated defence architectures—often described as Cognitive Hybrid Defence—where multiple systems operate in coordination rather than isolation. In this emerging model, directed energy systems function alongside electronic warfare, cyber capabilities, and kinetic interceptors, all unified through real-time command-and-control frameworks.

Figure 6. Transition from standalone weapons to AI-orchestrated, multi-layer defence systems, where threats are dynamically assigned to the most efficient response layer.

This shift is already visible in operational programs such as the U.S. Navy’s HELIOS system and Israel’s Iron Beam, both of which demonstrate how layered, multi-domain defence is replacing single-point solutions. The objective is no longer limited to individual interception—it is about orchestrating responses across domains with speed, precision, and economic efficiency. As this transition accelerates, control over the underlying technological ecosystem becomes even more critical.

Semiconductor Policy is Defence Policy

This convergence carries direct implications for national strategy. Defence capability and semiconductor capability can no longer be treated as separate domains—they are structurally interdependent. Initiatives such as India’s Electronics Component Manufacturing Scheme (ECMS) and the India Semiconductor Mission (ISM 2.0) must be viewed through this lens. These initiatives are central to building semiconductor sovereignty and securing India’s position in the global defence technology supply chain. They are not merely industrial policies; they are foundational to future defence capability.

Yet the challenge is not one of intent or conceptual understanding. It lies in industrial depth—particularly in manufacturing, materials ecosystems, and advanced semiconductor fabrication. Without control over critical technologies such as Gallium Nitride (GaN)-based power electronics systems, advanced packaging, and high-reliability electronics, there is a real risk of remaining a system integrator rather than a true control holder. Sovereignty, in this context, is not achieved through system assembly but through ownership of the components and technologies that define performance and resilience.

Figure 7. Defence capability is fundamentally anchored in semiconductor ecosystems—spanning materials, manufacturing, and advanced power electronics such as GaN-based systems.

Conclusion: Capability vs Control

What emerges is a broader shift in how warfare itself is understood. We are moving into a phase defined by energy, integration, and system-level thinking. Directed energy systems will become increasingly visible on the battlefield, delivering immediate and measurable impact. However, the true determinants of success will remain largely invisible—embedded in defence supply chains, semiconductor ecosystems, and industrial capability.

This creates a clear strategic imperative. Nations must move beyond assembling advanced platforms to controlling them end-to-end.

Forward Outlook

Looking ahead, the defining question of the next decade will not be who deploys directed energy systems first, but who can sustain and scale them under real-world conditions. Future conflicts may become power-limited rather than ammunition-limited, with grid resilience, energy density, power electronics infrastructure, and power distribution emerging as core defence parameters.

Meeting this challenge will require closer alignment between defence procurement and semiconductor strategy, sustained investment in power electronics, thermal systems, and advanced materials, and a decisive shift from platform-centric thinking to ecosystem-centric design.

Countries that recognize this transition early will build not just capability, but resilience. Those that do not will remain dependent—regardless of how advanced their visible systems may appear.

Figure 8. Future defence systems will be constrained by power, energy infrastructure, and semiconductor capability—marking the shift from ammunition-limited to energy-limited warfare.

Final Perspective

In the next generation of warfare, capability will be visible. Control will be decisive.

 

Author’s profile:
Sukhendu Deb Roy is a semiconductor and power electronics professional with over 15 years of experience. He holds an M.Sc. in Laser Physics and an M.Tech in Laser Science and Applications, and focuses on the intersection of directed energy systems, power electronics, and semiconductor ecosystems.

The post Directed Energy Systems: Where Capability Ends and Control Begins appeared first on ELE Times.

IPSAK student club «Engineering and Programming of Systems» at FIOT

News - Thu, 04/09/2026 - 12:00

The IPSAK student club «Engineering and Programming of Systems (based on autonomous complexes)» has begun operating at the Faculty of Informatics and Computer Engineering (FIOT) of Igor Sikorsky Kyiv Polytechnic Institute.

Vector Photonics demos free-space optical communication using PCSEL outside of a lab

Semiconductor today - Thu, 04/09/2026 - 11:48
Vector Photonics Ltd of the West of Scotland Science Park (which was spun off from the University of Glasgow in 2020, based on research led by professor Richard Hogg) has announced the first successful public demonstration of photonic crystal surface-emitting lasers (PCSEL) technology for optical communication outside of a lab. On 31 March, the firm’s PCSELs were used to transmit data across the River Clyde from the Glasgow Science Centre to the Clydeside Distillery, using a system designed and built by Fraunhofer UK...

New student space at VPI

News - Thu, 04/09/2026 - 10:00

The Educational and Research Publishing and Printing Institute (NN VPI) of Igor Sikorsky Kyiv Polytechnic Institute has presented a renovated shelter and a new student space.

CSconnected extends deadline for fourth funding round of supply chain development program

Semiconductor today - Wed, 04/08/2026 - 21:19
The South Wales-based compound semiconductor cluster CSconnected is encouraging organizations to apply for the fourth and final funding round of its £1m supply chain development program, delivered in partnership with Cardiff Capital Region (CCR), which now closes at 4pm on 23 April (extended from 17 April)...

Supra extends pre-seed funding round with Rio Tinto as strategic investor

Semiconductor today - Wed, 04/08/2026 - 20:54
Supra Elemental Recovery Inc has announced a strategic investment from global mining and materials company Rio Tinto and Founders Factory through their mining technology accelerator. Structured as a combination of cash and in-kind services, the investment will enable Supra to build and commercialize its modular critical mineral recovery technology with insight and support from Rio Tinto...

My new Workbench and Setup!

Reddit:Electronics - Wed, 04/08/2026 - 18:29
My new Workbench and Setup!

Hi everyone!

I just finished my new workbench! I extended my existing one (the one facing the desk behind) with the edge piece facing the wall. I also sanded the desk surfaces and gave them a new finish. And last but not least, I added the shelf above for all the devices.

As you can see it is not completely finished, I am still working on the LED strip that goes below the shelf and some other refinements. But so far I am very pleased with the results!

submitted by /u/FloTec09

Top 10 DC/DC converters and modules

EDN Network - Wed, 04/08/2026 - 16:00

DC/DC converters for demanding applications, ranging from industrial, railway systems, and satellites to communications and information technology equipment (ITE), are required to meet stringent requirements. They call for enhanced performance and high reliability, including operating in extreme conditions, while often requiring compact designs.

Over the past year, DC/DC converter manufacturers have focused on providing higher efficiency, offering greater flexibility with more options, saving board space with smaller packages, and delivering more cost-effective solutions. These devices are available in a variety of form factors, including brick types, DIPs, and modules.

Here’s a sampling of DC/DC converters introduced over the past year that deliver improvements in performance and packaging while providing the right-sized features for the application.

Meeting demanding requirements

Many of the latest families of DC/DC converters are designed to operate in demanding and harsh environments, including industrial, railway, ITE, and communications. They also often need to fit into tight spaces.

XP Power recently developed a family of DC/DC converters for space-constrained applications in demanding environments such as industrial, ITE, and communications systems. The BCT40T series of 40-W DC/DC converters offers high power density in a 1 × 1-inch (25.4 × 25.4-mm) package.

The BCT40T series features high efficiency, up to 89% depending on the model, and remote on/off functionality to enable energy savings and safe shutdowns. The series offers a wide 4:1 input voltage range, enabling operation across multiple input voltages. Models are available with nominal 24-VDC inputs (ranging from 9.0 V to 36.0 VDC) and 48-VDC inputs (ranging from 18.0 V to 75.0 VDC).
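As a rough check on what an efficiency figure means thermally, the heat a converter must shed follows directly from output power and efficiency. This is generic conversion arithmetic applied to the quoted numbers, not an XP Power specification:

```python
def input_power(p_out_w, efficiency):
    """Input power a DC/DC converter draws for a given output power."""
    return p_out_w / efficiency

def dissipated_power(p_out_w, efficiency):
    """Heat the converter itself must shed (input minus output power)."""
    return input_power(p_out_w, efficiency) - p_out_w

# At the full 40-W output and the quoted 89% peak efficiency:
loss = dissipated_power(40, 0.89)
print(f"{loss:.2f} W")  # 4.94 W of heat in a 1-inch-square package
```

Nearly 5 W of heat in a 1 × 1-inch footprint is why the wide full-load temperature range the vendor highlights matters in practice.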

The devices operate over a wide operating temperature range of −40°C to 105°C and a broader full-load operating temperature range than many alternatives, XP Power said.

The BCT40T offers single regulated outputs ranging from 3.3 V to 24 VDC, as well as dual regulated outputs at ±12 VDC and ±15 VDC. The single-output models offer the flexibility of ±10% output voltage adjustment via an external trim resistor, enabling specific voltage requirements.

Targeting applications such as test and measurement, robotics, process control, analytical instruments, and communications equipment, these DC/DC converters feature an ultra-compact metal package that saves printed-circuit-board (PCB) area and allows more room for customer application circuitry, according to XP Power. In addition, these devices are smaller than many 40-W alternatives, which typically come in larger, 2 × 1-inch (50.8 × 25.4-mm) packages, reducing required board space by 50%.

The series meets worldwide safety approvals, including IEC/UL/EN62368-1 standards, as well as applicable CE and UKCA directives. It also complies with EN55032 Class A/B for conducted and radiated emissions and EN61000-4-x for immunity. The BCT40T series is available now.

XP Power’s BCT40T series (Source: XP Power)

Murata Manufacturing launched a high-performance, 1-W DC/DC converter with reinforced isolation and ultra-low capacitance, targeting communications and analog front-end measurement circuits.

The NXJ1T series addresses the need for robust isolation, delivering high electrical isolation, noise immunity, and thermal reliability for industrial, energy, and medical applications with 4.2-kVDC isolation (Hi Pot Test) and compliance with UL62368 safety standards.

The NXJ1T series, housed in a compact, 10.55 × 13.70 × 4.04-mm footprint, is designed for safety and durability in demanding environments. It features an unregulated, 1-W 5-V input to 5-V/200-mA output design, which is suited for embedded systems.

Each device delivers reinforced insulation to 200 Vrms and basic insulation to 250 Vrms. This adds a layer of protection in high-voltage environments. The undervoltage lockout (UVLO) functionality enhances operational stability, which prevents erratic behavior under fluctuating power conditions, Murata said.

These devices can also be used in medical equipment, where low leakage current is critical for patient-connected applications. They feature ultra-low isolation capacitance, which helps minimize unwanted leakage, supporting compliance with stringent safety standards such as IEC 60601-1 when used within a certified system, the company said.

The DC/DC converters also leverage proprietary molding technology, providing high ingress protection against dust and particulates in harsh industrial environments and extreme temperatures. The device has successfully undergone 1,000 temperature cycles between −40°C and 125°C, demonstrating its ability to withstand the highest levels of thermal stress, Murata said.

The series also uses Murata’s proprietary block-coil transformer technology, providing high isolation and low leakage current, and facilitates lower switching frequencies (500 kHz to 2 MHz) and higher efficiencies of approximately 80%.

The result is exceptional common-mode transient immunity and significantly lower isolation capacitance, according to Murata, making it suited for high-performance power isolation in electrically noisy environments.

Recom GmbH developed a 20-W DC/DC converter in a compact, 1.6 × 1 × 0.4-inch (40.6 × 25.4 × 10.2 mm) package, calling it a new level of high efficiency in DC/DC performance. The RPA20-FR series, targeting rail applications, delivers 20 W over its full 36-VDC to 160-VDC input range (200-VDC peak for 1 second) from −40°C to 70°C and 105°C with derating.

The series offers fully regulated, low-noise, and protected single outputs (5 V, 5.1 V, 12 V, 15 V, and 24 VDC), trimmable by +20%/−10% minimum, with ±5-V, ±12-V and ±15-VDC options available. The devices feature remote on/off control with positive or negative logic, UVLO is included, and no minimum load is required.

The parts are designed specifically for rolling stock applications with nominal input voltages of 48 V, 72 V, or 110 VDC. They are EN 45545-2– and EN 50155–compliant and meet UL/IEC/EN 62368-1 for audio/video and IT applications. Full 3-kVAC/1-minute reinforced isolation is provided, and the parts comply with EMC “Class A” levels as well as rail EMC standard EN 50121-3-2. A separate protection module, RSP150-168, is available to protect against surges according to RIA12 and NF F01-51 standards.

The RPA20-FR series meets environmental standards required for rail applications, particularly EN 45545-2 for fire protection, EN 60068-2-1 for dry and damp heat, and EN 61373 for shock and vibration. Mean time between failure is rated over 1.5 Mhrs at 25°C according to MIL-HDBK-217F GB.

Cincon Electronics Co. Ltd. recently launched the EC3AW8 and EC4AW8 series, delivering 3 W and 6 W of regulated power, respectively, tailored for demanding industrial environments. Applications include instruments, industrial automation and control systems, telecom and data communication equipment, test and measurement, IPC and embedded systems, and IT systems.

The EC3AW8 and EC4AW8 DC/DC converters feature an ultra-wide 8:1 input voltage range. They are available with single-output voltages of 3.3, 5, 12, or 15 VDC and dual outputs of ±5, ±12, or ±15 VDC, and they offer an optional positive remote on/off control for ease of system integration.

With an ultra-wide input range from 9 to 75 VDC, the EC3AW8 and EC4AW8 series are suited for industrial and IT power systems such as 12 V, 24 V, and 48 V. They deliver high efficiency up to 87% and ensure reliable performance under harsh conditions. The operating temperature range is −40°C to 105°C (with de-rating), and the maximum case temperature is 115°C.

Other features include very low no-load input current (7 mA max. for 3 W; 8 mA max. for 6 W), reducing power consumption in standby mode, and a range of protection including input UVLO, output overvoltage protection, overcurrent protection, and continuous short-circuit protection.

These converters also meet key safety and electromagnetic-interference (EMI) standards, including EN 55032 Class A without an external filter, simplifying design and integration for space-constrained applications, Cincon said.

They are also compliant with MIL-STD-810F for shock and vibration and support operating altitudes up to 5,000 meters. They meet IEC/UL/EN 62368-1 safety standards and provide 3,000-VDC input-to-output isolation.

These DC/DC converters are housed in a standard industrial DIP-24 package measuring 1.25 × 0.8 × 0.4 inches (31.8 × 20.3 × 10.2 mm).

Space and satellites

Micross Components Inc. recently introduced a series of Class H+-screened DC/DC converters for harsh space-based applications. The AFLS28XX Series of DC/DC converters delivers a radiation-tolerant power conversion solution for low-Earth-orbit (LEO) satellite constellations, new space missions, launch vehicles, and other space-based systems.

The AFLS series of 28-V, 120-W DC/DC converters builds on the AFL series, with updated technology and design enhancements. These converters meet MIL-PRF-38534 Class H screening requirements and include additional tests such as PIND and radiography to support reliability in LEO and new space environments. The AFLS series offers radiation specifications of 50-krad (Si) TID and 60-MeV·cm2/mg SEE.

These devices are tailored for space missions requiring radiation tolerance at a lower cost than traditional space-grade-qualified power supplies, Micross said.

The hermetically packaged DC/DC converters are available in single- and dual-output voltage configurations ranging from 5 V to 28 V. They feature proprietary magnetic pulse feedback for optimized dynamic line and load regulation and parallel operation for outputs above 120 W, with synchronization capability to a system clock in the 525-kHz range.

Other features include internal current sharing for balanced load distribution and high power density with no de-rating across the full operating temperature range. In addition, they meet reduced size, weight, and power (SWaP) requirements by eliminating shielding requirements and delivering lower power consumption.

These parts are currently under test, and engineering samples are available within four to six weeks ARO.

Micross’s AFLS series (Source: Micross Components Inc.)

Also targeting space applications is a series of off-the-shelf, 15-W DC/DC converters from Microchip Technology Inc. This space-grade, non-hybrid DC/DC isolated power converter with a companion EMI filter operates from a 28-V satellite bus in harsh environments.

The SA15-28 radiation-hardened DC/DC power converter with a companion SF100-28 EMI filter are designed to meet MIL-STD-461 specifications. The SA15-28 and SF100-28 are fully compatible with Microchip’s existing SA50 series of power converters and SF200 filter.

The SA15-28 operates across a wide temperature range from −55°C to 125°C and offers radiation tolerance up to 100 krad TID. It is available with 5-V triple outputs that can be used with point-of-load converters and low-dropout linear regulators to power FPGAs and microprocessors. The output voltage combinations can be customized.

The SA15-28 weighs 60 grams and is approximately 1.68 in.3 to meet SWaP requirements. Microchip provides comprehensive analysis and test reports including worst-case analysis, electrical stress analysis, and reliability analysis. The SA15-28 DC/DC power converter and SF100-28 external EMI filter are now available.

Microchip’s SA15-28 DC/DC converter (Source: Microchip Technology Inc.)

Brick converters

Advanced Energy Industries Inc. recently added two quarter-brick modules to its ultra-efficient, non-isolated bus converter family for 48-V power conversion. These DC/DC converters target advanced information and communication technology equipment including AI servers, compute and networking, and industrial applications such as robotics and test and measurement.

The Advanced Energy Artesyn NDQ1300 1,300-W and NDQ1600 1,600-W quarter-brick modules operate with peak efficiencies up to 98%, making them suited for high-performance applications. Each of the modules can convert a 48-V input into a fully regulated, 12-V output for non-isolated, low-voltage, high-current power stages as well as PCIE slots and memory devices.

The NDQ devices feature a flat efficiency curve that ensures that the modules deliver optimized power conversion across a wide load range. They also feature an integrated PMBus interface to support flexible digital control and monitoring as well as current-share and remote-sensing options to enable the connection of multiple power supplies in parallel, supporting higher load current or redundancy.

The NDQ modules use an advanced baseplate for better thermal management and heat-sink integration. They also benefit from an inherently safe, transformer-based topology that is resilient to transient loads and makes designing applications for inrush current control on startup easier, the company said.

Advanced Energy’s NDQ1300 quarter-brick module (Source: Advanced Energy Industries Inc.)

Another new converter in a brick format is Bel Fuse’s compact, 100-W DC/DC converter for rugged applications such as industrial automation, railway systems, telecom infrastructure, and electric vehicles/e-mobility. The PRA100 Series is housed in a standard 1/8th brick format, addressing the increased need for higher power density. The devices provide enhanced thermal performance, wide input flexibility, and an environmentally robust design.

The PRA100 operates across a 9-VDC to 74-VDC input range and delivers up to 54-V output with 3,000-VDC isolation. The operating temperature is −40°C to 105°C. All models are fully compliant with EN 62368-1 and carry CE, UKCA, and UL/cUL certifications. It is also compliant with EN 50155, making it well-suited for railway applications. The series offers optional baseplate cooling and negative logic features to extend its versatility in harsh conditions and EV platforms, Bel Fuse said.

Bel Fuse’s PRA100 Series (Source: Bel Fuse)

DC/DC converter modules

TDK Corp. developed a series of its microPOL (μPOL) power modules with full telemetry (voltage, current, and temperature). The FS160* series μPOL DC/DC converters deliver high power density in the smallest package sizes.

All FS160* μPOL modules measure 3.3 × 3.3 × 1.35 mm, making it easier to place them near complex ICs such as ASICs, FPGAs, and SoCs. Full telemetry is accessible via an I2C interface. The modules operate across a broad junction temperature range from −40°C to 125°C.

There are several versions of each of the 3-A parts (the FS1603 series), 4-A parts (the FS1604 series), and 6-A parts (the FS1606 series). The FS line also includes models at 12 A (the FS1412) and 25 A (the FS1525). The selection of DC/DC converter modules that range from 3 A to 200 A (if eight FS1525s are connected in parallel) covers a wide range of applications, including big data, machine learning, AI, 5G cells, IoT, and enterprise computing.

TDK calls the module family’s configuration innovative: a high-performance controller, drivers, MOSFETs, and logic core are integrated using semiconductor-embedded-in-substrate packaging, which eliminates wire bonds and enhances thermal performance. The modules’ inductor and passives are also integrated into the chip-embedded package to minimize parasitic inductance, improving efficiency. Boot and Vcc capacitors are incorporated into the module as well.

The FS160* series DC/DC converters deliver 1 W/mm3 in modules that are roughly half the size of other products in the same class, according to the company. In addition, TDK said the modules are effective enough to require no airflow at output levels of 15 W to 30 W in ambient temperatures up to 100°C.

TDK has created multiple design tools, including tools specific to FPGAs from each of the major FPGA suppliers. Additional design tools for the FS160* series include SPICE simulator designs on QSPICE.

Evaluation boards are available, one each for modules at 3 A, 4 A, and 6 A. Fast starter designs for schematic and PCB layout are available at Ultra Librarian.

TDK’s FS160 μPOL DC/DC converters (Source: TDK Corp.)

Aimed at the industry’s shift to high-performance, 48-V systems, Vicor Corp. launched its 48-V to 12-V DCM DC/DC converter modules last year. The DCM3717 and DCM3735 DC/DC power modules, offering up to 2 kW of output power, support the shift to 48-V power delivery networks (PDNs) that provide greater power system efficiency, power density, and lower weight than 12-V-based PDNs in a variety of applications, including communications, computing, automotive, and industrial.

The DCM products are non-isolated, regulated DC/DC converters, operating from a 40-V to 60-V input to generate a regulated output adjustable from 10 V to 12.5 V. The DCM3717 family is available in two power ranges, 750 W and 1 kW, and the DCM3735 is a 2-kW device. These DCM products can be paralleled with up to four modules to scale system power levels.

Claiming industry-leading power density at 5 kW/in.3, these high-density power modules enable power system designers to deploy 48-V PDNs for legacy 12-V loads, delivering size, weight, and efficiency benefits. The devices achieve 96% efficiency in a low-height, surface-mount package while providing a 6× reduction in size.

The smaller module is the DCM3717, with a wide input range of 40–60 V (48-V nominal) and an output of 10–12.5 V (12-V nominal). It comes with two power options, 750 W and 1 kW, and 96.5% efficiency. The module is housed in a compact, 36.7 × 17.3 × 5.2-mm footprint.

In a side-by-side comparison with a top competing product, the DCM3717 is less than half the size, with 20% higher output power and 7× higher power density, according to the company.

The larger device, the DCM3735, offers the same wide input range of 40–60 V (48 V nominal) and output of 10–12.5 V (12 V nominal). The power option is 2 kW with 96.4% efficiency. The module is housed in a compact, 36.7 × 35.4 × 5.2-mm footprint.

Vicor’s DCM3717 and DCM3735 DC/DC power modules (Source: Vicor Corp.)

The post Top 10 DC/DC converters and modules appeared first on EDN.

My setup

Reddit:Electronics - Wed, 04/08/2026 - 15:35
My setup

Rate my setup. I know that the cable management is shit, but I have only one plug.

submitted by /u/kiklop777

Antilog PWM and 2-way current mirror make buffered triangle and square waves

EDN Network - Wed, 04/08/2026 - 15:00

It can be fun (and productive!) to transplant a previous Design Idea into a new context, and even more so when modifying and mixing multiple ideas.  Here we’ll combine and comingle the following: 

  1. 5 decade antilogarithmic PWM current source
  2. A two-way mirror—current mirror that is
  3. Dual RRIO op amp makes buffered and adjustable triangles and square waves

This gets the buffered triangle and square-wave output oscillator shown in Figure 1.  It’s linear-in-log tunable from 10 Hz to 1 MHz and controlled with 8-bit PWM.

Figure 1 Incoming 8-bit antilog PWM interface (U1, U2, A1, Q1) generates 80 nA to 8 mA current to control 10 Hz to 1 MHz oscillator (Q2, Q3, Q4, A2, A3). The asterisked parts are precision (metal film) resistors and (C0G) capacitors.

Wow the engineering world with your unique design: Design Ideas Submission Guide

We’ll now proceed to vivisect it.

A single MCU 500 kHz (2 μs per count) PWM output bit controls the anti-log current source.  It’s isolated in blue in Figure 2 and works as explained in reference 1 above.

Figure 2 The U1 U2 switching circuit periodically charges precision timing cap Ct to 1.24 V, then exponentially discharges it at (Rt + R1)Ct = 43.4 μs time-constant, storing the result on sample and hold Csh.

The final sample-and-hold antilog Csh voltage = 1.24 V × exp(−Tpwm/43.4 μs), spanning 1.184 V down to 11.8 μV as Tpwm goes from 2 μs to 500 μs (1 to 250 LSB), for a Q1 five-decade collector current range of Vcsh/R4 = 8 mA to 80 nA.  R1 provides for time constant fine-tuning.
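The exponential relationship above can be sketched numerically. Note that R4 is not stated in the text; the value below is inferred (an assumption) so that the minimum PWM setting yields the quoted 8-mA endpoint:

```python
import math

TAU_US = 43.4   # (Rt + R1)*Ct discharge time constant, in microseconds
VREF = 1.24     # charge voltage on the timing cap, in volts
# R4 inferred (assumption) so that a 1-LSB (2 us) PWM gives 8 mA: about 148 ohms
R4 = VREF * math.exp(-2 / TAU_US) / 8e-3

def collector_current(pwm_lsb, lsb_us=2.0):
    """Q1 collector current for a PWM setting of 1..250 LSB (2 us per count)."""
    t_us = pwm_lsb * lsb_us
    vcsh = VREF * math.exp(-t_us / TAU_US)  # sample-and-hold voltage
    return vcsh / R4

print(f"{collector_current(1)*1e3:.1f} mA")   # 8.0 mA at minimum PWM
print(f"{collector_current(250)*1e9:.0f} nA")  # on the order of 80 nA at maximum
```

Each PWM count multiplies the current by exp(−2/43.4) ≈ 0.955, which is what makes the tuning linear-in-log.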

Steering and periodic inversion/reflection of the 80nA to 8mA Q1 collector current into integrator A2 is the job of the Q2, Q3, and Q4 two-way current mirror.  It’s covered in reference 2 and in blue in Figure 3.

Figure 3 A two-way current mirror (Q2, Q3) ramps the A2/C1 integrator up/down at dV/dt rates ranging from 8E1 to 8E6 volts per second (V/s).  Q4 reduces the loading of A3 at high current/frequency while acting as the reference 2 D1.

Comparator A3 switches the current-mirror polarity when A2’s output reaches the 0.5-V and 4.5-V limits, similar to the theory of operation of reference 3; the limits are determined here by the resistor networks shown below in Figure 4.

Figure 4 R5 and R6 set the comparator’s 0.5-V/4.5-V switching points and thus the triangle wave’s 4-Vpp amplitude.

The output frequency versus the PWM setting-controlled current sink is shown in Figure 5.

Figure 5 Frequency versus PWM setting: linear (black) vs log (red).

And that’s the name of that (antilogarithmic) tun(ing).

Stephen Woodward's relationship with EDN's DI column goes back quite a long way. More than 200 of his submissions have been accepted since his first contribution back in 1974, including the best Design Idea of the year in 1974 and 2001.

Related Content

  1. 5 decade antilogarithmic PWM current source
  2. A two-way mirror—current mirror that is
  3. Dual RRIO op amp makes buffered and adjustable triangles and square waves

The post Antilog PWM and 2-way current mirror make buffered triangle and square waves appeared first on EDN.

IMUs demystified: The hidden sense of machines

EDN Network - Wed, 04/08/2026 - 12:49

Motion is invisible until something makes it measurable. That is where inertial measurement units (IMUs) step in—the silent sensors that give machines their hidden sense of balance, orientation, and trajectory. From smartphones that know when you have rotated the screen, to drones that hold steady against the wind, IMUs translate raw acceleration and angular velocity into actionable awareness.

In this installment of Fun with Fundamentals, we will peel back the layers of these compact marvels, showing how they evolved from bulky gyroscopes into today’s precision-packed silicon companions.

The silent navigators: IMUs

An IMU is a compact, high-precision device that captures how an object moves and orients itself in space. Whether steering rockets into orbit, stabilizing drones overhead, or enabling smartphones to guide us through crowded streets, IMUs are the unseen systems that make modern navigation possible.

At the heart of an IMU are sensors that detect linear acceleration with accelerometers and rotational velocity with gyroscopes. Many designs also incorporate a magnetometer to provide heading information. A typical configuration combines a 3-axis accelerometer and a 3-axis gyroscope, forming a 6-axis IMU. When a 3-axis magnetometer is added, the system becomes a 9-axis IMU. Together, these sensors deliver measurements of specific force, angular rate, and surrounding magnetic fields—producing a complete dataset for motion and orientation tracking.

The accelerometers, gyroscopes, and—when included—magnetometers inside an IMU are collectively referred to as inertial sensors. These components form the foundation of inertial navigation, working together to capture motion and orientation data without relying on external signals. By fusing their outputs, engineers can derive precise information about how a device moves through space, even in environments where GPS or other external references are unavailable.

To recap: accelerometers measure linear acceleration, capturing how quickly an object speeds up or slows down; gyroscopes sense angular velocity, revealing the rate and direction of rotation; and magnetometers, when included, detect magnetic fields and provide heading relative to Earth's magnetic north.
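As a minimal illustration of such sensor fusion (a textbook technique, not taken from any particular IMU library), a complementary filter blends short-term gyro integration with the accelerometer's long-term gravity reference. The ALPHA weight and 100-Hz sample rate below are arbitrary assumptions:

```python
import math

ALPHA = 0.98   # weight on the integrated-gyro path (assumed)
DT = 0.01      # s, 100 Hz sample rate (assumed)

def update_pitch(pitch, gyro_rate_dps, ax, az):
    """One complementary-filter step: the gyro term tracks fast motion,
    while the accelerometer's gravity vector corrects long-term drift."""
    accel_pitch = math.degrees(math.atan2(ax, az))  # tilt from gravity
    return ALPHA * (pitch + gyro_rate_dps * DT) + (1 - ALPHA) * accel_pitch

# Stationary, level device: gyro reads 0, gravity along +z.
pitch = 10.0  # start from a deliberately wrong estimate
for _ in range(500):
    pitch = update_pitch(pitch, 0.0, 0.0, 1.0)
# the accelerometer term pulls the estimate back toward 0 degrees
```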

It’s worth noting that engineers still deploy both 6-axis and 9-axis IMUs, depending on the demands of the application. A 6-axis unit, built from accelerometers and gyroscopes, is often sufficient for tasks like stabilizing drones, balancing robots, or monitoring automotive motion, where relative movement and rotation are the primary concerns.

In contrast, a 9-axis IMU adds a magnetometer, giving it the ability to resolve absolute heading. This makes it the preferred choice in smartphones, wearables, and advanced navigation systems, where orientation relative to Earth’s magnetic field is critical. In practice, the simpler 6-axis design remains a cost-effective workhorse, while the 9-axis variant dominates in consumer electronics and navigation-heavy applications.

Figure 1 A vintage mechanical inertial navigation system (INS) component achieves autonomous navigation by integrating an inertial measurement unit with a computational unit. Source: Author’s archives

Simply put, a typical IMU places one accelerometer and one gyroscope along each of the three principal axes, ensuring motion and rotation are captured in all directions. In some designs, a magnetometer is also added per axis to provide heading information, but this is not always the case—many IMUs operate effectively without it.

Beyond these core sensors, certain IMUs incorporate auxiliary elements such as temperature monitors, since accelerometers and gyroscopes are prone to thermal fluctuations that can compromise accuracy. By recording temperature data, the system compensates for thermal drift, stabilizing sensor outputs and improving overall reliability.

Evolution and types of IMUs

From the gimbaled IMUs of the aerospace pioneers to today’s miniaturized MEMS-based devices, IMUs have undergone a remarkable transformation. Early gimbaled systems relied on mechanically stabilized platforms, bulky yet precise, before giving way to strapdown IMUs that fixed sensors directly to the vehicle body, reducing size and complexity.

With the rise of microelectromechanical systems (MEMS), silicon MEMS IMUs became the standard for consumer electronics, robotics, and drones, prized for their low cost, compact size, and efficiency. For tactical and industrial applications, Quartz MEMS IMUs emerged, offering greater stability and resilience under temperature and vibration compared to silicon designs.

At the high-end, ring laser gyroscope (RLG) IMUs and fiber-optic gyroscope (FOG) IMUs represent the pinnacle of precision, both exploiting the Sagnac Effect to measure rotation. RLGs use laser beams circulating in a closed cavity, while FOGs rely on long coils of optical fiber—an approach that reduces maintenance needs and improves durability while delivering comparable accuracy.

Today, engineers select from this spectrum—silicon MEMS for affordability and portability, quartz MEMS for tactical reliability, and RLG/FOG systems for uncompromising accuracy—depending on mission requirements.

Figure 2 The Motus ultra‑high‑accuracy MEMS IMU enables precision in autonomous system applications. Source: Advanced Navigation

As a side note, it’s worth mentioning that while IMUs deliver raw measurements of acceleration and angular velocity, an attitude and heading reference system (AHRS) builds on this foundation by applying sensor fusion algorithms to provide stabilized orientation outputs: pitch, roll, yaw, and heading. In practice, AHRS units are IMUs with embedded processing, making them more directly usable in aircraft, marine, and robotic platforms where orientation data is required in real time.

Advanced IMU categories

Beyond the broad spectrum of MEMS and optical gyroscope technologies, IMUs can also be classified by their functional purpose. A north-seeking IMU is designed to determine true north without relying on external references such as the global navigation satellite system (GNSS) or magnetic compasses.

By exploiting the Earth’s rotation and combining precise gyroscope measurements, these systems achieve sub-degree heading accuracy, making them invaluable in marine navigation, underground operations, and defense applications where absolute orientation is critical.

In contrast, a navigation IMU focuses on tracking motion and orientation over time. It provides raw acceleration and angular velocity data that, when processed within an inertial navigation system (INS), yields position, velocity, and displacement. Navigation IMUs are widely deployed in aerospace, robotics, and consumer electronics, where continuous motion tracking and drift management are more important than absolute north-finding.
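The core INS computation is repeated integration: acceleration integrates once into velocity and again into position. A deliberately simplified 1-D Euler sketch (real systems add bias estimation, attitude compensation, and better integrators) shows the principle:

```python
DT = 0.01  # s, sample period (assumed)

def integrate(accels, v0=0.0, p0=0.0):
    """Simple Euler integration of a 1-D acceleration stream into
    velocity and position. Any sensor bias accumulates here, which is
    why INS drift grows over time without external correction."""
    v, p = v0, p0
    for a in accels:
        v += a * DT   # first integration: acceleration -> velocity
        p += v * DT   # second integration: velocity -> position
    return v, p

# 1 m/s^2 constant acceleration for 1 s: velocity ~1 m/s, position ~0.5 m
v, p = integrate([1.0] * 100)
```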

Together, these advanced categories highlight how IMUs are not only differentiated by sensor technology—silicon MEMS, quartz MEMS, RLG, or FOG—but also by the specific role they play in navigation systems, from heading determination to full trajectory tracking.

Practical pointers for engineering minds

IMUs are no longer the nightmares they once seemed. Thanks to today's accessible sensor modules, open-source libraries, and low-cost development boards, even a novice maker can experiment with inertial measurement units without needing aerospace-grade expertise. What was once the domain of defense labs and high-end avionics has now become approachable for hobbyists, students, and engineers alike, making hands-on exploration of motion sensing and navigation both practical and affordable.

First off, note that modern inertial modules often advertise “IMU, AHRS, and INS options” because the same hardware platform can deliver different levels of functionality depending on firmware and processing. At the most basic level, the unit acts as an IMU, outputting raw accelerometer and gyroscope data. With onboard sensor-fusion algorithms, it becomes an AHRS, providing stabilized orientation in pitch, roll, yaw, and heading.

When paired with a computational unit and often GNSS input, the same device scales up to a full INS, achieving autonomous navigation with position, velocity, and orientation. This tiered approach lets engineers choose the level of integration that matches their application, from hobbyist UAVs to aerospace systems.

Modern IMUs give engineers and makers practical choices across performance levels. High-end devices like Analog Devices’ ADIS16575/ADIS16576/ADIS16577 deliver factory calibration, low bias drift, and digital outputs for precision robotics, autonomous systems, and aerospace projects.

At the same time, compact modules such as Murata’s SCH16T-K01 integrate gyro and accelerometer sensing for embedded applications, wearables, and IoT nodes. Together, these platforms show how inertial technology now scales from aerospace-grade accuracy down to plug-and-play modules, offering practical options for projects at every level.

Figure 3 The SCH16T‑K01 module combines a high‑performance 3‑axis angular rate sensor and 3‑axis accelerometer, delivering precise motion tracking for embedded, wearable, and IoT applications. Source: Murata

Besides, makers and hobbyists do not need to wrestle with bare chips anymore—prewired IMU breakout boards are widely available and come with headers and libraries, making motion sensing experiments plug-and-play. For newer designs, boards built around ST's LSM6DSO/LSM6DSOX deliver reliable performance in a maker-friendly format, using parts that remain safe bets for ongoing projects.

Figure 4 Today’s prewired cards like the LSM6DSOX module—and other readily available IMU boards—let makers explore motion sensing with ease and enable reliable integration into advanced embedded projects. Source: Author

IMUs in practice and everyday life

We have not covered everything, but we have touched on the key fundamental and practical points. The journey through IMUs shows how these sensors are not just abstract components for engineers; they are part of our everyday lives. From the stabilizing gimbals that keep cameras steady, to the motion tracking inside wearables, gaming controllers, and even automotive systems, IMUs quietly enable the seamless experiences we take for granted.

Figure 5 Today’s IMUs act as the unseen hand across entertainment, healthcare, and navigation—guiding cameras, gimbals, ships, trains, satellites, and aerospace systems, while also enabling makers to explore motion sensing with ease and integrate it reliably into advanced projects. Source: Author

The call now is to explore further—experiment with modules, build small projects, and see firsthand how this complex yet approachable topic can transform ideas into motion-aware innovations.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post IMUs demystified: The hidden sense of machines appeared first on EDN.

Metallium completes Phase I SBIR contract within six months

Semiconductor today - Wed, 04/08/2026 - 11:50
Metallium Ltd of Subiaco, Western Australia, says that its subsidiary Flash Metals Texas Inc of Houston, TX, USA has completed Phase I of its Small Business Innovation Research (SBIR) contract with the US Department of War (DoW) through the Defense Logistics Agency (DLA)...

Memorial plaque unveiled in honor of Volodymyr Boiko

News - Wed, 04/08/2026 - 11:14

In June, on Volodymyr Boiko's birthday, a memorial plaque in his honor was unveiled in Building 18 (FIOT) of Kyiv Polytechnic.

Turning jitter into true random numbers

Reddit:Electronics - Wed, 04/08/2026 - 10:04

Turning jitter into true random numbers

I discovered that adding a single 1N4004 diode to a Schmitt trigger RC oscillator increases edge jitter by 15x, turning a simple 4-component circuit into a cryptographic-quality hardware RNG for microcontrollers.
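Raw jitter-sampled bits are typically biased toward one polarity; a classic whitening step (not necessarily the one used in the linked write-up) is von Neumann debiasing, which trades throughput for an unbiased output:

```python
def von_neumann(bits):
    """Map bit pairs 01 -> 0 and 10 -> 1; discard 00 and 11.
    Removes bias from independent but unfair coin flips, at the cost
    of discarding at least half of the raw bits."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# Biased stream (75% ones) -> unbiased output at a reduced rate
raw = [1, 1, 0, 1, 1, 0, 1, 1]
print(von_neumann(raw))  # -> [0, 1]
```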

I've done (What I think is) a pretty comprehensive write up of the project here:

https://siliconjunction.top/2025/12/04/practical-hardware-entropy-for-arduino-projects/

submitted by /u/elpechos

Roman Pedan, PhD student at the E.O. Paton Institute of Materials Science and Welding: "Working with something new is always interesting"

News - Wed, 04/08/2026 - 09:28

To support the most capable young researchers and encourage their scientific results, by order of the Ministry of Education and Science of Ukraine No. 1526 of 19.11.2025, PhD students have been awarded academic scholarships of the President of Ukraine for the 2025/2026 academic year, worth UAH 23,700 per month. Among the recipients are two polytechnicians: Andrii Makarchuk (Institute of Nuclear and Thermal Power Engineering, see "KP" No. 3-4, 2026) and Roman Pedan (E.O. Paton Institute of Materials Science and Welding).
