Feed aggregator

600-V MOSFET enables efficient, reliable power conversion

EDN Network - Fri, 01/30/2026 - 20:31

The first device in AOS’ αMOS E2 high-voltage Super Junction MOSFET platform is the AOTL037V60DE2, a 600-V N-channel MOSFET. It offers high efficiency and power density for mid- to high-power applications such as servers and workstations, telecom rectifiers, solar inverters, motor drives, and other industrial power systems.

Optimized for soft-switching topologies, the AOTL037V60DE2 delivers low switching losses and is well suited for Totem Pole PFC, LLC and PSFB converters, as well as CrCM H-4 and cyclo-inverter applications. The device is available in a TOLL package and features a maximum RDS(on) of 37 mΩ.

AOS engineered the αMOS E2 high-voltage Super Junction MOSFET platform with a robust intrinsic body diode to handle hard commutation events, such as reverse recovery during short-circuits or start-up transients. Evaluations by AOS showed that the body diode can withstand a di/dt of 1300 A/µs under specific forward current conditions at a junction temperature of 150 °C. Testing also confirmed strong Avalanche Unclamped Inductive Switching (UIS) capability and a long Short-Circuit Withstanding Time (SCWT), supporting reliable operation under abnormal conditions.

The AOTL037V60DE2 is available in production quantities at a unit price of $5.58 for 1000-piece orders.

AOTL037V60DE2 product page

Alpha & Omega Semiconductor 

The post 600-V MOSFET enables efficient, reliable power conversion appeared first on EDN.

Stable LDOs use small-output caps

EDN Network - Fri, 01/30/2026 - 20:31

Based on Rohm’s Nano Cap ultra-stable control technology, the BD9xxN5 series of LDO regulator ICs delivers 500 mA of output current. The series is intended for 12-V and 24-V primary power supply applications in automotive, industrial, and communication systems.

The BD9xxN5 series builds on the earlier BD9xxN1 series, increasing the output current from 150 mA to 500 mA while maintaining stability with small output capacitors. The ICs provide low output voltage ripple (~250 mV) for load current changes from 1 mA to 500 mA within 1 µs. Using a typical output capacitance of 470 nF, they enable compact designs and flexible component selection.

All six new variants in the BD9xxN5 series are AEC-Q100 qualified and operate over a temperature range of –40°C to +125°C. Each device provides a single output of 3.3 V, 5 V, or an adjustable voltage from 1 V to 18 V, accurate to within ±2.0%. The absolute maximum input voltage rating is 45 V.

The BD9xxN5 LDO regulators are available now from Rohm’s authorized distributors. Datasheets for each variant can be accessed here.

Rohm Semiconductor 

The post Stable LDOs use small-output caps appeared first on EDN.

1200-V SiC modules enable direct upgrades

EDN Network - Fri, 01/30/2026 - 20:31

Five 1200-V SiC power modules in SOT-227 packages from Vishay serve as drop-in replacements for competing solutions. Based on the company’s latest generation of SiC MOSFETs, the modules deliver higher efficiency in medium- to high-frequency automotive, energy, industrial, and telecom applications.

The VS-SF50LA120, VS-SF50SA120, VS-SF100SA120, VS-SF150SA120, and VS-SF200SA120 power modules are available in single-switch and low-side chopper configurations. Each module’s SiC MOSFET integrates a soft body diode with low reverse recovery. This reduces switching losses and improves efficiency in solar inverters and EV chargers, as well as server, telecom, and industrial power supplies.

The modules support drain currents from 50 A to 200 A. The VS-SF50LA120 is a 50-A low-side chopper with 43-mΩ RDS(on), while the VS-SF50SA120 is a 50-A single-switch device rated at 47 mΩ. Single-switch options scale to 100 A, 150 A, and 200 A with RDS(on) values of 23 mΩ, 16.8 mΩ, and 12.1 mΩ, respectively.
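For a rough sense of what those on-resistance figures mean in practice, the short sketch below estimates conduction loss at an assumed 30-A load current. The arithmetic is mine, not Vishay data, and it ignores the rise of RDS(on) with temperature as well as all switching losses.

```python
# Rough conduction-loss comparison (illustrative arithmetic, not Vishay data):
# P = I^2 * RDS(on), evaluated at an assumed 30 A load current for each module.
modules = {
    "VS-SF50LA120": 43e-3,    # RDS(on) in ohms
    "VS-SF50SA120": 47e-3,
    "VS-SF100SA120": 23e-3,
    "VS-SF150SA120": 16.8e-3,
    "VS-SF200SA120": 12.1e-3,
}
I_load = 30.0  # A, assumed operating point, well below the 50 A to 200 A ratings

for part, rds_on in modules.items():
    p_cond = I_load ** 2 * rds_on  # conduction loss only; switching losses excluded
    print(f"{part}: ~{p_cond:.1f} W conduction loss at {I_load:.0f} A")
```

Lower RDS(on) translates directly into lower I²R loss at a given current, which is why the larger single-switch parts can be attractive even when the full 150-A or 200-A rating is not needed.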

Samples and production quantities of the VS-SF50LA120, VS-SF50SA120, VS-SF100SA120, VS-SF150SA120, and VS-SF200SA120 are available now, with lead times of 13 weeks.

Vishay Intertechnology 

The post 1200-V SiC modules enable direct upgrades appeared first on EDN.

CFIUS clears Wolfspeed issuance of equity to Renesas as part of court-approved restructuring

Semiconductor today - Fri, 01/30/2026 - 18:33
Wolfspeed Inc of Durham, NC, USA — which makes silicon carbide (SiC) materials and power semiconductor devices — says that the Committee on Foreign Investment in the United States (CFIUS) has formally cleared its issuance of equity to Renesas Electronics America Inc, completing a key component of Wolfspeed’s restructuring agreement with its lender group in support of its Chapter 11 process...

5N+ awarded $18.1m US grant to boost germanium production capacity

Semiconductor today - Fri, 01/30/2026 - 18:21
Specialty semiconductor and performance materials producer 5N Plus Inc (5N+) of Montréal, Québec, Canada has been awarded a US$18.1m grant by the US Government to expand capabilities and increase capacity to recycle and refine germanium at its St. George, Utah facility, to feed optics and solar germanium crystal supply chains...

Chandra X-Ray Mirror

EDN Network - Fri, 01/30/2026 - 15:00

There is a Neil deGrasse Tyson video covering the topic of the Chandra X-ray Observatory. This essay is in part derived from that video. I suggest that you view the discussion. It will be sixty-five minutes well spent.

This device doesn’t look anything like a planar mirror because X-ray photons cannot be reflected by any known surface in the way you see your reflection above your bathroom sink.

If you aim a stream of X-ray photons directly toward any particular surface, either a silvered mirror or some kind of intended lens, those photons will either pass right on through (which is what your medical X-rays do) or they will be absorbed. You will not be able to alter the trajectory of an X-ray photon stream, at least not with any device like that.

However, X-ray photons can be grazed off a reflective surface to achieve a slight trajectory change if their initial angle of approach to the mirror surface is kept very small. With the surface of the Chandra X-ray mirror made extremely smooth, almost down to the atomic level, repeated grazing permits X-ray focus to be achieved. This is the operating principle of the Chandra X-ray Telescope’s mirror, as shown in Figure 1.
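To put rough numbers on that grazing idea, here is a simple geometric sketch. It is my own illustration with an assumed sub-degree grazing angle, not Chandra's actual mirror prescription; the only physics used is that a specular reflection bends a ray by twice its grazing angle.

```python
# Simple grazing-incidence geometry sketch (illustrative only, not Chandra's
# actual optical figures): each specular reflection bends the ray by twice the
# grazing angle, so a pair of shallow bounces still adds up to a usable deflection.
grazing_angle_deg = 0.8   # assumed grazing angle, well under one degree
reflections = 2           # Wolter-type telescopes use two grazing reflections per shell

deflection_per_bounce = 2 * grazing_angle_deg
total_deflection = reflections * deflection_per_bounce
print(f"Bend per reflection: {deflection_per_bounce:.1f} degrees")
print(f"Total bend after {reflections} reflections: {total_deflection:.1f} degrees")
```

Aim the photons steeply instead and, as noted above, they simply pass through or are absorbed.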

Figure 1 The Chandra X-Ray Observatory mirrors showing a perspective view, a cut-away view, and X-ray photon trajectories. (Source: StarTalk Podcast)

The Chandra Observatory was launched on July 23, 1999, and has been doing great things ever since. Regrettably, however, its continued operation is in some jeopardy. Please see the following Google search result.

Figure 2 Google search result of the Chandra Telescope showing science funding budget cuts for the Chandra X-ray Observatory going from $69 million to zero. (Source: Google, 2026)

I’m keeping my fingers crossed.

John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related Content

The post Chandra X-Ray Mirror appeared first on EDN.

Vishay Intertechnology’s 1200 V SiC MOSFET Power Modules for Power Efficiency

ELE Times - Fri, 01/30/2026 - 12:37

Vishay Intertechnology, Inc. has introduced five new 1200 V MOSFET power modules designed to increase power efficiency for medium to high frequency applications in automotive, energy, industrial, and telecom systems. The Vishay Semiconductors VS-SF50LA120, VS-SF50SA120, VS-SF100SA120, VS-SF150SA120, and VS-SF200SA120 feature Vishay’s latest generation silicon carbide (SiC) MOSFETs in the industry-standard SOT-227 package.

Offered in single switch and low side chopper configurations, each power module released today features a SiC MOSFET integrated with a soft body diode offering low reverse recovery. The result is reduced switching losses and increased efficiency for solar inverters; off-board chargers for electric vehicles (EVs); SMPS, DC/DC converters, UPS, and HVAC systems; large-scale battery storage systems; and telecom power supplies.

The compact SOT-227 package of the VS-SF50LA120, VS-SF50SA120, VS-SF100SA120, VS-SF150SA120, and VS-SF200SA120 allows the devices to serve as drop-in replacements for competing solutions in existing designs, enabling designers to adopt one of the newest SiC technologies without the expense of changing PCB layouts. The moulded package also offers electrical insulation up to 2500 V for one minute, lowering costs by eliminating the need for additional insulation between the component and heatsink.

The power modules provide continuous drain current from 50 A to 200 A and low on-resistance down to 12.1 mΩ. The RoHS-compliant devices deliver high-speed switching with low capacitance and offer a high maximum operating junction temperature of +175 °C.

Device Specification Table:

Part #          VDSS     ID      RDS(on)    Configuration       Package
VS-SF50LA120    1200 V   50 A    43 mΩ      Low side chopper    SOT-227
VS-SF50SA120    1200 V   50 A    47 mΩ      Single switch       SOT-227
VS-SF100SA120   1200 V   100 A   23 mΩ      Single switch       SOT-227
VS-SF150SA120   1200 V   150 A   16.8 mΩ    Single switch       SOT-227
VS-SF200SA120   1200 V   200 A   12.1 mΩ    Single switch       SOT-227

Samples and production quantities of the VS-SF50LA120, VS-SF50SA120, VS-SF100SA120, VS-SF150SA120, and VS-SF200SA120 are available now, with lead times of 13 weeks.

The post Vishay Intertechnology’s 1200 V SiC MOSFET Power Modules for Power Efficiency appeared first on ELE Times.

Budget 2026-27: Can New PLI Schemes Drive India’s A&D Tech Sovereignty?

ELE Times - Fri, 01/30/2026 - 12:05

As the Union Budget approaches, the spotlight intensifies on India’s aerospace and defence technology sector, a vertical that has transitioned from a heavy importer to a nascent global manufacturing hub. The numbers tell a story of aggressive scaling: under the aegis of “Atmanirbhar Bharat,” domestic defence production surged to a historic ₹1.27 lakh crore in FY 2023-24.

However, for an industry eyeing a $5 trillion economy, this record high is viewed not as a finish line, but as a baseline. The shift toward indigenous manufacturing has fundamentally rewired the nation’s military-industrial complex, replacing foreign dependency with homegrown R&D and high-tech sovereignty. As the government prepares to lay out its fiscal roadmap, the industry is looking for more than just procurement orders; it is looking for deep-tech incentives, streamlined export pathways, and sustained capital outlay.

This budget will be a litmus test for India’s self-reliance commitment. Stakeholders are bracing for announcements that could further catalyse the aerospace ecosystem, ensuring that “Made in India” weaponry and avionics don’t just meet internal security needs but become a cornerstone of India’s global economic footprint.

The Tech-Sovereignty Mandate

As the lines between commercial innovation and battlefield superiority blur, technology has emerged as the definitive fulcrum for India’s tri-service modernisation. Industry experts argue that the upcoming Budget presents a pivotal window to institutionalise this convergence through aggressive structural reforms.

Central to this discourse is the evolution of the Production Linked Incentive (PLI) Scheme. While the current framework has provided a vital tailwind for the drone industry, there is a growing consensus that a “narrow-lens” approach is no longer sufficient. To truly insulate India’s supply chain from global volatility, the PLI umbrella must expand to cover the high-stakes world of dual-use technologies.

In aerospace and defence technology development, AI and related capabilities will play a significant role.

“Investing in digital twins and simulation technology for testing and research in aviation and defence can boost precision and efficiency in the electronic manufacturing industry. Tecknotrove urges the government to prioritize use of digital twin technology in this financial budget. It’s a strategic move that will amplify innovation, save research and development and manufacturing costs, and drive India’s self-reliance in manufacturing. Digital twins aren’t just a trend—they’re a game-changer. With decades of expertise in digital twins for aviation and defence, we have seen this technology helping in at least 30% reduction in costs,” says Payal Gupta, Co-Founder, Director-Business Development, Tecknotrove Systems India Pvt. Ltd. 

The strategic roadmap for the FY 2026-27 fiscal cycle should ideally prioritise:

  • The Full Drone Spectrum: Moving beyond basic assembly to incentivise the manufacturing of high-endurance propulsion systems and autonomous flight controllers.
  • Electronic Warfare & Surveillance: Bringing Airborne Early Warning (AEW) systems, jamming devices, and advanced radar arrays under the incentive net to neutralise import dependencies.
  • The Robotics Frontier: Providing fiscal stimulus for indigenous sensors and robotic systems that will define the future of unmanned combat and deep-space communication.

By widening these incentive corridors, the government can transform the “Make in India” initiative from a manufacturing slogan into a high-tech powerhouse, ensuring that the next generation of aerospace sensors and AI-driven robotics are conceived, designed, and built on Indian soil.

By: Shreya Bansal, Sub-Editor

The post Budget 2026-27: Can New PLI Schemes Drive India’s A&D Tech Sovereignty? appeared first on ELE Times.

How to Build a Hacker-Proof Car: Insights from the Auto EV Tech Summit

ELE Times - Fri, 01/30/2026 - 11:14

Speaking at the Auto EV Tech Vision Summit 2025, Suresh D highlights the major cyber vulnerabilities and the corresponding technologies required to enable a safer and more resilient automotive ecosystem. 

Recent studies suggest that electronic content in passenger vehicles (infotainment, ADAS, and a growing array of sensors) is set to increase by 20-40 percent in the near future, making automobiles the new battlefield for cyber developments. Underlining this trend at the Auto EV Tech Vision Summit 2025, held at KTPO, Bengaluru, on November 18–19, 2025, Suresh D, Group CTO, Minda Corporation, CEO, Spark Minda Tech Centre & Board Member, Spark Minda Green Mobility, said, “A passenger vehicle is expected to see a 20–40 percent increase—nearly doubling in some cases—over the next two to three years, bringing in a large number of on-board electronic systems. This will significantly increase software content and complexity.”

He adds that this shift will make operating systems and other software indispensable, escalating the security stakes in automobiles.

Critical Challenges on the way 

He notes that the new architectural paradigm of SDVs, in which distributed architectures are being replaced by centralized or zonal architectures, also poses security challenges. And because new vehicles remain constantly connected, whether via V2V or V2I links, the exposure to cyber risks escalates.

Further, he touches upon the critical challenges to be tackled, including phishing, hacking, snooping, and malware. He then recounts some of the notable cyberattacks the automotive industry has seen in the recent past, ranging from the CAN spoofing of a Jeep Cherokee in 2014 to the TESLAMATE attack on Tesla cars in 2025, showing how the question of cybersecurity has become more relevant than ever.

Curious Case of SDVs & EVs 

As EVs are on the rise across the world, Suresh D highlights how EV expansion and the need for robust charging systems also aggravate the risk. He explains that if an attacker compromises a supplier’s build server, BMS parameters can be tampered with, whether through a compromised internal bus or a malicious charging station.

For SDVs, the risk sources he underlines include attack scenarios ranging from unauthorized root access and pivoting through fleet-management backends to compromised third-party apps and poorly protected cryptographic keys.

How to Tackle this? 

In the latter part, he touches upon the important steps that can be taken to mitigate these risks and create a safer, more reliable cyber ecosystem for automobiles. First among them is the system-architecture approach. He says, “It refers to developing a robust architecture—understanding the OEM’s architecture and aligning the product accordingly.” He sums it up as thinking well ahead of the OEMs. The approach also includes hardware-level encryption and decryption to ensure that no vulnerability remains open to exploitation.

He also outlines a distinct approach, Embedded Edge Solutions, which means solving the problem at the source. It includes protections such as secure flashing and secure boot, handled through the OEM’s plant server, which generates distinct private keys for each unit for subsequent authorization.
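As an illustration of what such per-unit signing and verification can look like, here is a minimal sketch. It is not Minda’s or any OEM’s implementation; the key scheme, payload, and function names are assumptions, and it relies on the third-party Python cryptography package.

```python
# Illustrative sketch (not any OEM's implementation): per-unit key provisioning
# and firmware signature verification, mimicking the "secure flashing" idea above.
# Assumes the third-party 'cryptography' package is installed.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# --- At the plant server: generate a distinct key pair for one ECU unit ---
unit_private_key = ed25519.Ed25519PrivateKey.generate()
unit_public_key = unit_private_key.public_key()   # provisioned into the ECU at production

# --- When releasing a firmware update: sign the image with the unit's key ---
firmware_image = b"placeholder firmware bytes"    # stand-in payload
signature = unit_private_key.sign(firmware_image)

# --- On the ECU during secure flashing: verify before writing to flash ---
def flash_if_authentic(image: bytes, sig: bytes) -> bool:
    try:
        unit_public_key.verify(sig, image)        # raises if image or signature was tampered with
    except InvalidSignature:
        return False                              # reject the update
    # write_to_flash(image)                       # hypothetical flashing step
    return True

assert flash_if_authentic(firmware_image, signature)
assert not flash_if_authentic(firmware_image + b"\x00", signature)
```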

For SDVs, he highlights a telematics-based approach consisting of three layers: Layer 1, in-vehicle security; Layer 2, vehicle communication security; and Layer 3, the cloud infrastructure. When Internet Protocol is used for communication, whitelisting of IP addresses combined with SSL-based encryption and decryption enables a better, safer environment.

High Frequency Options:  Granting More Immunity

He also underlines how automobiles these days usually come with smart keys or keyless access to the vehicle. This technology, referred to as low-frequency RF (LF RF), is not immune to relay attacks, so the industry is gradually moving towards safer and more reliable options like Bluetooth and Ultra-Wideband (UWB), whose high-frequency operation makes decoding highly difficult.

He adds that even these technologies are prone to cyberattacks, whether at the server level or the device level. Consequently, techniques such as channel sounding with Bluetooth-based technology have been developed, which are more precise and help make authentication more secure. This offers a turnkey secure foundation, making automobiles reliable and secure.

The post How to Build a Hacker-Proof Car: Insights from the Auto EV Tech Summit appeared first on ELE Times.

Palo Alto Networks Unifies Observability and Security for the AI Era through Chronosphere Acquisition

ELE Times - Fri, 01/30/2026 - 08:38

As enterprises increasingly rely on AI to run digital operations, protect assets, and drive growth, success depends on one critical factor: trusted, high-quality, real-time data. Palo Alto Networks, the global cybersecurity leader, announced the completion of its acquisition of Chronosphere, addressing a core challenge of the AI era: the inability to see and secure the massive data volumes running modern businesses.

Chronosphere, a Leader in the 2025 Gartner Magic Quadrant for Observability Platforms, was purpose-built to handle this scale. While legacy tools break down in cloud-native environments, Chronosphere gives customers deep visibility across their entire digital estate. With this acquisition, Palo Alto Networks is redefining how organisations run at the speed of AI—by enabling customers to gain deep, real-time visibility into their applications, infrastructure, and AI systems — while maintaining strict control over data cost and value.

The planned integration of Palo Alto Networks Cortex AgentiX with Chronosphere’s cloud-native observability platform will allow customers to apply AI agents that can now find and fix security and IT issues automatically—before they impact the customer or the bottom line. AI security without deep observability is blind; this acquisition delivers the essential context across models, prompts, users, and performance to move from manual guessing to autonomous remediation.

Nikesh Arora, Chairman and CEO, Palo Alto Networks:

“Enterprises today are looking for fewer vendors, deeper partnerships, and platforms they can rely on for mission-critical security and operations. Chronosphere accelerates our vision to be the indispensable platform for securing and operating the cloud and AI. We believe that great security starts with deep visibility into all your data, and Chronosphere provides that foundation for our customers.”

Martin Mao, Co-founder and CEO, Chronosphere, is joining Palo Alto Networks as SVP, GM Observability and comments:

“Chronosphere was built to help the world’s most complex digital organisations operate at scale with confidence. Joining Palo Alto Networks allows us to bring AI-era observability to a global audience. Together, we’re delivering a new standard — where observability, security, and AI come together to give organisations control over their most valuable asset: data.”

The Chronosphere Telemetry Pipeline remains available as a standalone solution, enabling organisations to eliminate the ‘data tax’ associated with modern security operations. By acting as an intelligent control layer, the pipeline filters low-value noise to reduce data volumes by 30% or more while requiring 20x less infrastructure than legacy alternatives. This is key to Palo Alto Networks Cortex XSIAM strategy, ensuring customers can scale their security posture—not their spending—as they transition to autonomous, AI-driven operations.

The post Palo Alto Networks Unifies Observability and Security for the AI Era through Chronosphere Acquisition appeared first on ELE Times.

KPI scientist Yurii Yavorskyi is a laureate of the Verkhovna Rada of Ukraine Prize

News - Thu, 01/29/2026 - 22:07

Yurii Yavorskyi, associate professor at the Department of Physical Materials Science and Heat Treatment (FMTO) of the E. O. Paton Institute of Materials Science and Welding (IMZ), has received the Verkhovna Rada of Ukraine Prize for Young Scientists, one of the most prestigious state awards for young researchers.

KPI and KNDISE strengthen cooperation in forensic expertise

News - Thu, 01/29/2026 - 21:18

Igor Sikorsky Kyiv Polytechnic Institute and the Kyiv Scientific Research Institute of Forensic Expertise (KNDISE) held a working meeting to strengthen their partnership and bring scientific and educational cooperation to a new strategic level.

Wolfspeed unveils TOLT package portfolio

Semiconductor today - Thu, 01/29/2026 - 19:42
Wolfspeed Inc of Durham, NC, USA — which makes silicon carbide (SiC) materials and power semiconductor devices — has introduced its new TOLT package portfolio, which enables maximum power density in a power supply for data-center rack applications...

NEC develops high-efficiency compact power amplifier module for sub-6GHz band in 5G base-station radio units

Semiconductor today - Thu, 01/29/2026 - 19:32
Tokyo-based NEC Corp has developed a high-efficiency, compact power amplifier module (PAM) for the sub-6GHz band, designed for integration into 5G base-station radio units (RUs)...

Vishay launches 1200V SiC MOSFET power modules in SOT-227 packages

Semiconductor today - Thu, 01/29/2026 - 16:15
Discrete semiconductor and passive electronic component maker Vishay Intertechnology Inc of Malvern, PA, USA has introduced five new 1200V MOSFET power modules designed to increase power efficiency for medium- to high-frequency applications in automotive, energy, industrial and telecom systems...

Successive approximation

EDN Network - Thu, 01/29/2026 - 15:00

Analog-to-digital conversion methods abound, but we are going to take a look at a particular approach as shown in Figure 1.

Figure 1 An analog-to-digital converter where an analog input signal is compared to a voltage reference that has been scaled via a resistive ladder network. (Source: John Dunn)

In this approach, in very simplified language, an analog input signal is compared to a voltage reference that has been scaled via a resistive ladder network. Scaling is adjusted by finding the digital word for which a scaled version of Vref becomes equal to the analog input. The number of bits in the digital word can be chosen pretty much arbitrarily, but sixteen bits is not unusual. For illustrative purposes, however, we will use only seven bits.

Referring to a couple of examples as seen in Figure 2, the process runs something like this.

Figure 2 Two digital word acquisition examples using successive approximation. (Source: John Dunn)

For descriptive purposes, let the analog input be called our “target”. We first set the most significant bit (the MSB) of our digital word to 1 and all of the lower bits to 0. We compare the scaled Vref to the target to see if we have equality. If the scaled Vref is lower than the target, we leave the MSB at 1, or if the scaled Vref is greater than the target, we return the MSB to 0. If the two are equal, we have completion.

In either case, if we do not have completion, we then set the next lower bit to 1, and again we compare the scaled Vref to the target to see if we have equality. If the scaled Vref is lower than the target, we leave this second bit at 1, or if the scaled Vref is greater than the target, we return this second bit back to 0. If the two are equal, we have completion.

Again, in either case, if we do not have completion, we then set the next lower bit to 1, and again we compare the scaled Vref to the target to see if we have equality. If the scaled Vref is lower than the target, we leave this third bit at 1, or if the scaled Vref is greater than the target, we return this third bit to 0. If the two are equal, we have completion.

Sorry for the monotony, but that is the process. We repeat this process until we achieve equality, which can take as many steps as there are bits, and therein lies the beauty of this method.

We will achieve equality in no more steps than there are bits. For the seven-bit examples shown here, the maximum number of steps to completion will be seven. Of course, seven-bit converters aren’t something any company actually offers; the number “seven” simply allows viewable examples to be drawn below. Fewer bits might not make things clear, while more bits could have us squinting at the page with a magnifying glass.

If we did a simple counting process starting from all zeros, the maximum number of steps could be as high as 2⁷, or one hundred twenty-eight, which could/would be really slow.

Slow, straight-out counting would be a “tracking” process, which is sometimes used and which does have its own virtues. However, we can speed things up with what is called “successive approximation”.

Please note that the “1”, the “-1”, and the “0” highlighted in blue are merely indicators of which value is greater than, less than, or equal to the other.

A verbal description of this process for the target value of 101 may help shed some light. We then proceed as follows. (Yes, this is going to be verbose, but please trace it through.)

We first set the most significant bit with its weight value of 64 to a logic 1 and discover that the numerical value of the bit pattern is just that, the value 64. When we compare this to our target number of 101, we find that we’re too low. We will leave that bit where it is and move on.

We set the next lower significant bit with its weight value of 32 to a logic 1 and discover that the sum yielding the numerical value is now 64 + 32 = 96. When we compare this to our target number of 101, we find that we’re still too low. We will leave the pair of bits where they are and move on.

We set the next lower bit again with its weight value of 16 to a logic 1 and discover that the sum yielding the numerical value is now 64 + 32 + 16 = 112. When we compare this to our target number of 101, we find that we are now too high.  We will leave the first two most significant bits where they are, but we will return the third most significant bit to logic 0 and move on.

We set the next lower bit again with its weight value of 8 to a logic 1 and discover that the sum yielding the numerical value is now 64 + 32 + 0 + 8 = 104.  When we compare this to our target number of 101, we find that we are now again too high.  We will leave the first three most significant bits where they are, but we will return the fourth most significant bit to logic 0 and move on.

We set the next lower bit again with its weight value of 4 to a logic 1 and discover that the sum yielding the numerical value is now 64 + 32 + 0 + 0 + 4 = 100.  When we compare this to our target number of 101, we find that we’re once again too low. We will leave the quintet of bits where they are and move on.

We set the next lower bit again with its weight value of 2 to a logic 1 and discover that the sum yielding the numerical value is now 64 + 32 + 0 + 0 + 4 + 2 = 102.  When we compare this to our target number of 101, we find that we are now once again too high.  We will leave the first five most significant bits where they are, but we will return the sixth most significant bit to logic 0 and move on.

We set the lowest bit with its weight value of 1 to a logic 1 and discover that the sum yielding the numerical value is now 64 + 32 + 0 + 0 + 4 + 0 + 1 = 101; there is no error. We have completed our conversion in only seven comparison steps, which is far fewer than the number of steps that would have been required in a simple, direct counting scheme.
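For readers who prefer code to prose, the listing below is a minimal sketch of the same seven-bit search. The integer target stands in for the scaled analog input, and the comparison plays the role of the comparator checking the scaled Vref at each step; unlike the description above, it simply runs through all seven bit positions rather than stopping early on an exact match.

```python
# Minimal sketch of the 7-bit successive-approximation search described above.
def successive_approximation(target: int, bits: int = 7) -> int:
    code = 0
    for bit in reversed(range(bits)):   # MSB (weight 64) down to LSB (weight 1)
        trial = code | (1 << bit)       # tentatively set this bit to 1
        if trial <= target:             # scaled Vref not above the target:
            code = trial                #   keep the bit at 1
        # otherwise the bit returns to 0 (the trial value is simply discarded)
    return code

print(successive_approximation(101))    # -> 101, reached in exactly 7 comparisons
```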

It may be helpful to look at a larger number of digital word acquisition examples, as in Figure 3.

 

Figure 3 Digital word acquisitions with number paths. (Source: John Dunn)

Remember the old movie “Seven Brides for Seven Brothers”? For these examples, think “Seven Steps for Seven Bits”.

John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related Content

The post Successive approximation appeared first on EDN.

Apple CarPlay and Google Android Auto: Usage impressions and manufacturer tensions

EDN Network - Thu, 01/29/2026 - 15:00

What happens to manufacturers when your ability to differentiate whose vehicle you’re currently traveling in, far from piloting, disappears?

My wife’s 2018 Land Rover Discovery:

not only now has upgraded LED headlights courtesy of yours truly, I also persuaded the dealer a while ago to gratis-activate the vehicle’s previously latent Apple CarPlay and Google Android Auto facilities for us (gratis in conjunction with a fairly pricey maintenance bill, mind you…). I recently finally got around to trying them both out, and the concept’s pretty cool, with the implementation a close second. Here’s what CarPlay’s UI looks like, courtesy of Wikipedia’s topic entry:

And here’s the competitive Android Auto counterpart:

Vehicle-optimized user experiences

As you can see, this is more than just a simple mirroring of the default smartphone user interface; after the mobile device and vehicle successfully complete a bidirectional handshake, the phone switches into an alternative UI that’s more vehicle-amenable (specifically, mindful of driver-distraction potential) and tailored for the vehicle’s larger, albeit potentially lower overall resolution, dashboard-integrated display.

The baseline support for both protocols in our particular vehicle is wired, which means that you plug the phone into one of the USB-A ports located within the storage console located between the front seats. My wife’s legacy iPhone is still Lightning-based, so I’ve snagged both a set of inexpensive ($4.99 for three) coiled Lightning-to-USB-A cords for her:

and a similarly (albeit not quite as impressively) penny-pinching ($6.67 for two) suite of USB-C-to-USB-A coiled cords for my Google Pixel phones:

The wired approach is convenient because a single cord handles both communication-with-vehicle and phone charging tasks. That said, a lengthy strand of wire, even coiled, spanning the gap from the console to the magnetic mount located at the dashboard vent:

is aesthetically and otherwise unappealing, especially considering that the mount at the phone end also already redundantly supports both MagSafe (iPhone) and Qi (Pixel, in conjunction with a magnet-augmented case) charging functions:

Wireless communications

Therefore, I’ve also pressed into service a couple of inexpensive (~$10 each, sourced from Amazon’s Warehouse-now-Resale section) wireless adapters that mimic the integrated wireless facilities of newer model-year vehicles and even comprehend both the CarPlay and Android Auto protocols. One comes from a retailer called VCARLINKPLAY:

The other is from the “PakWizz Store”:

The approach here is somewhat more complicated. The phone first pairs with the adapter, already plugged into and powered by the car’s USB-A port, over Bluetooth. The adapter then switches both itself and the phone to a common and (understandably, given the aggregate data payload now involved) beefier 5 GHz Wi-Fi Direct link.
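Purely as an illustration of that two-stage sequence, here is a toy state-machine sketch. The state names and transitions are my own shorthand for the behavior described above, not any adapter vendor’s actual protocol or API.

```python
# Toy model of the connection sequence described above: Bluetooth pairing first,
# then a hop to a 5 GHz Wi-Fi Direct link, then projection. Names are illustrative.
from enum import Enum, auto

class LinkState(Enum):
    IDLE = auto()
    BT_PAIRED = auto()      # phone and adapter paired over Bluetooth
    WIFI_DIRECT = auto()    # both sides moved to the 5 GHz Wi-Fi Direct link
    PROJECTING = auto()     # CarPlay / Android Auto session running

TRANSITIONS = {
    LinkState.IDLE: LinkState.BT_PAIRED,
    LinkState.BT_PAIRED: LinkState.WIFI_DIRECT,
    LinkState.WIFI_DIRECT: LinkState.PROJECTING,
    LinkState.PROJECTING: LinkState.PROJECTING,   # steady state
}

state = LinkState.IDLE
for _ in range(3):
    state = TRANSITIONS[state]
print(state)   # LinkState.PROJECTING
```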

Particularly considering the interference potential from other ISM band (both 2.4 GHz for Bluetooth and 5 GHz for Wi-Fi) occupants contending for the same scarce spectrum, I’m pleasantly surprised at how reliable everything is, although initial setup admittedly wasn’t tailored for the masses and even caused techie-me to twitch a bit.

Encroaching on vehicle manufacturers’ turf

As such, I’ve been especially curious to follow recent news trends regarding both CarPlay and Android Auto. Rivian and Tesla, for example, have long resisted adding support for either protocol to their vehicles, although rumors persist that both companies are continuing to develop support internally for potential rollout in the future.

Automotive manufacturers’ broader embrace (public at least) for next-generation CarPlay Ultra has to date been muted at best. And GM is actively phasing out both CarPlay and Android Auto from new vehicle models, in favor of an internally developed entertainment software-and-display stack alternative.

What’s going on? Consider this direct quote from Apple’s May 2025 CarPlay Ultra press release:

CarPlay Ultra builds on the capabilities of CarPlay and provides the ultimate in-car experience by deeply integrating with the vehicle to deliver the best of iPhone and the best of the car. It provides information for all of the driver’s screens, including real-time content and gauges in the instrument cluster.

Granted, Apple has noted that in developing CarPlay Ultra, it’s “reflecting the automaker’s look and feel” (along with “offering drivers a customizable experience”). But given that all Apple showed last May was an Aston Martin logo next to its own:

I’d argue that Apple’s “partnership” claims are dubious, and maybe even specious. And per comments from Ford’s CEO Jim Farley in a recent interview, he seems to agree (the full interview is excellent and well worth a read):

Are you going to allow OEMs to control the vehicles? How far do you want the Apple brand to go? Do you want the Apple brand to start the car? Do you want the Apple brand to limit the speed? Do you want the Apple brand to limit access?

The bottom line, as I see it, is that Apple can pontificate all it wants that:

CarPlay Ultra allows automakers to express their distinct design philosophy with the look and feel their customers expect. Custom themes are crafted in close collaboration between Apple and the automaker’s design team, resulting in experiences that feel tailor-made for each vehicle.

But automakers like Ford and GM are obviously (and understandably so, IMHO) worried that with Apple and Google already taking over key aspects of the visual, touch (and audible; don’t forget about the Siri and Google Assistant-now-Gemini voice) interfaces, not to mention their even more aggressive aspirations (along with historical behavior in other markets as a guide to future behavior here), the manufacturer, brand and model uniqueness currently experienced by vehicle occupants will evaporate in response.

More to come

I’ll be curious to see (and cover) how this situation continues to develop. For now, I welcome your thoughts in the comments on what I’ve shared so far in this post. And FYI, I’ve also got two single-protocol wireless adapter candidates sitting in my teardown pile awaiting attention: a CarPlay-only unit from the “Luckymore Store”:

And an Android Auto-only unit, the v1 AAWireless, which I’d bought several years back in its original Indiegogo crowdfunding form:

Stay tuned for those, as well!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post Apple CarPlay and Google Android Auto: Usage impressions and manufacturer tensions appeared first on EDN.

Is this low-inductance power-device package the real deal?

EDN Network - Thu, 01/29/2026 - 15:00

While semiconductor die get so much of the attention due to their ever-shrinking feature size and ever-increasing substrate size, the ability to effectively package them and thus use them in a circuit is also critical. For this reason, considerable effort is devoted to developing and perfecting practical, technically advanced, thermally suitable, cost-effective packages for components ranging from switching power devices to multi-gigahertz RF devices.

Regardless of frequency, package parasitic inductance is a detrimental issue, as it slows down the slewing needed for switching crispness in digital devices and responsiveness in analog ones (of course, the reality is that digital switching performance is still constrained by analog principles).

Now, a research team at the US Department of Energy’s National Renewable Energy Laboratory (NREL; recently renamed the National Laboratory of the Rockies) has developed a silicon-carbide half-bridge module that uses organic direct-bonded copper in a novel layout to enable a high degree of magnetic-flux cancellation, Figure 1.

Figure 1 (left) 3D CAD drawing of new half-bridge inverter module; (right) Early prototype of polyimide-based half-bridge module. Source: NREL

Their Ultra-Low Inductance Smart (ULIS) package is a 1200-V, 400-A half-bridge silicon carbide (SiC) power module that can be pushed beyond a 200-kHz switching frequency at maximum power. The low-cost ULIS also makes the converter easier to manufacture, addressing issues of both bulkiness and cost.

Preliminary results show that it has approximately seven to nine times lower loop inductances and higher switching speeds at similar voltages/current levels, and five times the energy density of earlier designs — while occupying a smaller footprint, Figure 2.

Figure 2 The complete ULIS package is very different from conventional packages and offers far lower loop inductance compared to existing approaches. Source: NREL

In addition to being powerful and lightweight, the module continuously tracks its own condition and can anticipate component failures before they happen.

In traditional designs, the power modules conduct electricity and dissipate excess heat by bonding copper sheets directly to a ceramic base—an effective, but rigid, solution. ULIS bonds copper to a flexible DuPont Temprion polymer to create a thinner, lighter, more configurable design.

Unlike typical power modules which assemble semiconductor devices inside a brick-like package, ULIS winds its circuits around a flat, octagonal design, Figure 3. The disk-like shape allows more devices to be housed in a smaller area, making the overall package smaller and lighter.

Figure 3 This “exploded” drawing of the complete half-bridge power module shows the arrangement of the electrical and structural elements. Source: NREL

At the same time, its novel current routing allows for maximum cancellation of magnetic flux, contributing to the power module’s clean, low-loss electrical output, meaning ultrahigh efficiency.

While conventional power modules rely on bulky and inflexible materials, ULIS takes a new approach. Traditional designs call for power modules to conduct electricity and dissipate excess heat by bonding copper sheets directly to a ceramic base—an effective but rigid solution. ULIS bonds copper to the flexible, electrically insulating Temprion to create a thinner and lighter module.

The stacked module layout greatly improves energy density and reduces parasitic inductance (based on simulation data). Loop inductance for the ULIS half-bridge module is 2.2 to 5.5 nanohenries, compared with 20 to 25 nH for existing designs. Further, reliability is enhanced as the compliance of Temprion reduces the strain caused by the differences in the coefficient of thermal expansion (CTE) between mated materials.
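To see why those nanohenry numbers matter, a quick V = L·di/dt estimate helps. The slew rate below is an assumed illustrative value, not a figure from the NREL work; only the inductance ranges come from the text above.

```python
# Back-of-the-envelope overshoot check (my own arithmetic, not from the NREL
# paper): switch-node overshoot scales as V = L * di/dt, so the quoted loop
# inductances bound the benefit. The di/dt value below is an assumed figure.
di_dt = 4e9  # assumed current slew rate: 4000 A/us, expressed in A/s

designs = {
    "ULIS (2.2-5.5 nH)": (2.2e-9, 5.5e-9),
    "conventional (20-25 nH)": (20e-9, 25e-9),
}
for name, (l_min, l_max) in designs.items():
    v_min, v_max = l_min * di_dt, l_max * di_dt
    print(f"{name}: overshoot roughly {v_min:.0f} V to {v_max:.0f} V")
```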

Since the material bonds easily to copper using just pressure and heat, and because its parts can be machined using widely available equipment, the team maintains that the ULIS can be fabricated quickly and inexpensively, with manufacturing costs in the hundreds of dollars rather than thousands, Figure 4.

Figure 4 The ULIS can be machined using widely available equipment, thus significantly reducing the manufacturing costs for the power module. Source: NREL

Another innovation allows  the ULIS to function wirelessly as an isolated unit that can be controlled and monitored without external cables. A patent is pending for this low-latency wireless communication protocol.

The ULIS design is a good example of the challenges and dead-end paths that innovation can take on its path to a successful conclusion. According to the team’s report, one of the original layouts looked like a flower with a semiconductor at the tip of each petal. Another idea was to create a hollow cylinder with components wired to the inside.

Every idea the team came up with was either too expensive or too difficult to fabricate—until they stopped thinking in three dimensions and flattened the design into nearly two dimensions, which made it possible to build the module balancing complexity with cost and performance.

The details of the work are in their readable and detailed IEEE APEC paper “Organic Direct Bonded Copper-Based Rapid Prototyping for Silicon Carbide Power Module Packaging” but it is behind a paywall. However, there is a nice “poster” summary of their work posted at the NLR site here.

I wonder whether this innovation will catch on and be adopted, but I certainly don’t know. What I do know is that some innovations are slow to catch on, and many never do because of real-world problems related to scaling up, volume production, unforeseen technical issues, testability…it’s a long list of what can get in the way.

If you don’t think so, just look at batteries: every month, we see news of dramatic advances that will supposedly revolutionize their performance, yet these breakthroughs don’t seem to get traction. Sometimes it is due to technical or implementation problems, but often it is because the actual improvement they provide does not outweigh the disruption they create in getting there.

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.

Related content

The post Is this low-inductance power-device package the real deal? appeared first on EDN.

Top 10 edge AI chips

EDN Network - Thu, 01/29/2026 - 15:00
Hailo’s Hailo-10H edge AI accelerator.

As edge devices become increasingly AI-enabled, more and more chips are emerging to fill every application niche. At the extremes, applications such as speech recognition can be done in always-on power envelopes, while tens of watts will be enough for even larger generative AI models today.

Here, in no particular order, are 10 of EDN’s selections for a range of edge AI applications. These devices range from those capable of handling multimodal large language models (LLMs) in edge devices to those designed for vision processing and minimizing power consumption for always-on applications.

Multiple camera streams

For vision applications, Ambarella Inc.’s latest release is the CV7 edge AI vision system-on-chip (SoC) for processing multiple high-quality camera streams simultaneously via convolutional neural networks (CNNs) or transformer networks. The CV7 features the latest generation of Ambarella’s proprietary AI accelerator, plus an in-house image-signal processor (ISP), which uses both traditional ISP algorithms and AI-driven features. This family also includes quad Arm Cortex-A73 cores, hardware video codecs on-chip, and a new, 64-bit DRAM interface.

Ambarella is targeting this family for AI-based 8K consumer products such as action cameras, multicamera security systems, robotics and drones, industrial automation, and video conferencing. It will also be suitable for automotive applications such as telematics and advanced driver-assistance systems.

 

 

Ambarella’s CV7 vision SoC (Source: Ambarella Inc.)

Fallback CPU

The MLSoC Modalix from SiMa Technologies Inc. is now available in production quantities, along with its LLiMa software framework for deployment of LLMs and generative AI models on Modalix. Modalix is SiMa’s second-generation architecture, which comes as a family of SoCs designed to host full applications.

Modalix chips have eight Arm A-class CPU cores on-chip alongside the accelerator. These cores are important for running application-level code, and they also allow programs to fall back on the CPU in case a particular math operation isn’t supported by the accelerator. Also on the SoC are an on-chip ISP and digital-signal processor (DSP). Modalix will come in 25-, 50-, 100-, and 200-TOPS (INT8) versions. The 50-TOPS version will be first to market and can run Llama2-7B at more than 10 tokens per second, with a power envelope of 8–10 W.
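Taking those quoted figures at face value, a quick bit of arithmetic (mine, not additional SiMa data) gives a feel for the efficiency:

```python
# Quick derived arithmetic from the figures quoted above: energy per generated
# token and a rough TOPS/W figure for the 50-TOPS Modalix part.
tokens_per_s = 10          # "more than 10 tokens per second" for Llama2-7B
tops_int8 = 50             # the first Modalix part to market

for power_w in (8.0, 10.0):                    # quoted 8-10 W envelope
    joules_per_token = power_w / tokens_per_s  # W / (tokens/s) = J per token
    print(f"{power_w:.0f} W: ~{joules_per_token:.1f} J per token, "
          f"~{tops_int8 / power_w:.1f} TOPS/W")
```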

Open-source NPU

Synaptics Inc.’s Astra series of AI-enabled IoT SoCs ranges from application processors to microcontroller (MCU)-level parts. This family is purpose-built for the IoT.

The SL2610 family of multimodal edge AI processors targets applications spanning smart appliances, retail point-of-sale terminals, and drones. All parts in the family have two Arm Cortex-A55 cores, and some have a neural processing unit (NPU) subsystem. The included Coral NPU was developed at Google—it’s an open-source RISC-V CPU with scalar instructions—and sits alongside Synaptics’ homegrown AI accelerator, the T1, which offers 1-TOPS (INT8) performance for transformers and CNNs.

Synaptics’ SL2610 multimodal edge AI processors (Source: Synaptics Inc.)

Raspberry Pi compatibility

The Hailo-10H edge AI accelerator from Hailo Technologies Ltd. is gaining a large developer base, as it is available in a form factor that plugs into hobbyist platform Raspberry Pi. However, the Hailo-10H is also used by HP in add-on cards for its point-of-sale systems, and it’s also automotive-qualified.

The 10H is the same silicon as the Hailo-10 but runs at a lower power-performance point: The 10H can run 2B-parameter LLMs in about 2.5 W. The architecture of this AI co-processor is based on Hailo’s second-generation architecture, which has improved support for transformer architectures and more flexible number representation. Multiple models can be inferenced concurrently.

Hailo’s Hailo-10H edge AI accelerator (Source: Hailo Technologies Ltd.)

Analog acceleration

Startup EnCharge AI announced its first product, the EN100. This chip is a 200-TOPS (INT8) accelerator targeted squarely at the AI PC, achieving an impressive 40 TOPS/W. The device is based on EnCharge’s capacitance-based analog compute-in-memory technology, which the company says is less temperature-sensitive than resistance-based schemes. The accelerator’s output is a voltage (not a current), meaning transimpedance amplifiers aren’t needed, saving power.

Alongside the analog accelerator on-chip are some digital cores that can be used if higher precision is required, or floating-point maths. The EN100 will be available on a single-chip M.2 card with 32-GB LPDDR, with a power envelope of 8.25 W. A four-chip, half-height, half-length PCIe card offers up to 1 PetaOPS (INT8) in a 40-W power envelope, with 128-GB LPDDR memory.

EnCharge AI’s EN100 M.2 card (Source: EnCharge AI)

SNNs

For microwatt applications, Innatera Nanosystems B.V. has developed an AI-equipped MCU that can run inference at very, very low power. The Pulsar neuromorphic MCU targets always-on sensor applications: It consumes 600 µW for radar-based presence detection and 400 µW for audio scene classification, for example.

The neural processor uses Innatera’s spiking neural network (SNN) accelerators—there are both analog and digital spiking accelerators on-chip, which can be used for different types of applications and workloads. Innatera says its software stack, Talamo, means developers don’t have to be SNN experts to use the device. Talamo interfaces directly with PyTorch and a PyTorch-based simulator and can enable power consumption estimations at any stage of development.

Innatera’s Pulsar spiking neural processor (Source: Innatera Nanosystems B.V.)

Generative AI

Axelera AI’s second-generation chip, Europa, can support both multi-user generative AI and computer vision applications in endpoint devices or edge servers. This eight-core chip can deliver 629 TOPS (INT8). The accelerator has large vector engines for AI computation alongside two clusters of eight RISC-V CPU cores for pre- and post-processing of data. There is also an H.264/H.265 decoder on-chip, meaning the host CPU can be kept free for application-level software. Given the importance of ensuring compute cores are fed quickly with data from memory, the Europa AI processor unit provides 128 MB of L2 SRAM and a 256-bit LPDDR5 interface.

Axelera’s Voyager software development kit covers both Europa and the company’s first-generation chip, Metis, reserved for more classical CNNs and vision tasks. Europa is available both as a chip or on a PCIe card. The cards are intended for edge server applications in which processing multiple 4K video streams is needed.

Butter wouldn’t melt

Most members of the DX-M1 series from South Korean chip company DeepX Co. Ltd. provide 25-TOPS (INT8) performance in the 2- to 5-W power envelope (the exception being the DX-M1M-L, offering 13 TOPS). One of the company’s most memorable demos involves placing a blob of butter directly on its chip while running inference to show that it doesn’t get hot enough for the butter to melt.

Delivering 25 TOPS in this co-processor chip is plenty for vision tasks such as pose estimation or facial recognition in drones, robots, or other camera systems. Under development, the DX-M2 will run generative AI workloads at the edge. Part of the company’s secret sauce is in its quantization scheme, which can run INT8-quantized networks with accuracy comparable to the FP32 original. DeepX sells chips, modules/cards, and small, multichip systems based on its technology for different edge applications.

Voice interface

The latest ultra-low-power edge AI accelerator from Syntiant Corp., the NDP250, offers 5× the tensor throughput of its predecessor. This device is designed for computer vision, speech recognition, and sensor data processing. It can run on mere microwatts, but for full, always-on vision processing, the consumption is closer to tens of milliwatts.

As with other parts in Syntiant’s range, the devices use the company’s AI accelerator core (30 GOPS [INT8]) alongside an Arm Cortex-M0 MCU core and an on-chip Tensilica HiFi 3 DSP. On-chip memory can store up to 6-million-bit parameters. The NDP250’s DSP supports floating-point maths for the first time in the Syntiant range. The company suggests that the ability to run both automatic speech recognition and text-to-speech models will lend the NDP250 to voice interfaces in particular.

Multiple power modes

Nvidia Corp.’s Jetson Orin Nano is designed for AI in all kinds of edge devices, targeting robotics in particular. It’s an Ampere-generation GPU module with either 8 GB or 4 GB of LPDDR5. The 8-GB version can do 33 TOPS (dense INT8) or 17 TFLOPS (FP16). It has three power modes: 7-W, 15-W, and a new, 25-W mode, which boosts memory bandwidth to 102 GB/s (from 65 GB/s for the 15-W mode) by increasing GPU, memory, and CPU clocks. The module’s CPU has six Arm Cortex-A78AE 64-bit cores. Jetson Orin Nano will be a good fit for multimodal and generative AI at the edge, including vision transformer and various small language models (in general, those with <7 billion parameters).

Nvidia’s Jetson Orin Nano (Source: Nvidia Corporation)

The post Top 10 edge AI chips appeared first on EDN.
