Feed aggregator

Power Tips #151: Improving efficiency in 48V-input multiphase buck converters with GaN

EDN Network - 9 hours 17 min ago

Buck converters used in 48V-to-5V power supply designs are becoming increasingly common in automotive and industrial applications, especially in advanced driver assistance systems, in-vehicle infotainment, and robotics. While synchronous buck topologies achieve high efficiency, they sometimes fall short of expected performance. In some cases, switching behavior, controller bias power, and thermal performance can create limiting losses, resulting in a decrease in efficiency.

Figure 1 shows the efficiency of Texas Instruments’ 48 VIN, 960 W four-phase buck converter with integrated GaN reference design (PMP23595), with the output voltage set to 5 V using forced pulse-width modulation operation without cooling.

Figure 1 Efficiency of 48 VIN to 5 VOUT at a 400 kHz switching frequency. Source: Texas Instruments

The efficiency curve in Figure 1 can meet the specifications of most 48V-to-5V power supply designs, but could fall just below the intended target for others. Rather than changing topology or adding complexity, it’s possible to make some practical adjustments within a standard buck converter to boost efficiency further.

Figure 2 shows the efficiency curves of the 48V-5V buck converter under several test configurations, including added thermal management, switching frequency adjustment, and external bias operation. These configurations were selected to isolate the effect of each adjustment, and they indicate that different loss mechanisms dominate depending on the operating point. Let’s look at each adjustment in greater detail.

Figure 2 Efficiency of 48VIN to 5VOUT with multiple adjustments. Source: Texas Instruments

Adjustment No. 1: Thermal performance

Adding a cooling system, in this case a heat sink, produced a negligible improvement at a low output current but resulted in a clear improvement above 30 A.

At a low output current, the total power dissipation remains relatively small, and device temperatures remain closer to ambient. Thus, reducing thermal resistance has little effect.

At higher output current, conduction losses increase with IOUT2, causing the field-effect transistor (FET) junction temperature and inductor temperature to rise. As temperature increases, the FET drain-to-source on-resistance (RDS(on)) and inductor copper resistance increase, further increasing conduction losses. Incorporating a heat sink or some form of cooling reduces this rise in junction temperature, directly lowering temperature-dependent resistances. The result is a measurable reduction in conduction losses, which appears as improved efficiency at high currents. At a high current – 80 A in this scenario – the improvement reached 0.8%.
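The temperature feedback described above is easy to sketch numerically. The values below (25°C on-resistance, temperature coefficient, junction temperatures) are illustrative assumptions, not data from the PMP23595 design:

```python
# First-order model: conduction loss rises with junction temperature because
# RDS(on) is temperature-dependent. All numbers here are illustrative.

def rds_on(t_junction_c, rds_25_ohm=1.5e-3, alpha_per_c=0.006):
    # Linear temperature scaling of on-resistance from its 25 degC value;
    # alpha_per_c is an assumed temperature coefficient.
    return rds_25_ohm * (1 + alpha_per_c * (t_junction_c - 25))

def conduction_loss_w(i_out_a, t_junction_c):
    # Per-FET conduction loss: P = I^2 * RDS(on)(T)
    return i_out_a ** 2 * rds_on(t_junction_c)

# A heat sink that lowers the junction from, say, 110 degC to 80 degC at 80 A
# trims the loss measurably:
print(conduction_loss_w(80, 110))  # hotter junction, higher loss
print(conduction_loss_w(80, 80))   # cooled junction, lower loss
```

The point of the sketch is the feedback loop: cooling lowers RDS(on), which lowers conduction loss, which in turn lowers temperature further.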

Adjustment No. 2: Switching frequency

Reducing the switching frequency from 400 kHz to 250 kHz, while ensuring that the inductance value was still suitable, improved efficiency by approximately 0.5% through the mid-current range and 1% in the high-current range. However, decreasing the switching frequency too much with the same inductor value can result in higher core losses if you don’t manage the ripple current correctly.

This behavior is caused by reduced switching-related losses, such as FET turn-on and turn-off losses, gate-drive losses, and internal controller switching losses. At a 48-V input, these losses scale quickly with both current and switching frequency.
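To first order, hard-switching and gate-drive losses scale linearly with switching frequency, which is why the 400-kHz-to-250-kHz change helps. A rough sketch, using assumed transition time and gate charge rather than LMG708B0 datasheet values:

```python
# First-order switching-loss estimate for one hard-switched FET:
#   P_sw   ~ 0.5 * V_in * I_out * (t_rise + t_fall) * f_sw
#   P_gate = Q_g * V_drive * f_sw
# All device numbers below are assumptions for illustration.

def switching_loss_w(f_sw_hz, v_in=48.0, i_out_a=40.0,
                     t_transition_s=4e-9, q_g_c=6e-9, v_drive=5.0):
    p_sw = 0.5 * v_in * i_out_a * t_transition_s * f_sw_hz
    p_gate = q_g_c * v_drive * f_sw_hz
    return p_sw + p_gate

for f_sw in (250e3, 400e3):
    print(f"{f_sw / 1e3:.0f} kHz: {switching_loss_w(f_sw):.2f} W")
```

Both terms are proportional to f_sw, so dropping the frequency by the ratio 250/400 removes roughly 37% of these losses while leaving conduction losses untouched.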

At light loads, reducing the switching frequency produces smaller efficiency improvements, suggesting that fixed losses such as quiescent current or inductor core loss dominate in this region and limit the overall impact of this adjustment.

Adjustment No. 3: Controller bias power

In a forced pulse-width modulation configuration, supplying the controller bias from an external 5-V source improves efficiency by approximately 0.5% in the light- to mid-current range.

Deriving bias from VOUT remains a viable option if the output voltage is neither much higher (such as 24 V and above) nor much lower (such as 3 V and below).

When deriving bias power internally from the output rail, a small portion of the converter’s output power operates the controller. At light loads, this overhead represents a slightly larger fraction of the total output power.

At higher output currents, the conduction losses in the FETs and inductor begin to dominate. In this region, the controller bias power becomes such a small fraction of total losses that it no longer produces a measurable efficiency benefit. As a result, the externally biased efficiency curve converges with the internally biased efficiency curve.
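The light-load sensitivity is easy to see with a back-of-the-envelope calculation. Assuming, purely for illustration, a controller that draws about 0.25 W of bias power from the 5-V output:

```python
# Bias power as a fraction of delivered output power. The 0.25 W bias figure
# is an assumed value for illustration, not a measured controller spec.

def bias_overhead_fraction(i_out_a, v_out=5.0, p_bias_w=0.25):
    return p_bias_w / (v_out * i_out_a)

for i_out in (2, 10, 80):
    print(f"{i_out} A: {bias_overhead_fraction(i_out) * 100:.2f}% of output power")
```

At 2 A the assumed bias is 2.5% of the output power, while at 80 A it is under 0.1%, which is consistent with the external- and internal-bias efficiency curves converging at high current.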

Adjustment No. 4: Inductor optimization

The inductor can play a larger role in efficiency than its direct current resistance (DCR) alone suggests. While copper losses depend on DCR and scale with the output current, core losses depend strongly on ripple current and switching frequency.

If the ripple current is high, core losses can become significant. This is especially common with powdered iron core material, which can have high core losses if you don’t account for the ripple current.

Increasing the inductance reduces ripple current and core losses but may increase DCR. Conversely, using a very low DCR inductor while having excessive ripple current can increase core losses to the point where it offsets the efficiency boost. The inductor choice balances DCR and ripple current such that neither copper nor core losses dominate.
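The ripple current at the heart of this tradeoff follows directly from the buck duty-cycle relationship. A quick per-phase estimate, ignoring losses and using the 2.5-µH inductance mentioned later in this article:

```python
# Peak-to-peak inductor ripple current in a buck converter:
#   delta_I = V_out * (1 - V_out / V_in) / (L * f_sw)

def ripple_current_a(v_in, v_out, l_h, f_sw_hz):
    duty = v_out / v_in
    return v_out * (1 - duty) / (l_h * f_sw_hz)

print(ripple_current_a(48, 5, 2.5e-6, 400e3))  # ~4.5 A peak-to-peak
print(ripple_current_a(48, 5, 2.5e-6, 250e3))  # ripple grows as f_sw drops
```

Note that lowering the frequency from 400 kHz to 250 kHz with the same inductor raises ripple by the ratio 400/250, which is exactly why the earlier frequency adjustment carries a core-loss caveat.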

When looking to improve converter efficiency, a useful first step is to identify which loss mechanism dominates in the operating region of interest. Based on what we have seen on this synchronous buck converter, you can evaluate it quickly:

  • If light-load efficiency is low, examine the switching frequency and internal bias losses.
  • If efficiency is low at high current, focus on conduction losses and thermal management.
  • If the losses appear higher than expected across the full current range, review the inductor ripple current and core material.

Once you identify the dominant loss mechanism, minor design adjustments can often lead to measurable efficiency gains.

The high-efficiency system in this exercise used the TI reference design mentioned earlier, which includes the LMG708B0 synchronous step-down converter with integrated GaN, configured for a 5-V output with a reduced inductance of 2.5 µH.

References

  1. Jacob, Mathew. “Select inductors for buck converters to get optimum efficiency and reliability.” Texas Instruments Analog Design Journal article, literature No. SLYT775, 3Q2019.

Matthew Bowers is a systems engineer in TI’s Power Design Services team, focused on developing power solutions for automotive applications. Matthew received his bachelor’s degree in electrical engineering from Texas Tech University in 2023.

 

Related Content

The post Power Tips #151: Improving efficiency in 48V-input multiphase buck converters with GaN appeared first on EDN.

What does Arm’s own chip stand for?

EDN Network - 10 hours 33 min ago

Arm is now a chip vendor—what does it mean for the semiconductor industry? EE Times’ Nitin Dahad was at the event in San Francisco, California, where the British IP giant unveiled its first chip, an AGI CPU for data centers. He reports on what it means for the company, now increasingly dubbed Arm 2.0, and how this launch will impact its standing in the semiconductor industry. He also explains the delicate balancing act that Arm will have to perform moving forward.

Read the full article at EDN’s sister publication, EE Times.

Related Content

The post What does Arm’s own chip stand for? appeared first on EDN.

An open lecture by the Ambassador of the Republic of Korea, Park Kichang, took place at KPI

Новини - 10 hours 35 min ago
kpi, Fri 03/27/2026 - 12:41

🇰🇷 Students, faculty, and representatives of the university administration spoke with the Ambassador Extraordinary and Plenipotentiary of the Republic of Korea, Park Kichang, during an open lecture held in a question-and-answer format.

A delegation from the Kingdom of Denmark visited Igor Sikorsky Kyiv Polytechnic Institute

Новини - 10 hours 39 min ago
kpi, Fri 03/27/2026 - 12:38

🇩🇰 Kyiv Polytechnic served as the venue for a working meeting of representatives of the University of Southern Denmark, Aalborg University, the Fincord-Polytech Science Park, and the Ukrainian company SkyFall.

📰 The "Kyivskyi Politekhnik" newspaper, No. 11-12 for 2026 (.pdf)

Новини - 11 hours 7 min ago
KP Information, Fri 03/27/2026 - 12:10

Issue No. 11-12 of the "Kyivskyi Politekhnik" newspaper for 2026 has been published

Wolfspeed reduces senior secured note balance by 43% after raising $475.9m in private placements

Semiconductor today - 12 hours 15 min ago
Wolfspeed Inc of Durham, NC, USA — which makes silicon carbide (SiC) materials and power semiconductor devices — has completed its private placements (announced on 19 March) of...

Overcoming interconnect obstacles with co-packaged optics (CPO)

EDN Network - 12 hours 24 min ago

Over the last few years, there has been growing interest across the global semiconductor packaging industry in a new approach. Co-packaged optics (CPO) involves integrating optical fibers, used for data transmission, directly onto the same package or photonic IC die as the semiconductor chips. Traditionally, semiconductor packaging has used copper interconnects, but these can consume large amounts of power and suffer signal weakening at high frequencies when the distance exceeds a couple of meters.

With CPO, the optical components are integrated directly into a package, and the long copper trace between the switch and the optical module is replaced with short, high-integrity connections. Optical signaling uses far less power at high data rates than electrical signaling. As CPO reduces the distance between optical components and the semiconductor dice, this lowers latency, improves high-speed signal integrity, and accelerates data transfer.

All of these benefits are fundamental to the next generation of AI devices for high-performance computing (HPC) inside data-center systems. Nevertheless, there are obstacles to overcome with CPO when designing photonic packages, especially for integrated photonic circuits or photonic chips. This is why advances in photonic package design are coming to the forefront.

 

Overcoming CPO obstacles

When co-packaging photonics with electronics, there can be signal integrity issues. Electrical crosstalk must be reduced to improve signal quality. Using short interconnects and low-parasitic layouts is the most appropriate tactic when combined with co-design tools for optical optimization. Signal integrity can be ensured without requiring complex routing or more space, as optical interconnects can support multi-terabit-per-second data rates over long distances with only minor signal loss.

Mounting a large photonic IC die onto a laminate or organic substrate can be problematic. Due to the coefficient of thermal expansion (CTE) mismatch between the substrate and the photonic IC die, non-negligible die warpage may occur. This warpage can significantly degrade optical signal performance in the photonic IC waveguides during data transmission, leading to substantial reductions in optical signal power and quality.

In addition, excessive warpage may introduce mechanical stress in the photonic IC die, altering its material properties and further impacting optical performance. While using a ceramic substrate could mitigate these issues, it’s more costly and is not widely adopted today.

Dealing with temperature variations can be a concern with photonic devices, but efficient thermal management and thorough thermal design can help to improve performance and reliability. Integrating photonics with electronics may require thermoelectric coolers (TECs) and heat sinks along with smart thermal simulations throughout the design process.

Sub-micron alignment is also a complex technical task. Optical misalignment can lead to significant insertion losses, as well as disrupting device performance. Leveraging passive alignment techniques with etched features or alignment markers may mean lower accuracy, but it is the lowest-cost option. Active alignment, using real-time optical feedback, results in better performance and efficiency, though it’s far more complex and costly.

Addressing challenges when testing optical components involves using built-in test waveguides, automated optical probing systems, and standardized test procedures during and after packaging. Integrating optical and electrical components into a single package not only makes the manufacturing process more complicated; it also increases the associated risks and costs due to the different assembly phases. It’s possible to cut through the complexity and improve yields by using standardized processes for CPO assembly.

The future of CPO and photonic package design

As a result of the growing interest in CPO and photonic packaging, there have been advances in photonic package design. CPO enables faster data transmission and improved power-efficiency when compared to the conventional copper-based interconnects approach. It has many advantages, including high-speed communication and lower power consumption, but there are also concerns related to signal integrity, thermal management, optical alignment, and costs.

Advances in photonic package design can overcome these obstacles and help electronic design engineers create new architectures that would not be viable with traditional semiconductor packaging. As the semiconductor industry continues to rapidly evolve, with more complex devices requiring high-performance, compact and power-efficient chips, CPO with advanced photonic package design will become increasingly important.

Dr Larry Zu is CEO of Sarcina Technology.

Special Section: Chiplets Design

The post Overcoming interconnect obstacles with co-packaged optics (CPO) appeared first on EDN.

Nuvoton and Trustonic Collaborate to Strengthen Security of NuMicro MA35 Series MPU

ELE Times - 13 hours 59 min ago

Leading semiconductor manufacturer, Nuvoton, has partnered with pioneering cybersecurity business, Trustonic, to strengthen the capability of its advanced NuMicro® MA35 series MPU.

Established in 2008, Nuvoton was founded to bring innovative semiconductors to market and has since evolved into a leading name in the provision of microcontroller application integrated circuits (ICs), audio application ICs and cloud & computing ICs.

To strengthen the security of the solution, the Trusted Secure Island (TSI) of Nuvoton’s NuMicro MA35 series integrates Trustonic’s Trusted Execution Environment (TEE), Kinibi.

Having obtained the world’s first comprehensive EAL5+ certification in 2022, Kinibi is now deployed to nearly 3 billion smart devices and 20 million vehicles globally, with zero safety violations. Its integration in the NuMicro MA35 series creates a secure environment that drives Protection, Detection, and Recovery for IoT products, including EV chargers.

Walter Tseng, Vice President of the Microcontroller Business Group at Nuvoton, explained: “Our partnership with Trustonic represents a significant milestone in Nuvoton’s commitment to providing industry-leading security for the industrial IoT market. By integrating the EAL5+ certified Kinibi TEE into our NuMicro MA35 series, we are providing our customers with a robust, hardware-backed security foundation. This collaboration ensures that critical industrial applications—from edge gateways to smart factory automation—are protected against evolving cyber threats through a dedicated ‘Protection, Detection, and Recovery’ framework, all while maintaining the high performance our users expect.”

Andrew Till, General Manager of Secure Platform for Trustonic, added: “Nuvoton’s MA35 platform is designed for high-performance edge applications, and security is critical to its success. Integrating Kinibi provides a proven Trusted Execution Environment that protects sensitive operations and enables manufacturers to build secure, scalable industrial IoT solutions with confidence.”

The post Nuvoton and Trustonic Collaborate to Strengthen Security of NuMicro MA35 Series MPU appeared first on ELE Times.

SMD LED

Reddit:Electronics - Thu, 03/26/2026 - 20:37
SMD LED

These two images I took a long time ago are of an SMD LED; it's curious to see the two little wires connecting the LED!

submitted by /u/aguilavoladora36
[link] [comments]

KPI women reach the playoffs of the Club Indoor Volleyball League

Новини - Thu, 03/26/2026 - 20:19
kpi, Thu 03/26/2026 - 20:19

🏐 The KPI women's team confidently made it through the group stage of the Club Indoor Volleyball League at the official Kyiv Championship and advanced to the playoffs. Next come knockout games and the battle for top places among the tournament's strongest teams.

Lumentum to establish new US plant to manufacture indium phosphide lasers for AI data centers

Semiconductor today - Thu, 03/26/2026 - 16:21
Lumentum Holdings Inc of San Jose, CA, USA (which designs and makes photonics products for optical networks and lasers for industrial and consumer markets) plans to establish a new US manufacturing facility in Greensboro, North Carolina. The 240,000ft2 facility will produce indium phosphide (InP)-based optical devices that serve as critical components in the world’s largest AI data centers...

💡 Enrollment announced for intensive English courses in preparation for the EVI

Новини - Thu, 03/26/2026 - 15:01
kpi, Thu 03/26/2026 - 15:01

🎓 Want to pass the EVI (Unified Entrance Exam) with confidence and enter a master's or doctoral program? Start your preparation with Igor Sikorsky Kyiv Polytechnic Institute!

Made an FPGA based calculator, supports basic arithmetic (+ - * /), log(x,y), exponent(x,y), sin, cos, tan.

Reddit:Electronics - Thu, 03/26/2026 - 14:48
Made an FPGA based calculator, supports basic arithmetic (+ - * /), log(x,y), exponent(x,y), sin, cos, tan.

implemented the whole thing on a PYNQ-Z2 FPGA + an Arduino UNO (probably a clone lol).

made my own custom keyboard using ~30 pushbuttons,

connected them to a 32:5 encoder (which is made using 4* 8:3 encoders and some AND gate ICs)

resulting in a 5 bit input to the fpga.

fpga then debounces the input, decodes the 5bit signal back to 30 buttons,

which are then connected to the internal keyboard of the fpga.

now, every button pressed results in the insertion of a character into the calc's input buffer.

could be a number, operator, function, decimal, comma, parenthesis, one of the 2 constants pi & e

each character is represented by a unique 8-bit ID

when "evaluate" signal is sent, the gears start spinning

first, the numbuilder converts the separate tokens of a number, like:
9 . 0 1 8 3 9 1 into a single number: 9.018391

Represented in a type, sign, mantissa, signed exponent format, so:

2+1+34+7 = 44 bits in total

then comes the infix to postfix converter

then the postfix evaluator
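For anyone curious, the postfix step is simple to prototype in software before committing it to HDL. A minimal Python sketch of such an evaluator (token names here are illustrative; the actual FPGA design works on 8-bit token IDs instead):

```python
# Stack-based evaluation of a postfix (RPN) token stream: operands are pushed,
# and each operator pops its two operands and pushes the result.

def eval_postfix(tokens):
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()  # right operand sits on top of the stack
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack[0]

# "2 + 3 * 4" in postfix is "2 3 4 * +"
print(eval_postfix(["2", "3", "4", "*", "+"]))  # 14.0
```

The hardware version follows the same pop-two-push-one flow, just with the 44-bit number format described above instead of Python floats.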

and when it's done evaluating, the final SPI master takes the initial input buffer, and the final answer as inputs, and sends them to an arduino via the SPI protocol. (unidirectional, since the arduino dosen't have to talk back to the FPGA)

then the arduino displays the buffer and the final answer on the 16*2 LCD display using preexisting libraries

(grossly oversimplified the whole flow, but yea these are all the modules in the picture)

im still a beginner but im proud to be a digital electronics enthusiast, there's still a lot i need to learn!!

submitted by /u/Rude_Parfait_3194
[link] [comments]

The Tapo Hub: TP-Link joins the low-bandwidth, long-range RF club

EDN Network - Thu, 03/26/2026 - 14:00

Leveraging low-power wireless connectivity isn’t proprietary to a single smart-home technology and product supplier, even if each company’s implementation of the concept may be.

Back in 2019, when I first conceptually explored, then tore down, and finally implemented personally a Blink outdoor security system (still operational to this very day):

The aspect of the architecture that intrigued me the most was the camera’s battery-powered nature. How on earth were they spec’d to run for up to two years (more like nearly five in real life) solely on two lithium AA cells while still regularly remaining user-accessible over Wi-Fi?

The answer, as those of you who’ve already read my writeups (and remember them) know, was a two-fold response:

  • The entire system wasn’t battery-powered, and
  • The communications infrastructure wasn’t solely Wi-Fi

In-between the cameras (back then, I was apparently using quarters for size comparison purposes, not pennies):

and the Internet is a Sync Module:

Multi-spectral stinginess

Requoting my original piece in the series:

A Blink system consists of one or multiple tiny cameras, each connected both directly to a common router or to an access point intermediary (and from there to the Internet) via Wi-Fi, and to a common (and equally diminutive) Sync Module control point (which itself then connects to that same router or access point intermediary via Wi-Fi) via a proprietary “LFR” long-range 900 MHz channel.

The purpose of the Sync Module may be non-intuitive to those of you who (like me) have used standalone cameras before…until you realize that each camera is claimed to be capable of running for up to two years on a single set of two AA lithium cells. Perhaps obviously, this power stinginess precludes continuous video broadcast from each camera, a “constraint” which also neatly preserves both available LAN and WAN bandwidth. Instead, the Android or iOS smartphone or tablet app first communicates with the Sync Module and uses it to initiate subsequent transmission from a network-connected camera (generic web browser access to the cameras is unfortunately not available, although you can also view the cameras’ outputs from either a standalone Echo Show or Spot, or a Kindle Fire tablet in Echo Show mode).

That the network nodes (cameras in this case) are battery-based is convenient from a location-flexibility standpoint, not necessitating running wired-power feeds to them, just as the fact that they’re wireless precludes needing to run Cat5 spans to them. And in some cases, it also enables ongoing functionality (at least to a degree) even if premises power goes down.

Discerning degree of dryness

Fast forward to the present. My wife and I recently bought a couple of ionizing humidifiers for the house, one of them “smart” (believe it or not; stay tuned for coverage to come):

The (upstairs) thermostats for our (downstairs) furnaces, one for each horizontal half of the house, supposedly also report residence humidity, but I’ve never believed the data they feed me; they perpetually say that it’s “<15%”. I could have just bought a cheap hygrometer (standalone humidity sensor) for $5 or so; this one’s even solar-rechargeable:

But when I came across one, the T315, part of TP-Link’s Tapo smart home product suite, I knew I had to have it:

It was less than $25 at Amazon. It leveraged Kindle-reminiscent display tech. And I already had several other Tapo devices active in the home. How hard could it be to add one more?

Ingenuity redux

Not hard, it turned out, but not quite as straightforward as I’d initially envisioned. The Tapo T315 is battery-powered, just like those Blink XT cameras. And equally similarly (can you already guess where I’m going here?), just as with TP-Link’s other smart sensors—buttons (doorbells, etc.), door and window contacts, presence, motion, water leak (hold that last thought), etc.—this time, in-between it and my router, there’s therefore a required (drum roll) smart hub!

Since my data payload size was modest in this case, I went with the entry-level Tapo H100, which Amazon also sells for sub-$25:

And I quote (sound familiar?):

The Tapo Hub is the heart of your Tapo smart home, connecting devices like smart sensors, switches and buttons, using an ultra-low power wireless protocol. This technology helps battery-powered devices last up to 10 times longer.

The company also sells more advanced (but still economical) hubs that further comprehend battery-powered Tapo security cameras (including, I’m assuming, transitioning them to Wi-Fi for active broadcast streaming, and also supporting local recording storage); the mid-range microSD card-based H200 and high-end H500, the latter shipping with 16 GBytes of eMMC flash memory and (believe it or not) further expandable via an optional 2.5” SATA HDD or SSD.

Here’s the packaging for the Tapo H100 smart hub, which I needed to activate first:

And here’s what was inside, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes, along with a sliver of literature which I didn’t bother photographing:

Nitty-gritty details:

Right-side configuration and reset switch:

After plugging it in to a power strip-housed AC outlet, setup was multi-step but straightforward:

Success!

Desert, jungle, or somewhere in-between?

Now for the Tapo T315 hygrometer. Packaging first, again:

Setup, including connection to the now-active hub several rooms over, was once again easy:

And there we are! Sub-15% humidity…pfft…

Water, water, (hopefully not) every where…

Feeling pretty good about myself, I decided to push my luck once more. When the plumber replaced our geriatric (but thankfully not yet leaking) water heater downstairs in the furnace room a few years ago, he threw in a standalone leak detection sensor (a valuable albeit often overlooked addition to any residence) to reside on the floor next to it:

Note, however, this bit in the operating instructions:

Replacing the battery: Replace the battery if the alarm has operated for an extended period of time, or if the battery expiration date is approaching. You may want to mark the battery expiration date on a piece of tape and attach it to the alarm when you install the battery.

Let’s be real. I know myself well enough to realize that once I set it, I’m going to forget it. I was admittedly surprised to learn, after replacing it (more accurately, moving it; it now sits below the whole-house water filter enclosure in a different room) that unlike my carbon monoxide detectors at their end-of-life dates, it didn’t at least chirp when its battery was getting low. That said, we’d only hear the sound if we were there at the time, and assuming it was loud enough to capture our attention. And further to that point, more generally, if we were away when a leak started, we’d be blissfully ignorant of what was going on…at least at first, until we returned home, that is.

Enter the $19.99 (on Amazon as I write this) TP-Link Tapo T300 Smart Water Leak Sensor:

Once again, box shots first:

Followed by what’s inside (minus, again, the also-provided piece of paperwork):

Yank the blue plastic strip to activate the factory-installed and user-replaceable two-battery connection:

Thereby auto-transitioning the sensor to setup mode:

Go through the brain-dead simple setup steps:

And voilà:

Dissections, etc., to come

My mixed Kasa-plus-Tapo smart home topology is functionally rock-solid so far, including the hub-based portion. Buh-bye, Belkin Wemo…and maybe, someday, Blink, too. To be clear, Blink and TP-Link’s disparate ecosystems, coupled with the latter’s comparatively greater product type diversity, would be the sole long-term replacement motivation (specifically, mothballing my Blink cameras and replacing them with TP-Link equivalents).

My Blink gear also continues to work just fine, including no evidence whatsoever of any functionally degrading interference between its and TP-Link’s respective ultra-low power wireless links. That all said, I’ll undoubtedly further expand my TP-Link-sourced stuff in the future; stay tuned for more hands-on coverage. Speaking of which, I’ve also got a redundant Tapo H100 smart hub and T300 smart water leak sensor, both sitting on the shelf, queued up for teardown, along with a display-less sibling of the T315 hygrometer, the Tapo T310 Smart Temperature and Humidity Sensor ($17.99 at Amazon):

I hope you’re looking forward to those analyses as well. Until then, let me know what you think in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post The Tapo Hub: TP-Link joins the low-bandwidth, long-range RF club appeared first on EDN.

Aixtron to build new manufacturing plant in Malaysia

Semiconductor today - Thu, 03/26/2026 - 12:44
To strengthen its global competitiveness, deposition equipment maker Aixtron SE of Herzogenrath, near Aachen, Germany is to build a new manufacturing facility in Malaysia in order to tap into the fast growing semiconductor equipment ecosystem in South East Asia...

STMicroelectronics and Leopard Imaging use NVIDIA Jetson-ready multi-sensor module for robotics vision 

ELE Times - Thu, 03/26/2026 - 10:56

STMicroelectronics and Leopard Imaging have introduced an all-in-one multimodal vision module for humanoid and other advanced robotics systems. Combining ST imaging, 3D scene-mapping, and motion sensing with the NVIDIA Holoscan Sensor Bridge technology, the module integrates natively with NVIDIA Jetson and NVIDIA Isaac open robot development platform, simplifying and accelerating vision system design within the size, weight, and power constraints of humanoid robots.

“Humanoid robotics is moving beyond research projects and demonstrations to deliver powerful new machines for a wide range of roles in manufacturing and automotive factories, logistics and warehousing, as well as retail and customer service,” said Marco Angelici, Vice-President of Marketing and Application for Analogue Power MEMS and Sensors, at STMicroelectronics. “Our collaboration with Leopard Imaging brings market-leading ST sensors and actuators, seamlessly integrated into the NVIDIA robotics ecosystem, to accelerate the deployment of physical AI applications with human-like awareness.”

“Accessing ST sensors and actuators directly within the ecosystem has allowed us to standardise and streamline data acquisition and logging for humanoid robot vision across the HSB interface,” said Bill Pu, CEO of Leopard Imaging. “Robot builders can use our multi-sensing vision module with Isaac tools to accelerate learning and quickly bridge the ‘sim-to-real’ gap.”

Powered by the NVIDIA Holoscan Sensor Bridge, the new module integrates seamlessly with NVIDIA Jetson over Ethernet for real-time sensor data ingestion and NVIDIA Isaac open robot development platform, which offers open AI models, simulation frameworks and libraries for developers. The new module includes a build system and application programming interfaces (APIs), artificial intelligence (AI) algorithms curated for mobile robots, sample applications, domain randomisation, and a simulation environment containing sensor models.

ST continues to integrate its sensors, drivers, actuators, controllers, and development tools into the NVIDIA robotics ecosystem as a key NVIDIA robotics and edge AI partner, including high-fidelity models and proof-of-concept modules.

Technical information

The Leopard Imaging Systems vision module incorporates:

For vision-based sensing, the ST VB1940 automotive-grade RGB-IR 5.1-megapixel image sensor combines rolling-shutter and global-shutter modes. ST has also released a mass-market and industrial version, V**943, part of the ST BrightSense product family, available in monochrome or RGB-IR, as a bare die or a packaged sensor.

For motion sensing, the LSM6DSV16X 6-axis inertial measurement unit (IMU) embeds ST’s machine-learning core (MLC) for AI at the edge, sensor-fusion low-power (SFLP), and Qvar electrostatic sensing for user-interface detection.

For 3D depth sensing, the VL53L9CX dToF all-in-one LiDAR module, part of the ST FlightSense product family, provides accurate ranging up to 9 meters. Its resolution of 54 x 42 zones (nearly 2,300 zones), combined with a wide 55° x 42° FoV giving 1° angular resolution, enables short- and long-distance measurements and small-object detection at up to 100 fps.

The post STMicroelectronics and Leopard Imaging use NVIDIA Jetson-ready multi-sensor module for robotics vision  appeared first on ELE Times.

Decoding SDV Revolution: Sensors, AI, and the Future of Automotive Architecture

ELE Times - Thu, 03/26/2026 - 10:45

At the Auto EV TVS Summit 2025, a panel of industry leaders—from semiconductor companies to automotive software firms—gathered to discuss one of the most transformative shifts underway in mobility: the rise of the Software-Defined Vehicle (SDV). Moderated by Mohammed Saeed Mombasawala, CTO at Keysight Technologies, the discussion brought together voices from Bosch Global Software Technologies, Marelli India, NXP Semiconductors, Aumovio, and Auto Ascent to unpack how vehicles are evolving from mechanical machines into continuously upgradeable software platforms.

The consensus across the panel was clear: software is no longer an auxiliary component in the automotive stack—it is becoming the primary architecture around which the vehicle is built.

The Shift Toward “Intelligence on Wheels”

Software-defined vehicles represent a departure from the traditional automotive model, where functionality was fixed at the time of manufacturing. Instead, SDVs rely on software layers that can evolve through updates, new services, and data-driven improvements throughout a vehicle’s lifecycle. “SDV is essentially about delivering affordable intelligence on wheels,” explained Bosch’s Naved Narayan during the panel discussion.

The concept is already beginning to take shape in India. Features such as over-the-air updates, connected vehicle services, and Level-2 driver assistance systems are gradually entering the market. However, unlike Western markets where premium vehicles dominate adoption, India’s automotive ecosystem operates under a different constraint: cost sensitivity.

Industry participants emphasized that the success of SDVs in India will depend on achieving a delicate balance between technological sophistication and affordability.

India’s SDV Journey: Progress, But With Constraints

While the SDV revolution is global, India’s pathway has unique challenges. Panelists noted that the country is progressing toward connected and intelligent mobility, but several structural barriers remain.

Latha Chembrakalam, Founder and CEO of Auto Ascent, highlighted that the industry is driven by three key factors: safety, ease of use, and joy of driving. Software-defined architectures promise to enhance all three—but India must simultaneously contend with infrastructure gaps and regulatory limitations.

“India is a cost-sensitive market, and infrastructure readiness also plays a major role,” she noted. “While progress is visible, technological capabilities and regulatory frameworks still need to evolve to fully support SDV adoption.” Yet there are encouraging signs. Automakers such as Mahindra and MG have already begun introducing advanced connected features and driver assistance technologies in the Indian market, creating early momentum.

The New Architecture of the Vehicle

At the engineering level, SDVs are forcing a fundamental redesign of vehicle electronics. Traditional vehicles rely on dozens of electronic control units (ECUs), each responsible for a specific function. SDVs, however, are shifting toward centralized computing architectures, where domain controllers or zonal controllers manage multiple vehicle functions.

This transformation also introduces new challenges. Automotive companies must reconcile modern SDV architectures with legacy platforms and existing software stacks. “For new electric vehicle platforms, it is easier to design from scratch,” Chembrakalam explained. “But the real complexity lies in integrating SDV capabilities into existing vehicle platforms with legacy systems.”

In addition, the industry still lacks fully standardized architectures across OEMs, Tier-1 suppliers, and technology vendors. Without stronger coordination, experts warned, the ecosystem risks becoming fragmented.

Sensors, Connectivity, and the Edge Computing Challenge

Software-defined vehicles depend heavily on three technological pillars:

  • Sensors
  • Connectivity
  • In-vehicle compute

Radar, cameras, LiDAR, and other sensing technologies collectively form the vehicle’s perception system. Rather than relying on a single sensor type, most companies now favor sensor fusion architectures that combine multiple sensing modalities. Rajkumar Anantharaman of NXP noted that radar remains one of the most critical sensing technologies due to its reliability across weather conditions. However, cameras provide complementary information, such as object classification and visual context.

“Radar alone cannot solve everything,” he said. “The industry is moving toward fusion architectures where radar and camera data are combined to improve environmental perception.” At the same time, sensing systems are also becoming increasingly intelligent. Modern radar chips now integrate edge processing capabilities, allowing them to detect and classify objects directly on the sensor rather than transmitting raw data to a central processor.

This shift toward edge AI processing helps reduce latency and bandwidth requirements.

Connectivity: Why Latency Matters

Connectivity plays another crucial role in the SDV ecosystem. While current vehicle platforms rely primarily on 4G networks, panelists believe 5G—and eventually 6G—will enable new levels of vehicle intelligence, particularly for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication.

The reason is latency. Even a delay of a few hundred milliseconds can significantly affect vehicle safety systems. In collision scenarios, a 200-millisecond delay could translate into several meters of additional braking distance. Future networks promise to reduce latency to just a few milliseconds, enabling faster information exchange between vehicles and surrounding infrastructure.
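To make the braking-distance claim concrete, here is a quick calculation of the distance a vehicle covers during a delay (the 100 km/h speed is an illustrative assumption, not a figure from the panel):

```python
def extra_distance_m(speed_kmh: float, delay_ms: float) -> float:
    """Distance a vehicle travels during a network/processing delay."""
    speed_ms = speed_kmh / 3.6           # km/h -> m/s
    return speed_ms * (delay_ms / 1000)  # metres covered before any reaction

# A 200 ms delay at highway speed:
print(round(extra_distance_m(100, 200), 1))   # ~5.6 m at 100 km/h
# The few-millisecond latency promised for 5G/6G networks:
print(round(extra_distance_m(100, 5), 2))     # ~0.14 m
```

A few hundred milliseconds thus translates into several metres of uncontrolled travel, which is exactly the margin that low-latency networks aim to reclaim.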

However, experts cautioned that India’s current telecommunications infrastructure still needs to mature before such capabilities can be widely deployed.

AI, Data, and the Digital Twin Revolution

Perhaps the most complex dimension of SDVs lies in software development itself. Unlike traditional automotive software, SDV platforms require continuous integration, machine learning models, and large-scale data pipelines.

Training autonomous and driver-assistance systems requires massive datasets capturing real-world driving conditions. Yet India presents a unique challenge in this area: driving environments that are far less structured than those in Western markets. Unpredictable traffic behavior, unmarked roads, and unusual scenarios—from animals crossing highways to dense urban congestion—make it difficult to collect sufficient real-world training data.

To overcome this limitation, companies are increasingly relying on digital twins, simulation environments, and synthetic data generation. These virtual environments allow engineers to simulate thousands of driving scenarios before deploying software in actual vehicles. By shifting validation earlier in the development process—known as “shift-left engineering”—companies can test and refine software models without relying entirely on expensive physical vehicle testing.

Vehicles That Improve Over Time

One of the most intriguing aspects of SDVs is the possibility that vehicles may increase in value over time. Traditionally, a car’s capabilities remained fixed after leaving the factory. With SDVs, however, software updates can introduce entirely new features years after purchase.

Bosch’s Narayan described this shift as “an upgrade without actually upgrading the vehicle.” Through over-the-air updates, automakers can introduce new driver assistance features, improved algorithms, or additional digital services long after the vehicle has been sold.

At the same time, consumer expectations are rapidly evolving. According to Yogesh Davangere Adevappa, rising awareness of global technology trends—driven by social media and digital exposure—is pushing buyers to expect more intelligent and feature-rich vehicles. “People are increasingly aware of the technologies available worldwide,” he noted during the panel discussion. “That awareness is driving demand for vehicles with more connected features, safety systems, and intelligent capabilities.”

Why the Software-Defined Vehicle Matters

Ultimately, the SDV transformation is about more than just technology. It represents a fundamental redefinition of what a vehicle is. For consumers, the appeal lies in enhanced safety, convenience, and personalized driving experiences. For automakers, SDVs open the door to entirely new business models built around software services and continuous updates.

There is also growing demand for customization in vehicles. Vikram Bhatt of Aumovio pointed out that SDV architectures allow drivers to configure vehicle behavior dynamically—whether enabling park-assist modes, obstacle detection systems, or personalized driving configurations. These runtime adjustments represent a fundamental shift from static vehicle functions to software-enabled experiences.

Yet despite this progress, panelists acknowledged that India is still at an early stage of SDV deployment compared to global markets.

As one panelist summarized during the discussion, the future of mobility may not just be electric or autonomous—it will be software-driven at its core.

The post Decoding SDV Revolution: Sensors, AI, and the Future of Automotive Architecture appeared first on ELE Times.

Graphene in Focus: How Nanotechnology is Transforming Electronics?

ELE Times - Thu, 03/26/2026 - 09:53

As miniaturisation and increasingly complex design architectures continue to define modern technology, nanotechnology is emerging as a frontier discipline shaping the trajectory of innovation—from medical and electronic devices to energy infrastructure and beyond. Simply stated, as the focus of electronics development shifts towards the engineering and application of materials at the atomic and molecular scale—typically between 1 and 100 nanometres—certain physical, chemical, and electrical limitations begin to surface. When conventional materials such as silicon and copper are miniaturised to the nanoscale, they often encounter issues such as increased resistance, heat generation, and reduced performance. To address these limitations, Graphene, an sp²-hybridized two-dimensional honeycomb lattice, has emerged as one of the most promising materials for next-generation electronic systems.

Amid rapid advances across the nanotechnology landscape, graphene is increasingly regarded as a flagship material in nanoscale engineering, attracting significant attention, particularly in electronics.

While Graphene continues to attract significant research interest due to its exceptional properties, the transition from laboratory-scale breakthroughs to commercially viable semiconductor technologies remains a complex challenge. Industry players such as Weebit Nano emphasise that beyond material performance, factors such as manufacturability, process compatibility, and scalability are equally critical. This creates a dynamic balance in nanoelectronics—between exploring high-potential emerging materials and developing solutions that can be seamlessly integrated into existing semiconductor fabrication ecosystems.

Owing to its exceptional electrical conductivity and extremely high electron mobility, graphene is being explored for a wide range of electronic components, including high-speed transistors, flexible circuits, and highly sensitive biosensors. The material is both electrically and thermally efficient, enabling electronic devices to operate with improved performance while generating less heat. These properties have positioned graphene as a promising complement—and in some cases a potential alternative—to conventional materials such as silicon in applications including touchscreens, sensors, and next-generation electronic interfaces.

When nanotechnology converges with electronics, the field is commonly referred to as Nanoelectronics. Nanoelectronic systems require extremely high switching speeds and efficient charge transport while minimizing heat buildup in densely packed circuits. In this context, Graphene offers exceptional carrier mobility—reaching approximately 100,000 cm²/V·s under ideal conditions—making it an attractive material for high-frequency electronic applications. Additionally, as electronic components become increasingly dense in nanoelectronic architectures, thermal management becomes a critical challenge. Graphene’s remarkably high thermal conductivity enables efficient heat dissipation, thereby helping maintain the reliability and performance of nanoscale electronic systems.
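To get a feel for what that mobility figure means in practice, here is a back-of-the-envelope transit-time comparison (the 1 µm channel length and the applied field are assumed values for illustration; real graphene devices on substrates typically show much lower mobility than the ideal figure):

```python
def drift_velocity(mobility_cm2: float, field_v_per_cm: float) -> float:
    """Low-field drift velocity v = mu * E, in cm/s."""
    return mobility_cm2 * field_v_per_cm

def transit_time_ps(channel_um: float, mobility_cm2: float,
                    field_v_per_cm: float) -> float:
    """Carrier transit time across the channel, in picoseconds."""
    v_cm_s = drift_velocity(mobility_cm2, field_v_per_cm)
    length_cm = channel_um * 1e-4
    return length_cm / v_cm_s * 1e12

# Ideal graphene (1e5 cm^2/V·s) vs. bulk-silicon electrons (~1.4e3 cm^2/V·s),
# across a 1 um channel at a modest 100 V/cm field:
print(transit_time_ps(1.0, 1e5, 100))    # graphene: 10 ps
print(transit_time_ps(1.0, 1.4e3, 100))  # silicon: ~714 ps
```

The two-orders-of-magnitude gap in transit time is what makes graphene attractive for the high-frequency applications mentioned above.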

Let’s look into some applications of Graphene in nanoelectronics: 

Graphene Field-Effect Transistors (GFETs)

Graphene Field Effect Transistors are advanced, ultra-sensitive electronic components comprising a channel made of a single-atom-thick layer of graphene, enabling modulation of current by an electric field.   

Structure: A GFET typically consists of three terminals: a source, a drain, and a gate (top or back).

  • Channel: The region between the source and the drain forms the channel, where a 2D sheet of graphene is placed.
  • Gate Control: The gate voltage modifies the electric field, changing the charge carrier density in the graphene channel.

How does it work? 

It operates by controlling the flow of electrical current through the Graphene channel. When a voltage is applied between the source and drain, charge carriers in the graphene layer begin to move, creating a current. The gate electrode, separated from the graphene by an insulating dielectric layer, is used to control this current. By applying a positive or negative voltage to the gate, an electric field is generated that changes the concentration of electrons or holes in the graphene channel. 

A positive gate voltage increases electron concentration, while a negative gate voltage increases hole concentration, thereby modulating the conductivity of the channel and controlling the amount of current that flows between the source and drain. Because graphene has extremely high carrier mobility, electrons can move through the channel very quickly, allowing GFETs to operate at very high frequencies, which makes them particularly promising for radio-frequency and high-speed electronic applications.
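The gate-voltage relationship above can be sketched with a simple parallel-plate capacitor model. This is a minimal sketch: the oxide capacitance, mobility, and residual carrier density below are illustrative assumptions, not measured device data.

```python
Q_E = 1.602e-19  # elementary charge, C

def channel_conductivity(v_gate: float, v_dirac: float = 0.0,
                         c_ox: float = 1.15e-8, mobility: float = 1e4,
                         n_residual: float = 5e10) -> float:
    """Simplified GFET sheet conductivity, sigma = n*q*mu (S/sq).

    c_ox:       gate capacitance per area (F/cm^2), roughly a 300 nm SiO2 back gate
    mobility:   carrier mobility (cm^2/V·s)
    n_residual: residual carrier density at the Dirac point (1/cm^2)
    """
    # Gate-induced carrier density: electrons for V > V_Dirac, holes for V < V_Dirac
    n_gated = c_ox * abs(v_gate - v_dirac) / Q_E
    n_total = n_gated + n_residual
    return n_total * Q_E * mobility

# Conductivity rises on either side of the Dirac point (ambipolar behavior):
sigma_on  = channel_conductivity(10.0)   # positive gate: electron conduction
sigma_min = channel_conductivity(0.0)    # Dirac point: conductivity minimum
print(sigma_on / sigma_min)              # toy on/off ratio of this model
```

The model also shows why GFETs make poor digital switches: graphene has no bandgap, so the conductivity at the Dirac point never drops to zero, capping the on/off ratio.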

Applications 

GFETs are used in various fields due to their high performance: 

  1. Biosensors & Chemical Sensors: For detecting DNA, proteins, and gases at low concentrations.
  2. Flexible Electronics: For wearable sensors and devices.
  3. Radio Frequency (RF) Electronics: Due to high-speed charge transport. 

Nano-Electro-Mechanical Systems (NEMS)

Nano-Electro-Mechanical Systems are highly miniaturized devices that integrate electrical and mechanical functionality at the nanoscale, enabling the development of devices that are smaller, more sensitive, and more efficient than traditional silicon-based ones.

Structure: The structure of a graphene-based Nanoelectromechanical System (graphene NEMS) generally consists of a few key components integrated on a microfabricated substrate. It includes:

  • Silicon Base: At the base is a silicon or silicon-oxide substrate in which a small cavity or trench is created.
  • Electrodes: Metal source and drain electrodes are patterned on the surface to provide electrical contacts. A thin insulating layer may also be included to isolate different parts of the device. 
  • Graphene Sheet: The central element is a suspended sheet of Graphene, which spans the cavity and connects the electrodes, forming a bridge-like membrane. 
  • Gate Electrode: In some designs, a gate electrode is positioned beneath the graphene, separated by a dielectric layer.

How does it work? 

A Nanoelectromechanical System (NEMS) functions by converting mechanical motion at the nanoscale into electrical signals, or vice versa. These devices integrate mechanical structures—such as beams, membranes, or resonators—with electronic components on a very small scale.

When voltage is applied between electrodes (such as source, drain, or gate), electrostatic forces drive the mechanical motion of the nanoscale structure, and with this, the mechanical component begins to deflect, vibrate, or resonate. This mechanical movement changes certain electrical properties of the system—such as resistance, capacitance, or current flow—which can then be detected and measured by the electronic circuitry. As a result, NEMS devices operate as ultra-sensitive sensors, resonators, or switches, capable of detecting extremely small physical changes at the nanoscale.
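For the resonator case, the textbook harmonic-oscillator relation gives a feel for why graphene NEMS make such sensitive mass detectors (the stiffness and membrane mass below are assumed round numbers, not figures from a real device):

```python
import math

def resonant_freq_hz(stiffness_n_per_m: float, mass_kg: float) -> float:
    """f = (1 / 2*pi) * sqrt(k / m) for a simple harmonic resonator."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

# A suspended graphene membrane is extraordinarily light, so even a
# tiny adsorbed mass produces a measurable downward frequency shift:
k = 1.0    # N/m, assumed effective stiffness
m = 1e-18  # kg, attogram-scale membrane (illustrative)

f0 = resonant_freq_hz(k, m)            # baseline resonance, ~160 MHz here
f1 = resonant_freq_hz(k, m + 1e-21)    # after adsorbing a zeptogram of mass
print(f0, f0 - f1)                     # baseline and the shift to detect
```

Because the shift scales with the fractional mass change, a lighter resonator (smaller m) detects a smaller added mass for the same frequency resolution.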

Applications: 

NEMS are used in various applications, including: 

  • Ultra-Sensitive Sensors: NEMS devices, such as AFM tips, detect forces, vibrations, and chemical signals at the atomic level. They are used as highly sensitive accelerometers for inertial navigation and motion detection.
  • Bio-nanotechnology & Medical: NEMS enables lab-on-a-chip devices for diagnostics, biomolecule detection, and precise, targeted drug delivery systems. 
  • Nano-switches and Relays: NEMS switches serve as mechanical, low-power alternatives to traditional semiconductor logic switches, offering near-zero leakage current.

Conclusion

As the electronics industry continues to push the boundaries of miniaturisation and performance, materials engineered at the nanoscale will play an increasingly central role in shaping the next phase of technological evolution. Among these, Graphene stands out for its exceptional electrical, thermal, and mechanical properties, offering solutions to several limitations faced by conventional semiconductor materials.

However, the path from material innovation to large-scale deployment remains complex. While graphene continues to demonstrate immense potential in nanoelectronic applications—from high-frequency transistors to ultra-sensitive nanoscale systems—its integration into mainstream semiconductor manufacturing is still an evolving challenge. In contrast, industry players such as Weebit Nano are focusing on developing technologies that align closely with existing fabrication ecosystems, underscoring the importance of manufacturability alongside performance.

As nanotechnology matures, the future of electronics will likely be shaped by a careful balance between breakthrough materials and practical implementation—where innovation is not only defined by what is possible at the nanoscale, but also by what can be reliably produced at scale.

The post Graphene in Focus: How Nanotechnology is Transforming Electronics? appeared first on ELE Times.

The 6G clock ticking: Why silicon architecture for 2030 must start in 2026

EDN Network - Thu, 03/26/2026 - 09:12

The 6G transition is no longer a distant theoretical exercise; it’s a commercial inevitability driven by the fundamental requirement for cellular standards to keep moving forward. 5G penetration has already surpassed 75% and is on a trajectory to reach 95% within a few years. Call quality and data throughput continue to improve despite an explosion in mobile traffic.

However, the wireless ecosystem projects that even this capacity will soon overload due to accelerating AI content, the integration of satellite communications (SATCOM) into the cellular fold, and the rise of physical AI. 6G is the industry’s response to keep pace with that exponential growth in data communication demand.

The 2030 countdown: Why 2026 is the crucial starting line

To understand the urgency, one must look at the decadal cycle of cellular evolution. History shows it takes about five years to finalize a standard and fold its requirements into a functional ecosystem. While 6G is anticipated to take off commercially by 2030, the work-back schedule reveals a tight timeline for product builders. By 2029, hardware must be ready for compliance testing, meaning component technologies must be finalized by 2028.

Consequently, underlying embedded systems must be built in 2027, necessitating that architectural definitions start as early as 2026. As an example of what is going on in the industry, Qualcomm’s CEO recently hinted at the Snapdragon Summit that 6G-capable devices could appear as early as 2028 for trials, making the 2028 Olympics a perfect arena for tech demos.

Unlocking the “Golden Band”: FR3 and the business of spectrum

Beyond architectural shifts, 6G introduces the Frequency Range 3 (FR3) spectrum, spanning 7.125 GHz to 24.25 GHz. Often called the “Golden Band for 6G,” FR3 offers the perfect balance between the wide coverage of lower bands and the massive capacity of mmWave.

This spectrum is expected to be a major business driver, enabling the 10x higher data rates targets (up to 200 Gbps) and supporting “massive MIMO evolution” to handle the projected 4x traffic growth by 2030 (going over 5.4 zettabytes as indicated by the GSMA Intelligence report).
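A quick Shannon-capacity sketch suggests why wide FR3 channels and massive MIMO are both needed to approach those data-rate targets (the bandwidth, SNR, and stream counts here are illustrative assumptions, not 3GPP figures):

```python
import math

def shannon_capacity_gbps(bandwidth_hz: float, snr_db: float,
                          n_streams: int = 1) -> float:
    """Aggregate capacity C = n * B * log2(1 + SNR), in Gbit/s."""
    snr_linear = 10 ** (snr_db / 10)
    return n_streams * bandwidth_hz * math.log2(1 + snr_linear) / 1e9

# A single 1 GHz FR3 channel at a healthy 30 dB SNR:
print(round(shannon_capacity_gbps(1e9, 30), 1))                # ~10 Gbit/s
# 16 spatial streams over the same channel approach the 200 Gbit/s target:
print(round(shannon_capacity_gbps(1e9, 30, n_streams=16), 1))  # ~159.5 Gbit/s
```

A single stream falls an order of magnitude short of 200 Gbps, which is why the massive-MIMO evolution is inseparable from the FR3 spectrum story.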

Sustainable networks

Sustainability is a core pillar of 6G, with network operators seeking to reduce OpEx, 25% of which is driven by power demand. 6G moves from an “always-on” to a “smart-on” philosophy, aiming for a 30-50% increase in power efficiency. Key techniques include:

  • Enhanced deep sleep modes: Enabling base stations to achieve near-zero power consumption when no active users are present, and reducing periodic signaling (the current 5G standard mandates frequent periodic signaling that, in practice, keeps many RF and power amplifier components active at all times).
  • AI-driven beamforming: Using AI to direct signals precisely to users, reducing energy waste from broad, inefficient broadcasting.
  • AI-driven resource management: Using AI at the higher protocol layers for effective radio resources management.

The AI-native revolution: Moving intelligence to the air interface

One of the most significant shifts in 6G is the move toward an AI-native air interface. Unlike 5G’s rigid mathematical models, 6G uses deep learning to dynamically adapt signal processing blocks. This enables “adaptive waveforms” that adjust modulation in real-time to environmental conditions.

It also facilitates integrated sensing and communication (ISAC), where RF reflections provide precise spatial awareness, allowing the network to proactively adjust beamforming based on user movement.

The coordination challenge: Managing two-sided AI

This transition introduces a complex challenge in how the transmitter (base station) and receiver (device) coordinate their intelligence. Unlike traditional algorithms, AI components must be synchronized through AI lifecycle management (LCM). The industry is weighing one-sided models (device-only optimization) against two-sided architectures (essential for tasks like CSI compression).

In two-sided designs, the device acts as a neural encoder and the base station as a decoder; the two must be coordinated to some extent. The level of coordination is still under study, with several candidate schemes: fully matched neural-network pairs, or networks that are independent at the architecture level but trained on the same dataset.

This raises critical questions at the protocol level: should the network use model ID-based selection (activating pre-loaded models) or model transfer (pushing new neural weights over the air)?

Programmable intelligence: Why DSPs are the preferred path

Because 3GPP specifications remain fluid, the need for flexibility through programmability has never been higher. Developing 6G on hard-wired logic is risky, as spec changes could render silicon obsolete. This is why digital signal processors (DSPs) are the preferred architecture. Modern DSPs are uniquely suited for the AI-native physical layer; they possess the massive number of MACs required for matrix operations and are highly efficient at the vector processing necessary for neural networks.

Leading technology vendors also offer dedicated AI ISA for accelerated NN activation functions. A fully programmable modem powered by AI-native DSP offers a “safe bet,” allowing developers to adapt as 6G settles while maintaining the performance needed to lead the market.

Elad Baram is director of product marketing for the Mobile Broadband Business Unit at Ceva.

Related Content

The post The 6G clock ticking: Why silicon architecture for 2030 must start in 2026 appeared first on EDN.

FormFactor and Rohde & Schwarz Advance their Partnership for on-wafer RF Component Characterisation

ELE Times - Thu, 03/26/2026 - 09:11

FormFactor and Rohde & Schwarz have announced a strategic co-marketing partnership as part of FormFactor’s MeasureOne partner program, a solution-integration initiative designed to deliver validated, turnkey on-wafer test systems. The collaboration combines advanced probing technology from FormFactor with industry-leading RF test instrumentation from Rohde & Schwarz, providing manufacturers with comprehensive solutions spanning early design verification through production. With RF device complexity and operating frequencies continuing to increase, this expanded collaboration formalises a tightly integrated on‑wafer test solution designed to lower integration effort and risk, reduce overall cost, and accelerate time-to-market for customers across development and production.

Reduced costs and faster time-to-market

On-wafer device characterisation of RF components such as 5G frontends or filters enables design validation during development, as well as product qualification and verification in production. Identifying faulty devices before packaging can significantly help reduce costs and improve yield. Through their integrated solutions, Rohde & Schwarz and FormFactor help manufacturers detect issues early in the process, which can result in faster time-to-market.

Seamlessly integrated test solutions

Rohde & Schwarz and FormFactor have been working together for several years to deliver powerful, seamlessly integrated solutions. Rohde & Schwarz provides instruments like the R&S ZNA, a versatile high-end VNA capable of measuring all key RF parameters, which can easily be combined with frequency converters extending frequencies up to the THz range. FormFactor complements this with a comprehensive portfolio of manual, semi-automated, and fully automated probe systems, including advanced thermal control, high-frequency probes, precision probe positioners, and robust calibration tools.

This combined approach allows manufacturers to validate product performance directly during wafer runs, leveraging the expertise of both companies. The tight integration of hardware and software components from both companies is designed to enable fast and reliable testing. The complete solution includes advanced instruments, reliable wafer and die fixturing, and high-precision probe positioning throughout the entire test cycle, strengthening confidence in product quality and performance.

Jens Klattenhoff, SVP and GM of the Systems Business Unit at FormFactor, said: “By expanding our collaboration with Rohde & Schwarz through the MeasureOne program, we are delivering integrated on‑wafer RF test solutions designed to help customers reduce risk, improve efficiency, and accelerate development. This partnership brings together advanced wafer probing and proven RF measurement technologies to address the growing complexity of next‑generation semiconductor devices.”

Michael Fischlein, Vice President, Spectrum & Network Analysers, EMC and Antenna Test at Rohde & Schwarz, stated: “We are delighted to be part of MeasureOne, a strategic Co-Marketing partner program that unites FormFactor – one of the world’s leading probe station providers – with Rohde & Schwarz, a global leader in test and measurement. Together, we are set up to deliver turnkey on-wafer solutions enabling crucial and demanding test capabilities for next-generation semiconductors.”

The MeasureOne partnership encompasses a wide range of Rohde & Schwarz instruments, including the R&S ZNA, R&S ZNB, R&S ZNBT, R&S ZVA and R&S ZNL VNA families, alongside signal and spectrum analysers such as the FSW, FSWX, R&S FSV3000 and R&S FSVA3000. Integration also extends to signal generators (R&S SMA100B, R&S SMB100B, R&S SGS100A, R&S SGU100A) and selected frontends and converters for advanced calibration workflows, all working seamlessly with FormFactor’s traditional plus speciality probe stations for cryogenic and vacuum applications.

The post FormFactor and Rohde & Schwarz Advance their Partnership for on-wafer RF Component Characterisation appeared first on ELE Times.
