EDN Network

Subscribe to EDN Network feed
Voice of the Engineer
Address: https://www.edn.com/
Updated: 2 hours 40 min ago

Omnivision expands automotive image sensor portfolio

Fri, 10/17/2025 - 23:19

Omnivision expands its automotive portfolio with two new image sensors. The OX05C global shutter (GS) high dynamic range (HDR) sensor is a new addition to the company’s Nyxel near-infrared (NIR) family for in-cabin monitoring cameras, and the OX08D20 image sensor targets advanced driver-assistance systems (ADAS) and autonomous driving (AD) applications.

The OX05C represents the automotive industry’s first and only 5-megapixel (MP) back-side illuminated (BSI) GS HDR sensor for driver and occupant monitoring systems, according to Omnivision. It delivers extremely clear images of the entire cabin, enabling improved algorithm accuracy even in high-brightness conditions.

OX05C GS HDR image sensor (Source: Omnivision)

The 2.2-µm OX05C features Omnivision’s Nyxel NIR technology, claiming world-class quantum efficiency (QE) at the 940-nm NIR wavelength, improving driver and occupant monitoring systems capabilities in low-light conditions. The on-chip RGB-IR separation eliminates the need for a dedicated image signal processor and backend processing.

The GS HDR OX05C also avoids interference from other IR light sources in the cabin, compared to rolling-shutter HDR sensors, Omnivision said, improving the RGB image quality and enabling more capture scheme and functions in real applications.

Measuring 6.61 × 5.34 mm, the OX05C1S package is 30% smaller than its predecessor, the OX05B (7.94 × 6.34 mm), allowing greater design flexibility when placing cameras in the automotive cabin. OEMs can also keep the same camera lens when upgrading from the OX05B to the newer OX05C, a design and cost advantage.

In addition, the integrated cybersecurity and the support of simultaneous driver and occupant monitoring with a single camera reduces complexity, cost, and space, Omnivision said.

The sensor comes in Omnivision’s stacked a-CSP package and a reconstructed wafer option for designers that need to customize their own package. The OX05C sensor is available in both color filter array RGB-IR and mono designs. Samples of the OX05C are currently available. Mass production starts in 2026.

In addition to the OX05C, Omnivision introduced the 8-MP OX08D20 automotive image sensor with TheiaCel technology for exterior automotive cameras. It delivers improvements in low-light ADAS and AD performance and is an upgrade to the OX08D10 sensor for exterior cameras.

OX08D20 automotive image sensor (Source: Omnivision)

The OX08D20 features the same benefits as the OX08D10, plus an innovative capture scheme developed in collaboration with Mobileye that reduces the motion blur of nearby objects while driving and improves low-light performance. It also upgrades to 60 frames per second to enable dual-use cameras and includes updated cybersecurity to match the MIPI CSE 2.0 standard.

The image sensor features low power consumption and is housed in an a-CSP package that is 50% smaller than other exterior sensors in its class. The OX08D20 will be sampling in November 2025 and will enter mass production in the fourth quarter of 2026. 

The post Omnivision expands automotive image sensor portfolio appeared first on EDN.

Illuminated tactile switches withstand reflow soldering

Fri, 10/17/2025 - 22:57

Littelfuse Inc. extends its K5V Series of illuminated tactile switches with new K5V4 models, including gull-wing and 2.1-mm pin-in-paste (PIP) versions compatible with reflow soldering. These switches target a range of applications, such as data centers, network infrastructure, industrial equipment, and pro audio/video systems.

K5V Series illuminated tactile switches (Source: Littelfuse Inc.)

The K5V4 is the first long-travel, single-pole/double-throw (SPDT) illuminated tactile switch in a reflow-capable SMT package, Littelfuse said, filling a critical gap in the market. The new models enable direct SMT assembly for the first time, reduce production costs, support higher throughput, and improve end-product quality, while maintaining durability and tactile performance, the company added.

The K5V4 switches are reflow soldering-compatible thanks to the use of a high-temperature polyarylate (PAR) material with a 250°C thermal deformation threshold, eliminating the need for silicone sleeves or special handling. They are suited for manufacturers transitioning from wave to reflow soldering processes.

Other features include SPDT contact configuration with normally-open and normally-closed options, a sharp tactile response with audible click and 4N operating force, and integrated high-brightness LEDs in a variety of colors and bi-color options.

For greater reliability, these switches provide a compact, dust-resistant design for reliable operation on dense boards, and gold-plated dome contacts for long-term contact performance. They are available in SMT (gull-wing) and THT (PIP) versions for design flexibility.

The K5V tactile switches are currently available in tape and reel format, with quantities ranging from 1,000 to 2,000 units. Samples can be requested through authorized Littelfuse distributors.

The post Illuminated tactile switches withstand reflow soldering appeared first on EDN.

No more missed steps: Unlocking precision with closed-loop stepper control

Fri, 10/17/2025 - 20:26

Bipolar stepper motors provide precise position control while operating in an open loop. Industrial automation applications—such as robots and processing and packaging machinery—and consumer products—such as 3D printers and office equipment—take advantage of the stepper’s inherent position retention. This eliminates the need for added sensors, extra processing power, and complex control algorithms.

However, driving a stepper motor open loop requires the motion profile to be error-free. Any glitch in which the stepper’s load abruptly changes results in step loss, which desynchronizes the stepper’s actual position from the application’s perceived position. In most cases, this loss of position tracking is problematic. For example, in a label printer, step loss could cause the print to be misaligned with the label.

This article describes a simple implementation that gives a stepper motor the ability to sense its position and actively correct any error that might accrue during actuation.

 

Design assumptions

For this article, we will assume that a bipolar stepper motor with 200 steps per revolution is employed to drive a mechanism that is responsible for opening and closing some sort of flap or valve while servicing a production line. To make motion smooth, we will utilize a bipolar stepper driver with 8× microstepping (eight microsteps per full step), resulting in 1,600 step commands per full rotor revolution.

In order to fully open or close said mechanism, we will need multiple rotor turns; for simplicity, assume we need 10 full turns. In this case, the controller would need to send 16,000 step commands in each direction to successfully actuate the mechanism.

When the current is high enough to overcome any torque variation, the stepper moves accordingly and can fully open and close the control surface. In this scenario, the position is preserved. If steps are lost, however, the controller loses synchronization with the motor, and the actuation becomes compromised.

Newer technologies attempt to provide checks, such as stall detection, by measuring the motor winding’s back electromotive force (BEMF) when the applied revolving magnetic field crosses the zero-current magnitude. Stall detection only tells the application whether the motor is moving; it fails to report how many steps have been effectively lost. In cases like this, it’s worthwhile to explore closing the loop on the rotor position using sensing technology.

Sensor selection

In some cases, using simple limit switches—like magnetic, optical, or mechanical—might suffice to drive the stepper motor until the limits are met. However, there are plenty of cases where the available space does not allow the use of such switches. If a switch cannot be used, it might make sense to populate an optical shaft encoder (relative or absolute) at the motor’s back side shaft, but there is a high cost associated with these solutions.

An affordable solution for this dilemma is a contactless angular position sensor. This type of sensor pairs readily available magnetics with precise and accurate semiconductors that employ Hall sensors, which extract the rotor’s position with as much as 15 bits of resolution. That means each rotor revolution can be encoded into as many as 2^15 = 32,768 units, or about 0.01 degrees per unit (360/32,768).

For this example, an 11.5-bit resolution was selected, as that will be sufficient to encode the 1,600 microsteps. By using 11.5 bits of resolution, we can obtain 2,896.31 effective angle segments. A Hall-effect based contactless sensor such as the MA732 provides absolute position encoding with 11.5 bits of resolution.

When coupled to a diametrically magnetized round magnet, the sensor is periodically sampled through its serial peripheral interface (SPI) port at 1-ms intervals (Figure 1). When a read command is issued, the sensor responds with a 16-bit word. The application uses the 16 bits worth of information, although the system’s accuracy is driven by the effective 11.5-bit resolution.

Figure 1 The Hall-effect sensor is connected to the MCU through the SPI ports. Source: Monolithic Power Systems
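
To make the sampling flow concrete, here is a minimal firmware sketch of the 1-ms read described above. The HAL function spi_transfer16() and the ISR name are hypothetical placeholders, and the exact SPI frame format should be taken from the MA732 datasheet.

```c
#include <stdint.h>

/* Hypothetical HAL call: full-duplex 16-bit SPI transfer with the MA732
   chip select asserted. Clocking out a zero frame returns the current
   absolute angle in the 16-bit response (effective resolution: 11.5 bits). */
extern uint16_t spi_transfer16(uint16_t tx);

volatile uint16_t ma732_angle;  /* latest 16-bit angle sample */

/* Periodic 1-ms timer interrupt: sample the sensor at a fixed rate. */
void sample_timer_1ms_isr(void)
{
    ma732_angle = spi_transfer16(0x0000);
}
```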

Power stage selection

Driving bipolar steppers requires two full H-bridges. The two main implementations to drive bipolar stepper motors are using a dual H-bridge power stage with a microcontroller unit (MCU) to generate sine/cosine wave pairs or using a fully integrated step indexer engine with microstepping support. Using an MCU and dual H-bridge combination provides more flexibility in terms of how to regulate the sine wave currents, but it also increases complexity.

For this article, a fully integrated step indexer with up to 16× microstepping was selected (Figure 2). The integrated step indexer in this article is the MP6602, which provides up to 4 A of current drive and is capable of driving NEMA 17 and NEMA 23 bipolar stepper motors. Meanwhile, the MCU drives all control signals, communicates with the indexer through the SPI port, and samples the fault information.

Figure 2 The step indexer is connected to an MCU to drive the bipolar stepper motor. Source: Monolithic Power Systems

Final implementation

For a closed-loop stepper implementation, the sensor and power stage can be controlled by an off-the-shelf Arm Cortex-M4F MCU. The MCU communicates with both devices through a single SPI port with two chip selects. An internal timer generates the steps. The board measures 1.35 × 1.35 inches and is small enough to fit behind a NEMA 17 stepper motor (Figure 3). This also allows the reference design to be used with larger motor frame sizes such as the NEMA 23.

Figure 3 The PCB’s bottom side has the MA732 angle sensor. Source: Monolithic Power Systems

Figure 4 shows the motor assembly, in which Figure 4a (above) shows the motor assembly with a diametrically magnetized round magnet facing the MA732 sensor, and Figure 4b (below) shows the final solution.

Figure 4 The motor assembly: (a) the diametrically magnetized round magnet facing the MA732 sensor; (b) the final solution. Source: Monolithic Power Systems

Absolute position and sensor overflow

Although the contactless magnetic sensor is an absolute position encoder, this is only true on a per-revolution basis. That is, as the rotor travels through each revolution, the sensor provides a 16-bit number that the MCU reads, which allows the firmware to learn the rotor’s absolute position at any given time.

As the motor revolves, however, each new revolution is indistinguishable from the previous one. To track multiple turns, the firmware accumulates the angular position readings into a much larger number—a variable that sums all the angle readings to express the entire position as an absolute value (called Rotor_Angle_Absolute). This variable is a 32-bit signed integer.

If the motor moves forward, the variable is incremented, and vice versa. Assuming 16-bit readings, 1,600 microsteps per revolution, and a 1,000-rpm step rate, it would take 22.37 hours for the variable to overflow. The MCU must ensure that the sensor readings are added correctly, even as the sensor reading passes through its overflow region. This absolute position correction must be executed whether the motor is rotating clockwise or counterclockwise; in other words, whether the sensor position is incrementing or decrementing.

Figure 5 shows how the angle position changes over time.

Figure 5 The angle position changes over time as the motor revolves. Source: Monolithic Power Systems

Figure 5 shows that the angular displacement (MA732_Angle_Delta, denoted as AD in the figure) is computed at periodic 1-ms intervals. During each sample, the previous reading is stored in MA732_Angle_Prev (denoted as Prev Angle in the figure), and the new sample is stored in MA732_Angle_New (denoted as New Angle in the figure). MA732_Angle_Delta can be calculated with Equation 1:

The result of Equation 1 is added to MA732_Angle_Absolute. If the rotor moves clockwise (forward), the displacement is positive; if the rotor moves counterclockwise (reverse), the displacement is negative.

A special consideration must be made during angle sensor overflows. If the sensor moves forward past the maximum of 0xFFFF (denoted as OvF+AD in Figure 5), or if the sensor decrements its position past 0x0000 (denoted as OvF-AD in Figure 5), the previous equation can no longer be used. In both scenarios, the firmware logic chooses one of the following equations, depending on which case is being serviced.

If the angle displacement overflows when counting up and exceeds the maximum (OvF+AD), then MA732_Angle_Delta can be calculated with Equation 2:

If the angle displacement overflows when counting down and falls below the minimum (OvF-AD), then MA732_Angle_Delta can be calculated with Equation 3:
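
For reference, a compact firmware sketch of this logic follows. Equations 1 through 3 (rendered as images in the original article) reduce to a single signed 16-bit subtraction on two's-complement machines, which covers the normal case and both overflow cases, provided the rotor moves less than half a revolution between 1-ms samples; the variable names follow the article, and the signed-cast behavior is an assumption to verify on your toolchain.

```c
#include <stdint.h>

static uint16_t ma732_angle_prev;      /* MA732_Angle_Prev */
static int32_t  rotor_angle_absolute;  /* Rotor_Angle_Absolute, 32-bit signed */

/* Called every 1 ms with the fresh 16-bit sensor reading (MA732_Angle_New). */
void update_absolute_position(uint16_t ma732_angle_new)
{
    /* Two's-complement wraparound of the 16-bit difference returns the
       shortest signed displacement, covering Equation 1 (no overflow),
       Equation 2 (wrap past 0xFFFF), and Equation 3 (wrap past 0x0000). */
    int16_t delta = (int16_t)(uint16_t)(ma732_angle_new - ma732_angle_prev);

    rotor_angle_absolute += delta;     /* accumulate across revolutions */
    ma732_angle_prev = ma732_angle_new;
}
```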

Stepper motor: New frontiers

Using an off-the-shelf MCU, we can interface the stepper motor driver and the Hall-effect position sensor via an SPI port. The firmware can then continuously interrogate the position sensor and track the motor’s rotor position at all times. By comparing this position to a commanded position, the motor can be commutated to reach the commanded position in a timely fashion.

If an external force causes the motor to lose steps, the sensor information tracks how many steps were lost, which then allows the MCU to close the loop on position and successfully bring the stepper motor to the commanded position.

Although stepper motors are mostly used in open-loop applications, there are plenty of advantages in closing the loop on position. By employing cost-effective Hall-sensing technology and an easy-to-use indexer-based stepper driver, designers can add servo-like properties to their stepper-based applications.

Jose Quinones is a senior application engineer at Monolithic Power Systems (MPS).

Related Content

The post No more missed steps: Unlocking precision with closed-loop stepper control appeared first on EDN.

Program sequence monitoring using watchdog timers

Fri, 10/17/2025 - 17:01
WDT in safety standards

With the prevalence of microcontrollers (MCUs) as processing units in safety-related systems (SRS) comes the need for diagnostic measures that will ensure safe operation. IEC 61508-2 specifies self-test supported by hardware (one channel) as one of the recommended diagnostic techniques for processing units. This measure uses special hardware that increases speed and extends the scope of the failure detection, for instance, a watchdog timer (WDT) IC that cyclically monitors the output of a certain bit pattern from the MCU.

The basic functional safety (FS) standard IEC 61508-2 Annex A Table A.10 recommends several diagnostic techniques and measures to control hardware failures in the program sequences of digital devices. Such techniques include a watchdog with a separate time base with or without a time window, as well as a combination of temporal and logical monitoring of program sequences. While each of these has corresponding maximum claimable diagnostic coverage, all these techniques employ WDTs.

This article will show how to implement these diagnostic functions using WDTs. Furthermore, the article will provide insights into the differences of program sequence monitoring diagnostic measures in terms of operation and diagnostic coverage when implemented with ADI’s high-performance supervisory circuits with watchdog function.

Low diagnostic coverage

Part 2 of IEC 61508 describes simple watchdogs as external timing elements with a separate time base. Such devices allow the detection of program sequence failures in computing devices, such as MCUs, within a specified interval. This is done by having a mechanism that allows either:

  1. The MCU to issue a signal to reset the watchdog before the timeout is reached
  2. The watchdog to issue a reset signal to the MCU once the timeout period is reached

Step #1 occurs when the program sequence is running smoothly, while step #2 happens when it is not.

Figure 1a shows an example of the watchdog implementation with a separate time base but without a time window through the MAX6814. Notably, MCUs usually have an internal WDT, but it cannot be solely relied on to detect a fault if it is part of the defective MCU, which will be an issue considering common cause failures (CCF).

To address such CCF concerns, a separate WDT is used to ensure the MCU is placed in reset [1, 2]. Through a flowchart, Figure 1b illustrates the behavior of the WDT as embedded in the MCU’s program execution. Before the flow starts, it’s important to set the watchdog timeout period or the WDT’s maximum reset interval. When such a period or interval is defined, the WDT will run upon execution of the program. The MCU must be able to send a signal to the MAX6814’s WDI pin before it reaches timeout, as the device will issue a reset signal to the MCU if the timeout period is reached. When the MCU resets, the system will be placed into a safe state.

Figure 1 Simple watchdog operation showing (a) an example of the watchdog implementation with a separate time base but without a time window and (b) the behavior of the WDT as embedded in the MCU’s program execution. Source: Analog Devices
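
A minimal sketch of the main-loop structure in Figure 1b is shown below; wdi_pulse() and run_subroutines() are hypothetical placeholders for the GPIO toggle wired to the MAX6814's WDI pin and for the application's program sequence, respectively.

```c
/* Hypothetical hardware hooks. */
extern void wdi_pulse(void);        /* toggle the GPIO wired to MAX6814 WDI */
extern void run_subroutines(void);  /* the monitored program sequence */

int main(void)
{
    for (;;) {
        /* All subroutines must complete within the watchdog timeout;
           if they hang, no kick arrives, the MAX6814 resets the MCU,
           and the system is placed into its safe state. */
        run_subroutines();
        wdi_pulse();
    }
}
```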

Such a WDT’s timeout period will capture program sequence issues—for example, a program sequence gets stuck in a loop, an interrupt service routine does not return in time, or only 5 of the 10 subroutines meant to run on every loop of the software are executed.

However, the WDT’s timeout period will not cover other program sequence issues—whether execution of the program took longer or shorter than expected, or whether the program sections were executed in the correct sequence. This can be solved by the next type of WDT.

Medium diagnostic coverage

Since the separate time window allows for the detection of both excessive delays and premature execution, windowed WDTs prohibit the MCU from responding later or earlier than the WDT’s open window. This is also referred to as a valid window specification. Compared to simple watchdogs, this guarantees that all subroutines are executed by the program in a timely manner; otherwise, the WDT asserts the MCU into reset [3].

Figure 2 shows an example implementation of program sequence monitoring using the MAX6753. It comes with a windowed watchdog with external-capacitor-configurable watchdog periods.

Figure 2 Sample implementation of a windowed watchdog operation with external-capacitor-configurable watchdog periods.

Figure 3, on the other hand, shows another implementation using the MAX42500, whose watchdog time settings can be configured through I2C—effectively reducing the number of external components. The I2C interface also makes it possible to increase fault coverage through a packet error checking (PEC) byte, as shown in Figure 4. The PEC byte increases diagnostic coverage against I2C communication-related failures such as bus errors, stuck-bus conditions, timing problems, and improper configuration.

Figure 3 Another implementation: windowed watchdog through I2C, reducing the number of external components compared to Figure 2. Source: Analog Devices

Figure 4 PEC byte coverage to I2C interface failures, such as bus errors, stuck-bus conditions, timing problems, and improper configuration. Source: Analog Devices
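
For illustration, the following sketch computes a PEC byte in the bitwise fashion used by SMBus, i.e., CRC-8 with polynomial x^8 + x^2 + x + 1 over the transaction's address and data bytes; whether the MAX42500 follows this exact SMBus convention is an assumption to confirm against its datasheet.

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-8 with polynomial x^8 + x^2 + x + 1 (0x07), initial value
   0x00, computed over the address and data bytes of the I2C transaction. */
uint8_t pec_crc8(const uint8_t *bytes, size_t len)
{
    uint8_t crc = 0x00;
    for (size_t i = 0; i < len; i++) {
        crc ^= bytes[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;  /* appended by the master, checked by the receiver */
}
```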

While watchdogs with a separate time base and time window offer higher diagnostic coverage compared to simple WDTs, they still cannot capture issues concerning whether the software’s subroutines have been executed in the correct sequence. This is what the next type of diagnostic technique addresses.

High diagnostic coverage

Diagnostic techniques involving the combination of temporal and logical monitoring provide high diagnostic coverage to program sequences according to IEC 61508-2. One implementation of this technique involves a windowed watchdog and a capability to check whether the program sequence has been executed in the correct order.

An example can be visualized when the circuit in Figure 2 is combined with the sequence in Figure 5, where each of the MCU’s program routines employs a unique combination of characters and digits. Each unique combination is placed in an array when its routine is executed. After the last routine, the MCU will only kick, or send a reset signal to, the watchdog if all words are correctly set in the array; a minimal firmware sketch follows Figure 5.

Figure 5 Checking the correct logic of the sequence through markers. Source: Analog Devices
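
The marker-array check described above might look like the following sketch; the routine count, marker values, and function names are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define NUM_ROUTINES 4u  /* hypothetical routine count */

/* One unique marker ("combination of characters and digits") per routine. */
static const uint32_t expected[NUM_ROUTINES] = {
    0x524F5531, 0x524F5532, 0x524F5533, 0x524F5534  /* "ROU1".."ROU4" */
};
static uint32_t trace[NUM_ROUTINES];
static uint32_t trace_idx;

/* Each routine logs its marker as it executes. */
void log_marker(uint32_t marker)
{
    if (trace_idx < NUM_ROUTINES)
        trace[trace_idx++] = marker;
}

/* After the last routine: kick the watchdog only if every marker is
   present and in the expected order. */
bool sequence_ok(void)
{
    bool ok = (trace_idx == NUM_ROUTINES) &&
              (memcmp(trace, expected, sizeof expected) == 0);
    trace_idx = 0;                  /* restart the trace for the next cycle */
    memset(trace, 0, sizeof trace);
    return ok;                      /* kick the WDT only if true */
}
```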

Highest diagnostic coverage

In some systems, more diagnostic coverage may be required to capture failures of the MCU, which may mean that simply sending back a pulse within a windowed time is not enough. In such cases, it can be beneficial to require the MCU to perform a complex task, such as a calculation, to ensure that it’s fully operational. This is where the MAX42500’s challenge/response watchdog comes into play.

In this watchdog mode, there’s a key-value register in the IC that must be read as the starting point of the challenge message. The MCU must use this message to calculate the appropriate response to send back to the watchdog IC, ensuring the watchdog is kicked within the valid window. This type of challenge/response watchdog operates similarly to a simple windowed one, except that the key register is updated rather than the watchdog being refreshed with a rising edge. This is shown in Figure 6. Notably, for the MAX42500’s WDT, the watchdog input is implemented using the I2C, while the watchdog output is the output reset pin.

Figure 6 A challenge/response windowed watchdog example where the MCU reads the challenge message in the IC and calculates an appropriate response to be sent back to the watchdog IC to allow it to be kicked within the valid window. Source: Analog Devices

The MAX42500 contains a linear-feedback shift register (LFSR) with the polynomial x^8 + x^6 + x^5 + x^4 + 1, which shifts all bits upward towards the most significant bit (MSB) and inserts the calculated bit as the new least significant bit (LSB). The MCU must compute the response in the same manner and return it to the register of the MAX42500 through I2C. Notably, this polynomial is both primitive and a maximal-length feedback polynomial for 8 bits. This ensures that all bit-value combinations (1 to 255) are generated by the polynomial and that the order of the numbers is indeed pseudo-random [4][5].
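
As an illustration, the sketch below steps an 8-bit LFSR with taps at x^8, x^6, x^5, and x^4, shifting toward the MSB and inserting the feedback bit as the new LSB, as described above. The tap-to-bit mapping and the number of shift steps the device expects per challenge are assumptions to verify against the MAX42500 datasheet.

```c
#include <stdint.h>

/* One LFSR step for x^8 + x^6 + x^5 + x^4 + 1: XOR the tap bits
   (bits 7, 5, 4, and 3, 0-indexed), shift toward the MSB, and insert
   the feedback bit as the new LSB. Any nonzero seed cycles through
   all 255 nonzero states. */
uint8_t lfsr_step(uint8_t state)
{
    uint8_t fb = (uint8_t)(((state >> 7) ^ (state >> 5) ^
                            (state >> 4) ^ (state >> 3)) & 1u);
    return (uint8_t)((state << 1) | fb);
}

/* The MCU reads the challenge from the key register, computes the
   response, and writes it back over I2C within the valid window. */
uint8_t wdt_compute_response(uint8_t challenge)
{
    return lfsr_step(challenge);  /* assumed: one step per challenge */
}
```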

Such a challenge/response can offer more coverage than the combination of temporal and logical program sequence monitoring, as it shows that the MCU can still do actual calculations. This is as opposed to an MCU just implementing decision-making routines, such as only checking whether the array of words is correct before issuing a signal to reset the watchdog.

Diagnostic coverage claims

The basic functional safety standard specifies maximum claimable diagnostic coverage for each diagnostic measure recommended per block in an SRS. Table 1 lists the measures for program sequence monitoring according to IEC 61508-2, all of which utilize WDTs.

Diagnostic Technique/Measure | Maximum DC Considered Achievable
Watchdog with a separate time base without a time window | Low
Watchdog with a separate time base and time window | Medium
Combination of temporal and logical monitoring of program sequences | High

Table 1 Watchdog-based program sequence monitoring measures according to IEC 61508-2 Annex A Table A.10.

Furthermore, with the existence of different implementations that may not be covered in the standard, a claimed diagnostic coverage can only be validated through fault insertion testing.

Diagnostic measures using WDTs

This article enumerates three types of diagnostic measures that use WDTs as recommended by IEC 61508-2 to address failures in program sequence. The first type of watchdog, which has a separate time base but without a time window, can be implemented using a simple watchdog. This diagnostic measure can only claim low diagnostic coverage.

On the other hand, the second type of watchdog, which has both a separate time base and a separate time window, can be implemented by a windowed watchdog. This measure can claim a medium diagnostic coverage.

To improve diagnostic coverage to high, one can employ logical monitoring aside from the usual temporal monitoring using watchdogs. A challenge/response windowed watchdog architecture can further increase diagnostic coverage against program sequence failures with its capability to check an MCU’s computational ability.

Bryan Angelo Borres is a TÜV-certified functional safety engineer who focuses on industrial functional safety. As a senior power applications engineer, he helps component designers and system integrators design functionally safe power products that comply with industrial functional safety standards such as IEC 61508. Bryan is a member of the IEC National Committee of the Philippines to IEC TC65/SC65A and the IEEE Functional Safety Standards Committee. He also has a postgraduate diploma in power electronics and more than seven years of extensive experience in designing efficient and robust power electronics systems.

Christopher Macatangay is a senior product applications engineer supporting the industrial power product line. Since joining Analog Devices in 2015, he has played a key role in enabling customer success through technical support, system validation, and application development for analog and mixed-signal products. Christopher spent six years prior to ADI as a test development engineer at a power supply company, where he focused on the design and implementation of automated test solutions for high-reliability products.

References

  1. “IEC 61508 All Parts, Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems.” International Electrotechnical Commission, 2010.
  2. “Top Misunderstandings About Functional Safety.” TÜV SÜD.
  3. “Basics of Windowed Watchdog Operation.” Analog Devices, Inc., December.
  4. “Pseudo Random Number Generation Using Linear Feedback Shift Registers.” Maxim Integrated, June 2010.
  5. Mohammed Abdul Samad AL-khatib and Auqib Hamid Lone, “Acoustic Lightweight Pseudo Random Number Generator based on Cryptographically Secure LFSR.” International Journal of Computer Network and Information Security, Vol. 2, February.

Related Content

The post Program sequence monitoring using watchdog timers appeared first on EDN.

Fast, compact scopes reveal subtle signal shifts

Thu, 10/16/2025 - 19:17

Covering bandwidths from 100 MHz to 1 GHz, R&S MXO 3 oscilloscopes capture up to 4.5 million waveforms/s with 99% real-time visibility. According to R&S, the 4- and 8-channel models deliver responsive, precise performance in a space-saving form factor at a more accessible price point.

The MXO 3 offers hardware-accelerated zone triggering at up to 600,000 events/s, 50,000 FFTs/s, and 600,000 math operations/s, with a minimum trigger re-arm time of just 21 ns. It resolves small signal changes alongside larger ones with 12-bit vertical resolution at all sample rates, enhanced 18-bit HD mode, 125 Mpoints of standard memory, and a maximum sample rate of 5 Gsamples/s.

Both the 4- and 8-channel scopes come in a portable 5U design, weighing only about 4 kg, and fit easily on benches, even crowded ones. Each includes an 11.6-in. full-HD display with a capacitive touchscreen and intuitive user interface. VESA mounting compatibility allows additional flexibility in engineering environments.

Prices for the MXO 3 oscilloscopes start at just over $6,000.

MXO 3 product page

Rohde & Schwarz 

The post Fast, compact scopes reveal subtle signal shifts appeared first on EDN.

Inductive sensors broaden motion-control options

Thu, 10/16/2025 - 19:17

Three magnet-free inductive position sensors from Renesas provide a cost-effective alternative to magnetic and optical encoders. With different coil architectures, the ICs address a wide range of applications in robotics, medical devices, smart buildings, home appliances, and motor control.

The dual-coil RAA2P3226 uses a Vernier architecture to deliver up to 19-bit resolution and 0.01° absolute accuracy, providing true power-on position feedback for precision robotic joints. The single-coil RAA2P3200 prioritizes high-speed, low-latency operation for motor commutation in e-bikes and cobots, with built-in protection for robust industrial use. Also using single-coil sensing, the RAA2P4200 offers a compact, cost-efficient option for low-speed applications such as service robots, power tools, and medical devices.

All three sensors share a common inductive sensing core that enables accurate, contactless position measurement in harsh industrial environments. Each device supports rotary on-axis, off-axis, arc, and linear configurations, and includes automatic gain control to compensate for air-gap variations. A 16-point linearization feature enhances accuracy.

The sensors are now in volume production, supported by a web-based design tool that automates coil layout, simulation, and tuning.

RAA2P3226 product page 

RAA2P3200 product page 

RAA2P4200 product page 

Renesas Electronics 

The post Inductive sensors broaden motion-control options appeared first on EDN.

AOS devices power 800-VDC AI racks

Thu, 10/16/2025 - 19:17

GaN and SiC power semiconductors from AOS support NVIDIA’s 800-VDC power architecture for next-gen AI infrastructure, enabling data centers to deploy megawatt-scale racks for rapidly growing workloads. Moving from conventional 54-V distribution to 800 VDC reduces conversion steps, boosting efficiency, cutting copper use, and improving reliability.

The company’s wide-bandgap semiconductors are well-suited for the power conversion stages in AI factory 800‑VDC architectures. Key device roles include:

  • High-Voltage Conversion: SiC devices (Gen3 AOM020V120X3, topside-cooled AOGT020V120X2Q) handle high voltages with low losses, supporting power sidecars or single-step conversion from 13.8 kV AC to 800 VDC. This simplifies the power chain and improves efficiency.
  • High-Density DC/DC Conversion: 650-V GaN FETs (AOGT035V65GA1) and 100-V GaN FETs (AOFG018V10GA1) convert 800 VDC to GPU voltages at high frequency. Smaller, lighter converters free rack space for compute resources and enhance cooling.
  • Packaging Flexibility: 80-V and 100-V stacked-die MOSFETs (AOPL68801) and 100-V GaN FETs share a common footprint, letting designers balance cost and efficiency in secondary LLC stages and 54-V to 12-V bus converters. Stacked-die packages boost secondary-side power density.

AOS power technologies help realize the advantages of 800‑VDC architectures, with up to 5% higher efficiency and 45% less copper. They also reduce maintenance and cooling costs.

Alpha & Omega Semiconductor

The post AOS devices power 800-VDC AI racks appeared first on EDN.

Optical Tx tests ensure robust in-vehicle networks

Thu, 10/16/2025 - 19:17

Keysight’s AE6980T Optical Automotive Ethernet Transmitter Test Software qualifies optical transmitters in next-gen nGBASE-AU PHYs for IEEE 802.3cz compliance. The standard defines optical automotive Ethernet (2.5–50 Gbps) over multimode fiber, providing low-latency, EMI-resistant links with high bandwidth and lighter cabling. Keysight’s platform helps enable faster, more reliable in-vehicle networks for software-defined and autonomous vehicles.

Paired with Keysight’s DCA-M sampling oscilloscope and FlexDCA software, the AE6980T offers Transmitter Distortion Figure of Merit (TDFOM) and TDFOM-assisted measurements, essential for evaluating optical signal quality. Device debugging is simplified through detailed margin and eye-quality evaluations. The compliance application also automates complex test setups and generates HTML reports showing how devices pass or fail against defined limits.

AE6980T software provides full compliance with IEEE 802.3cz-2023, Amendment 7, and Open Alliance TC7 test house specifications. It currently supports 10-Gbps data rates, with 25 Gbps planned for the future.

For more information about Keysight in-vehicle network test solutions and their automotive use cases, visit Streamline In-Vehicle Networking.

AE6980T product page 

Keysight Technologies 

The post Optical Tx tests ensure robust in-vehicle networks appeared first on EDN.

Gate drivers tackle 220-V GaN designs

Thu, 10/16/2025 - 19:17

Two half-bridge GaN gate drivers from ST integrate a bootstrap diode and linear regulators to generate high- and low-side 6-V gate signals. The STDRIVEG210 and STDRIVEG211 target systems powered from industrial or telecom bus voltages, 72-V battery systems, and 110-V AC line-powered equipment.

The high-side driver of each device withstands rail voltages up to 220 V and is easily supplied through the embedded bootstrap diode. Separate gate-drive paths can sink 2.4 A and source 1.0 A, ensuring fast switching transitions and straightforward dV/dt tuning. Both devices provide short propagation delays with 10-ns matching for low dead-time operation.

ST’s gate drivers support a broad range of power-conversion applications, including power supplies, chargers, solar systems, lighting, and USB-C sources. The STDRIVEG210 works with both resonant and hard-switching topologies, offering a 300-ns startup time that minimizes wake-up delays in burst-mode operation. The STDRIVEG211 adds overcurrent detection and smart shutdown functions for motor drives in tools, e-bikes, pumps, servos, and class-D audio systems.

Now in production, the STDRIVEG210 and STDRIVEG211 come in 5×4-mm, 18-pin QFN packages. Prices start at $1.22 each in quantities of 1000 units. Evaluation boards are also available.

STDRIVEG210 product page 

STDRIVEG211 product page 

STMicroelectronics

The post Gate drivers tackle 220-V GaN designs appeared first on EDN.

Apple’s M5: The SoC-and-systems cadence (sorta) continues to thrive

Thu, 10/16/2025 - 17:15

A month and a few days ago, Apple dedicated an in-person event (albeit with the usual pre-recorded presentations) to launching its latest mainstream and Pro A19 SoCs and the various iPhone 17s containing them, along with associated smart watch and earbuds upgrades. And at the end of my subsequent coverage of Amazon and Google’s in-person events, I alluded to additional Apple announcements that, judging from both leaks (some even straight from the FCC) and historical precedents, might still be on the way.

Well, earlier today (as I write these words on October 15), at least some of those additional announcements just arrived, in the form of the new baseline M5 SoC and the various upgraded systems containing it. But this time, again following historical precedent, they were delivered only in press release form. Any conclusions you might draw as to the relative importance within Apple of smartphones versus other aspects of the overall product line are…well…🤷‍♂️

The M5 SoC

Looking at the historical trends of M-series SoC announcements, you’ll see that the initial >1.5-year latency between the baseline M1 (November 2020) and M2 (June 2022) chips subsequently shrunk to a yearly (plus or minus a few months) cadence. To wit, since the M4 came out last May but the M5 hadn’t yet arrived this year, I was assuming we’d see it soon. Otherwise, its lingering absence would likely be reflective of troubles within Apple’s chip design team and/or longstanding foundry partner TSMC. And indeed, the M5 has finally shown up. But my concerns about development and/or production troubles still aren’t completely alleviated.

Let’s parse through the press release.

Built using third-generation 3-nanometer technology…

This marks the third consecutive generation of M-series CPUs manufactured on a 3-nm litho process (at least for the baseline M5…I’ll delve into higher-end variants next). Consider this in light of Wikipedia’s note that TSMC began risk production on its first 2 nm process mid-last year and was originally scheduled to be in mass production on 2 nm in “2H 2025”. Admittedly, there are 2.5 more months to go until 2025 is over, but Apple would have had to make its process-choice decision for the M5 many months (if not several years) in the past.

Consider, too, that the larger die size Pro and Max (and potentially also Ultra) variants of the M5 haven’t yet arrived. This delay isn’t without precedent; there was a nearly six-month latency between the baseline M4 and its Pro and Max variants, for example. That said, the M4 had shown up in early May, with the Pro and Max following in late October, so they all still arrived in 2024. And here’s an even more notable contrast: all three variants of the M3 were launched concurrently in late October 2023. Consider all of this in the light of persistent rumors that M5 Pro- and Max-based systems may not show up until spring-or-later 2026.

M5 introduces a next-generation 10-core GPU architecture with a Neural Accelerator in each core, enabling GPU-based AI workloads to run dramatically faster, with over 4x the peak GPU compute performance compared to M4. The GPU also offers enhanced graphics capabilities and third-generation ray tracing that combined deliver a graphics performance that is up to 45 percent higher than M4.

Note that these Neural Accelerators are presumably different than those in the dedicated 16-core Neural Engine. The latter historically garnered the bulk of the AI-related press release “ink”, but this time it’s limited to terse “improved” and “faster” descriptions. What does this tell me?

  • “Neural Accelerator” is likely a generic term reflective of AI-tailored shader and other functional block enhancements, analogous to the increasingly AI-optimized capabilities of NVIDIA’s various GPU generations.
  • The Neural Engine, conversely, is (again, I’m guessing) largely unchanged here from the one in the M4 series, instead indirectly benefiting from a performance standpoint due to the boosted overall SoC-to-external memory bandwidth.

M5 features the world’s fastest performance core, with up to a 10-core CPU made up of six efficiency cores and up to four performance cores. Together, they deliver up to 15 percent faster multithreaded performance over M4.

Core count proportions and totals both match those of the M4. Aside from potential “Neural Accelerator” tweaks such as hardware-accelerated instruction set additions (à la Intel’s MMX and SSE), I suspect they’re largely the same as the prior generation, with any performance uplift resulting from overall external memory bandwidth improvements. Speaking of which…

M5 also features…a nearly 30 percent increase in unified memory bandwidth to 153GB/s.

And later…

M5 offers unified memory bandwidth of 153GB/s, providing a nearly 30 percent increase over M4 and more than 2x over M1. The unified memory architecture enables the entire chip to access a large single pool of memory, which allows MacBook Pro, iPad Pro, and Apple Vision Pro to run larger AI models completely on device. It fuels the faster CPU, GPU, and Neural Engine as well, offering higher multithreaded performance in apps, faster graphics performance in creative apps and games, and faster AI performance running models on the Neural Accelerators in the GPU or the Neural Engine.

The enhanced memory controller is, I suspect, the nexus of overall M4-to-M5 advancements, as well as the explanation for why Apple’s still able to cost-effectively (i.e., without exploding the total transistor count budget) fabricate the new chip on a legacy 3-nm lithography. How did the company achieve this bandwidth boost? While an even wider bus width than that used with the M4 might conceptually provide at least part of the answer, it’d also both balloon the required SoC pin count and complicate the possible total memory capacity increments. I therefore suspect a simpler approach is at play. The M4 used 7500-Mbps LPDDR5X SDRAM, while the M4 Pro and Max leveraged the faster 8533-Mbps LPDDR5X speed bin. But if you look at Samsung’s website (for example), you’ll see an even faster 9600-Mbps speed bin listed. 9600 Mbps is 28% more than 7500 Mbps…voila, there’s your “nearly 30 percent increase”.

There’s one other specification, this time not found in the SoC press release but instead in the announcement for one of the M5-based systems, that I’d like to highlight:

…up to 2x faster storage read and write speeds…

My guess here is that Apple has done a proprietary (or not)-interface equivalent to the industry-standard PCI Express 4.x-to-5.x and UFS 4.x-to-5.x evolutions, which also tout doubled peak transfer rate speeds.

Speaking of speeds…keep in mind when reading about SoC performance claims that they’re based on the chip running at its peak possible clock cadence, not to mention when outfitted with maximum available core counts. An especially power consumption-sensitive tablet computer, for example, might clock-throttle the processor compared to the SoC equivalent in a mobile or (especially) desktop computer. Yield-maximization (translating into cost-minimization) “binning” aspirations are another reason why the SoC in a particular system configuration may not perform to the same level as a processor-focused press release might otherwise suggest. Such schemes are particularly easy for someone like Apple—who doesn’t publish clock speeds anyway—to accomplish.

And speaking of cost minimization, reducing the guaranteed-functional core counts on a chip can significantly boost usable silicon yield, too. To wit, about those M5-based systems…

11” and 13” iPad Pros

Last May’s M4 unveil marked the first time that an iPad, versus a computer, was the initial system to receive a new M-series processor generation. More generally, the fifth-gen iPad Pro introduced in April 2021 was the first iPad to transition from Apple’s A-series SoCs to the M-series (the M1, to be precise). This was significant because, up to that point, M-series chips had been exclusively positioned as for computers, with A-series processors for iPhones and iPads.

This time, both the 11” and 13” iPad Pro get the M5, albeit with inconsistent core counts (and RAM allocations, for that matter) depending on the flash memory storage capacity and resultant price tag. From 9to5Mac’s coverage:

  • 256GB storage: 12GB memory, M5 with 9-core CPU, 10-core GPU
  • 512GB storage: 12GB memory, M5 with 9-core CPU, 10-core GPU
  • 1TB storage: 16GB memory, M5 with 10-core CPU, 10-core GPU
  • 2TB storage: 16GB memory, M5 with 10-core CPU, 10-core GPU

It bears noting that the 12 GByte baseline capacity is 4 GBytes above what baseline M4 iPad Pros came with a year-plus ago. Also, the deprecated CPU core in the lower-end variants is one of the four performance cores; CPU efficiency core counts are the same across all models, as are—a pleasant surprise given historical precedents and a likely reflection of TSMC’s process maturity—the graphics core counts. And for the first time, a cellular-equipped iPad has switched from a Qualcomm modem to Apple’s own: the newest C1X, to be precise, along with the N1 for wireless communications, both of which we heard about for the first time just a month ago.

A brief aside: speaking of A-series to M-series iPad Pro transitions, mine is a second-generation 11” model (one of the fourth-generation iPad Pros) dating from March 2020 and based on the A12Z Bionic processor. It’s still running great, but I’ll bet Apple will drop software support for it soon (I’m frankly surprised that it survived this year’s iPadOS 26 cut, to be honest). My wife and I have a wedding anniversary next month. Then there’s Christmas. And my 60th birthday next May. So, if you’re reading this, honey…😂

The 14” MacBook Pro

This one was not-so-subtly foreshadowed by Apple’s marketing VP just yesterday. The big claim here, aside from the inevitable memory bandwidth-induced performance-boost predictions, is “phenomenal battery life of up to 24 hours” (your mileage may vary, of course). And it bears noting that, in today’s tariff-rife era, the $1599 entry-level pricing is unchanged from last year.

The Vision Pro

The underlying rationale for the performance boost is more obvious here; the first-generation model teased in June 2023 with sales commencing the following February was based on the three-generations-older M2 SoC. That said, given the rampant rumors that Apple has redirected its ongoing development efforts to smart glasses, I wonder how long we’ll be stuck with this second-generation evolutionary tweak of the VR platform. A redesigned headband promises a more comfortable wearing experience. Apple will also start selling accessories from Logitech (the Muse pencil, available now) and Sony (the PlayStation VR2 Sense controller, next month).

Anything else?

I should note, by the way, that the Beats Powerbeats Fit earbuds that I mentioned a month back, which had been teased over YouTube and elsewhere but were MIA at Apple’s event, were finally released at the end of September. And on that note, other products (some currently with evaporating inventories at retail, another common tipoff that a next-generation device is en route) are rumored candidates for near-future launch:

  • Next-gen Apple TV 4K
  • HomePod mini 2
  • AirTag 2
  • One (or multiple) new Apple Studio Display(s)
  • (???)

We shall see. Until next time, I welcome your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

The post Apple’s M5: The SoC-and-systems cadence (sorta) continues to thrive appeared first on EDN.

TI launches power management devices for AI computing

Thu, 10/16/2025 - 02:15

Texas Instruments Inc. (TI) announced several power management devices and a reference design to help companies meet AI computing demands and scale power management architectures from 12 V to 48 V to 800 VDC. These products include a dual-phase smart power stage, a dual-phase smart power module for lateral power delivery, a gallium nitride (GaN) intermediate bus converter (IBC), and a 30-kW AI server power supply unit reference design.

“Data centers are very complex systems and they’re running very power-intensive workloads that demand a perfect balance of multiple critical factors,” said Chris Suchoski, general manager of TI’s data center systems engineering and marketing team. “Most important are power density, performance, safety, grid-to-gate efficiency, reliability, and robustness. These factors are particularly essential in developing next-generation, AI purpose-driven data centers, which are more power-hungry and critical today than ever before.”

(Source: Texas Instruments Inc.)

Suchoski describes grid-to-gate as the complete power path from the AC utility grid to the processor gates in the AI compute servers. “Throughout this path, it’s critical to maximize your efficiency and power density. We can help improve overall energy efficiency from the original power source to the computational workload,” he said.

TI is focused on helping customers improve efficiency, density, and security at every stage in the power data center by combining semiconductor innovation with system-level power infrastructure, allowing them to achieve high efficiency and high density, Suchoski said.

Power density and efficiency improvements

TI’s power conversion products for data centers address the need for increased power density and efficiency across the full 48-V power architecture for AI data centers. These include input power protection, 48-V DC/DC conversion, and high-current DC/DC conversion for the AI processor core and side rails. TI’s newest power management devices target these next-generation AI infrastructures.

One of the trends in the market is a move from single-phase to dual-phase power stages that enable higher current density for the multi-phase buck voltage regulators that power these AI processors, said Pradeep Shenoy, technologist for TI’s data center systems engineering and marketing team.

The dual-phase power stage has very high current capability—200-A peak—and comes in a small, 5 × 5-mm, thermally enhanced package with top-side cooling, enabling a very efficient and reliable supply in a small area, Shenoy said.

TI claims the CSD965203B dual-phase power stage delivers the highest peak power density on the market, with 100 A of peak current per phase, combining two power phases in a 5 × 5-mm quad-flat no-lead package. With this device, designers can increase phase count and power delivery across a small printed-circuit-board area, improving efficiency and performance.

Another related trend is the move to dual-phase power modules, Shenoy said. “These power modules combine the power stages with the inductors, all in a compact form factor.”

The dual-phase power module co-packages the power stages with other components on the bottom and the inductor on the top, and it offers both trans-inductor voltage regulator (TLVR) and non-TLVR options, he added. “They help improve the overall power density and current density of the solution with over a 2× reduction in size compared with discrete solutions.”

The CSDM65295 dual-phase power module delivers up to 180 A of peak output current in a 9 × 10 × 5-mm package. The module integrates two power stages and two inductors with TLVR options while maintaining high efficiency and reliable operation.

The GaN-based IBC achieves over 1.5 kW of output power with over 97.5% peak efficiency, and it also enables regulated output and active current sharing, Shenoy said. “This is important because as we see the power consumption and power loads are increasing in these data centers, we need to be able to parallel more of these IBCs, and so the current sharing helps make that very scalable and easy to use.”

The LMM104RM0 GaN converter module offers over 97.5% input-to-output power conversion efficiency and high light-load efficiency to enable active current sharing between multiple modules. It can deliver up to 1.6 kW of output power in a quarter-brick (58.4 × 36.8-mm) form factor.

TI also introduced a 30-kW dual-stage power supply reference design for AI servers that features a three-phase, three-level flying capacitor power-factor-correction converter paired with dual delta-delta three-phase inductor-inductor-capacitor converters. The power supply is configurable as a single 800-V output or separate output supplies.

30-kW HVDC AI data center reference design (Source: Texas Instruments Inc.)

TI also announced a white paper, “Power delivery trade-offs when preparing for the next wave of AI computing growth,” and its collaboration with Nvidia to develop power management devices to support 800-VDC power architectures.

The solutions will be on display at Open Compute Summit (OCP), Oct. 13–16, in San Jose, California. TI is exhibiting at Booth #C17. The company will also participate in technology sessions, including the OCP Global Summit Breakout Session and OCP Future Technologies Symposium.

The post TI launches power management devices for AI computing appeared first on EDN.

100-V GaN transistors meet automotive standard

Wed, 10/15/2025 - 22:04

Infineon Technologies AG unveils its first gallium nitride (GaN) transistor family qualified to the Automotive Electronics Council (AEC) standard for automotive applications. The new CoolGaN automotive transistor 100-V G1 family, including high-voltage (HV) CoolGaN automotive transistors and bidirectional switches, meets AEC-Q101.

(Source: Infineon Technologies AG)

This supports Infineon’s commitment to provide automotive solutions from low-voltage infotainment systems addressed by the new 100-V GaN transistor to future HV product solutions in onboard chargers and traction inverters. “Our 100-V GaN auto transistor solutions and the upcoming portfolio extension into the high-voltage range are an important milestone in the development of energy-efficient and reliable power transistors for automotive applications,” said Johannes Schoiswohl, Infineon’s head of the GaN business line, in a statement.

The new devices include the IGC033S10S1Q CoolGaN automotive transistor 100 V G1 in a 3 × 5-mm PQFN package, and the IGB110S10S1Q CoolGaN transistor 100 V G1 in a 3 × 3-mm PQFN. The IGC033S10S1Q features an Rds(on) of 3.3 mΩ, and the IGB110S10S1Q has an Rds(on) of 11 mΩ. Other features include dual-side cooling, no reverse recovery charge, and ultra-low figures of merit.

These GaN e-mode power transistors target automotive applications such as advanced driver assistance systems and new climate control and infotainment systems that require higher power and more efficient power conversion solutions. GaN power devices offer higher energy efficiency in a smaller form factor and lower system cost compared to silicon-based components, Infineon said.

The new family of 100-V CoolGaN transistors targets applications such as zone control and main DC/DC converters, high-performance auxiliary systems, and Class-D audio amplifiers. Samples of the pre-production automotive-qualified product range are now available. Infineon will showcase its automotive GaN solutions at OktoberTech Silicon Valley on October 16, 2025.

The post 100-V GaN transistors meet automotive standard appeared first on EDN.

Voltage-to-period converter offers high linearity and fast operation

Wed, 10/15/2025 - 16:20

The circuit in Figure 1 converts the input DC voltage into a pulse train. The period of the pulses is proportional to the input voltage with a 50% duty cycle and a nonlinearity error of 0.01%. The maximum conversion time is less than 5 ms.

Figure 1 The circuit uses an integrator and a Schmitt trigger with variable hysteresis to convert a DC voltage into a pulse train where the period of the pulses is proportional to the input voltage.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The circuit is made of four sections. The op-amp IC1 and resistors R1 to R5 create two reference voltages for the integrator.

The integrator, built with IC2, RINT, and CINT, generates two linear ramps. Switch S1 changes the direction of the current going to the integrating capacitor; in turn, this changes the direction of the linear ramps. The rest of the circuit is a Schmitt trigger with variable hysteresis. The low trip point VLO is fixed, and the high trip point VHI is variable (the input voltage VIN comes in there).

The signal coming from the integrator sweeps between the two trip points of the trigger at an equal rate and in opposite directions. Since R4 = R5, the duty cycle is 50% and the transfer function is as follows:

To start oscillations, the following relation must be satisfied when the circuit gets power:

Figure 2 shows that the transfer function of the circuit is essentially linear (the R² factor equals unity). In reality, there are slight deviations around the straight line; with respect to the span of the output period, these deviations do not exceed ±0.01%. The slope of the line can be adjusted to 1000 µs/V by R2, and the offset can be easily cancelled by the microcontroller (µC).

Figure 2 The transfer function of the circuit in Figure 1. It is very linear and can be easily adjusted via R2.

Figure 1 shows that the µC converts period T into a number by filling the period with clock pulses of frequency fCLK = 1 MHz. It also adds 50 to the result to cancel the offset. The range of the obtained numbers is from 200 to 4800, i.e., the resolution is 1 count per mV.
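To make the counting scheme concrete, here is a minimal sketch in C; the capture helper and its name are hypothetical, since the article does not show the µC firmware:

    #include <stdint.h>

    /* Hypothetical timer-capture helper: returns the number of 1-MHz clock
       pulses counted between two consecutive rising edges of the VPC output,
       i.e., N = T * fCLK. */
    extern uint32_t capture_period_ticks(void);

    #define OFFSET_COUNTS 50u  /* added to cancel the offset, per the text */

    /* One conversion: with the slope trimmed to 1000 us/V, the corrected
       count is 1 per mV, spanning 200 to 4800 counts. */
    uint32_t vpc_read_mv(void)
    {
        uint32_t n = capture_period_ticks(); /* N = T * fCLK */
        return n + OFFSET_COUNTS;            /* offset correction */
    }

The same routine works unchanged with a 10-MHz fill clock, which provides the tenfold resolution increase discussed next.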

Resolution can be easily increased by a factor of 10 by setting the clock frequency to 10 MHz. The great thing is that the nonlinearity error and conversion time remain the same, which is not possible with voltage-to-frequency converters (VFCs). Here is an example.

Assume that a voltage-to-period converter (VPC) generates pulse periods T = 5 ms at a full-scale input of 5 V. Filling the period with 1 MHz clock pulses produces a number of 5000 (N = T * fCLK). The conversion time is 5 ms, which is the longest for this converter. As we already know, the nonlinearity is 0.01%.

Now consider a VFC which produces a frequency f = 5 kHz at a 5-V input. To get the number of 5000, this signal must be gated by a signal that is 1 second long (N = tG * f). Gate time is the conversion time.

The nonlinearity in this case is 0.002% (see References), which is five times better than the VPC’s nonlinearity. However, the conversion time is 200 times longer (1 s vs. 5 ms). To get the same number of pulses N in the same conversion time as the VPC, the full-scale frequency of the VFC must go up to 1 MHz. However, the nonlinearity at 1 MHz is 0.1%, ten times worse than the VPC’s.

The contrast becomes more pronounced when the desired number is moved up to 50,000. Using the same analysis, it becomes clear that the VPC can do the job 10 times faster with 10 times better linearity than the VFCs. An additional advantage of the VPC is the lower cost.
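The arithmetic behind this comparison is easy to check; the short C program below reproduces the 5000-count case using only the figures quoted above:

    #include <stdio.h>

    int main(void)
    {
        /* VPC: T = 5 ms at 5-V full scale, filled with a 1-MHz clock */
        const double T = 5e-3, fclk = 1e6;
        const double n_vpc = T * fclk;      /* N = T * fCLK = 5000 counts */
        const double t_vpc = T;             /* conversion time = one period */

        /* VFC: f = 5 kHz at 5-V full scale, gated until N = 5000 counts */
        const double f = 5e3, n_target = 5000.0;
        const double t_vfc = n_target / f;  /* tG = N / f = 1 s */

        printf("VPC: N = %.0f in %.3f s\n", n_vpc, t_vpc);
        printf("VFC: N = %.0f in %.3f s\n", n_target, t_vfc);
        printf("VFC gate time is %.0fx longer\n", t_vfc / t_vpc); /* 200x */
        return 0;
    }

Raising the target count to 50,000 stretches the VFC gate time to 10 s, while the VPC period stays at 5 ms with a 10-MHz fill clock.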

If you plan to use the circuit, pay attention to the integrating capacitor. As CINT participates in the transfer function, it should be carefully selected in terms of tolerance, temperature stability, and dielectric material.

Jordan Dimitrov is an electrical engineer & PhD with 30 years of experience. Currently, he teaches electrical and electronics courses at a Toronto community college.

Related Content

References:

  1. AD650 voltage-to-frequency and frequency-to-voltage converter. Data sheet from Analog Devices; www.analog.com
  2. VFC320 voltage-to-frequency and frequency-to-voltage converter. Data sheet from Burr-Brown; www.ti.com

The post Voltage-to-period converter offers high linearity and fast operation appeared first on EDN.

“Flip ON Flop OFF” for 48-VDC systems with high-side switching

Wed, 10/15/2025 - 16:20

My Design Idea (DI), “Flip ON Flop OFF for 48-VDC systems,” was published and referenced Stephen Woodward’s earlier “Flip ON Flop OFF” circuit. Other DIs published on this subject were for voltages below 15 V, the supply limit for CMOS ICs, while my DI targeted higher DC voltages, typically 48 VDC. In that earlier DI, the ground line is switched, which means the input and output grounds are different. This is acceptable in many applications, since the voltage is small and does not require earthing.

However, some readers in the comments section wanted a scheme that switches the high side, keeping the ground common. To satisfy this requirement, I modified the circuit as shown in Figure 1, where the input and output grounds are kept the same and switching is done on the positive line.

Figure 1 VCC is around 5 V and should be connected to the VCC pins of ICs U1 and U2. The grounds of U1 and U2 should likewise be connected to the common ground (connection not shown in the circuit). Switching is done on the high side, and the ground is the same for the input and output. Note that U1 requires a heat sink.

Wow the engineering world with your unique design: Design Ideas Submission Guide

In this circuit, the voltage divider formed by R5 and R7 sets the voltage at the emitter of Q2 (VCC) to around 5 V. This voltage powers ICs U1 and U2. A precise setting is not important, as these ICs can operate from 3 to 15 V. R2 and C2 provide the power-ON reset for U1. R1 and C1 debounce the push button (PB) switch.
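As a first-order estimate, assuming Q2 is an emitter follower buffering the R5/R7 tap from the 48-V input (an assumption; the exact configuration and values are given in Figure 1):

VCC ≈ 48 V * R7 / (R5 + R7) - VBE(Q2)

Since anything in the 3-V to 15-V CMOS supply window works, resistor tolerance here is not critical.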

When you momentarily push PB once, the Q1 output of the U1 counter (not the Q1 FET) goes HIGH, saturating transistor Q3. This pulls the gate of Q1 (PMOSFET IRF9530N: VDSS = -100 V, ID = -14 A, RDS(on) = 0.2 Ω) to ground. Q1 then conducts, and its output rises to near 48 VDC.

Due to the 0.2-Ω RDS(on) of Q1, there will be a small voltage drop that depends on load current. When you push PB again, transistor Q3 turns OFF, Q1 stops conducting, and the output voltage becomes zero. Here, switching is done on the high side, and the ground is common to the input and output sides.

If galvanic isolation is required (this may not always be the case), you may connect an ON/OFF mechanical switch ahead of the input. In this topology, on-load switching is handled by the PB-operated circuit, and the ON/OFF switch switches zero current, so it does not need to be bulky; simply select a switch rated for the required load current. When switching ON, first close the ON/OFF switch, then operate PB to connect. When switching OFF, first push PB to disconnect, then open the ON/OFF switch.

Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.

Related Content

The post “Flip ON Flop OFF” for 48-VDC systems with high-side switching appeared first on EDN.

A logically correct SoC design isn’t an optimized design

Wed, 10/15/2025 - 03:08

The shift from manual design to AI-driven, physically aware automation of network-on-chip (NoC) design can be compared to the evolution of navigation technology. Early GPS systems revolutionized road travel by automating route planning. These systems allowed users to specify a starting point and destination, aiming for the shortest travel time or distance, but they had a limited understanding of real-world conditions such as accidents, construction, or congestion.

The result was often a path that was correct, and minimized time or distance under ideal conditions, but not necessarily the most efficient in the real world. Similarly, early NoC design approaches automated connectivity, yet without awareness of physical floorplans or workloads as inputs for topology generation, they usually fell well short of delivering optimal performance.

Figure 1 The evolution of NoC design has many similarities with GPS navigation technology. Source: Arteris

Modern GPS platforms such as Waze or Google Maps go further by factoring in live traffic data, road closures, and other obstacles to guide travelers along faster, less costly routes. In much the same way, automation in system-on-chip (SoC) interconnects now applies algorithms that minimize wire length, manage pipeline insertion, and optimize switch placement based on a physical awareness of the SoC floorplan. This ensures that designs not only function correctly but are also efficient in terms of power, area, latency, and throughput.

The hidden cost of “logically correct”

As SoC complexity increases, the gap between correctness and optimization has become more pronounced. Designs that pass verification can still hide inefficiencies that consume power, increase area, and slow down performance. Just because a design is logically correct doesn’t mean it is optimized. While there are many tools to validate that a design is logically correct, both at the RTL and physical design stages, what tools are there to check for design optimization?

Traditional NoC implementations depend on experienced NoC designers to manually determine switch locations and route the connections between the switches and all the IP blocks that the NoC needs to connect. Design verification (DV) tools can verify that these designs meet functional requirements, but subtle inefficiencies remain undetected.

Wires may take unnecessarily long detours around blocks of IP, redundant switches may persist after design changes, and piecemeal edits often accumulate into suboptimal paths. None of these are logical errors that many of today’s EDA tools can detect. They are inefficiencies that impact area, power, and latency while remaining invisible to standard checks.

Manually designing an NoC is also both tedious and fragmented. A large design may take several days to complete. Expert designers must decide where to place switches, how to connect them, and when to insert pipeline stages to enable timing closure.

While they may succeed in producing a workable solution, the process is vulnerable to oversights. When engineers return to partially completed work, they may not recall every earlier decision, especially for work done by someone else on the team. As changes accumulate, inefficiencies mount.

The challenge compounds when SoC requirements shift. Adding or removing IP blocks is routine, yet in manual flows, such changes often force large-scale rework. Wires and switches tied to outdated connections often linger because edits rarely capture every dependency.

Correcting these issues requires yet more intervention, increasing both cost and time. Automating NoC topology generation eliminates these repetitive and error-prone tasks, ensuring that interconnects are optimized from the start.

Scaling with complexity

The need for automation grows as SoC architectures expand. Connecting 20 IP blocks is already challenging. At 50, the task becomes overwhelming. At 500, it’s practically impossible to optimize without advanced algorithms. Each block introduces new paths, bandwidth requirements, and physical constraints. Attempting this manually is no longer realistic.

Simplified diagrams of interconnects often give the impression of manageable scale. Reality is far more daunting, where a single logical connection may consist of 512, 1024, or even 2048 individual wires. Achieving optimized connectivity across hundreds of blocks requires careful balancing of wire length, congestion, and throughput all at once.

Another area where automation adds value is in regular topology generation. Different regions of a chip may benefit from different structures such as meshes, rings, or trees. Traditionally, designers had to decide these configurations in advance, relying on experience and intuition. This is much like selecting a fixed route on your GPS, without knowing how conditions may change.

Automation changes the approach. By analyzing workload and physical layout, the system can propose or directly implement the topology best suited for each region. Designers can choose to either guide these choices or leave the system to determine the optimal configuration. Over time, this flexibility may make rigid topologies less relevant, as interconnects evolve into hybrids tailored to the unique needs of each design.

In addition to initial optimization, adaptability during the design process is essential. As new requirements emerge, interconnects must be updated without requiring a complete rebuild. Incremental automation preserves earlier work while incorporating new elements efficiently and removing elements that are no longer required. This ability mirrors modern navigation systems, which reroute travelers seamlessly when conditions change rather than forcing a full replan of the journey.

For SoC teams, the value is clear. Incremental optimization saves time, avoids unnecessary rework, and ensures consistency throughout the design cycle.

Figure 2 FlexGen smart NoC IP unlocks new performance and efficiency advantages. Source: Arteris

Closing the gap with smart interconnects

SoC development has benefited from decades of investment in design automation. Power analysis, functional safety, and workload profiling are well-established. However, until now, the complexity of manually designing and updating NoCs left teams vulnerable to inefficiencies that consumed resources and slowed progress. Interconnect designs were often logically correct, but rarely optimal.

Suboptimal wire length is one of the few classes of design challenges that some EDA tools still may not detect. NoC automation bridges this gap by eliminating such inefficiencies at the source, delivering wire lengths optimized to meet the throughput constraints of the design specification. By embedding intelligence into the interconnect backbone, design teams achieve solutions that are both correct and efficient, while reducing or even eliminating reliance on scarce engineering expertise.

NoCs have long been essential for connecting IP blocks in modern, complex SoC designs, yet they are often the cause of schedule delays and throughput bottlenecks. Smart NoC automation now transforms interconnect design by reducing risk to both the project schedule and the design’s ultimate performance.

At the forefront of this change is smart interconnect IP created to address precisely these challenges. By automating topology generation, minimizing wire lengths, and enabling incremental updates, a smart interconnect IP like FlexGen closes the gap between correctness and optimization. As a result, engineering groups under pressure to deliver complex designs quickly gain a path to higher performance with less effort.

There is a difference between finding a path and finding the best path. In SoC design, that difference determines competitiveness in performance, power, and time-to-market, and smart NoC automation is what makes it possible.

Rick Bye is Director of Product Management and Marketing at Arteris, overseeing the FlexNoC family of non-coherent NoC IP products. Previously, he was a senior product manager at Arm, responsible for a demonstration SoC and compression IP. Rick has extensive product management and marketing experience in semiconductors and embedded software.

Related Content

The post A logically correct SoC design isn’t an optimized design appeared first on EDN.

AI Ethernet NIC drives trillion-parameter AI workloads

Tue, 10/14/2025 - 23:59
Broadcom's Thor Ultra 800G AI Ethernet network interface card.

Broadcom Inc. introduces Thor Ultra, claiming the industry’s first 800G AI Ethernet network interface card (NIC). The Ethernet NIC, adopting the open Ultra Ethernet Consortium (UEC) specification, can interconnect hundreds of thousands of XPUs to drive trillion-parameter AI workloads.

The Thor Ultra leverages the UEC’s modernization of remote direct memory access (RDMA) for large AI clusters, offering several RDMA innovations. These include packet-level multipathing for efficient load balancing, out-of-order packet delivery directly to XPU memory for maximizing fabric utilization, selective retransmission for efficient data transfer, and programmable receiver-based and sender-based congestion control algorithms.

By providing these advanced RDMA capabilities in an open ecosystem, the Thor Ultra allows customers to connect to XPUs, optics, or switches and reduces dependency on proprietary, vertically integrated solutions, Broadcom said.

Broadcom's Thor Ultra 800G AI Ethernet network interface card.(Source: Broadcom Inc.)

The Thor Ultra joins Broadcom’s Ethernet AI networking portfolio, including Tomahawk 6, Tomahawk 6-Davisson, Tomahawk Ultra, Jericho 4, and Scale-Up Ethernet (SUE), as part of an open ecosystem for large-scale, high-performance XPU deployments.

The Thor Ultra Ethernet NIC is available in standard PCIe CEM and OCP 3.0 form factors. It offers 200G or 100G PAM4 SerDes with support for long-reach passive copper, and claims the industry’s lowest bit error rate SerDes, reducing link flaps and accelerating job completion time.

Other features include a PCI Express Gen 6 ×16 host interface, programmable congestion control pipeline, secure boot with signed firmware and device attestation, and line-rate encryption and decryption with PSP offload, which relieves the host/XPU of compute-intensive tasks.

The Ethernet NIC also provides packet trimming and congestion signaling support with Tomahawk 5, Tomahawk 6, or any UEC-compliant switch. Thor Ultra is now sampling.

The post AI Ethernet NIC drives trillion-parameter AI workloads appeared first on EDN.

Power design tools ease system development

Tue, 10/14/2025 - 23:06
ADI's ADI Power Studio.

Analog Devices, Inc. (ADI) launches its ADI Power Studio, a family of products that offers advanced modeling, component recommendations, and efficiency analysis with simulation to help streamline power management design and optimization. ADI also offers early versions of two new web-based tools as part of Power Studio.

The web-based ADI Power Studio Planner and ADI Power Studio Designer tools, together with the full ADI Power Studio portfolio, are designed to streamline the entire power system design process from initial concept through measurement and evaluation. The Power Studio portfolio also features ADI’s existing desktop and web-based power management tools, including LTspice, SIMPLIS, LTpowerCAD, LTpowerPlanner, EE-Sim, LTpowerPlay, and LTpowerAnalyzer.

ADI's ADI Power Studio.(Source: Analog Devices Inc.)

The Power Studio tools address key challenges in designing electronic systems with dozens of power rails and interdependent voltage domains, which create greater design complexity. These bottlenecks force rework during architecture decisions, component selection, and validation, ADI said.

Power Studio addresses these challenges by providing a workflow that helps engineering teams make better decisions earlier by simulating real-world performance with accurate models and automating key outputs such as bill of materials and report generation, helping to reduce rework.

The ADI Power Studio Planner web-based tool targets system-level power tree planning. It provides an interactive view of the system architecture, making it easier to model power distribution, calculate power loss, and analyze system efficiency. Key features include intelligent parametric search and tradeoff comparisons.

The ADI Power Studio Designer is a web-based tool for IC-level power supply design. It provides optimized component recommendations, performance estimates, and tailored efficiency analysis. Built on the ADI power design architecture, Power Studio Designer offers guided workflows so engineers can set key parameters to build accurate models to simulate real-world performance, with support for both LTspice and SIMPLIS schematics, before moving to hardware.

Power Studio Planner and Power Studio Designer are available now as part of the ADI Power Studio. These tools are the first products released under ADI’s vision to deliver a fully connected power design workflow for customers. ADI plans to introduce ongoing updates and product announcements in the months ahead.

The post Power design tools ease system development appeared first on EDN.

Broadcom delivers Wi-Fi 8 chips for AI

Tue, 10/14/2025 - 22:34
Broadcom's Wi-Fi 8 chips.

Broadcom Inc. claims the industry’s first Wi-Fi 8 silicon solutions for the broadband wireless edge, including residential gateways, enterprise access points, and smart mobile clients. The company also announced the availability of its Wi-Fi 8 IP for license in IoT, automotive, and mobile device applications.

Designed for AI-era edge networks, the new Wi-Fi 8 chips include the BCM6718 for residential and operator access applications, the BCM43840 and BCM43820 for enterprise access applications, and the BCM43109 for edge wireless clients such as smartphones, laptops, tablets, and automotive systems. These new chips also include a hardware-accelerated telemetry engine targeting AI-driven network optimization. This engine collects real-time data on network performance, device behavior, and environmental conditions.

Broadcom's Wi-Fi 8 chips.(Source: Broadcom Inc.)

The engine is a critical input for AI models and can be used by customers to train and run inference on the edge or in the cloud for use cases such as measuring and optimizing quality of experience (QoE), strengthening Wi-Fi network security and anomaly detection, and lowering the total cost of ownership through predictive maintenance and automated optimization, Broadcom said.

Wi-Fi 8 silicon chips

The BCM6718 residential Wi-Fi access point chip features advanced eco modes for up to 30% greater energy efficiency and third-generation digital pre-distortion, which reduces peak power by 25%. Other features include a four-stream Wi-Fi 8 radio, receiver sensitivity enhancements enabling faster uploads, a BroadStream wireless telemetry engine for AI training/inference, and a BroadStream intelligent packet scheduler to maximize QoE. It also provides full compliance with the IEEE 802.11bn and WFA Wi-Fi 8 specifications.

The BCM43840 (four-stream Wi-Fi 8 radio) and BCM43820 (two-stream scanning and analytics Wi-Fi 8 radio) enterprise Wi-Fi access point chips also feature advanced eco modes and third-generation digital pre-distortion, a BroadStream wireless telemetry engine for AI training/inference, and full compliance with the IEEE 802.11bn and WFA Wi-Fi 8 specifications. They also provide an advanced location-tracking capability.

The highly integrated BCM43109 dual-core Wi-Fi 8, high-bandwidth Bluetooth, and 802.15.4 combo chip is optimized for mobile handset applications. The combo chip offers non-primary channel access for latency reduction and improved low-density parity-check coding to extend gigabit coverage. It also provides full compliance with the IEEE 802.11bn and WFA Wi-Fi 8 specifications, along with 802.15.4 support including Thread V1.4 and Zigbee Pro, plus Bluetooth 6.0 with high data throughput and higher-band support. Other key features include a two-stream Wi-Fi 8 radio with 320-MHz channel support, enhanced long-range Wi-Fi, and sensing and secure ranging.

The Wi-Fi 8 silicon is currently sampling to select partners. The Wi-Fi IP is currently available for licensing, manufacture, and use in edge client devices.

The post Broadcom delivers Wi-Fi 8 chips for AI appeared first on EDN.

Microchip launches PCIe Gen 6 switches

Tue, 10/14/2025 - 21:47
Microchip's PCIe Gen 6 switches for AI infrastructure.

Microchip Technology Inc. expands its Switchtec PCIe family with its next-generation Switchtec Gen 6 PCIe fanout switches, supporting up to 160 lanes for high-density AI systems. Claiming the industry’s first PCIe Gen 6 switches manufactured using a 3-nm process, the Switchtec Gen 6 family features lower power consumption and advanced security features, including a hardware root of trust and secure boot with post-quantum-safe cryptography compliant with the Commercial National Security Algorithm Suite (CNSA) 2.0.

The PCIe 6.0 standard doubles the bandwidth of PCIe 5.0 to 64 GT/s per lane, making it suited for AI workloads and high-performance computing applications that need faster data transmission and lower latency. It also adds flow control unit (FLIT) mode, a lightweight forward-error-correction (FEC) system, and dynamic resource allocation, enabling more efficient and reliable data transfer, particularly for small packets in AI workloads.

As a high-performance interconnect, the Switchtec Gen 6 PCIe switches, Microchip’s third generation of PCIe switches, enable high-speed connectivity between CPUs, GPUs, SoCs, AI accelerators, and storage devices, reducing signal loss and maintaining the low latency required by AI fabrics, Microchip said.

Though there are no production CPUs with PCIe Gen 6 support on the market yet, Microchip wanted to make sure that it had all of the infrastructure components ready in advance of PCIe Gen 6 servers.

“This breakthrough is monumental for Microchip, establishing us once again as a leader in data center connectivity and broad infrastructure solutions,” said Brian McCarson, corporate vice president of Microchip’s data center solutions business unit.

Offering full PCIe Gen 6 compliance, which includes FLIT, FEC, 64-GT/s PAM4 signaling, deferrable memory, and 14-bit tag, the Switchtec Gen 6 PCIe switches feature 160 lanes, 20 ports, and 10 stacks, with each port featuring hot- and surprise-plug controllers. Also available are 144-lane variants. These switches support non-transparent bridging to connect and isolate multiple host domains, and multicast for one-to-many data distribution within a single domain. They are suited for high-performance compute, cloud computing, and hyperscale data centers.

Microchip's PCIe Gen 6 switches for AI infrastructure.(Source: Microchip Technology Inc.)

Multicast support is a key feature of the next-generation switch. Not all switch providers have multicast capability, McCarson said.

“Without multicast, if a CPU needs to communicate to two drives because you want to have backup storage, it has to cast to one drive and then cast to the second drive,” McCarson said. “With multicast, you can send a signal once and have it cast to multiple drives.

“Or if the GPU and CPU have to communicate but you need to have all of your GPUs networked together, the CPU can communicate to an entire bank of GPUs or vice versa if you’re operating through a switch with multicast capability,” he added. “Think about the power savings from not having a GPU or CPU do the same thing multiple times day in, day out.”

McCarson said customers are interested in PCIe Gen 6 because they can double the data rate, but when they look at the benefits of multicast, it could be even bigger than doubling the data rates in terms of efficient utilization of their CPU and GPU assets.

Other features include advanced error containment and comprehensive diagnostics and debug capabilities, several I/O interfaces, and an integrated MIPS processor with bifurcation options at x8 and x16. Input and output reference clocks are based on PCIe stacks with four input clocks per stack.

Higher performance

The Switchtec Gen 6 switches deliver performance gains in signal integrity, advanced security, and power consumption.

PCIe 6.0 uses PAM4 signaling, which enables the doubling of the data rate but can also reduce the signal-to-noise ratio, causing signal integrity issues. “Signal integrity is one of the key factors when you’re running this higher data rate,” said Tam Do, technical engineer, product marketing for Microchip’s data center solutions business unit.

The signal-loss, or insertion-loss, budget set by the PCIe 6.0 spec is 32 dB. The new switch meets the spec thanks in part to its SerDes design and Microchip’s recommended layout of the pinout and package, according to Do.

In addition, Microchip added post-quantum cryptography to the new chip, which is not part of the PCIe standard, to meet customer requirements for a higher level of security, Do said.

The PCIe switch also offers lower power consumption, thanks to the 3-nm process, than competing PCIe Gen 6 devices built on older technology nodes.

Development tools include Microchip’s ChipLink diagnostic tools, which provide debug, diagnostics, configuration, and analysis through an intuitive graphical user interface. ChipLink connects via in-band PCIe or sideband signals such as UART, TWI, and EJTAG. Also available is the PM61160-KIT Switchtec Gen 6 PCIe switch evaluation kit with multiple interfaces.

Switchtec Gen 6 PCIe switches (x8 and x16 bifurcation) and an evaluation kit are available for sampling to qualified customers. A low-lane-count version with 64 and 48 lanes with x2, x4, x8, x16 bifurcation for storage and general enterprise use cases will also be available in the second quarter of 2026.

The post Microchip launches PCIe Gen 6 switches appeared first on EDN.

Amps x Volts = Watts

Tue, 10/14/2025 - 16:12

Analog topologies abound for converting current to voltage, voltage to current, voltage to frequency, and frequency to voltage, among other conversions.

Figure 1 joins the flock while singing a somewhat different tune. This current, voltage, and power (IVW) DC power converter multiplies current by voltage to sense wattage. Here’s how it gets off the ground.

Figure 1 The “I*V = W” converter comprises a voltage-to-frequency conversion (U1ab & A1a) with frequency (F) = 2000 * Vload, followed by a frequency-to-voltage conversion (U1c & A1b) with Vw = Iload * F / 20000 = (Iload * Vload) / 10 = Watts / 10, where Vload < 33 V and Iload < 1.5 A.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The basic topology of the IVW converter comprises a voltage-to-frequency converter (VFC) cascaded with a frequency-to-voltage converter (FVC). U1ab and A1a, combined with the surrounding discretes (Q1, Q2, Q3, etc.), make a VFC similar to the one described in this previous Design Idea, “Voltage inverter design idea transmogrifies into a 1MHz VFC.”

The U1ab, A1a, C2, etc., VFC forms an inverting charge pump feedback loop that actively balances the 1-µA/V current through R2. Each cycle of the VFC deposits a charge of 5 V * C2, or 500 picocoulombs (pC), onto integrator capacitor C3 to produce an F of 2 kHz/V * Vload (= 1 µA/V / 500 pC) for the control signal input of the FVC switch U1c.

The other input to the U1c FVC is the -100-mV/A current-sense signal from R1. This combination forces U1c to pump F * -0.1 V/A * 500 pF = -2 kHz * Vload * 50 pC * Iload into the input of the A1b inverting integrator.

 The melodious result is:

Vw = R1 * Iload * 2000 * Vload * R6 * C6

or, 

Vw = Iload * Vload * 0.1 * 2000 * 1 MΩ * 500 pF = 100 mV/W.

The R6C5 = 100-ms integrator time constant provides >60 dB of ripple attenuation for Vload > 1 V and a low-noise, 0- to 5-V output suitable for consumption by a typical 8- to 10-bit resolution ADC input. Diode D1 provides fire insurance for U1 in case Vload gets shorted to ground.
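Converting the ADC reading back to watts is then a single scale factor. Here is a minimal sketch in C, assuming a hypothetical 10-bit ADC helper with a 5-V reference:

    #include <stdint.h>

    /* Hypothetical helper: returns a 10-bit conversion of Vw (5-V reference). */
    extern uint16_t adc_read_vw(void);

    /* Vw scales at 100 mV/W, so the 0- to 5-V output spans 0 to 50 W. */
    float read_watts(void)
    {
        float vw = (float)adc_read_vw() * 5.0f / 1023.0f; /* counts -> volts */
        return vw / 0.1f;                                 /* 100 mV/W -> watts */
    }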

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Amps x Volts = Watts appeared first on EDN.
