Architectural opportunities propel software-defined vehicles forward

At the end of last year, the global software-defined vehicle (SDV) market size was valued at $49.3 billion. With a compound annual growth rate exceeding 25%, the industry is set to skyrocket over the next decade. But this anticipated growth hinges on automakers addressing fundamental architectural and organizational barriers. To me, 2025 will be a pivotal year for SDVs, provided the industry focuses on overcoming these challenges rather than chasing incremental enhancements.
Moving beyond the in-cabin experience
In recent years, innovations in the realm of SDVs have primarily focused on enhancing passenger experience with infotainment systems, high-resolution touchscreens, voice-controlled car assistance, and personalization features ranging from seat positions to climate control, and even customizable options based on individual profiles.
While enhancements of these sorts have elevated the in-cabin experience to essentially replicate that of a smartphone, the next frontier in the automotive revolution lies in reimagining the very architecture of vehicles.
To truly advance the future of SDVs, I believe OEMs must partner with technology companies to architect configurable systems that enable SDV features to be unlocked on demand, unified infrastructures that optimize efficiency, and the integration of software and hardware teams at organizations. Together, these changes signal a fundamental redefinition of what it means to build and operate a vehicle in the era of software-driven mobility.
1. Cost of sluggish software updates
The entire transition to SDVs was built on the premise that OEMs could continuously improve their products, deploy new features, and offer a better user experience throughout the vehicle’s lifecycle, all without having to upgrade the hardware. This has created a new business model in which automakers depend on software as a service to drive revenue streams. Companies like Apple have shelved plans to build a car, instead opting to control digital content within vehicles with Apple CarPlay. As automakers rely on users purchasing software to generate revenue, the frequency of software updates has risen. However, these updates introduce a new set of challenges to both vehicles and their drivers.
When over-the-air updates are slow or poorly executed, they can delay functionality in other areas of the vehicle, rendering certain features unavailable until the update is complete. Losing those features not only inconveniences the user but also raises safety concerns. In other instances, drivers could experience downtime where the vehicle is unusable while updates are installed, as the process may require the car to remain parked and powered off.
Rapid reconfiguration of SDV software
Users will ditch car manufacturers that continue to deliver slow over-the-air updates impairing the use of their vehicles, as seamless and convenient functionality remains a priority. To stay competitive, OEMs need to upgrade their vehicle architectures with configurable platforms that grant users access to features on the fly, without friction.
Advanced semiconductor solutions will play a critical role in this transformation, by facilitating the seamless integration of sophisticated electronic systems like advanced driver-assistance systems (ADAS) and in-vehicle entertainment platforms. These technological advancements are essential for delivering enhanced functionality and connected experiences that define next-generation SDVs.
To support this shift, cutting-edge semiconductor technologies such as fully-depleted silicon-on-insulator (FD-SOI) and Fin field-effect transistor (FinFET) with magnetoresistive random access memory (MRAM) are emerging as key enablers. These innovations enable the rapid reconfiguration of SDVs, significantly reducing update times and minimizing disruption for drivers. High-speed, low-power non-volatile memory (NVM) further accelerates this progress, facilitating feature updates in a fraction of the time required by traditional flash memory. Cars that evolve as fast as smartphones, giving users access to new features instantly and painlessly, will enhance customer loyalty and open up new revenue streams for automakers (Figure 1).
Figure 1 Cars that evolve as fast as smartphones using key semiconductor technologies such as FD-SOI, FinFET, and MRAM will give users access to new features instantly and painlessly. Source: Getty Images
2. Inefficiencies of distinct automotive domains
The present design of automotive architecture also lends itself to challenges, as today’s vehicles are built around a central architecture that is split into distinct domains: motion control, ADAS, and entertainment. These domains function independently, each with its own control unit.
This current domain-based system has led to inefficiencies across the board. With domains housed in separate infrastructures, there are increased costs, weight, and energy consumption associated with computing. Especially as OEMs increasingly integrate new software and AI into the systems of SDVs, the domain architecture of cars presents the following challenges:
- Different software modules must run on the same hardware without interference.
- Software portability across different hardware in automotive systems is often limited.
- AI is the least hardware-agnostic component in automotive applications, complicating integration without close collaboration between hardware and software systems.
The inefficiencies of domain-based systems will continue to be amplified as SDVs become more sophisticated, with an increasing reliance on AI, connectivity, and real-time data processing, highlighting the need for upgrades to the architecture.
Optimizing a centralized architecture
OEMs are already trending toward a more unified hardware structure by moving from distinct silos to an optimized central architecture under one roof, and I anticipate a stronger shift toward this trend in the coming years. By sharing infrastructure like cooling systems, power supplies, and communication networks, this shift brings greater efficiency, both lowering costs and improving performance.
As we look to the future, the next logical step in automotive innovation will be to merge domains into a single system-on-chip (SoC) to easily port software between engines, reducing R&D costs and driving further innovation. In addition, chiplet technology ensures the functional safety of automotive systems by maintaining freedom from interference, while also enabling the integration of various AI engines into SDVs, paving the way for more agile innovation without overhauling entire vehicles (Figure 2).
Figure 2 Merging multiple domains into a single, central SoC is key to realizing SDVs. This architectural shift inherently relies upon chiplet technology to ensure the functional safety of automotive systems. Source: Getty Images
3. The reorganization companies must face
Many of these software and hardware architectural challenges stem from the current organization of companies in the industry. Historically, automotive companies have operated in silos, with hardware and software development functioning as distinct and often disconnected entities. This legacy approach is increasingly incompatible with the demands of SDVs.
Bringing software to the forefront
Moving forward, automakers must shift their focus from being hardware-centric manufacturers to becoming software-first innovators. Similar to technology companies, automakers must adopt new business models that allow for continuous improvement and rapid iteration. This involves restructuring organizations to promote cross-functional collaboration, bringing traditionally isolated departments together to ensure seamless integration between hardware and software components.
While restructuring any business requires significant effort, this transformation will also yield meaningful benefits. By prioritizing software first, automakers will be able to deliver vehicles with scalable, future-proofed architectures while also keeping customers satisfied, as seamless over-the-air updates remain a defining factor of the SDV experience.
Semiconductors: The future of SDV architecture
The SDV revolution stands at a crossroads; while the in-cabin experience has advanced by leaps and bounds, the architecture of vehicles must evolve to meet future consumer demands. Semiconductors will play an essential role in the future of SDV architecture, enabling seamless software updates without disruption, centralizing domains to maximize efficiency, and driving synergy between software and hardware teams.
Sudipto Bose, Senior Director of Automotive Business Unit, GlobalFoundries.
Related Content
- CES 2025: Wirelessly upgrading SDVs
- CES 2025: Moving toward software-defined vehicles
- Software-defined vehicle (SDV): A technology to watch in 2025
- Will open-source software come to SDV rescue?
Why optical technologies matter in machine vision systems

Machine vision systems are becoming increasingly common across multiple industries. Manufacturers use them to streamline quality control, self-driving vehicles implement them to navigate, and robots rely on them to work safely alongside humans. Amid these rising use cases, design engineers must focus on the importance of reliable and cost-effective optical technologies.
While artificial intelligence (AI) algorithms may take most of the spotlight in machine vision, optical systems providing the data these models analyze are crucial, too. Therefore, by designing better camera and sensor arrays, design engineers can foster optimal machine vision on several fronts.
Optical systems are central to machine vision accuracy before the underlying AI model starts working. These algorithms are only effective when they have sufficient relevant data for training, and this data requires cameras to capture it.
Some organizations have turned to using AI-generated synthetic data in training, but this is not a perfect solution. These images may contain errors and hallucinations, hindering the model’s accuracy. Consequently, they often require real-world information to complement them, which must come from high-quality sources.
Developing high-resolution camera technologies with large dynamic ranges gives AI teams the tools necessary to capture detailed images of real-world objects. As a result, it becomes easier to train more reliable machine vision models.
Expanding machine vision applications
Machine vision algorithms need high-definition visual inputs during deployment. Even the most accurate model can produce inconsistent results if the images it analyzes aren’t clear or consistent enough.
External factors like lighting can limit measurement accuracy, so designers must pay attention to these considerations in their optical systems, not just the cameras themselves. Providing sufficient light from the right angles to minimize shadows, along with sensors that adjust focus accordingly, can improve reliability.
Next, video data and still images are not the only optical inputs to consider in a machine vision system. Design engineers can also explore a variety of technologies to complement conventional visual data.
For instance, lidar is an increasingly popular choice. More than half of all new cars today come with at least one radar sensor to enable functions like lane-departure warnings, and lidar is following a similar trajectory as self-driving features grow.
Complementing a camera with lidar sensors can provide these machine vision systems with a broader range of data. More input diversity makes errors less likely, especially when operating conditions may vary. Laser measurements and infrared cameras could likewise expand the roles machine vision serves.
The demand for high-quality inputs means the optical technologies in a machine vision system are often some of its most expensive components. By focusing on developing lower-cost solutions that maintain acceptable quality levels, designers can make them more accessible.
It’s worth noting that advances in camera technology have already brought the cost of such a solution from $1 million to $100,000 on the high end. Further innovation could have a similar effect.
Machine vision needs reliable optical technologies
AI is only as accurate as its input data. So, machine vision needs advanced optical technologies to reach its full potential. Design engineers hoping to capitalize on this field should focus on optical components to push the industry forward.
Ellie Gabel is a freelance writer as well as an associate editor at Revolutionized.
Related Content
- What Is Machine Vision All About?
- Know Your Machine Vision Components
- Video Cameras and Machine Vision: A Technology Overview
- How Advancements in Machine Vision Propel Factory Revolution
- Machine Vision Approach Addresses Limitations of Standard 3D Sensing Technologies
Automotive chips improve ADAS reliability

TI has expanded its automotive portfolio with a high-speed lidar laser driver, BAW-based clocks, and a mmWave radar sensor. These devices support the development of adaptable ADAS for safer, more automated driving.
The LMH13000 is claimed to be the first laser driver with an ultra-fast 800-ps rise time, enabling up to 30% longer distance measurements than discrete implementations and enhancing real-time decision making. It integrates LVDS, CMOS, and TTL control signals, eliminating the need for large capacitors or additional external circuitry. The device delivers up to 5 A of adjustable output current with just 2% variation across an ambient temperature range of -40°C to +125°C.
By leveraging bulk acoustic wave (BAW) technology, the CDC6C-Q1 oscillator and the LMK3H0102-Q1 and LMK3C0105-Q1 clock generators provide 100× greater reliability than quartz-based clocks, with a failure-in-time (FIT) rate as low as 0.3. These devices improve clocking precision in next-generation vehicle subsystems.
TI’s AWR2944P front and corner radar sensor builds on the AWR2944 platform, offering a higher signal-to-noise ratio, enhanced compute performance, expanded memory, and an integrated radar hardware accelerator. The accelerator enables the system’s MCU and DSP to perform machine learning tasks for edge AI applications.
Preproduction quantities of the LMH13000, CDC6C-Q1, LMK3H0102-Q1, LMK3C0105-Q1, and AWR2944P are available now on TI.com. Additional output current options and an automotive-qualified version of the LMH13000 are expected in 2026.
PMIC fine-tunes power for MPUs and FPGAs

Designed for high-end MPU and FPGA systems, the Microchip MCP16701 PMIC integrates eight 1.5-A buck converters that can be paralleled and are duty cycle-capable. It also includes four 300-mA LDO regulators and a controller to drive external MOSFETs.
The MCP16701 enables dynamic VOUT adjustment across all converters, from 0.6 V to 1.6 V in 12.5-mV steps and from 1.6 V to 3.8 V in 25-mV steps. This flexibility allows precise power tuning for specific requirements in industrial computing, data servers, and edge AI, enhancing overall system efficiency.
Housed in a compact 8×8-mm VQFN package, the PMIC reduces board area by 48% and cuts component count to less than 60% of that of a comparable discrete design. It supports Microchip’s PIC64-GX MPU and PolarFire FPGAs with a configurable feature set and operates from -40°C to +105°C. An I2C interface facilitates communication with other system components.
The MCP16701 costs $3 each in lots of 10,000 units.
PXI testbench strengthens chip security testing

The DS1050A Embedded Security Testbench from Keysight is a scalable PXI-based platform for advanced side-channel analysis (SCA) and fault injection (FI) testing. Designed for modern chips and embedded devices, it builds on the Device Vulnerability Analysis product line, offering up to 10× higher test effectiveness to help identify and mitigate hardware-level security threats.
This modular platform combines three core components—the M9046A PXIe chassis, M9038A PXIe embedded controller, and Inspector software. It integrates key tools, including oscilloscopes, interface equipment, amplifiers, and trigger generators, into a single chassis, reducing cabling and improving inter-module communication speed.
The 18-slot M9046A PXIe chassis delivers up to 1675 W of power and supports 85 W of cooling per slot, accommodating both Keysight and third-party test modules. Powered by an Intel Core i7-9850HE processor, the M9038A embedded controller provides the computing performance required for complex tests. Inspector software simulates diverse fault conditions, supports data acquisition, and enables advanced cryptanalysis across embedded devices, chips, and smart cards.
For more information on the DS1050A Embedded Security Testbench, or to request a price quote, click the product page link below.
Sensor brings cinematic HDR video to smartphones

Omnivision’s OV50X CMOS image sensor delivers movie-grade video capture with ultra-high dynamic range (HDR) for premium smartphones. Based on the company’s TheiaCel and dual conversion gain (DCG) technologies, the color sensor achieves single-exposure HDR approaching 110 dB—reportedly the highest available in smartphones.
The OV50X is a 50-Mpixel sensor with a 1.6-µm pixel pitch and an 8192×6144 active array in a 1-in. optical format. It supports 4-cell binning, providing 12.5-Mpixel output at up to 180 frames/s, or 60 frames/s with three-exposure HDR. The sensor also enables 8K video with dual analog gain HDR and on-sensor crop-zoom capability.
TheiaCel employs lateral overflow integration capacitor (LOFIC) technology in combination with Omnivision’s proprietary DCG HDR to capture high-quality images and video in difficult lighting conditions. Quad phase detection (QPD) with 100% sensor coverage enables fast, precise autofocus across the entire frame—even in low light.
The OV50X image sensor is currently sampling, with mass production slated for Q3 2025.
GaN transistors integrate Schottky diode

Medium-voltage CoolGaN G5 transistors from Infineon include a built-in Schottky diode to minimize dead-time losses and enhance system efficiency. The integrated diode also streamlines power stage design and helps reduce BOM cost.
In hard-switching designs, GaN devices can suffer from higher power losses due to body diode behavior, especially with long controller dead times. CoolGaN G5 transistors address this by integrating a Schottky diode, improving efficiency across applications such as telecom IBCs, DC/DC converters, USB-C chargers, power supplies, and motor drives.
A GaN transistor’s reverse conduction voltage (VRC) depends on the threshold voltage (VTH) and OFF-state gate bias (VGS), as there is no body diode. Since VTH is typically higher than the turn-on voltage of silicon diodes, reverse conduction losses increase in third-quadrant operation. The CoolGaN transistor reduces these losses, improves compatibility with high-side gate drivers, and allows broader controller compatibility thanks to relaxed dead-time requirements.
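To see the mechanism in numbers, here is a minimal back-of-the-envelope sketch (Python) of dead-time conduction loss with and without a clamping Schottky diode. Every value below is an illustrative assumption, not a CoolGaN G5 datasheet figure:

# Rough dead-time loss estimate for a GaN half-bridge.
# All values are illustrative assumptions.
V_TH = 1.5         # GaN threshold voltage (V), assumed
V_GS_OFF = -3.0    # OFF-state gate bias (V), assumed
V_SCHOTTKY = 0.5   # Schottky forward drop (V), assumed
I_LOAD = 20.0      # load current during dead time (A), assumed
T_DEAD = 50e-9     # dead time per switching edge (s), assumed
F_SW = 500e3       # switching frequency (Hz), assumed
EDGES = 2          # two dead-time intervals per period

# With no body diode, the reverse-conduction drop grows with V_TH
# and the magnitude of the negative OFF-state gate bias.
v_rc_bare = V_TH - V_GS_OFF          # 4.5 V
v_rc_clamped = V_SCHOTTKY            # integrated diode clamps the drop

p_bare = v_rc_bare * I_LOAD * T_DEAD * F_SW * EDGES
p_clamped = v_rc_clamped * I_LOAD * T_DEAD * F_SW * EDGES
print(f"dead-time loss, no diode: {p_bare:.2f} W")     # ~4.50 W
print(f"dead-time loss, clamped:  {p_clamped:.2f} W")  # ~0.50 W

Even with these rough numbers, clamping the reverse-conduction drop to a Schottky-level voltage cuts the dead-time loss by roughly an order of magnitude, which is the efficiency mechanism described above.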
The first device in the CoolGaN G5 series with an integrated Schottky diode is a 100-V, 1.5-mΩ transistor in a 3×5-mm PQFN package. Engineering samples and a target datasheet are available upon request.
Shoot-through

This phenomenon has nothing to do with “Gunsmoke” or with “Have Gun, Will Travel”. (Do you remember those old TV shows?) The phrase “shoot-through” describes unwanted and possibly destructive pulses of current flowing through power semiconductors in certain power supply designs.
In half-bridge and full-bridge power inverters, we have one pair (half-bridge) or two pairs (full-bridge) of power switching devices connected in series from a rail voltage to a rail voltage return. Those devices could be power MOSFETs, IGBTs, or whatever, but the requirement in each case is the same: the two devices in each pair must turn on and off in alternate fashion. If the upper one is on, the lower one is off. If the upper one is off, the lower one is on.
The circuit board seen in Figure 1 was one such design based on a full-bridge power inverter, and it had a shoot-through issue.
Figure 1 A full-bridge circuit board with a shoot-through issue and the test arrangement used to assess it.
A super simplified SPICE simulation shows conceptually what was going amiss with that circuit board, Figure 2.
Figure 2 A SPICE simulation that conceptually walks through the shoot-through problem occurring on the circuit in Figure 1.
S1 represents the board’s Q1 and Q2 upper switches and S2 represents the board’s Q4 and Q3 lower switches. At each switching transition, there was a brief moment when one switch had not quite turned off by the time its corresponding switch had turned on. With both switching devices on at the same time, however brief that “same” time was, there would be a pulse of current flowing from the board’s rail through the two switches and into the board’s rail return. That current pulse would be of essentially unlimited magnitude, and the two switching devices could and would suffer damage.
Electromagnetic interference issues arose as well, but that’s a separate discussion.
Old hands will undoubtedly recognize the following, but let’s take a look at the remedy shown in Figure 3.
Figure 3 Shoot-through problem solved by introducing two diodes to speed up the switches’ turn-off times.
The capacitors C1 and C2 represent the input gate capacitances of the power MOSFETs that served as the switches. The shoot-through issue would arise when one of those capacitances was not fully discharged before the other capacitance got raised to its own full charge. Adding two diodes sped up the capacitance discharge times so that essentially full discharge was achieved for each FET before the other one could turn on.
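A rough first-order RC model shows why the bypass diodes help. The sketch below (Python) uses illustrative, assumed values rather than the actual board’s components:

import math

# First-order RC model of gate discharge, with and without a turn-off
# bypass diode around the gate resistor. Values are assumed, not taken
# from the Figure 1 board.
R_GATE = 47.0        # gate drive resistor (ohms), assumed
R_DIODE_PATH = 4.7   # effective resistance of the diode path (ohms), assumed
C_ISS = 2.2e-9       # MOSFET input (gate) capacitance (F), assumed

# Time for the gate voltage to decay to ~10% in a simple RC model:
# t = ln(10) * R * C
t_slow = math.log(10) * R_GATE * C_ISS
t_fast = math.log(10) * R_DIODE_PATH * C_ISS
print(f"discharge through gate resistor: {t_slow * 1e9:.0f} ns")  # ~238 ns
print(f"discharge through bypass diode:  {t_fast * 1e9:.0f} ns")  # ~24 ns

The order-of-magnitude faster discharge is what guarantees each FET is fully off before its partner turns on.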
Having thus prevented simultaneous turn-ons, the troublesome current pulses on that circuit board were eliminated.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Shoot-thru suppression
- Tip of the Week: How to best implement a synchronous buck converter
- MOSFET Qrr: Ignore at your peril in the pursuit of power efficiency
- EMI and circuit components: Where the rubber meets the road
Addressing hardware failures and silent data corruption in AI chips

Meta trained one of its AI models, called Llama 3, in 2024 and published the results in a widely covered paper. During a 54-day period of pre-training, Llama 3 experienced 466 job interruptions, 419 of which were unexpected. Upon further investigation, Meta learned 78% of those hiccups were caused by hardware issues such as GPU and host component failures.
Hardware issues like these don’t just cause job interruptions. They can also lead to silent data corruption (SDC), causing unwanted data loss or inaccuracies that often go undetected for extended periods.
While Meta’s pre-training interruptions were unexpected, they shouldn’t be entirely surprising. AI models like Llama 3 have massive processing demands that require colossal computing clusters. For training alone, AI workloads can require hundreds of thousands of nodes and associated GPUs working in unison for weeks or months at a time.
The intensity and scale of AI processing and switching create a tremendous amount of heat, voltage fluctuations and noise, all of which place unprecedented stress on computational hardware. The GPUs and underlying silicon can degrade more rapidly than they would under normal (or what used to be normal) conditions. Performance and reliability wane accordingly.
This is especially true for sub-5 nm process technologies, where silicon degradation and faulty behavior are observed upon manufacturing and in the field.
But what can be done about it? How can unanticipated interruptions and SDC be mitigated? And how can chip design teams ensure optimal performance and reliability as the industry pushes forward with newer, bigger AI workloads that demand even more processing capacity and scale?
Ensuring silicon reliability, availability and serviceability (RAS)
Certain AI players like Meta have established monitoring and diagnostics capabilities to improve the availability and reliability of their computing environments. But with processing demands, hardware failures and SDC issues on the rise, there is a distinct need for test and telemetry capabilities at deeper levels—all the way down to the silicon and multi-die packages within each XPU/GPU as well as the interconnects that bring them together.
The key is silicon lifecycle management (SLM) solutions that help ensure end-to-end RAS, from design and manufacturing to bring-up and in-field operation.
With better visibility, monitoring, and diagnostics at the silicon level, design teams can:
- Gain telemetry-based insights into why chips are failing or why SDC is occurring.
- Identify voltage or timing degradation, overheating, and mechanical failures in silicon components, multi-die packages, and high-speed interconnects.
- Conduct more precise thermal and power characterization for AI workloads.
- Detect, characterize, and resolve radiation effects, voltage noise, and other failure mechanisms that can lead to undetected bit flips and SDC.
- Improve silicon yield, quality, and in-field RAS.
- Implement reliability-focused techniques—like triple modular redundancy and dual core lock step—during the register-transfer level (RTL) design phase to mitigate SDC (see the voter sketch below).
- Establish an accurate pre-silicon aging simulation methodology to detect sensitive or vulnerable circuits and replace them with aging-resilient circuits.
- Improve outlier detection on reliability models, which helps minimize in-field SDC.
Silicon lifecycle management (SLM) solutions help ensure end-to-end reliability, availability, and serviceability. Source: Synopsys
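As a concrete illustration of the first of those RTL-level techniques, here is a minimal sketch of a triple-modular-redundancy (TMR) majority voter, modeled bit-wise in Python. In an actual chip this would be synthesized logic; the function name and values here are hypothetical:

def tmr_vote(a: int, b: int, c: int) -> int:
    """Return the bitwise majority of three redundant copies."""
    return (a & b) | (b & c) | (a & c)

# A bit flip in any single copy (one form of silent data corruption)
# is outvoted by the two intact copies:
golden = 0b1011_0010
corrupted = golden ^ 0b0000_1000   # one flipped bit in one copy
assert tmr_vote(golden, golden, corrupted) == golden
print("single-copy upset corrected")

Dual core lock step works analogously, comparing, rather than voting on, redundant results and flagging any mismatch.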
An SLM design example
SLM IP and analytics solutions help improve silicon health and provide operational metrics at each phase of the system lifecycle. This includes environmental monitoring for understanding and optimizing silicon performance based on the operating environment of the device; structural monitoring to identify performance variations from design to in-field operation; and functional monitoring to track the health and anomalies of critical device functions.
Below are the key features and capabilities that SLM IP provides:
- Process, voltage and temperature monitors
  - Help ensure optimal operation while maximizing performance, power, and reliability.
  - Highly accurate and distributed monitoring throughout the die, enabling thermal management via frequency throttling.
- Path margin monitors
  - Measure timing margin of 1000+ synthetic and functional paths (in-test and in-field).
  - Enable silicon performance optimization based on actual margins.
  - Automated path selection, IP insertion, and scan generation.
- Clock and delay monitors
  - Measure the delay between the edges of one or more signals.
  - Check the quality of the clock duty cycle.
  - Measure memory read access time tracking with built-in self-test (BIST).
  - Characterize digital delay lines.
- UCIe monitor, test and repair
  - Monitor signal integrity of die-to-die UCIe lane(s).
  - Generate algorithmic BIST patterns to detect interconnect fault types, including lane-to-lane crosstalk.
  - Perform cumulative lane repair with redundancy allocation (upon manufacturing and in-field).
- High-speed access and test
  - Enable testing over functional interfaces (PCIe, USB and SPI).
  - For in-field operation as well as wafer sort, final test, and system-level test.
  - Can be used in conjunction with automated test equipment.
  - Help conduct in-field remote diagnoses and lower-cost test via reduced pin count.
- HBM external test and repair
  - Comprehensive, silicon-proven DRAM stack test, repair and diagnostics engine.
  - Support third-party HBM DRAM stack providers.
  - Provide high-performance die-to-die interconnect test and repair support.
  - Operate in conjunction with HBM PHY and support a range of HBM protocols and configurations.
- SLM hierarchical subsystem
  - Automated hierarchical SLM and test manageability solution for system-on-chips (SoCs).
  - Automated integration and access of all IP/cores with in-system scheduling.
  - Pre-validated, ready ATE patterns with pattern porting.
Silicon test and telemetry in the age of AI
With the scale and processing demands of AI devices and workloads on the rise, system reliability, silicon health and SDC issues are becoming more widespread. While there is no single solution or antidote for avoiding these issues, deeper and more comprehensive test, repair, and telemetry—at the silicon level—can help mitigate them. The ability to detect or predict in-field chip degradation is particularly valuable, enabling corrective action before sudden or catastrophic system failures occur.
Delivering end-to-end visibility through RAS, silicon test, repair, and telemetry will be increasingly important as we move toward the age of AI.
Shankar Krishnamoorthy is chief product development officer at Synopsys.
Krishna Adusumalli is R&D engineer at Synopsys.
Jyotika Athavale is architecture engineering director at Synopsys.
Yervant Zorian is chief architect at Synopsys.
Related Content
- Uncovering Silent Data Errors with AI
- 11 steps to successful hardware troubleshooting
- Self-testing in embedded systems: Hardware failure
- Understanding and combating silent data corruption
- Test solutions to confront silent data corruption in ICs
Photo tachometer sensor accommodates ambient light

Tachometry, the measurement of the rotational speed of spinning objects, is a common application. Some of those objects, however, have quirky aspects that make them extra interesting, even scary. One such category includes outdoor noncontact sensing of large, fast, and potentially hazardous objects like windmills, waterwheels, and aircraft propellers. The tachometer peripheral illustrated in Figure 1 implements optical sensing using available ambient light, provides a logic-level signal to a microcontroller digital input, and is easily adaptable to different light levels and mechanical contexts.
Figure 1 Logarithmic contrast detection accommodates several decades of variability in available illumination.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Safe sensing of large rotating objects is best done from a safe (large) distance, and passive available-light optical methods are the obvious solution. Unless elaborate lens systems are used in front of the detector, the optical signal is apt to have a relatively low amplitude due to the tendency of the rotating object (propeller blade, etc.) to fill only a small fraction of the field of view of simple detectors. This tachometer (Figure 1) makes do with an uncomplicated detector (phototransistor Q1 with a simple light shield) by following the detector with a high-gain, AC-coupled, logarithmic threshold detector.
Q1’s photocurrent produces a signal across Q2 and Q3 that varies by ~500 µV pp for every 1% change in incident light intensity and that is roughly given (neglecting various tempcos) by:
V ≈ 0.12 log10(IQ1/IO), where IO ≈ 10 fA
This approximate log relationship works over a range of nanoamps to milliamps of photocurrent and is therefore able to provide reliable circuit operation despite several orders of magnitude variation in available light intensity. A1 and the surrounding discrete components comprise high gain (80 dB) amplification that presents a 5-Vpp square-wave to the attached microcontroller DIO pin.
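A quick numerical check of the ~500-µV-per-1% figure, using the approximate log law above (Python; the 10-fA term is the article’s stated value):

import math

def v_log(i_photo, i_o=10e-15):
    """Approximate detector voltage (V) vs. photocurrent (A)."""
    return 0.12 * math.log10(i_photo / i_o)

# A 1% intensity change produces the same voltage step at any
# operating point, since log10(1.01*I) - log10(I) = log10(1.01):
delta_v = v_log(1.01e-6) - v_log(1.0e-6)
print(f"{delta_v * 1e6:.0f} uV per 1% change")   # ~519 uV

That operating-point independence is exactly what lets the circuit tolerate several decades of variation in ambient light.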
Programming of the I/O pin internal logic for pulse counting allows a simple software routine to divide the accumulated count by the associated time interval and by the number of counted optical features of the rotating object (e.g., number of blades on the propeller) to produce an accurate RPM reading.
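A minimal sketch of that software routine, with hypothetical names and example numbers (the article does not list its actual firmware):

def rpm_from_counts(pulse_count, gate_time_s, features_per_rev):
    """Convert an accumulated pulse count into RPM."""
    revolutions = pulse_count / features_per_rev
    return revolutions / gate_time_s * 60.0

# Example: 300 pulses counted over 2 s from a 3-blade propeller
print(rpm_from_counts(300, 2.0, 3))   # 3000.0 RPM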
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Analyze mechanical measurements with digitizers and software
- Monitoring, control, and protection options in DC fans for air cooling
- Motor controller operates without tachometer feedback
- Small Tachometer
- Sparkplug Wire Sensor & Digital Tachometer – Getting Started
How NoC architecture solves MCU design challenges

Microcontrollers (MCUs) have undergone a remarkable transformation, evolving from basic controllers into specialized processing units capable of handling increasingly complex tasks. Once confined to simple command execution, they now support diverse functions that require rapid decision-making, heightened security, and low-power operation.
Their role has expanded across industries, from managing complex control systems in industrial automation to supporting safety-critical vehicle applications and power-efficient operations in connected devices.
As MCUs take on greater workloads, the conventional bus-based interconnects that once sufficed now limit performance and scalability. Adding artificial intelligence (AI) accelerators, machine learning technology, reconfigurable logic, and secure processing elements demands a more advanced on-chip communication infrastructure.
To meet these needs, designers are adopting network-on-chip (NoC) architectures, which provide a structured approach to data movement, alleviating congestion and optimizing power efficiency. Compared to traditional crossbar-based interconnects, NoCs reduce routing congestion through packetization and serialization, enabling more efficient data flow while reducing wire count.
This is how efficient packetization works in network-on-chip (NoC) communications. Source: Arteris
MCU vendors adopt NoC interconnect
Many MCU vendors relied on proprietary interconnect solutions for years, evolving from basic crossbars to custom in-house NoC implementations. However, increasing design complexity encompassing AI/ML integration, security requirements, and real-time processing has made these solutions costly and challenging to maintain.
Moreover, as advanced packaging techniques and die-to-die interconnects become more common, maintaining in-house interconnects has grown increasingly complex, requiring constant updates for new communication protocols and power management strategies.
To address these challenges, many vendors are transitioning to commercial NoC solutions that offer pre-validated scalability and significantly reduce development overhead. For an engineer designing an AI-driven MCU, an NoC’s ability to streamline communication between accelerators and memory can dramatically impact system efficiency.
Another major driver of this transition is power efficiency. Unlike general-purpose systems-on-chip (SoCs), many MCUs must function within strict power constraints. Advanced NoC architectures enable fine-grained power control through power domain partitioning, clock gating, and dynamic voltage and frequency scaling (DVFS), optimizing energy use while maintaining real-time processing capabilities.
Optimizing performance with NoC architectures
The growing number of heterogeneous processing elements has placed unprecedented demands on interconnect architectures. NoC technology addresses these challenges by offering a scalable, high-performance alternative that reduces routing congestion, optimizes power consumption, and enhances data flow management. NoC enables efficient packetized communication, minimizes wire count, and simplifies integration with diverse processing cores, making it well-suited for today’s MCU requirements.
By structuring data movement efficiently, NoCs eliminate interconnect bottlenecks, improving responsiveness and reducing die area. As a result, NoC-based designs achieve up to 30% higher bandwidth efficiency than traditional bus-based architectures, improving overall performance in real-time systems. This lets MCU designers simplify integration and keep their architectures adaptable for advanced applications in automotive, industrial, and enterprise computing markets.
Beyond enhancing interconnect efficiency, NoC architectures support multiple topologies, such as mesh and tree configurations, to ensure low-latency communication across specialized processing cores. Their scalable design optimizes interconnect density while minimizing congestion, allowing MCUs to handle increasingly complex workloads. NoCs also improve power efficiency through modularity, dynamic bandwidth allocation, and serialization techniques that reduce wire count.
By implementing advanced serialization, NoC architectures can reduce the number of interconnect wires by nearly 50%, as shown in the above figure, lowering overall die area and reducing power consumption without sacrificing performance. These capabilities enable MCUs to sustain high performance while balancing power constraints and minimizing die area, making NoC solutions essential for next-generation designs requiring real-time processing and efficient data flow.
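A toy illustration of the packetization/serialization trade-off (Python): a wide payload is carried as narrow flits over one link, exchanging parallel wires for extra cycles. The field widths and header format are invented for illustration only:

FLIT_BITS = 32

def packetize(payload, payload_bits, dest):
    """Split a wide payload into a header flit plus body flits."""
    flits = [dest]   # simplified header flit carrying routing info
    for shift in range(0, payload_bits, FLIT_BITS):
        flits.append((payload >> shift) & ((1 << FLIT_BITS) - 1))
    return flits

# A 128-bit transfer becomes 1 header + 4 body flits on a 32-bit
# link, instead of needing 128 parallel wires:
flits = packetize(0x0123456789ABCDEF0123456789ABCDEF, 128, dest=7)
print(len(flits), "flits")   # 5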
In addition to improving scalability, NoCs enhance safety with features that help toward achieving ISO 26262 and IEC 61508 compliance. They provide deterministic communication, automated bandwidth and latency adjustments, and built-in deadlock avoidance mechanisms. This reduces the need for extensive manual configuration while ensuring reliable data flow in safety-critical applications.
Interconnects for next-generation MCUs
As MCU workloads grow in complexity, NoC architectures have become essential for managing high-bandwidth, real-time automation, and AI inference-driven applications. Beyond improving data transfer efficiency, NoCs address power management, deterministic communication, and compliance with functional safety standards, making them a crucial component in next-generation MCUs.
To meet increasing integration demands, ranging from AI acceleration to stringent power and reliability constraints, MCU vendors are shifting toward commercial NoC solutions that streamline system design. Automated pipelining, congestion-aware routing, and configurable interconnect frameworks are now key to reducing design complexity while ensuring scalability and long-term adaptability.
Today’s NoC architectures optimize timing closure, minimize wire count, and reduce die area while supporting high-bandwidth, low-latency communication. These NoCs offer a flexible approach, ensuring that next-generation architectures can efficiently handle new workloads and comply with evolving industry standards.
Andy Nightingale, VP of product management and marketing at Arteris, has over 37 years of experience in the high-tech industry, including 23 years in various engineering and product management positions at Arm.
Related Content
- SoC Interconnect: Don’t DIY!
- What is the future for Network-on-Chip?
- Why verification matters in network-on-chip (NoC) design
- SoC design: When is a network-on-chip (NoC) not enough
- Network-on-chip (NoC) interconnect topologies explained
Aftermarket drone remote ID: Let’s see what’s inside thee

The term “aftermarket” finds most frequent use, in my experience, in describing hardware bought by owners to upgrade vehicles after they initially leave the dealer lot: audio system enhancements, for example, or more powerful headlights. But does it apply equally to drone accessories? Sure (IMHO, of course). For what purposes? Here’s what I wrote last October:
Regardless of whether you fly recreationally or not, you also often (but not always) need to register your drone(s), at $5 per three-year timespan (per-drone for commercial operators, or as a lump sum for your entire drone fleet for recreational flyers). You’ll receive an ID number which you then need to print out and attach to the drone(s) in a visible location. And, as of mid-September 2023, each drone also needs to (again, often but not always) support broadcast of that ID for remote reception purposes…
DJI, for example, firmware-retrofitted many (but not all) of its existing drones with Remote ID broadcast capabilities, along with including Remote ID support in all (relevant; hold that thought for next time) new drones. Unfortunately, my first-generation Mavic Air wasn’t capable of a Remote ID retrofit, or maybe DJI just didn’t bother with it. Instead, I needed to add support myself via a distinct attached (often via an included Velcro strip) Remote ID broadcast module.
I’ll let you go back and read the original writeup to discern the details behind my multiple “often but not always” qualifiers in the previous two paragraphs, which factor into one of this month’s planned blog posts. But, as I also mentioned there, I ended up purchasing Remote ID broadcast modules from two popular device manufacturers (since “embedded batteries don’t last forever, don’cha know”), Holy Stone and Ruko. And…
I also got a second Holy Stone module (since this seems to be the more popular of the two options) for future-teardown purposes.
The future is now; here’s a “stock” photo of the device we’ll be dissecting today, with dimensions of 1.54” x 1.18” x 0.51”/3.9 x 3 x 1.3 cm and a weight of 13.9 grams (14.2 grams total, including Velcro mounting strips) and a model number variously reported as 230218 and HSRID01:
Some outer box shots to start (I’ve saved you from boring photos of the blank sides):
And opening the box, its contents, with our victim in the middle, within a cushioned envelope:
At bottom is the user manual; I can’t find a digital copy of it on the Holy Stone support site, but Manuals+ hosts it in both HTML and PDF formats. You can also find this documentation (among other interesting info) on the FCC website; the FCC ID, believe it or not, is 2AJ55HOLYSTONEBM. At top is the Velcro mounting pair, also initially cushion-packaged (for unknown reasons):
And now, fully freed from its prior captivity, is our patient, as-usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes (once again, I’ve intentionally saved you from exposure to boring blank-side shots):
A note on this next one; the USB-C port shown is used to recharge the embedded battery:
Prior to disassembly, I plugged the device into my Google Pixel Buds Pro earbuds charging cable (which has USB-C connectors on both ends) to test charge functionality, but the left-side battery indicator LED on the front panel remained un-illuminated. That said, when I punched the device’s front panel power switch, it came to life. The result wasn’t definitive; the battery could have been precharged on the assembly line, with the charging circuitry inside still inoperable.
But, on a hunch, I then instead plugged it into the power cable for my Google Chromecast with Google TV, which has USB-A on the power-source end, and the charge-status LED lit up and began blinking, indicative of charging in progress. What’s with Chinese-sourced gear and its non-cognizance of USB Power Delivery negotiation protocols? The user manual shows and discusses an “original charging cable” with USB-A on one end which, had it actually been included as inferred, would have constrained the possible charging-source options. Just sayin’.
Speaking of “circuitry inside,” note the visible screw head at the bottom of this next shot:
That’s, I suspect, our pathway inside. Before we dive in, however, what should we expect to see there, circuitry-wise? Obviously there’s a battery, likely Li-ion in formulation, along with the aforementioned associated charging circuitry for it. There’s also bound to be some sort of system SoC, plus both volatile (RAM) and nonvolatile memory, the latter holding both the program code and user-programmable FAA-assigned Remote ID. Broadcast of that ID can occur over Bluetooth, Wi-Fi or both, via an accompanying antenna. And for geolocation purposes, there’ll need to be a GPS subsystem, comprising both another antenna and a receiver.
Now that the stage is set, let’s get inside, after both removing the previously shown screw and slicing through the serial number sticker on one side:
Voila:
The wire in the lower right corner is, I suspect, the wireless communications antenna. Given its elementary nature, along with the lack of mention of Wi-Fi in the product documentation, I’m guessing it’s Bluetooth-only. To its left is the square mostly-tan GPS antenna. In the middle is the multifunction switch (power cycling and user (re)configuration). Above it are the two LEDs, for power/charging status (left) and current operating mode (right).
And on both sides of it are Faraday cages, the lids of which we’ll need to rip off (hold that thought) before we can further investigate their contents.
The PCB subsequently lifts right out of the other (back) case half:
revealing the “pouch” battery adhesive-attached to the PCB’s other side:
Peel the battery away (revealing a near-blank PCB underneath).
Peel off the tape, and the battery specs (3.7V, 150mAh, 0.55Wh…why do battery manufacturers frequently feel the need to redundantly provide both of the latter two? Can’t folks multiply anymore?) come into view:
Back to the front of the PCB, post-removal of the two Faraday cages’ tops, as foreshadowed previously:
Now fully visible is the USB-C connector, alongside a rubberized ring that had been around it when fully assembled. As for what’s inside those now-mangled Faraday cages, let’s zoom in:
The landscape-dominant IC within the left-located Faraday cage, unsurprisingly given its GPS antenna proximity, is Beken’s BK1661, a “fully integrated single-chip L1 GNSS [author note: Global Navigation Satellite System] solution” that, as the acronym implies, supports not only GPS L1 but “Beidou B1, Galileo E1, QZSS L1, and GLONASS G1,” for worldwide usage.
The one to the right, on the other hand, was a mystery (although, given its antenna proximity, I suspected it handled Bluetooth transceiver functionality, among other things) until I came across an enlightening Reddit discussion. The company logo mark on the top of the chip is a combination of the letters J and L. And the part number underneath it is:
BP0E950-21A4
Here’s an excerpt of the initial post in the Reddit discussion thread, titled “How to identify JieLi (JL/π) bluetooth chips”:
If you like to open things, particularly bluetooth audio devices, you may have seen chips from manufacturers like Qualcomm, Bestechnic (BES), Airoha, Vimicro WX, Beken, etc.; but cheaper devices have those mysterious chips marked with A3 or AB (from Bluetrum), or those with the JL or “pi” logo (from JieLi).
Bluetrum and JieLi chips have a printed code (like most IC chips), but those codes don’t match any results on Google or the manufacturer’s websites. Why does this happen? Well, it looks like the label on those chips is specific to the firmware they’re running, and there’s no way to know which chip it is exactly (unless the manufacturer of your bluetooth device displays that information somewhere on the package).
I was recently looking at the datasheet for some JieLi chips I have lying around, and noticed something interesting: on each chip the label is formatted like “abxxxxxxx-YYY”, “acxxxxx-YYYY” or similar, and the characters after the “-” look like they indicate part of the model number of the IC.
…
In conclusion, if you find a JL chip inside your device and the label does not show any results, use the last characters (the ones after the “-“) and add ac69 or ac63 at the beginning (those are the series of the chip, like AC69xx or AC63xx. There are more series that I don’t remember, so if those codes don’t work for you, try searching for others).
…
Also, if you find a chip with only one number before the letter in the character group after the “-“, add a 0 before it and then add a series code at the beginning. (For example: 5A8 -> 05A8 -> AC6905A)
By doing so you will probably find the pinout and datasheet of your bluetooth IC.
Based on the above, what I think we have here is the AC6321A4 RISC-based microcontroller with Bluetooth support from Chinese company ZhuHai JieLi Technology. To give you an idea of how much (or, perhaps more accurately, little) it costs, consider the headline of an article I came across on a similar product from the same company, “JieLi Tech AC6329C4 is Another Low Cost MCU but with Bluetooth 5.0 Support.” Check out the price tag in the associated graphic:
That said, an AC6921A also exists from the company, although it seems to be primarily intended for stereo audio Bluetooth, so…
That’s what I’ve got for today, folks. Sound off in the comments with your thoughts!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- The (more) modern drone: Which one(s) do I now own?
- LED headlights: Thank goodness for the bright(nes)s
- Drone regulation and electronic augmentation
- Google’s Chromecast with Google TV: Dissecting the HD edition
Building a low-cost, precision digital oscilloscope – Part 2

Editor’s Note:
In this DI, high school student Tommy Liu modifies a popular low-cost DIY oscilloscope to enhance its input noise rejection and ADC noise with anti-aliasing filtering and IIR filtering.
Part 1 introduces the oscilloscope design and simulation.
This part (Part 2) shows the experimental results of this oscilloscope.
Experimental Results
Three experiments were conducted to evaluate the performance of our precision-enhanced oscilloscope using both analog and digital signal processing techniques.
First, we test the effect of the new anti-aliasing filter described in Part 1. For this purpose, a 2-kHz sinusoidal signal is amplitude modulated (AM) with a 961-kHz sinusoidal waveform by a Rigol DG1022Z signal generator (Rigol Technologies, Inc., 2016) and is used as the analog input to the oscilloscope.
In this scenario, the low-frequency (2-kHz) sinusoidal waveform is our signal, while the high-frequency tones caused by modulation with the 961-kHz sinusoid represent high-frequency noise at the signal source. In the experiment, a 10% modulation depth is used to make the high-frequency noise easily identifiable by sight. The time division is set at 20 µs with an ADC sampling frequency of 500 kSPS.
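Given those test frequencies, it is easy to predict where the modulation products will fold after sampling; a short check (Python), assuming tones at 961 kHz and 961 ± 2 kHz:

def alias(f_hz, fs_hz):
    """First-order alias frequency of a tone sampled at fs."""
    return abs(f_hz - round(f_hz / fs_hz) * fs_hz)

FS = 500e3
for f in (959e3, 961e3, 963e3):   # carrier and AM sidebands
    print(f"{f / 1e3:.0f} kHz -> {alias(f, FS) / 1e3:.0f} kHz")
# 959 kHz -> 41 kHz, 961 kHz -> 39 kHz, 963 kHz -> 37 kHz

All three tones fold to below 50 kHz, squarely inside the displayed band, which is why the aliasing is so visible in Figure 1.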
Wow the engineering world with your unique design: Design Ideas Submission Guide
Results of anti-aliasing filter
The original DSO138-mini lacks anti-aliasing capability because its -3-dB cut-off frequency (around 500 kHz to 800 kHz) is too high for its 500-kSPS sampling rate. As a result, the high-frequency noise tones caused by modulation pass through the analog front-end without much attenuation and are sampled by the ADC at 500 kSPS. This creates aliasing noise tones at the ADC output, which can be clearly seen in the displayed waveform on the DSO138-mini (Figure 1).
Figure 1 The aliasing noise tones at the ADC output on the DSO138-mini.
Our new anti-aliasing filter provides a significant lower -3-dB cut-off frequency of around 100 kHz, and effectively filters away most of the out-of-band high frequency noises, in this case, the noise tones caused by the signal modulation with 961 kHz sinusoidal. Figure 2 is a screenshot with the new anti-aliasing filter, indicating a significant reduction in the aliasing noise.
Figure 2 Reduction of the aliasing noise with the new anti-aliasing filter.
Detailed analysis of the captured data with the new anti-aliasing filter indicates a 10-dB to 15-dB (3.2x to 5.6x) improvement over the original DSO138-mini in noise rejection at frequencies above the oscilloscope’s signal bandwidth.
In practical applications, high-frequency noise with a magnitude of a few millivolts RMS is not uncommon. A 5-mV RMS noise near 900 kHz is attenuated to 0.73 mV (RMS) with our new anti-aliasing filter versus 2.48 mV (RMS) with the original DSO138-mini. With an ADC full-scale input range of 3.3 V, 0.73 mV RMS corresponds to an effective resolution well above 10 bits (ENOB). With the original DSO138-mini, the ENOB would be at only an 8-bit level.
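Those ENOB figures follow directly from the quantization-noise relation rms = FS / (2^N · √12); a quick check in Python:

import math

def effective_bits(noise_rms_v, full_scale_v=3.3):
    """Effective resolution implied by an RMS noise floor."""
    return math.log2(full_scale_v / (noise_rms_v * math.sqrt(12)))

print(f"{effective_bits(0.73e-3):.1f} bits")   # ~10.3, new filter
print(f"{effective_bits(2.48e-3):.1f} bits")   # ~8.6, original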
Results of digital post-processing filter
The second test evaluates the performance of the digital post-processing filter. As explained in Part 1, besides the noise at the analog input, other noise sources in oscilloscopes, such as noise on the ADC inside the MCU, degrade measurement precision. This is evident in Figure 3, which is a screenshot of the DSO138-mini with its Self-Test mode turned on. In Self-Test mode, an internally generated pulse signal—less susceptible to the noises from the external signal source—is used to test and fine tune the oscilloscope. We can see that there is still ripple noise on the pulse waveform.
Figure 3 Ripples on internally generated pulse signal during self-test mode on the DSO138-mini.
It is not easy to identify the magnitude of these ripples due to the limited pixel resolution of the DSO138-mini’s LCD display (320 x 240). We transferred the captured data to a PC via DSO138-mini’s UART-USB link for precise data analysis. Figure 4 shows the waveform of the captured self-test pulses on a PC. The ripple noises are calculated and shown in Figure 5.
Figure 4 Captured self-test pulse signal waveform on PC for more precision data analysis.
Figure 5 Magnitude of noises on self-test pulse with no digital post-processing.
Considering the voltage division setting (1 V, -20 dB on Input) and attenuation setting (x1), the ripple on the self-test pulse has a peak-peak magnitude of 8 mV. This error is about 10 LSB and the calculated RMS value is about 3 mV, yielding an effective resolution of 8.3 bits. Digital post-processing can be used to suppress some of these noises.
Figure 6 is the waveform after first-order infinite impulse response (IIR) digital filtering (α = 0.25) is performed on the PC, and Figure 7 shows the noises on the self-test pulse.
After IIR filtering, the noise RMS value reduces to about 0.75 mV, or by a factor of 4. This brings back the effective resolution from 8.3 bits to 10.4 bits. We notice that the rise and fall transition edges of the pulse look a bit less sharp than the signal before post-processing.
This is due to the low-pass nature of the IIR filter. With α = 0.25, the passband (-3 dB) is at around 23 kHz, covering an input bandwidth up to audio frequencies (20 kHz). For tracking faster signals, such as fast transition edges of a pulse signal, we can relax α to a higher value, allowing more input bandwidth.
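For reference, here is a minimal sketch of such a first-order IIR (exponential smoothing) filter in Python, assuming the conventional form y[n] = α·x[n] + (1 − α)·y[n−1]; the actual PC-side program may differ:

def iir_lowpass(samples, alpha=0.25):
    """First-order IIR low-pass: y[n] = a*x[n] + (1-a)*y[n-1]."""
    out = []
    y = samples[0] if samples else 0.0   # seed with the first sample
    for x in samples:
        y = alpha * x + (1.0 - alpha) * y
        out.append(y)
    return out

# A noiseless step shows the trade-off noted above: smaller alpha
# averages noise harder but softens fast edges.
step = [0.0] * 5 + [1.0] * 10
print([round(v, 2) for v in iir_lowpass(step)])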
Figure 6 Self-test pulse with first-order IIR digital filter where α = 0.25.
Figure 7 Noises on self-test pulse with first-order IIR filter where RMS noise reduces to ~0.75 mV.
The effects of both filters
Finally, we test the overall effect of both the new anti-aliasing filter and the digital post-processing by feeding a 2-kHz sinusoid from a signal generator into our new oscilloscope. We can see from Figure 8 that even with the new anti-aliasing filter, there is still some noise on the waveform due to the ADC noise inside the MCU. The RMS value of the noise is about 2.8 mV, and the effective resolution is limited to below 9 bits.
Figure 8 Noises on a 2 kHz sinusoidal input waveform despite having the new anti-aliasing filter.
As shown in Figure 9, with the first-order IIR filter in effect, the waveform cleans up. The RMS noise reduces to 0.7 mV and, again, this brings up the effective resolution from below 9 bits to above 10 bits. Other input frequencies, up to 20 kHz (audio), have also been tested and an overall effective resolution of 10 bits or more was observed with the new anti-aliasing filter and the digital post-processing algorithm.
Figure 9 A 2 kHz sinusoidal input waveform after digital post-processing where the RMS noise reduces to 0.7 mV.
Low-cost oscilloscope
Many traditional low-cost, DIY-type digital oscilloscopes have two major technical drawbacks: inadequate anti-aliasing capability and large ADC noise. As a result, these oscilloscopes can only reach an effective resolution of 8 bits or less, even though most of them are based on MCUs equipped with built-in 12-bit ADCs.
These problems keep DIY oscilloscopes out of more demanding high school projects. To address them, a well-designed first-order analog low-pass filter at the oscilloscope’s analog front-end, plus a programmable first-order IIR digital post-processing filter, were implemented on a popular low-cost DIY platform (the DSO138-mini).
Experimental results verified that the new oscilloscope could maintain an overall effective resolution of 10 bits or above in the presence of high-frequency noise at its analog input, up to an input bandwidth of 20 kHz and a real-time sampling rate of 1 MSPS. The implementations are inexpensive: the BOM cost of the new anti-aliasing filter is just that of a ceramic capacitor (far less than a dollar), and the digital post-processing is implemented entirely in PC software.
Costing less than fifty dollars, this precision digital oscilloscope can be used in many high schools, including those without the funds for pricey commercial models, enabling students to perform a wide range of tasks: from first-time electrical signal capture and observation to more demanding precision measurement and signal analysis for complex electrical and electronic projects.
Tommy Liu is currently a junior at Monta Vista High School (MVHS) with a passion for electronics. A dedicated hobbyist since middle school, Tommy has designed and built various projects ranging from FM radios to simple oscilloscopes and signal generators for school use. He aims to pursue Electrical Engineering in college and aspires to become a professional engineer, continuing his exploration in the field of electronics.
Related Content
- Building a low-cost, precision digital oscilloscope—Part 1
- Build your own oscilloscope probes for power measurements (part 1)
- Build your own oscilloscope probes for power measurements (part 2)
- Basic oscilloscope operation
- FFTs and oscilloscopes: A practical guide
The advent of AI-empowered fab-in-a-box

What is a fab-in-a-box, and how is it far more efficient in terms of cost, space, and chip manufacturing operations? Alan Patterson speaks to the CEOs of Nanotronics and Pragmatic to dig deeper into how these $30 million fabs work while using AI to boost yields and make these mini-fabs more cost-competitive. These “cubefabs” are also worth attention because many markets, including the United States, aim to bolster local chip manufacturing.
Read the full story at EDN’s sister publication, EE Times.
Related Content
- Semiconductor Industry Faces a Seismic Shift
- Semiconductor Capacity Is Up, But Mind the Talent Gap
- Building Semiconductor Capacity for a Hotter, Drier World
- Tapping AI for Leaner, Greener Semiconductor Fab Operations
- SEMICON Europa: Building a Sustainable US$1 Trillion Semiconductor Industry
Single sideband generation

In radio communications, one way to generate single sideband (SSB) signals is to feed a carrier and a modulating signal into a balanced modulator, producing a double sideband (DSB) signal, and then filter out one of the two resulting sidebands.
If you filter out the lower sideband, you’re left with the upper sideband; if you filter out the upper sideband, you’re left with the lower sideband. However, another way to generate SSB without that filtering has been called “the phasing method.”
Let’s look at that in the following sketch in Figure 1.
Figure 1 Phasing method of generating an SSB signal where the outputs of Fc and Fm are 90° apart with respect to each other
The outputs of the carrier (Fc) quadrature phase shifter and the modulating signal (Fm) quadrature phase shifter need only be 90° apart with respect to each other. The phase relationships to their respective inputs are irrelevant.
Four cases of SSB generation
In the following equations, those two unimportant phase shifts are called “phi” and “chi” for no particular reason other than that their pronunciations happen to rhyme. Mathematically, we examine four cases of SSB generation.
Case 1, where “Fc at 90°” and “Fm at 90°” are both +90°, or in the same directions (Figure 2). Case 2, where “Fc at 90°” and “Fm at 90°” are both -90°, or in the same directions (Figure 3).
Figure 2 Mathematically solving for upper and lower sidebands where “Fc at 90°” and “Fm at 90°” are both +90°, or in the same directions.
Figure 3 Mathematically solving for upper and lower sidebands where “Fc at 90°” and “Fm at 90°” are both -90°, or in the same directions.
Case 3, where “Fc at 90°” is -90° and “Fm at 90°” is +90°, or in the opposite directions (Figure 4). Case 4, where “Fc at 90°” is +90° and “Fm at 90°” is -90°, or in the opposite directions (Figure 5).
Figure 4 Mathematically solving for upper and lower sidebands where “Fc at 90°” is -90° and “Fm at 90°” is +90°, or in the opposite directions.
Figure 5 Mathematically solving for upper and lower sidebands where “Fc at 90°” is +90° and “Fm at 90°” is -90°, or in the opposite directions.
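To see where the sum and difference frequencies come from, it helps to write out the simplest version of Case 1, setting phi and chi to zero and taking unit amplitudes; the product-to-sum identities do all the work:

```latex
\begin{aligned}
M_1 &= \cos(\omega_c t)\cos(\omega_m t)
     = \tfrac{1}{2}\bigl[\cos((\omega_c - \omega_m)t) + \cos((\omega_c + \omega_m)t)\bigr]\\
M_2 &= \sin(\omega_c t)\sin(\omega_m t)
     = \tfrac{1}{2}\bigl[\cos((\omega_c - \omega_m)t) - \cos((\omega_c + \omega_m)t)\bigr]\\
M_1 + M_2 &= \cos((\omega_c - \omega_m)t) \qquad \text{(lower sideband)}\\
M_1 - M_2 &= \cos((\omega_c + \omega_m)t) \qquad \text{(upper sideband)}
\end{aligned}
```

Restoring the arbitrary phases phi and chi simply adds constant phase offsets to the sideband terms; it does not change which sideband survives the addition or subtraction.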
The quadrature phase shifter for the carrier signal only needs to operate at one frequency, which is that of the carrier itself and which we have called “Fc”. The quadrature phase shifter for the modulating signal, however, has to operate over a range of frequencies. That device has to develop 90° phase shifts for all the frequency components of that modulating signal, and therein lies a challenge.
90° phase shifts for all frequency components
There is a mathematical operator called the Hilbert transform, which is described here. There, we find an illustration of the Hilbert transformation of a square wave. From that page, we present the sketch in Figure 6.
Figure 6 A square wave and its Hilbert transform, bringing about a 90° phase shift of each frequency component of the input signal in its own time base.
The underlying mathematics of the Hilbert transform is described in terms of a convolution integral, but in another sense, you can look at the result as a 90° phase shift of each frequency component of the input signal in its own time base; in the above case, that input is a square wave. This phase-shift property is the very thing we want for our modulating signal in SSB generation.
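For the record, that convolution integral is the standard definition of the Hilbert transform (a principal-value integral, since the kernel blows up at τ = t):

```latex
\hat{x}(t) \;=\; \frac{1}{\pi}\,\mathrm{p.v.}\!\int_{-\infty}^{\infty}\frac{x(\tau)}{t-\tau}\,d\tau
```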
In the case of Figure 7, I took each frequency component of a square wave—by which I mean the fundamental frequency plus a large number of properly scaled odd harmonics—and phase shifted each of them by 90° in their respective time frames. I then added up those phase-shifted terms.
Figure 7 A square wave and the result of 90° phase shifts of each harmonic component in that square wave.
Please compare Figure 6 to the result in Figure 7. They look very much the same. The finite number of 90° phase-shift and summing steps very nicely approximates the Hilbert transform.
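A minimal numerical sketch of this construction, assuming nothing beyond NumPy: build the square wave from its fundamental and properly scaled odd harmonics, shift each component by 90° in its own time base (sin becomes -cos), and sum. Plotting the two arrays reproduces the flavor of Figures 6 and 7, spikes and all.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2000, endpoint=False)  # one period of a 1-Hz wave
n_terms = 101  # fundamental plus a large number of odd harmonics

square = np.zeros_like(t)
shifted = np.zeros_like(t)
for k in range(n_terms):
    n = 2 * k + 1  # odd harmonic number
    square += np.sin(2 * np.pi * n * t) / n
    # 90-degree lag of each component in its own time base:
    # sin(x - 90 deg) = -cos(x)
    shifted += -np.cos(2 * np.pi * n * t) / n

square *= 4 / np.pi   # standard Fourier scaling for a +/-1 square wave
shifted *= 4 / np.pi  # its harmonic-by-harmonic 90-degree-shifted sum

print(f"square wave peak:  {square.max():.2f}")   # about 1.2 (Gibbs overshoot)
print(f"shifted sum peak:  {shifted.max():.2f}")  # spiky, well above 1
```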
The ideal case for SSB generation can be expressed as follows: starting with a carrier signal, you create a second carrier signal at the same frequency as the first, but phase shifted by 90°. Put another way, the two carrier signals are in quadrature with respect to one another.
You then take your modulating signal and generate its Hilbert transform. You now have two modulating signals in which each frequency component of the one is in quadrature with the corresponding frequency component of the other.
Using two balanced modulators, you apply one carrier and one modulating signal to one balanced modulator and apply the other carrier and the other modulating signal to the other balanced modulator. The outputs of the two balanced modulators are then either added to each other or subtracted from each other. Based on the four mathematical examples above, you end up with either an upper sideband SSB signal or a lower sideband SSB signal.
This approach offers high performance, and the costly sideband filters described in the opening paragraph are not needed.
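For readers who want to check the bookkeeping numerically, here is a small sketch using scipy.signal.hilbert as an ideal wideband 90° phase shifter; the frequencies are arbitrary choices, not values from the article:

```python
import numpy as np
from scipy.signal import hilbert

fs = 100_000                      # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)    # 50 ms of signal
fc, fm = 10_000, 1_000            # carrier and modulating frequencies, Hz

carrier_i = np.cos(2 * np.pi * fc * t)   # in-phase carrier
carrier_q = np.sin(2 * np.pi * fc * t)   # quadrature carrier (90 deg shifted)
mod = np.cos(2 * np.pi * fm * t)         # modulating signal
mod_q = np.imag(hilbert(mod))            # its Hilbert transform

# Two balanced modulators (ideal multipliers), then sum or difference:
lsb = carrier_i * mod + carrier_q * mod_q   # lower sideband at fc - fm
usb = carrier_i * mod - carrier_q * mod_q   # upper sideband at fc + fm

for name, sig in (("sum", lsb), ("difference", usb)):
    spectrum = np.abs(np.fft.rfft(sig))
    peak_hz = np.fft.rfftfreq(len(sig), 1 / fs)[np.argmax(spectrum)]
    print(f"{name}: dominant component at {peak_hz:.0f} Hz")
# Expected: sum -> 9000 Hz (LSB), difference -> 11000 Hz (USB)
```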
Practically applying a Hilbert transform
As a practical matter, however, instead of actually making a true Hilbert transformer (I have no idea how, or even if, that could be done), we can make a variety of different circuits that give us the 90° phase shifts we need for our modulating signals over some range of operating frequencies, with each frequency component shifted 90° in its own time frame.
One of the earliest purchasable devices for doing this over the range of speech frequencies was a resistor-capacitor network called the 2Q4 which was made by a company called Barker and Williamson. The 2Q4 came in a metal can with a vacuum-tube-like octal base. Its dimensions were very close to that of a 6J5 vacuum tube, but the can of the 2Q4 was painted grey instead of black. (Yes, I know that I’m getting old.)
Another approach to obtaining the needed 90° phase relationships of the modulating signals is by using cascaded sets of all-pass filters. That technique is described in “All-pass filter phase shifters.”
One thing to note is that the Hilbert transformation itself and our approximation of it can lead to some really spiky signals. The spikiness we see for the square wave arises for speech waveforms too. This fact has an important practical implication.
SSB transmitters tend to have high peak output powers compared to their average output power levels. This is why, under the FCC’s older amateur radio rules that set an operating power limit of 1000 watts, the limit for SSB transmission was 2000 watts peak power. (The current limit is 1500 watts PEP.)
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- All-pass filter phase shifters
- Spectral analysis and modulation, part 5: Phase shift keying
- Single-sideband demodulator covers the HF band
- SSB modulator covers HF band
- Impact of phase noise in signal generators
- Choosing a waveform generator: The devil is in the details
- Modulation basics, part 1: Amplitude and frequency modulation
EEPROMs with unique ID improve traceability

Serial EEPROMs from ST contain a unique 128-bit read-only ID for product recognition and tracking without requiring an extra component. Preprogrammed and permanently locked at the factory, the unique ID (UID) enables basic product identification and clone detection as an alternative to an entry-level secure element.
Initially available in 64-kbit and 128-kbit versions, the M24xxx-U series spans storage densities from 32 kbits to 2 Mbits. Each device retains its UID throughout the end-product lifecycle—from sourcing and manufacturing to deployment, maintenance, and disposal. The UID ensures seamless traceability, aiding reliability analysis and simplifying equipment repair.
These CMOS EEPROMs endure 4 million write cycles and retain data for 200 years. They operate from 1.7 V to 5.5 V and support 100-kHz, 400-kHz, and 1-MHz I2C bus speeds. The devices offer random and sequential read access, along with a write-protect feature for the entire memory array.
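As a rough sketch of what a random-address UID read over I2C could look like from a Linux host: note that the device address and UID offset below are placeholders, not values from the M24xxx-U datasheet, which should be consulted for the real access sequence.

```python
from smbus2 import SMBus, i2c_msg

EEPROM_ADDR = 0x50   # placeholder 7-bit I2C address, not from the datasheet
UID_OFFSET = 0x0000  # placeholder byte offset of the 128-bit UID
UID_BYTES = 16       # 128 bits

def read_uid(bus_num=1):
    """Random read: write a 16-bit byte address, then read 16 bytes back."""
    with SMBus(bus_num) as bus:
        set_addr = i2c_msg.write(EEPROM_ADDR,
                                 [UID_OFFSET >> 8, UID_OFFSET & 0xFF])
        get_data = i2c_msg.read(EEPROM_ADDR, UID_BYTES)
        bus.i2c_rdwr(set_addr, get_data)  # repeated start between messages
        return bytes(list(get_data))

print(read_uid().hex())
```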
The 64-kbit M24C64-UFMN6TP is available now, priced from $0.13, while the 128-kbit M24128-UFMN6TP starts at $0.15 for orders of 10,000 units. Additional densities will be released during the second quarter of 2025.
3D Hall sensor meets automotive requirements

Diodes’ AH4930Q sensor detects magnetic fields along the X, Y, and Z axes for contactless rotary motion and proximity sensing. As the company’s first automotive-compliant 3D linear Hall effect sensor, the AH4930Q is well-suited for rotary and push selectors in infotainment systems, stalk gear shifters, door handles and locks, and power seat adjusters.
Qualified to AEC-Q100 Grade 1, the AH4930Q operates over a temperature range of -40°C to +125°C and integrates a 12-bit temperature sensor for accurate on-chip compensation. It also features a 12-bit ADC, delivering high resolution in each measurement direction, down to 1 Gauss per bit (0.1 mT) for precise positional accuracy. An I2C interface supports data reading and runtime programming with host systems up to 1 Mbps, enabling real-time adjustments.
The sensor features three operating modes and a power-down mode with a consumption of just 9 nA. Its modes balance power and data acquisition, ranging from a low-power mode at 13 µA (10 Hz) to a fast-sampling mode at 3.8 mA (3.3 kHz) for continuous measurement. Operating with supply voltages from 2.8 V to 5.5 V, the AH4930Q offers a 10-µs wake-up time, 4-µs response time, and wide bandwidth for fast data acquisition in demanding applications.
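To illustrate the rotary-sensing use case: a diametrically magnetized knob magnet rotating over the sensor produces roughly sinusoidal X and Y field components, and the shaft angle falls out of an atan2, independent of field strength. A short sketch of just that computation (device register access omitted):

```python
import math

def field_to_angle(bx, by):
    """Shaft angle in degrees from the in-plane field components.

    For a diametrically magnetized knob magnet rotating over the
    sensor, Bx and By vary as cos/sin of the shaft angle, so atan2
    recovers the angle regardless of the field magnitude.
    """
    return math.degrees(math.atan2(by, bx)) % 360.0

# With 1 gauss-per-bit resolution, raw 12-bit readings map directly to gauss.
print(field_to_angle(100, 0))    # 0.0 degrees
print(field_to_angle(0, 100))    # 90.0 degrees
print(field_to_angle(-70, -70))  # 225.0 degrees
```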
Supplied in a 6-pin SOT26 package, the AH4930Q costs $0.50 each in lots of 1000 units.
Software optimizes AI infrastructure performance

Keysight AI (KAI) Data Center Builder emulates AI workloads without requiring large GPU clusters, enabling evaluation of how new algorithms, components, and protocols affect AI training. The software suite integrates large language model (LLM) and other AI model workloads into the design and validation of AI infrastructure components, including networks, hosts, and accelerators.
KAI Data Center Builder simulates real-world AI training network patterns to speed experimentation, reduce the learning curve, and identify performance degradation causes that real jobs may not reveal. Keysight customers can access LLM workloads like GPT and Llama, along with popular model partitioning schemas, such as Data Parallel (DP), Fully Sharded Data Parallel (FSDP), and 3D parallelism.
The KAI Data Center Builder workload emulation application allows AI operators to:
- Experiment with parallelism parameters, including partition sizes and distribution across AI infrastructure (scheduling)
- Assess the impact of communications within and between partitions on overall job completion time (JCT)
- Identify low-performing collective operations and pinpoint bottlenecks
- Analyze network utilization, tail latency, and congestion to understand their effect on JCT
For more information on the KAI Data Center Builder, or to request a demo or price quote, click the product page link below.
KAI Data Center Builder product page
High-power switch operates up to 26 GHz

Leveraging Menlo’s Ideal Switch technology, the MM5230 RF switch minimizes insertion loss and provides high power handling in a chip-scale package. The device is an SP4T switch that operates from DC to 18 GHz, extending to 26 GHz in SPST Super-Port mode. Designed for high-power applications, it supports up to 25 W continuous and 150 W pulsed power.
The MM5230 is well-suited for defense and aerospace, medical equipment, test and measurement, and wireless infrastructure applications. With an on-state insertion loss of just 0.3 dB at 6 GHz, it minimizes signal degradation, ensuring high performance in sensitive systems, low-loss switch matrices, switched filter banks, and tunable filters. Additionally, the MM5230 provides high linearity with a typical IIP3 of 95 dBm, preserving signal integrity for smooth communication or data transfer.
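To put those figures in linear terms, a quick back-of-the-envelope conversion (standard dB formulas, nothing device-specific):

```python
import math

# Fraction of power that survives 0.3 dB of insertion loss:
il_db = 0.3
through = 10 ** (-il_db / 10)
print(f"{through:.3f} of the input power passes")  # ~0.933, i.e. ~6.7% lost

# 25 W continuous expressed in dBm, for comparison with the 95-dBm IIP3:
p_dbm = 10 * math.log10(25 * 1000)
print(f"25 W = {p_dbm:.1f} dBm")  # ~44 dBm, roughly 51 dB below the IIP3
```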
The switch’s 2.5×2.5-mm chip-scale package eases integration into a wide range of systems and conserves valuable board space. Additionally, the Ideal Switch fabrication process enhances reliability and endurance.
The MM5230 RF switch is available for purchase through Menlo Microsystems’ distributor network.
Partners build broadband optical SSD

Kioxia, AIO Core, and Kyocera have prototyped a PCIe 5.0-compatible broadband SSD with an optical interface. The trio is developing broadband optical SSD technology for advanced applications requiring high-speed, large-volume data transfer, such as generative AI. They will also conduct proof-of-concept testing to support real-world adoption and integration.
Combining AIO Core’s IOCore optical transceiver and Kyocera’s OPTINITY optoelectronic integration module, Kioxia’s prototype delivers twice the bandwidth of the PCIe 4.0 optical SSD demonstrated in August 2024. Replacing electrical wiring with an optical interface increases the allowable distance between compute and storage devices in next-generation green data centers while preserving energy efficiency and signal integrity.
The prototype was developed under Japan’s “Next Generation Green Data Center Technology Development” project (JPNP21029), part of NEDO’s Green Innovation Fund initiative. The project aims to reduce data center energy consumption by over 40% through next-generation technologies. Kioxia is developing optical SSDs, AIO Core is working on optoelectronic fusion devices, and Kyocera is creating optoelectronic packaging.
No timeline for commercialization has been announced.