EDN Network
u-blox grows Bluetooth LE module portfolio

New variants in the u-blox NORA-B2 Bluetooth LE 6.0 module family integrate Nordic Semiconductor’s entire nRF54L series of ultra-low power wireless SoCs. Offering a choice of antennas and chipsets, these modules consume up to 50% less current than previous-generation devices while doubling processing capacity.
The NORA-B2 series comprises four variants that differ in memory size, design architecture, and price level. Each variant comes with either an antenna pin or embedded antenna.
- NORA-B20 features an nRF54L15 SoC and integrates a 128-MHz Arm Cortex-M33 processor, a RISC-V coprocessor, and an ultra-low power multiprotocol 2.4-GHz radio. It comes with 1.5 MB of NVM and 256 KB RAM.
- NORA-B21, based on an nRF54L10 SoC, is designed for mid-range applications. It has 1.0 MB of NVM and 192 KB of RAM and handles multiple wireless protocols simultaneously, including Bluetooth LE, Bluetooth Mesh, Thread, Matter, Zigbee, and Amazon Sidewalk.
- NORA-B22 employs an nRF54L05 SoC. It is intended for cost-sensitive applications but still provides access to up to 31 GPIOs. It includes 0.5 MB of NVM and 96 KB of RAM.
- NORA-B26, based on an nRF54L10, is designed for customers using the network coprocessor architecture. It comes pre-flashed with the u-blox u-connectXpress software, allowing customers to easily integrate Bluetooth connectivity into their products with no prior knowledge of Bluetooth LE or wireless security.
All NORA-B2 modules are designed for PSA Certified Level 3 security and meet the Bluetooth Core 6.0 specification, including channel sounding for accurate ranging. They also carry global certification, enabling manufacturers to launch products worldwide with minimal effort.
NORA-B20 samples are available now, while NORA-B21 and B22 are in limited evaluation. A pre-release of u-connectXpress for NORA-B26 is available for early adopters.
The post u-blox grows Bluetooth LE module portfolio appeared first on EDN.
Why RISC-V is a viable option for safety-critical applications

As safety-critical systems become increasingly complex, the choice of processor architecture plays an important role in ensuring functional safety and system reliability. Consider an automotive brake-by-wire system, where sensors detect the pedal position, software interprets the driver’s intent, and electronic controls activate the braking system. Or commercial aircraft relying on flight control computers to interpret pilot inputs and maintain stable flight. Processing latencies or failures in these systems could result in unintended behaviors and degraded modes, potentially leading to fatal accidents.
The RISC-V architecture’s inherent characteristics—modularity, simplicity, and extensibility—align with the demands of functional safety standards like ISO 26262 for automotive applications and DO-178C for aviation software. Unlike proprietary processor architectures, RISC-V is an open standard instruction set architecture (ISA) developed by the University of California, Berkeley, in 2011. The architecture follows reduced instruction set computing (RISC) principles, emphasizing performance and modularity in processor design.
RISC-V is set apart by its open, royalty-free nature combined with a clean-slate design that eliminates the legacy compatibility constraints of traditional architectures. The ISA is structured as a small base integer set with optional extensions, allowing processor designers to implement only the features needed for their specific applications.
This article examines the technical advantages and considerations of implementing RISC-V in safety-critical environments.
Benefits for safety-critical industries
Traditional proprietary architectures, such as Arm, have served safety-critical industries well, but challenges around supplier diversity, customization needs, and safety certification requirements have driven interest in RISC-V.
The following sections describe characteristics of RISC-V that make it a viable option for safety-critical development teams.
Architectural independence
One fundamental challenge in developing safety-critical systems is mitigating supply chain risks. Traditional processor architectures require licensing agreements and create vendor lock-in, which impacts long-term system maintainability and cost.
RISC-V’s open model provides several advantages. The ability to work with multiple silicon vendors reduces single-point-of-failure risks in the supply chain. This is particularly important for long-lifecycle applications in aerospace and automotive, where systems may need to be maintained and supported for decades. When using RISC-V, manufacturers expand their options for semiconductor suppliers and development tool ecosystems, providing flexibility in responding to supply chain issues.
Customization to meet safety-critical requirements
RISC-V’s modular design philosophy allows silicon vendors and system architects to implement custom features at the hardware level. This capability helps address the specific safety requirements of mission-specific applications and certification standards, such as:
- Custom error detection and correction.
- Hardware-level monitoring and diagnostic capabilities.
- Low-latency, deterministic execution features for real-time requirements.
Additionally, RISC-V silicon vendors have products supporting harsh environments, such as processors with radiation hardening and electromagnetic pulse (EMP) protection for space applications.
Memory management
One of RISC-V’s distinguishing features is its approach to cache memory management, helping developers of safety-critical applications requiring deterministic behavior. The ability to implement level 2 cache memory mapping as RAM gives developers greater control over system latency, a crucial factor in real-time safety-critical applications.
This capability addresses challenges covered in aviation safety guidelines like EASA AMC 20-193 and FAA AC 20-193. By providing better solutions for cache contention mitigation than traditional architectures, RISC-V supports more predictable execution timing—a critical requirement for safety certification.
Dissimilar redundancy
Safety-critical systems requiring design assurance level A (DAL-A) certification under DO-178C often implement redundancy to protect against common mode failures. RISC-V’s open architecture provides advantages in implementing dissimilar redundancy strategies:
- Implementation of different processor configurations within the same system.
- Diverse redundancy schemes using different vendor solutions.
- Using different architectures in mixed-criticality systems with varying levels of safety requirements.
While RISC-V may not always match the raw performance metrics of modern Arm implementations, its architecture provides several advantages specific to safety-critical applications. The ability to implement custom instructions and hardware features allows optimization for specific safety requirements without compromising overall system performance.
Key performance-related features include:
- Deterministic execution paths for real-time applications.
- Custom instructions for safety monitoring.
- Efficient context switching for mixed-criticality systems.
- Configurable memory protection units to minimize stack and data corruption.
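As a concrete illustration of the last point, RISC-V’s standard physical memory protection (PMP) unit marks regions with permission and lock bits through the pmpaddr/pmpcfg control registers. The sketch below computes the register values for a naturally aligned power-of-two (NAPOT) region in Python; the helper name is illustrative, and writing the values to the CSRs would require privileged RISC-V code on the actual target.

```python
# Sketch of RISC-V PMP (physical memory protection) register encoding for a
# NAPOT region, per the RISC-V privileged specification. Computing the values
# is portable; applying them requires privileged csrw instructions on target.
PMP_R, PMP_W, PMP_X = 0x01, 0x02, 0x04  # read/write/execute permission bits
PMP_NAPOT = 0x18                         # address-matching mode: NAPOT
PMP_LOCK = 0x80                          # lock bit: enforced even in M-mode

def pmp_napot_addr(base: int, size: int) -> int:
    """Encode a NAPOT region: pmpaddr holds base >> 2 with the low bits
    set to ones to encode the power-of-two region size."""
    assert size >= 8 and (size & (size - 1)) == 0, "size: power of two >= 8"
    assert base % size == 0, "region must be naturally aligned"
    return (base >> 2) | ((size >> 3) - 1)

# Example: a locked, read/write 4-KB region protecting a safety log buffer
addr = pmp_napot_addr(0x8000_0000, 4096)    # -> 0x200001FF
cfg = PMP_R | PMP_W | PMP_NAPOT | PMP_LOCK  # -> 0x9B
```

Because the lock bit applies even to machine mode, a safety monitor can make such a region immutable until reset, which is one way the architecture supports the stack- and data-corruption protections listed above.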
Over the years, the maturation of development tools and verification environments for RISC-V has expanded to cover the entire software lifecycle. For example, LDRA’s target license package (TLP) for RISC-V architectures supports development and on-target testing with multi-core code coverage analysis, worst-case execution time (WCET) measurement for AMC 20-193 compliance, requirements traceability, and integration with major RISC-V development platforms. This TLP makes RISC-V ready for safety and security.
Additionally, LDRA is highly integrated with RISC-V environments, supporting dynamic testing with hardware and commercial and open-source simulation environments, including silicon-level simulation. These environments support comprehensive hardware-accurate testing and verification to develop and test software as the hardware is developed.
Industry momentum around RISC-V
A growing number of safety-certified RISC-V IP cores offer designers pre-verified components that meet stringent safety requirements. Microchip, SiFive, CAST, and other vendors have released specialized RISC-V implementations with integrated safety features, fault detection mechanisms, and redundancy capabilities tailored for automotive and aerospace applications. Vendors such as Frontgrade Gaisler add to this with radiation-hardened microprocessors and IP cores for space-based systems.
The mix of industry support, technical guidelines, and certification tools creates a positive feedback loop that accelerates RISC-V adoption in safety-critical systems, making it increasingly attractive for organizations developing next-generation applications.
Jay Thomas is technical development manager for LDRA Technology, San Bruno, Calif., and has worked on embedded controls simulation, processor simulation, mission- and safety-critical flight software, and communications applications in the aerospace industry. His focus on embedded verification implementation ensures that LDRA clients in aerospace, medical, and industrial sectors are well grounded in safety-, mission-, and security-critical processes. For more information about LDRA, visit http://www.ldra.com.
Related Content
- Standards, tools address coding and application errors in embedded software
- Software development model for the ISO/SAE 21434 standard
- How ‘shift left’ helps secure today’s connected embedded systems
- CES 2021: RISC-V’s journey from experimentation to commercial processors
- Accelerating RISC-V development with network-on-chip IP
- Developing safety critical ASICs for ADAS and similar automotive systems
Tracking preregulator boosts efficiency of PWM power DAC

This design idea revisits another: “PWM power DAC incorporates an LM317.” Like the earlier circuit, this one implements a power DAC by integrating an LM317 positive regulator into a mostly passive PWM topology. It exploits the built-in features of that time-proven Bob Pease masterpiece so that its output accuracy rests on the guaranteed 2% precision of the LM317’s internal voltage reference and is inherently protected from overload and overheating.
Wow the engineering world with your unique design: Design Ideas Submission Guide
However, unlike the earlier design idea, which requires a separate 15-V DC power input, this remake (shown in Figure 1) adds a switching boost preregulator so it can run from a 5-V logic rail. The previous linear design also has limited power efficiency, which actually drops to single-digit percentages when driving low-voltage loads. The preregulator fixes that by tracking the input-output voltage differential across the LM317 and maintaining it at a constant 3 V, just enough dropout-suppressing headroom for the LM317, minimizing wasted power.
Here’s how it works.
Figure 1 LM317 and HC4053 combine to make a PWM power DAC while Q1 forces preregulator U3 to track and maintain a constant 3-V U2 I/O headroom differential to improve efficiency.
As described in the earlier DI, switches U1b and U1c accept a 10-kHz PWM signal to generate a 0-V to 11.25-V “ADJ” control signal for the U2 regulator via feedback networks R1, R2, and R3. The incoming PWM signal is AC coupled so that U1 can “float” on U2’s output. U1c provides a balanced inverse of the PWM signal, implementing active ripple cancellation as described in “Cancel PWM DAC ripple with analog subtraction.”
Note that R1||R2 = R3 to optimize ripple subtraction and DAC accuracy. This feedback arrangement makes U2’s output voltage follow this function of PWM duty factor (DF):
Vout = 1.25 / (1 – DF(1 – R1/(R1 + R2))) = 1.25 / (1 – 0.9 DF),
as graphed in Figure 2.
Figure 2 Vout (1.25 V to 12.5 V) versus PWM DF (0 to 1) where Vout = 1.25 / (1 – 0.9 DF).
Figure 3 plots the inverse of Figure 2, yielding the PWM DF required for any given Vout.
Figure 3 The inverse of Figure 2 or, the PWM DF required for any given Vout, where PWM DF = (1.111 – 1.389/Vout).
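The transfer function and its inverse are easy to check numerically; a few lines of Python (function names are illustrative) reproduce both formulas:

```python
def vout(df):
    """DAC output versus PWM duty factor: Vout = 1.25/(1 - 0.9*DF)."""
    return 1.25 / (1 - 0.9 * df)

def duty_factor(v):
    """Inverse: PWM DF required for a given Vout (1.25 V to 12.5 V)."""
    return (1 - 1.25 / v) / 0.9  # algebraically equal to 1.111 - 1.389/Vout

print(vout(0))           # 1.25 V at DF = 0
print(vout(1))           # 12.5 V at DF = 1
print(duty_factor(6.0))  # ~0.88 duty factor for a 6-V output
```

A firmware implementation would simply scale `duty_factor()` by the PWM timer’s period register to get the compare value for a desired output voltage.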
About that tracking preregulator thing: Control of U3 to maintain the 3 V of headroom required to hold U2 safe from dropout relies on Q1 acting as a simple (but adequate) differential amplifier. Q1 drives U3’s Vfb voltage feedback pin to maintain Vfb = 1.245 V. Therefore (where Vbe = Q1’s emitter-base bias):
Vfb/R7 = ((U2in – U2out) – Vbe)/R6
1.245 V = (U2in – U2out – 0.6 V)/(5100/2700)
U2in – U2out = 1.89 × 1.245 V + 0.6 V ≈ 3 V
Meanwhile, deducing what Q2 does is left as an exercise for the astute reader. Hint: It saves about a third of a watt compared to the original DI at Vout = 12 V.
Note, if you want to use this circuit with a different preregulator with a different Vfb, just adjust:
R7 = R6 Vfb/2.4
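The headroom arithmetic and the R7 adjustment can be sketched in Python (function names are illustrative; 2.4 V is the 3-V headroom target minus the assumed 0.6-V Vbe):

```python
def headroom(r6, r7, vfb, vbe=0.6):
    """U2 input-output differential held by the Q1/U3 loop:
    Vin - Vout = Vfb*R6/R7 + Vbe."""
    return vfb * r6 / r7 + vbe

def r7_for(r6, vfb, drop=2.4):
    """R7 = R6*Vfb/2.4 for a preregulator with a different Vfb."""
    return r6 * vfb / drop

print(headroom(5100, 2700, 1.245))  # ~2.95 V, i.e., about 3 V
print(r7_for(5100, 1.245))          # ~2646 ohms; 2.7k is the nearest standard value
```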
In closing…
Thanks must go to reader Ashutosh for his clever suggestion to improve power DAC efficiency with a tracking regulator, also (and especially) to editor Aalyia for her creation of a Design Idea environment that encourages such free and friendly cooperation!
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- PWM power DAC incorporates an LM317
- Cancel PWM DAC ripple with analog subtraction
- A faster PWM-based DAC
- Parsing PWM (DAC) performance: Part 1—Mitigating errors
- Cancel PWM DAC ripple with analog subtraction but no inverter
- Phased-array PWM DAC
The future of cybersecurity and the “living label”

New security standards for IoT devices are being released consistently, showing that security is no longer an afterthought in the design of embedded products. Last month, the White House launched the Cyber Trust Mark, a major move toward IoT device security built around the more robust concept of the “living label,” acknowledging the dynamic nature of security over time. The standard essentially requires qualifying devices to carry a QR code that can be scanned for security information, such as whether the device will receive automatic software support such as security patches. Vendors of IoT products are now meant to partner with an “accredited and FCC-recognized CyberLAB to ensure it meets the program’s cybersecurity requirements,” according to the FCC.
In a conversation with Silicon Labs’ Chief Security Officer Sharon Hagi, EDN learned a bit more about this new standard, its history, and the potential future security applications of this new QR code labeling scheme.
IoT mania
In the IoT “boom” of the early 2000s that lasted well into the 2010s, companies were eager to wirelessly enable practically every device; paired with the right MCU, the applications seemed endless. Use cases from home automation and smart cities to agritech and industrial automation were all explored, with supporting industry-specific or open protocols that could vary in spectrum (licensed or unlicensed), modulation technique, topology, transmit power, maximum payload size, broadcast schedule, number of devices, etc. Amid the growing hype and litany of hardware/protocol options, network security was still mostly discussed on the sidelines, leaving some major holes for bad actors to exploit.
Cybersecurity history
With time and experience, it has become abundantly clear that IoT security is, in fact, pretty important. A Mirai-style botnet can infect many IoT devices with malware at once, enabling larger-scale attacks such as distributed denial of service (DDoS). Moreover, a common vulnerability and exposure (CVE) that lands a high score on the common vulnerability scoring system (CVSS) can draw in the US government’s Cybersecurity and Infrastructure Security Agency (CISA) and, if left unresolved, lead to fines. This just adds insult to the reputational injury a company might experience from an exploited security issue. Sharon Hagi expanded on IoT-device vulnerabilities: “these devices are in the field, so they’re subjected to different kinds of attack. There’s software-based attacks, remote attacks over the network, and physical attacks like side-channel attacks, glitching, and fault injection,” speaking to how Silicon Labs included countermeasures for many of these attacks. The company’s initial developments in security centered around its “Secure Vault” technology, a dedicated security core with cryptographic functionality encapsulated within it. The core manages the root of trust (RoT) of the device, manages keys, and governs access to critical interfaces such as the ability to lock/unlock the debug port.
Hagi went on to describe the background of the US cybersecurity standards that led to the more recent regulatory frameworks, citing the NIST 8259 specification as the foundational set of cybersecurity requirements for manufacturers to be aware of (Figure 1). Another baseline standard is the ETSI European Standard (EN) 303 645 for consumer IoT devices.
Figure 1 NIST 8259A and 8259B technical capabilities and non-technical support activities for manufacturers to consider in their products. Source: NIST
Hagi expanded more on the history of the Cyber Trust Mark, “The history of the Cyber Trust Mark kind of followed right after [the establishment of NIST 8259] in 2021 during the Biden administration with Executive Order 14028,” which had to do with security measures for critical software, “and that executive order basically directed NIST to work with other federal agencies to further develop the requirements and standards around IoT cybersecurity.” He mentioned how this order specified the need for a labeling program to help consumers identify and judge the security of embedded products (Figure 2).
Figure 2 NIST labeling considerations for IoT device manufacturers where NIST recommends a binary label that is coupled with a layered approach using either a QR code or a URL that leads consumers to additional details online. Source: NIST
“After this executive order, the FCC took the lead and started implementing what we now know as the Cyber Trust Mark program,” said Hagi, mentioning that Underwriters Laboratories (UL) was the de facto certification and testing lab for compliance with the US Cyber Trust Mark program as well as the requirements of the Connectivity Standards Alliance (CSA) with its product security working group (PSWG).
Evolving security standards
In fact, the PSWG consists of over 200 companies with promoters that include tech giants like Google, Amazon, and Apple as well as OEMs such as Infineon, NXP Semiconductors, TI, STMicroelectronics, Nordic Semiconductor, and Silicon Labs. The aim of the PSWG is to unite the disparate emerging regional security requirements, including but not limited to the US Cyber Trust Mark, the EU’s Cyber Resilience Act (CRA) with its “CE marking,” and the Singapore Cybersecurity Labelling Scheme (CLS).
Many of the companies within the PSWG have formulated their own security measures within their chips. NXP, for instance, has its EdgeLock Assurance program, and ST has its STM32Trust security framework. TI has a dedicated product security incident response team (PSIRT) that responds to reports of security vulnerabilities for TI products, while Infineon created a Cyber Defense Center (CDC) with corresponding Computer Security Incident Response Team (CSIRT/CERT) and PSIRT teams for the same purpose. Hagi stated that Silicon Labs set itself apart by implementing security “down to the silicon level” in product design early on in the IoT development game.
These wireless SoCs and MCUs are the keystone of the IoT system, providing the intelligent compute, connectivity, and security of the product. Using more secure SoCs will inevitably ease the process of meeting the ever-changing security compliance standards. Engineers can choose to enable features such as secure boot, secure firmware updates, digitally signed updates with strong cryptographic keys, and anti-tampering, to ultimately enhance the security of their end product.
Living label use cases
Perhaps the most interesting aspect of the interview was the potential applications of these labeling schemes and how to make them more user-friendly. “The labeling scheme could be compared to a food label,” said Hagi. “You go to the supermarket, take the product off the shelf and it shows you the ingredients and nutritional value and make a decision on whether or not this is something you want to buy.” In the future, a more objectively secure product could be a pricier option than a more basic alternative; it would be up to the consumer to decide. While this analogy serves its purpose, the similarities end there. The label lists the “ingredients” of security built into the product, but the Cyber Trust Mark is not meant to be static, since vulnerabilities can still be discovered well after the product is manufactured.
“You might be able to see the software bill of materials (SBOM) where maybe there is a certain open source library that the product is using and there is a vulnerability that has been reported against it. And maybe, when you get home, you need to update the product with new software to make sure that the vulnerability is patched,” said Hagi as he discussed potential use cases for the label.
The hardware BOM (HBOM) may also be very relevant in terms of security, bringing into light the entire supply chain that is involved in assembling the end product. The overall goal of the label is to incentivize companies to foster trust and accountability with transparency on both the SBOM and HBOM.
Hagi continued down the checklist of security measures the label might include: “What is the origin and development history of the product’s security measures? Can it perform authentication? If so, what kind of authentication? What kind of cryptography does it have? Is this cryptography certified? Does the manufacturer include any guarantees? At what point will the manufacturer stop issuing security updates for the product? Does the product contain measures that would comply with people in specific jurisdictions?” These regional regulations on security do vary between, for instance, the EU’s General Data Protection Regulation (GDPR) and, of course, the US Cyber Trust Mark.
ML brings another dimension of security considerations to these devices: “The questions would then be what sort of data does the model collect? How secure are these ML models in the device? Are they locked? Are they unlocked? Can they be modified? Can they be tampered with?” The many attributes of the models bring other levels of security considerations with them and avenues of attack.
The future of the labeling scheme
Ultimately, putting this amount of information on a box is impossible; even more pertinent is how users are meant to interpret the sheer amount of information. Consumers are unlikely to understand all the information on a robust security label, even if it were human-readable. “Another angle is providing some sort of API so that an automated system can actually interrogate this stuff,” said Hagi.
He mentioned one example of securely connecting devices from different ecosystems, “Imagine an Amazon device connecting to an Apple device, with this API, security information is fetched automatically letting users know if it is a good idea to connect the device to the ecosystem.”
As it stands, the labeling scheme is meant to protect the consumer in a more abstract sense; however, it might be difficult for the consumer to accurately understand the security measures put into the product. To make full use of a system like this, “it is likely that a bit of automation is necessary for consumers to make appropriate decisions just in time.” This could eventually enable consumers to make informed decisions on product purchasing, replacement, upgrades, connection to a network, and the security risks of throwing out an item that could contain private information in its memory.
Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for six years. She holds a bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.
Related Content
- Navigating IoT security in a connected world
- Understand the hardware dependencies of IoT security
- 6 core capabilities an IoT device needs for basic cybersecurity
- 7 steps to security for the Internet of Things
A class of programmable rheostats

For many variable resistor (rheostat) applications, one of the device’s terminals is connected to a voltage source VS. Such a source might be a reference DC voltage, an op amp output carrying an AC plus DC signal, or even ground. If freed from the constraint of the (programmable) “floating” rheostats satisfied by the recently disclosed solutions in “Synthesize precision Dpot resistances that aren’t in the catalog” and “Synthesize precision bipolar Dpot rheostats,” there is a compelling alternative approach. Yes, it’s slightly simpler in that it avoids MOSFETs, and the +5-V supply for the digital potentiometer is the only supply needed (especially if rail-to-rail input and output op amps are employed). But more importantly, it’s distinct in that it exhibits no crossover distortion when there is a change in the sign of an AC signal between terminals A and VS.
Wow the engineering world with your unique design: Design Ideas Submission Guide
As seen in Figure 1, I’m shamelessly appropriating the same digital pot used in those other solutions. (Note the limited operating voltage range of potentiometer U2.)
Figure 1 A basic programmable rheostat leveraging the same digital pot used in other solutions.
The resistance between terminals A and voltage source VS looking into terminal A is res = R1/(1 – αa·α2·αb) where the alphas are the gains of U1a, U2, and U1b respectively. αa and αb are slightly less than unity at DC, falling in value with loop gain as frequency increases. α2 is equal to one of the numerator integers 0, 1, 2… 256 divided by a denominator of 256 as determined by the programming of U2.
By changing the numerator from 0 to 255, it would appear that resistor value ratios of 1:256 could be achieved. Unfortunately, U2’s integral non-linearity (INL) is specified as ± 1 LSB. Strictly following this spec, operation with a numerator of 255 could drive the value of res close to infinity at DC and so should be avoided. But that’s not the only concern. For an α2 numerator value “num”, a resistance error factor EF of roughly ± 1/(256-num) could be encountered because of the ± 1 LSB accuracy. To minimize uncertainty, num should be held to less than some maximum value (solutions in “Synthesize precision Dpot resistances that aren’t in the catalog” and “Synthesize precision bipolar Dpot rheostats” have similar problems for small values of “num”). Another reason for such a limit is that resistance resolution is much better with lower than higher values of “num”. For instance, the ratio of resistor values with numerators of 10 and 11 is 1.004. But the values of 240 and 241 yield a ratio of 1.07, and those of 250 to 251, 1.2.
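The resolution and error-factor arithmetic quoted above can be reproduced with a few lines of Python (function names are illustrative; ideal unity-gain followers are assumed):

```python
def res(num, r1=1000.0):
    """DC resistance looking into terminal A: R1/(1 - num/256),
    assuming ideal (unity-gain) op-amp followers."""
    return r1 / (1 - num / 256.0)

def error_factor(num):
    """Worst-case fractional resistance error from the Dpot's +/-1 LSB INL."""
    return 1.0 / (256 - num)

print(res(11) / res(10))    # ~1.004: fine resolution at low num
print(res(241) / res(240))  # ~1.07
print(res(251) / res(250))  # 1.2: coarse resolution at high num
print(error_factor(240))    # 0.0625, i.e., ~6.3% worst case at num = 240
```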
Enhanced programmable rheostat
The simple addition of U3 and R2 in the Figure 2 circuit mitigates these problems by reducing the required maximum value of “num”. For R2 greater than R1, resistances between R1 and R2 should be implemented by having analog switch U3 select R1 rather than R2. For larger resistances, R2 should be selected.
Figure 2 Enhanced programmable rheostat that mitigates the uncertainty problems of the basic programmable rheostat by reducing the required maximum value of “num”.
To see why Figure 2 offers an enhancement, consider a requirement to provide resistance over the range of 1k to 16k. In the Figure 1 and Figure 2 circuits, R1 would be 1k. To produce a value of 1k, “num” would be 0. For 16k, “num” in Figure 1 would be 240, yielding a maximum EF of ± 1/(256 – 240), or approximately 6.3%. But in Figure 2, resistance values of 4k and above would be derived by having U3 switch R1 out in place of a 4k R2. The maximum required value of “num” would be 192, and EF would be reduced by a factor of 4 to 1.6%. It will also be seen that the Figure 2 circuit significantly relaxes op-amp performance requirements for limiting the errors due to finite open loop gains. To see this, some analysis is necessary. Given the maximum allowed fractional resistance error (OAerr) introduced by the op-amp pair, it can be seen that:
Therefore, for closed loop op amp gains:
At DC, op amp voltage follower closed loop gain α is 1/(1 + 1/a0L), where a0L is the op amp open loop DC gain. To satisfy requirements at DC:
Matters are more complicated with AC signals. At a frequency f Hz, the voltage follower open loop gain HOLG(j·f) is 1/(1/a0L + j·f/GBW), where GBW is the part’s gain-bandwidth product and j = √-1.
The closed loop gain HCLG(j·f) is 1/( 1 + 1/ HOLG(j·f)). Substitution of HCLG(j·f) for αa and αb in Equation (1) yields a fourth order polynomial due to the real and imaginary terms of HCLG(j·f). It’s easier to solve the problem with a simulation in LTspice than to solve it algebraically.
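Short of a full LTspice run, the substitution can also be spot-checked numerically with complex arithmetic. The sketch below (function names are illustrative) evaluates the synthesized impedance for the first Table 1 column and lands close to the tabulated 20-kHz error and phase figures:

```python
import cmath
import math

def follower_gain(f, a0l, gbw):
    """Closed-loop follower gain HCLG = 1/(1 + 1/HOLG),
    with open-loop gain HOLG = 1/(1/a0L + j*f/GBW)."""
    return 1.0 / (1.0 + 1.0 / a0l + 1j * f / gbw)

def synthesized_impedance(f, r1, num, a0l, gbw):
    """res = R1/(1 - alpha_a * alpha_2 * alpha_b), with alpha_a = alpha_b = HCLG."""
    alpha = follower_gain(f, a0l, gbw)
    return r1 / (1.0 - alpha * alpha * (num / 256.0))

# Figure 1 case: R1 = 1k, num = 240, a0L = 69 dB, GBW = 1 MHz, f = 20 kHz
z = synthesized_impedance(20e3, 1e3, 240, 10 ** (69 / 20), 1e6)
ideal = 1e3 / (1 - 240 / 256)            # 16 kOhm target
print((1 - abs(z) / ideal) * 100)        # ~16% magnitude error
print(math.degrees(cmath.phase(z)))      # ~-30 degrees of phase shift
```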
LTspice offers a user-specifiable op-amp called…well, “opamp”. It can be configured for user-selected values of a0L and GBW. The tool is configured as shown in Figure 3 to solve this problem.
Figure 3 LTspice can be used to determine op-amp requirements for an AC signal application.
The a0L value required for AC signals will be larger than that calculated in equation (3). It’s suggested to start with an a0L default value of 10000 (100 dB) and try different values of GBW. Use the results to select an op amp for the actual circuit and either simulate it if a model exists or at least update the simulation with the minimum specified values of a0L and GBW for the selected op amp.
Table 1 shows some examples of the behaviors of the circuit with different idealized op-amps. It’s clear that DC performance in either circuit is not a challenge for almost any op-amp. But it’s also evident that the AC performance of a given op-amp is notably better in the Figure 2 circuit than in that of Figure 1, and that a given error can be achieved with a lower performance and less costly op-amp in the Figure 2 circuit.
Figure 1, R1 = 1k, num = 240:

| a0L, dB | 69 | 80 | 80 | 100 | 100 |
| --- | --- | --- | --- | --- | --- |
| GBW, MHz | 1 | 10 | 50 | 10 | 50 |
| DC resistance error due to op-amp pair, % | 1.000 | 0.299 | 0.299 | 0.030 | 0.030 |
| 20-kHz resistance error due to op-amp pair, % | 15.952 | 0.495 | 0.307 | 0.227 | 0.038 |
| 20-kHz phase shift, degrees | -30.22 | -3.42 | -0.69 | -3.43 | -0.69 |
| Equivalent parallel capacitance at 20 kHz, pF | 84.3 | 9.5 | 1.9 | 9.5 | 1.9 |

Figure 2, R2 = 4k enabled, num = 192:

| a0L, dB | 55 | 80 | 80 | 100 | 100 |
| --- | --- | --- | --- | --- | --- |
| GBW, MHz | 1 | 10 | 50 | 10 | 50 |
| DC resistance error due to op-amp pair, % | 0.999 | 0.060 | 0.060 | 0.006 | 0.006 |
| 20-kHz resistance error due to op-amp pair, % | 2.024 | 0.071 | 0.060 | 0.017 | 0.006 |
| 20-kHz phase shift, degrees | -6.71 | -0.69 | -0.14 | -0.69 | -0.14 |
| Equivalent parallel capacitance at 20 kHz, pF | 18.5 | 1.9 | 0.4 | 1.9 | 0.4 |
Table 1 Examples of the circuits’ behavior producing 16kΩ with various op-amp parameters.
Note: The cascade of the two op-amps with their AC phase shifts means that there is an effective capacitance in parallel with the resistance R created by the circuits. Because the two op-amps create a second order system, there is no equivalent broadband capacitance. However, a capacitance C at a spot frequency f Hz can be calculated from the phase shift Φ radians at that frequency. C = tan(Φ)/(2·π·f·R). Simulations have shown that over the full range of resistances and operating frequencies of the examples listed in table, phase shift magnitudes are less than 70 degrees.
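The spot-frequency capacitance formula translates directly to code; a sketch with illustrative values (not taken from Table 1):

```python
import math

def parallel_capacitance(phase_deg, f, r):
    """Spot-frequency equivalent parallel capacitance from the phase shift:
    C = tan(|phi|)/(2*pi*f*R)."""
    return math.tan(math.radians(abs(phase_deg))) / (2 * math.pi * f * r)

# Example: 45 degrees of phase at 20 kHz across 16 kOhm
print(parallel_capacitance(45.0, 20e3, 16e3))  # ~497 pF
```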
The approach taken in Figure 2 can be generalized by supporting not just two but four or more different resistors. Doing so further minimizes both op-amp performance requirements and worst-case errors by reducing the maximum required value of “num”. It also extends the range of resistor values achievable for a given error budget.
Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.
Related Content
- Synthesize precision Dpot resistances that aren’t in the catalog
- Synthesize precision bipolar Dpot rheostats
- Error assessment and mitigation of an innovative data acquisition front end
The post A class of programmable rheostats appeared first on EDN.
How to measure PSRR of PMICs

Ensuring stable power management is crucial in modern electronics, and the power supply rejection ratio (PSRR) plays a key role in achieving it. This article serves as a practical guide to measuring PSRR for power management ICs (PMICs), offering clear and comprehensive instructions.
PSRR reflects a circuit’s ability to reject fluctuations in its power supply voltage, directly impacting performance and reliability. By understanding and accurately measuring this parameter, engineers can design more robust systems that maintain consistent operation even under varying power conditions.
Figure 1 Here is the general methodology to measure PSRR. Source: Renesas
PSRR is a vital parameter that assesses an LDO’s capability to maintain a consistent output voltage amidst variations in the input power supply. Achieving high PSRR is crucial in scenarios in which the input power supply experiences fluctuations, thereby ensuring the dependability of the output voltage. Figure 1 below illustrates the general methodology for measuring PSRR.
The mathematical expression to calculate the PSRR value is:
PSRR = 20 × log10(VIN/VOUT)

where VIN and VOUT are the AC ripple amplitudes of the input and output voltages, respectively.
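As a quick numerical check of this formula (the function name is illustrative):

```python
import math

def psrr_db(vin_ripple, vout_ripple):
    """PSRR = 20*log10(Vin_ripple / Vout_ripple), in dB."""
    return 20 * math.log10(vin_ripple / vout_ripple)

# A 200 mVpp ripple on the input that appears as 2 mVpp on the output
# corresponds to 40 dB of rejection.
print(psrr_db(0.200, 0.002))  # → 40.0
```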
Equipment and setup
To ensure an accurate measurement of the PSRR, it’s essential to set up the test environment with precision. The following design outlines the use of the listed equipment to establish a robust and reliable test configuration.
First, connect the power supply—in our case it’s a Keithley 2460—to the input of the Picotest J2120A line injector. The power supply should be configured to generate a stable DC voltage while the AC ripple component is provided by a Bode 100 network analyzer output using the J2120A line injector to simulate power supply variations.
Note that the J2120A line injector includes an internally biased N-channel MOSFET, which means there is a voltage drop between the J2120A input and output. The voltage drop is non-linear, and its dependence on load current is shown in Figure 2. Consequently, each time the load current is adjusted, the source power supply must also be adjusted to maintain a constant DC output voltage at the J2120A terminals.
Figure 2 J2120A’s resistance and voltage drop is shown versus output current. Source: Renesas
For example, to get 1.2 V at the input of the LDO regulator, it might be necessary, depending on the load current, to set the voltage at the input of the line injector anywhere from 2.5 V to 3.5 V. The MOSFET operates open loop so that it does not become unstable when connected to the external regulator.
Next, a digital multimeter is used to monitor both the input and output voltages of the PMIC. Ensure that proper grounding is used, and minimal interference is present in the connections to maintain measurement integrity.
Finally, a Bode 100 from Omicron Lab is used to record and analyze the measurements. This data can be used to compute the PSRR values and evaluate the PMIC’s ability to maintain a stable output despite variations in the input supply.
By carefully following this setup, one can ensure accurate and reliable PSRR measurements, contributing to the development of high-performance and dependable electronic systems.
Table 1: Here is an outline of the instruments used in PSRR measurements. Source: Renesas
Table 2 See the test conditions for LDOs. Source: Renesas
Settings for PSRR bench measurements setup
Figure 3 Block diagram shows the key building blocks of PSRR bench measurement. Source: Renesas
The PSRR measurement is performed with the Bode 100. The Gain/Phase measurement type should be chosen in the Bode Analyzer Suite software, as shown in Figure 4.
Figure 4 Start menu is shown in the Bode Analyzer Suite software. Source: Renesas
Set the Trace 1 format to Magnitude (dB).
Figure 5 This is how to set Trace 1. Source: Renesas
To get the target PSRR measurement, choose the following settings in the “Hardware Setup”:
- Frequency: Change the Start frequency to “10 Hz” and Stop frequency to “10 MHz”.
- Source mode: Choose between Auto off or Always on. In Auto off mode, the source will be automatically turned off whenever it’s not used (when a measurement is stopped). In Always on mode, the signal source stays on after the measurement has finished. This means that the last frequency point in a sweep measurement defines the signal source frequency and level.
- Source level: Set the constant source level to “-16 dBm” or higher for the output level. The unit can be changed in the options; by default, the Bode 100 uses dBm as the output level unit, where 0 dBm corresponds to 1 mW into a 50 Ω load. “Vpp” can be chosen to display the output voltage as a peak-to-peak value. Note that the internal source voltage is twice the displayed value, which is valid when a 50 Ω load is connected to the output.
- Attenuator: Set the input attenuators to 20 dB for Receiver 1 (Channel 1) and 0 dB for Receiver 2 (Channel 2).
- Receiver bandwidth: Select the receiver bandwidth used for the measurement. Higher receiver bandwidth increases the measurement speed. Reduce the receiver bandwidth to reduce noise and to catch narrow-band resonances.
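The relationship between the dBm source level and peak-to-peak voltage noted above can be sketched as follows (assuming a sine source into a 50 Ω load; the function name is illustrative):

```python
import math

def dbm_to_vpp(level_dbm, load_ohms=50.0):
    """Convert a source level in dBm to sine peak-to-peak volts at the load."""
    power_w = 1e-3 * 10 ** (level_dbm / 10)   # 0 dBm = 1 mW
    v_rms = math.sqrt(power_w * load_ohms)
    return 2 * math.sqrt(2) * v_rms           # Vpp = 2*sqrt(2)*Vrms for a sine

# A -16 dBm setting corresponds to roughly 100 mVpp at 50 ohms,
# comfortably below the 200 mVpp ripple ceiling recommended later.
print(f"{dbm_to_vpp(-16) * 1000:.0f} mVpp")
```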
Figure 6 The above diagram shows hardware setup in Gain/Phase Measurement mode and measurement configuration. Source: Renesas
Before starting the measurement, the Bode 100 needs to be calibrated. This will ensure the accuracy of the measurements. Press the “Full Range Calibration” button as shown in Figure 7. To achieve maximum accuracy, do not change the attenuators after external calibration is performed.
Figure 7 Press the “Full Range Calibration” button to ensure measurement accuracy. Source: Renesas
Figure 8 The Full Range Calibration window. Source: Renesas
Connect OUTPUT, CH1, and CH2 as shown below and perform the calibration by pressing the Start button.
Figure 9 In calibration setup, Connect OUTPUT, CH1 and CH2, and press the Start button. Source: Renesas
Figure 10 The calibration window after the calibration has been performed. Source: Renesas
For all LDOs:
- The input capacitor will filter out some of the signals injected into the LDO, so it’s best to remove the input capacitors for the tested LDO or keep one as small as possible.
- Configure the network analyzer; use the power supply to power the line injector and connect the output of the network analyzer to the OSC input of the line injector.
- Power up the device under test (DUT) and configure the tested LDO’s output voltage. To prevent damage to the PMIC, the LDO’s input voltage should be less than or equal to the max input voltage. It’s highly recommended to power up the LDO without a resistive load, then apply the load and adjust the input voltage.
- Configure the LDO VOUT as specified in Table 2.
- Enable the LDO under test and use a voltmeter to check the output voltage.
- To ensure that the start-up current limit does not prevent the LDO from starting correctly, connect the resistive load to the LDO once the VOUT voltage has reached its max level.
- Adjust the voltage at the J2120A OUT terminals to their target VIN.
- Connect the first channel (CH1) of the network analyzer to the input of the LDO under test using a short coaxial cable.
- Connect the second channel (CH2) of the network analyzer to the output of the LDO under test using a short coaxial cable.
- Monitor the output voltage of the line injector on an oscilloscope. Perform a frequency sweep and check that the minimum input voltage and an appropriate peak-to-peak level for the test are achieved. Make sure that the AC component is 200 mVpp or lower.
Figure 11 This simplified example shows headroom impact on the ripple magnitude. Source: Renesas
Note that headroom for the PSRR is not the same as the dropout voltage parameter (Vdo) specified in the datasheets (see Figure 11). Headroom in the context of PSRR refers to the additional voltage margin above the output voltage that an LDO requires to effectively reject variations in the input voltage.
Essentially, it ensures that the LDO can maintain a stable output despite fluctuations in the input power supply. Dropout voltage (Vdo), on the other hand, is a specific parameter defined in the datasheets of LDOs.
It’s the minimum difference between the input voltage (VIN) and the output voltage (VOUT) at which the LDO can still regulate the output voltage correctly under static DC conditions. When the input voltage drops below this minimum threshold, the LDO can no longer maintain the specified output voltage, leading to potential performance issues.
Figure 12 Example highlights applied ripple and its magnitude with DC offset for LDO’s input. Source: Renesas
- Set up the network analyzer by using cursors to measure the PSRR at each required frequency (1 kHz, 100 kHz and 1 MHz). Add more cursors if needed to measure peaks as shown in Figure 13.
Figure 13 This is how design engineers can work with cursors. Source: Renesas
- Capture images for each measured condition.
Figure 14 Example shows captured PSRR graph for the SLG51003 LDO. Source: Renesas
Figure 15 Bench measurement setup is shown for the SLG51003 PSRR.
Clear and precise PSRR measurement
This methodology provides a clear and precise approach for measuring the PSRR for the SLG5100X family of PMICs using the Omicron Lab Bode 100 and Picotest J2120A. Accurate PSRR measurements in the 10 Hz to 10 MHz frequency range are crucial for validating LDO performance and ensuring robust power management.
The accompanying figures serve as a valuable reference for setup and interpretation, while strict adherence to these guidelines enhances measurement reliability. By following this framework, engineers can achieve high-quality PSRR assessments, ultimately contributing to more efficient and reliable power management solutions.
Oleh Yakymchuk is applications engineer at Renesas Electronics’ office in Lviv, Ukraine.
Related Content
- ADC Power Supply Noise: PSRR & PSMR
- Measuring amplifier DC offset voltage, PSRR, CMRR, and open-loop gain
- Power Supply Ripple Rejection and Linear Regulators: What’s all the noise about?
- Designing with a complete simulation test bench for op amps: Input-referred errors
- Understand how PSRR and other power-supply factors affect mobile-phone audio quality
The post How to measure PSRR of PMICs appeared first on EDN.
LED headlights: Thank goodness for the bright(nes)s

My wife’s 2018 Land Rover Discovery looks something like this:
with at least one important difference, related (albeit not directly) to the topic of this writeup: hers doesn’t have fog lights. They’re the tiny shiny things at the upper corners of the front bumper of the “stock” photo, just below the air intake “scoops”. In her case, bumper-colored plastic pieces take their places (and the on/off control switch normally at the steering wheel doesn’t exist either, of course, nor apparently does the intermediary wiring harness).
More generally, the from-factory headlights were ridiculously dim, yellow-color-temperature things, halogen-based and H7 in form factor. This vehicle, unlike most (I think), uses two identical pairs of H7 bulbs, albeit aimed differently: one pair for the “low” (i.e., “dipped” or “driving”) beams and the other for the “high” (i.e., “full” or “bright”) beams. Land Rover didn’t switch to LED-based headlights until 2021, but the halogens were apparently so bad that at least one older-generation owner contracted with a shop to update them with the newer illumination sets both front and rear.
On a hunch, I purchased a set of Auxito LED-based replacement bulbs from Amazon for ~$30, figuring them to be a fiscally rationalizable experiment regardless of the outcome. These were the fanless 26-W, 800-lumen variant found on the manufacturer’s website:
Here’s an accompanying “stock” video:
Auxito also sells a brighter (1000 lumens), more power-demanding (30W) variant with a nifty-looking integrated cooling fan:
When they arrived, they slipped right into where the halogens had been; the removal-and-replacement process was a bit tedious but not at all difficult. I’d been pre-warned by my preparatory research (upfront in the manufacturer’s product page documentation, both on its own and Amazon’s websites, which was refreshing) that dropping LEDs in place of halogens can cause various issues resulting from their connections to the vehicle’s CAN bus communication network, for example:
LED upgrade lights are great. They’re rugged, they last far longer than conventional bulbs, and they offer brilliant illumination. But in some vehicles, they can also trigger a false bulb failure warning. Some cars use the vehicle’s computer network (CANbus) system to verify the functioning of the vehicle’s lights. Because LED bulbs have a lower wattage and draw much less power than conventional bulbs, when the system runs a check, the electrical resistance of an LED may be too low to be detected. This creates a false warning that one of the lights has failed.
Here’s the other common problem:
A lot of auto manufacturers use PWM (or pulse width modulation) to precisely control the voltage to a bulb. One of the benefits of doing this is to improve bulb life. These quick, voltage pulses (PWM) do not give a bulb filament time to cool down and dim, so for halogen bulbs the pulses are not noticeable. However, with an LED bulb, these pulses are enough to turn the LEDs off and on very quickly, which results in a flashing of the light.
Philips sells LED CANbus adapters which claim to fix both issues. Auxito also says that it will ship free adapters to customers who encounter problems, albeit noting (in charming broken English):
Built-in upgraded CANBUS decoder, AUXITO H7 bulbs is perfectly compatible with 98% of vehicles. A few extremely sensitive vehicles may require an additional decoder.
I’m delighted to be able to say—hopefully not jinxing myself in the process—that I’m apparently one of those 98%. The LED replacement bulbs fired up glitch-free and have remained problem-less for the multiple months that we’ve used them so far. The color temperature mismatch between them (6500K) and the still-present halogen high beams, which we sometimes also still need to use and which I’m guessing are closer to 3000K, results in a merged illumination pattern beyond the hood that admittedly looks a bit odd, but I’ve bought a second Auxito LED H7 headlight set that I plan to install in the high-beam bulb sockets soon (I promise, honey…).
I’ve also bought a third set, actually, one bulb for use as a spare and the other for future-teardown purposes. In visual sneak-peek preparation, here are some photos of an original halogen bulb, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:
and the LED-based successor, first boxed (I’m only showing the meaningful-info box sides):
and then standalone:
For above-and-beyond (if I do say so myself) reader-service purposes, I also scanned the user manual, whose PDF you can find here:
And with that hopefully illuminating (see what I did there?) info out of the way, I’ll close for today, with an as-usual invitation for reader-shared thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- The headlights and turn signal design blunder
- Headlights and turn signals, part two
- Control individual LEDs in matrix headlights with integrated 8-Switch flicker-free driver
- Are “beam array” headlights in automotive’s future?
- Slideshow: LEDs Design Ideas
The post LED headlights: Thank goodness for the bright(nes)s appeared first on EDN.
How shielding protects electronic designs from EMI/RFI disruptions

Electromagnetic interference (EMI) and radiofrequency interference (RFI) refer to electromagnetically generated noise that can interfere with products’ performance and reliability. RFI is a subset of EMI that refers to radiated emissions such as those from power or communication lines.
Design engineers must strategically reduce EMI and RFI at every opportunity, especially since some sources are naturally occurring and impossible to remove from the environment.
Engineering professionals should begin by using design choices that mitigate these unwanted effects. For example, trace placement can reduce undesirable interference since a PCB’s traces carry current from drivers and receivers.
One widely established tip is to keep the distance between traces at least several times the width of individual traces. Similarly, designers should separate signal-related traces from others, including those associated with audio or video transmission.
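The spacing guideline above can be captured in a quick design-rule check. This is a hedged sketch: the 3× multiplier is an assumption based on the common “3W” rule of thumb, not a figure from the article, and the function name is illustrative:

```python
def trace_spacing_ok(spacing_mm, trace_width_mm, multiple=3.0):
    """Check the rule of thumb that trace spacing should be at least
    several times the trace width (treated here as a simple multiplier
    on width; real DRC rules also depend on stackup and signal class)."""
    return spacing_mm >= multiple * trace_width_mm

# 0.15 mm traces spaced 0.5 mm apart satisfy a 3x-width guideline;
# the same traces at 0.3 mm spacing do not.
print(trace_spacing_ok(0.5, 0.15))   # True
print(trace_spacing_ok(0.3, 0.15))   # False
```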
Design-centered tools can help all parties test different possibilities to find the ones most likely to work in the real world. One such tool allows designers to ease the transition from design to manufacture by creating a digital twin of the production environment. This format-agnostic platform also enables real-time collaboration, shortening the time required for clients to approve designs.
Select appropriate internal filters and shields
Besides following design-related best practices, professionals building electronics while reducing EMI and RFI must identify opportunities to suppress and deflect them without adding too much weight to the devices. That is especially important in cases where people build electronics for aerospace and automotive applications.
The general process is identifying trouble spots after making all appropriate design-related improvements. Engineers should then proceed by applying filtering circuits on the inputs and outputs. Next, they can apply shields. These products surround at-risk components, creating a protective barrier.
The shields are typically metal or polyester, and engineers use industrial machines to form them into the desired shapes. While filters allow harmless frequencies to pass through, shields block and redistribute EMI to mitigate its potentially harmful effects.
A particular point is that filters only block conducted EMI moving through physical connections such as cables; radiated EMI travels through the air and needs no entry point. Additionally, designers will get the best results by scrutinizing how the electronic device functions and acting accordingly. One possibility is to add shielding at heat sinks to control the EMI that would otherwise escape through the holes that promote thermal management.
Consider electrospray technologies
An emerging EMI protection is to deposit electrospray materials onto surfaces or components. In addition to its cost-effectiveness, this solution offers customizable results because engineers can add as much as their applications require.
Although many of these efforts are in the early stages, design engineers should monitor their progress and consider how to incorporate them into their future products. One example comes from a mechanical engineering doctoral student exploring how to apply protective layers to electronics by dispensing aerosols or liquids onto them with electricity. This approach could be especially valuable to manufacturers that create increasingly small products for which traditional shielding techniques are less suitable.
The student argues that electrospray technologies for shielding can open opportunities for protecting miniaturized devices. Her technique deposits a silver layer onto the surface, minimizing the space and costs required to protect devices.
This strategy and similar efforts could also be ideal for engineers who want to safeguard delicate electronics without adding weight. Many consumers perceive lightweight, tiny devices as more innovative than heavier, larger ones. Electrospray caters to these devices while meeting modern manufacturing requirements.
Take project-specific approaches
In addition to following these tips, electronics designers must always engage with their clients throughout their work. Such engagements allow engineering professionals to understand specific needs and identify the most effective ways to achieve successful outcomes.
What worked well in one case may be less suitable for others that seem similar. However, client feedback ensures everyone is on the same page.
Ellie Gabel is a freelance writer as well as associate editor at Revolutionized.
Related Content
- PCB design for EMI in three easy steps
- RFI: keeping noise out of your designs
- EMI, RFI, EMC and radiated susceptibility
- How EVs, EMI/RFI are influencing AM radio’s future
- The Importance of EMI & RFI Shielding in Medical Equipment
The post How shielding protects electronic designs from EMI/RFI disruptions appeared first on EDN.
Basic oscilloscope operation
Whether you just received a new oscilloscope or have gained access to a revered lab instrument you’re unfamiliar with, there is a learning curve associated with using it. Having run a technical support operation for a major oscilloscope supplier, I know that most technical people don’t read manuals. This article (shorter than the typical user manual) is intended to help those who need to use the instrument right away get it up and running.
The front panel

Oscilloscopes from different manufacturers look different, but they all have many common elements. If the oscilloscope has a front panel, it will have basic controls for vertical, horizontal, and trigger settings like the instrument shown in Figure 1.
Figure 1 A typical oscilloscope front panel with controls for vertical, horizontal, and trigger settings. Source: Teledyne LeCroy
Many controls have alternate actions evoked by pushing or, in some cases, pulling the knob. These are generally marked on the panel.
Many oscilloscopes, like this one, use the Windows operating system and can be controlled from the display using a pointing device or a touch screen. Feel free to use any interface that works for you.
Getting a waveform on the screen

It’s crucial to note that digital oscilloscopes retain their last settings. If you’re using the oscilloscope for the first time, it’s smart practice to recall its default setup. This step ensures you’re starting from a known state. Some oscilloscopes, like the one used here, have a dedicated button on the front panel; recalling the default setup can also be done using a pulldown menu (Figure 2).
Figure 2 Recalling the default setup of an oscilloscope places the instrument in a known operational state. Source: Arthur Pini
In the example shown, the default setting is recalled from the “Recall Setup” dialog box using the Recall Default button, highlighted in orange.
Auto Setup

Using the oscilloscope’s “Auto Setup” feature to obtain a waveform on the screen from the default state is simple.
As a basic experiment, connect channel 1 of the oscilloscope to the calibration signal on the oscilloscope’s front panel using one of the high-impedance probes included with the oscilloscope. This calibration signal is a low-frequency square wave used to adjust the low-frequency compensation of the probe’s attenuator.
Press the oscilloscope’s Auto Setup button on the front panel or use the Vertical pulldown menu to select Auto Setup (Figure 3).
Figure 3 The “Auto Setup” is either a front-panel push button or a selection on a pulldown menu, as shown here. Source: Arthur Pini
“Auto Setup” in this instrument scans all the input channels in order and configures the instrument based on the first signal it detects. Based on the detected signal(s), the vertical scale (volts/div) and vertical offset are adjusted. The trigger is set to an edge trigger with a trigger level of fifty percent of the amplitude of the first signal found. The horizontal timebase (time/div) is set so that at least ten signal cycles are displayed on the display screen.
Different oscilloscopes handle this function differently. In some, the signal must be connected to channel 1. Other models, like the one shown, will search through all the channels and set up the first signal found. “Auto Setup” in all oscilloscopes should get you to a point where you have a waveform on the screen.
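The Auto Setup behavior described above can be sketched as a simple parameter-selection routine. This is not any instrument’s actual firmware; the function name and the 8-division vertical / 10-division horizontal graticule are assumptions:

```python
def auto_setup(amplitude_v, frequency_hz, v_divisions=8, h_divisions=10):
    """Pick vertical scale, trigger level, and timebase as the article
    describes: waveform near full scale, edge trigger at 50% of the
    signal amplitude, and at least ten cycles on screen."""
    volts_per_div = amplitude_v / v_divisions         # fill the graticule
    trigger_level = amplitude_v / 2                   # 50% of amplitude
    time_per_div = 10 / (frequency_hz * h_divisions)  # 10 cycles across screen
    return volts_per_div, trigger_level, time_per_div

# A 1 Vpp, 1 kHz calibration square wave:
v_div, trig, t_div = auto_setup(1.0, 1e3)
print(v_div, trig, t_div)  # 0.125 V/div, 0.5 V level, 1 ms/div
```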
The basic controls—vertical settings

The basic oscilloscope controls are vertical, horizontal (timebase), and trigger. In Figure 3, these appear, in that order from left to right, as pull-down menus on the menu bar. These controls are duplicated on the front panel and grouped under the same headings. Either set of controls can be used.
Vertical controls, either on the front panel or on the screen, are used to set up the individual input channels. Selecting a channel creates a dialog box for controlling the corresponding channel. The vertical channel controls include vertical sensitivity (volts/div) and offset. The channel setup controls include coupling, bandwidth, rescaling, and processing (Figure 4).
Figure 4 The vertical channel setup includes the principal controls, including vertical scaling, offset, and coupling. Source: Arthur Pini
The vertical scaling should be set so that the waveform is as close to full scale as possible to maximize the oscilloscope’s dynamic range. This oscilloscope has a “Find Scale” function icon in the channel setup, which will scale the vertical gain and offset to get the waveform centered on the screen with a reasonable amplitude. It is good practice not to overdrive the input amplifier by having the waveform exceed the selected full-scale voltage limits. Use the zoom display to expand the trace for a closer look at tiny features. The offset control centers the waveform on the display. Coupling offers a choice of a 50 Ω DC-coupled input termination or a 1 MΩ termination with AC or DC coupling.
The other controls include a selection of input bandwidth limiting filters, the ability to rescale the voltage reading based on the probe attenuation factor, and the ability to rescale the amplitude reading in a sensor or a probe’s units of measure (e.g., amperes for a current probe). Signal processing in the form of averaging or digital (noise) filtering can be applied to improve the signal-to-noise ratio of the acquired signals.
Channel annotation boxes, like the one labeled C1 in Figure 4, show the vertical scale setting, offset, and coupling for channel 1. When the cursors are turned on, cursor amplitude readouts can also appear in this box.
Timebase settings

Selecting “Horizontal Settings” from the “Timebase” pull-down menu or using the front panel horizontal controls adjusts the horizontal scaling and delay of the horizontal axis, the acquisition sampling modes, the acquisition memory length, and the sampling rate (Figure 5).
Figure 5 The timebase setup controls the sampling mode, horizontal scale, time delay, and acquisition setup. Source: Arthur Pini
The “Horizontal” controls simultaneously affect all the input channels. Generally, three standard sampling modes are real-time, sequence, and roll mode. Real-time is the normal default mode, sampling the input signal at the sampling rate for the entire duration set by the horizontal scale. Sequence mode breaks the acquisition memory into a user-set number of segments and triggers and acquires a signal in each segment before displaying them. Sequence mode acquisitions provide a minimum dead time between acquisitions. Roll mode is for long acquisition times with low sampling rates. Data is written to the display as it is acquired, producing a display that looks like a strip chart recorder.
The time per division (T/div) setting sets the horizontal time scale. The acquisition duration will be ten times the T/div setting. The acquisition delay shifts the trigger point on the display. The default delay is zero. Negative delays shift the trace to the left, and positive delays shift it to the right.
The “Maximum Sample Points” field sets the maximum length of the acquisition memory. By selecting “Set Maximum Memory”, the memory length varies as the T/div setting is changed until the maximum memory is allocated. Beyond that point, increasing the T/div will cause the sampling rate to drop. Basically, the time duration of the acquisition is equal to the number of samples in the memory divided by the sampling rate. If the fixed sampling rate mode is selected, the oscilloscope sampling rate will remain at the user-entered sampling rate as the T/div setting changes. The T/div setting will be restricted to settings compatible with the selected sampling rate.
The sample rate also affects the span of the fast Fourier transform (FFT) math operation, while the time duration of the acquisition affects its frequency resolution.
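The memory/sample-rate relationship in the last two paragraphs can be checked numerically. This is a sketch with assumed limits, not any particular scope’s specifications:

```python
def acquisition_params(t_per_div, max_samples, max_rate, h_divisions=10):
    """Duration = h_divisions * T/div; the scope samples at its maximum
    rate until the memory fills, after which the rate drops so that
    samples = rate * duration still fits in memory."""
    duration = h_divisions * t_per_div
    rate = min(max_rate, max_samples / duration)
    samples = rate * duration
    # FFT span is half the sample rate (Nyquist); frequency resolution
    # is the reciprocal of the acquisition duration.
    fft_span = rate / 2
    fft_resolution = 1 / duration
    return duration, rate, samples, fft_span, fft_resolution

# 1 ms/div with 10 Mpts of memory and a 10 GS/s maximum rate: the 10 ms
# record is memory-limited, giving 1 GS/s, a 500 MHz FFT span, and
# 100 Hz frequency resolution.
d, rate, n, span, res = acquisition_params(1e-3, 10e6, 10e9)
print(d, rate, span, res)
```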
This oscilloscope allows the user to select the number of active channels. Note that the memory is shared among the active channels.
The “Navigation Reference” setting controls how the oscilloscope behaves when you adjust T/div. The centered (50%) selection keeps the current center time point fixed, and other events move about the center as T/div changes. With this setting, the trigger point could move off the grid as the scale changes. The “Lock to Trigger” setting holds the trigger point location fixed. The trigger event remains in place as T/div changes, while other events move about the trigger location.
Basic trigger settings

Oscilloscopes require a trigger, usually derived from or synchronous with the acquired waveform. The function of the trigger is to allow the acquired waveform to be displayed stably. The trigger setup, either on the front panel or using the “Trigger” pulldown, provides access to the trigger setup dialog box (Figure 6).
Figure 6 The basic setup for an edge trigger will allow the acquired waveform to be displayed stably. Source: Arthur Pini
The edge trigger is the traditional default trigger type. In edge trigger, the scope is triggered when the source trace crosses the trigger threshold voltage level with the user-specified positive or negative slope. Trigger sources can be any input channel, or an external trigger applied to the EXT. input. Edge trigger is the most commonly used trigger method and is selected in the figure. The current scope settings shown use channel 1 as the trigger source. The trigger is DC coupled with a trigger threshold level of nominally 500 millivolts (mV) and a positive slope. Note the “Find Level” button in the “Level” field will automatically find the trigger level of the source signal. The trigger annotation box on the right side of the screen summarizes selected trigger settings.
The trigger mode, which can be stop, automatic (auto), normal, or single, is selected from the trigger pulldown menu. The trigger mode determines how often the instrument acquires a signal. The default trigger mode is auto; in this mode, if a trigger event does not occur within a preset time period, one will be forced. This guarantees that something will be displayed. Normal trigger mode arms the oscilloscope for a trigger. When the trigger event occurs, it acquires a trace which is then displayed. After the acquisition is complete, the trigger automatically re-arms the instrument for the next trigger. Traces are displayed continuously as the trigger events occur. If there are no trigger events, acquisitions stop until one occurs.
In single mode, the user arms the trigger manually. The oscilloscope waits until the trigger event occurs and makes one acquisition, which is displayed. It then stops until it is again re-armed. If a valid trigger does not occur, invoking Single a second time will force a trigger and display the acquisition. Stop mode ceases acquisitions until one of the other three modes is evoked. Other, more complex triggers are available for more complex triggering requirements; however, they are beyond the scope of this article.
Display
The oscilloscope display is controlled from the display pull-down menu. The type of display can be selected from the pull-down, or the “Display Setup” can be opened (Figure 7).
Figure 7 “Display Setup” allows for the selection of the number of grids and other display-related settings. This example shows the selection of a quad grid with four traces. Source: Arthur Pini
This oscilloscope allows the user to select the number of displayed grids. There is also an “Auto Grid” selection, which turns on a new grid when each trace is activated. Multiple traces can be located in each grid, allowing comparison of the waveforms. Having a single trace in each grid provides an unimpeded view while maintaining the full dynamic range of the acquisition. In addition to normal amplitude versus time displays, the “Display Setup” includes cross plots of two traces producing an X-Y plot.
Display expansion-zoom
Zoom magnifies the view of a trace horizontally and vertically. The traditional method to access the zoom functions uses the pull-down “Math” menu to open “Zoom Setup” as shown in Figure 8.
Figure 8 Zoom traces can be turned on using the Zoom Setup under the Math pull-down menu. Source: Arthur Pini
Many oscilloscopes have a “Zoom” button on the front panel to open a zoom trace for each displayed waveform. Oscilloscopes with touch screens support touch-and-drag zoom. Touch the trace near the area to be expanded and then drag a finger diagonally. A box will be displayed; continue dragging until the box encloses the area to be expanded. Lift the finger, and the zoom trace can be selected to show the expanded waveform.
A quick start guide
This should get you started. Most Windows-based oscilloscopes have built-in help screens that may be context-sensitive and provide helpful information about settings. If you get stuck, contact the manufacturer’s customer service line; they will get you going quickly. If all else fails, consider reading the manual.
Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.
Related Content
- Basic jitter measurements using an oscilloscope
- Understanding and applying oscilloscope measurements
- Build your own oscilloscope probes for power measurements (part 1)
- Trigger an oscilloscope, get a stable display
The post Basic oscilloscope operation appeared first on EDN.
Rad-tolerant RF switch works up to 50 GHz

Teledyne’s TDSW050A2T wideband RF switch operates from DC to 50 GHz with low insertion loss and high isolation. The radiation-tolerant device, fabricated with 150-nm pHEMT InGaAs technology, is well-suited for complex aerospace and defense applications.
Based on a MMIC design process, the reflective SPDT switch maintains high performance across frequencies, including millimeter-wave bands. It has a typical input P1dB of 23 dBm and port isolation of 23 dB at 50 GHz. The TDSW050A2T operates from ±5-V power supplies with minimal DC power consumption and is controlled with TTL-compatible voltage levels.
The switch withstands 100 krads (Si) TID, making it useful for satellite systems exposed to radiation. It meets MIL-PRF-38534 Class K equivalency for space applications and operates over an extended temperature range of -40°C to +85°C. The TDSW050A2T is supplied as a 1.15×1.47×0.1-mm die for hybrid assembly integration.
The TDSW050A2T RF switch is available now for immediate shipment from Teledyne HiRel and authorized distributors.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Rad-tolerant RF switch works up to 50 GHz appeared first on EDN.
Vishay expands SiC Schottky diode portfolio

Vishay has launched 16 SiC Schottky diodes with 650-V and 1200-V ratings in SOT-227 packages, enhancing efficiency in high-frequency applications. The devices offer, according to the manufacturer, the best trade-off between capacitive charge (QC) and forward voltage drop in their class.
The recently released components include dual diodes in parallel configuration with total forward current ratings ranging from 40 A to 240 A, along with single-phase bridge devices rated at 50 A and 90 A. The diodes feature a forward voltage drop as low as 1.36 V, reducing conduction losses and improving efficiency. They also offer better reverse recovery parameters than Si-based diodes, with virtually no recovery tail.
The SOT-227 package aids efficiency through improved thermal management and reduced parasitic inductance and resistance. The diodes’ low QC down to 56 nC enables high-speed switching, while their industry-standard package provides a drop-in replacement for competing solutions.
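To see why a low forward drop matters, a back-of-envelope conduction-loss estimate can be made with P ≈ VF × IF × D. Only the 1.36-V forward drop comes from the text; the current and duty-cycle figures below are assumptions for illustration.

```python
# Rough conduction-loss estimate: P ≈ Vf * If * D (switching losses ignored).
# Only the 1.36-V forward drop is from the text; current and duty are assumed.
def conduction_loss_w(vf_volts, i_fwd_amps, duty):
    return vf_volts * i_fwd_amps * duty

p = conduction_loss_w(1.36, 40.0, 0.5)  # 40 A at 50% duty (assumed)
print(round(p, 2))  # → 27.2
```

Every 0.1 V shaved off the forward drop at this assumed operating point saves roughly 2 W of heat the SOT-227 package would otherwise have to dissipate.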
Samples and production quantities of the SiC Schottky diodes are available now, with lead times of 18 weeks. To access the datasheets for the dual-diode and single-phase bridge devices, click here.
The post Vishay expands SiC Schottky diode portfolio appeared first on EDN.
PMIC extends primary battery operating time

Integrating an efficient boost regulator, the nPM2100 PMIC from Nordic Semiconductor prolongs the life of primary non-rechargeable batteries. Along with a range of energy-saving features, the device ensures that the full charge is used before the cell is discarded.
Powered by an input voltage range of 0.7 V to 3.4 V, the nPM2100’s boost regulator provides an output voltage from 1.8 V to 3.3 V, with a maximum current of 150 mA. It also drives a load switch/LDO, supplying up to 50 mA across an output range of 0.8 V to 3.0 V. The regulator features a quiescent current of 150 nA, with power conversion efficiency of up to 95% at 50 mA and 90.5% at 10 µA.
The nPM2100 manages the power supply for low-power SoCs and MCUs, including Nordic’s nRF52, nRF53, and nRF54 series of wireless multiprotocol devices. Configured via an I2C-compatible two-wire interface, it provides easy access to advanced functions such as ship mode and battery fuel gauging. Additionally, the PMIC furnishes two GPIOs that can be repurposed for time-critical control functions, offering an alternative to serial communication.
Ship mode supports a 35-nA sleep current with multiple wakeup options, including a break-to-wake function that allows a buttonless product to wake from ship mode when an electrical connection is broken. The voltage- and temperature-based fuel gauge runs on the host microprocessor, providing accurate battery level measurements and ensuring full access to the battery’s energy.
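As a rough illustration of why nanoamp-level quiescent current matters with primary cells, battery life can be estimated as capacity divided by average draw. Only the 150-nA quiescent figure comes from the text; the cell capacity and load current below are assumptions.

```python
# Back-of-envelope primary-cell life estimate. Only the 150-nA quiescent
# current is from the text; cell capacity and average load are assumptions.
def battery_life_years(capacity_mah, avg_load_ua, iq_na):
    total_ua = avg_load_ua + iq_na / 1000.0      # total average draw, µA
    hours = capacity_mah * 1000.0 / total_ua     # mAh → µAh
    return hours / (24 * 365)

# 220-mAh coin cell (assumed), 10-µA average application load
print(round(battery_life_years(220, 10.0, 150), 2))  # → 2.47
```

At these assumed numbers the 150-nA quiescent current adds only about 1.5% to the total draw, so nearly all of the cell's capacity goes to the load rather than to the PMIC itself.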
Samples of the nPM2100 are now available in a 1.9×1.9-mm WLCSP, with additional variants to be offered in 4×4-mm QFN packages. Volume production is expected in the first half of 2025.
The post PMIC extends primary battery operating time appeared first on EDN.
Partners simplify FPGA-based wireless development

Outfitted with an Altera Agilex 7 FPGA, Hitek’s eSOM7 embedded system-on-module pairs with ADI’s Apollo Mixed Signal Front End (MxFE) AD9084/AD9088 evaluation boards. This combined wideband development setup enables customers to seamlessly evaluate and develop high-performance Apollo MxFE-based wireless products in conjunction with Agilex 7 FPGAs.
The Hitek development platform includes two modules: the eSOM7, featuring two 400-pin high-speed mezzanine connectors, and a carrier board that breaks out the FPGA’s SERDES and I/Os. The eSOM7’s Agilex 7 F-tile FPGA integrates hard IP for networking up to 400G Ethernet and PCIe Gen 4. Adding soft IP such as JESD204C, a UDP/IP offload engine (UOE), and DIFI enables an optimized front-end processing and transport design.
For flexibility, designers can choose between two eSOM7 variants: the eSOM7-4F (four F-tiles), which supports all MxFE ADC/DAC channels, or the eSOM7-2F (two F-tiles), which supports half. The carrier module includes a VITA57.4 FMC+ connector with level translation and control logic to interface with ADI’s Apollo MxFE evaluation boards.
The high-performance platform eases the development of a wide range of applications, including radar, electronic warfare systems, phased-array antennas, broadband and satellite communication systems, and electronic test and measurement systems.
The HiTek development platform with the eSOM7-2F is available now. The version featuring the eSOM7-4F will be available in Q1 2025. To learn more, click here.
The post Partners simplify FPGA-based wireless development appeared first on EDN.
ASIL-D MCUs and compiler enhance SDV safety

HighTec’s Rust compiler now supports ST’s Stellar automotive MCUs, accelerating safety-critical system development for software-defined vehicles (SDVs). Stellar 28-nm MCUs are certified to ISO 26262 ASIL D, the highest automotive safety integrity level, while the Rust compiler is qualified to the same safety level.
Rust’s safety, performance, and reliability make it an emerging choice for automotive mission-critical systems. It includes provisions to safeguard memory, process threads, and data types, with runtime efficiency comparable to C/C++ in execution time and memory usage. HighTec’s C/C++ and Rust compilers enable the integration of newly developed Rust code, with its inherent safety benefits, alongside legacy C/C++ code.
ST’s Stellar automotive MCUs feature Arm Cortex-R52+ cores and a safety-focused architecture. In addition to ISO 26262 ASIL D certification, they comply with the ISO 21434 cybersecurity standard and UN R155 requirements, ensuring alignment with the latest safety and security regulations.
For more information about the HighTec ASIL D Rust compiler for ST’s Stellar 32-bit automotive MCUs, click here.
The post ASIL-D MCUs and compiler enhance SDV safety appeared first on EDN.
Will open-source software come to SDV rescue?

Modern cars’ advanced features for safety, driver assistance, and infotainment are now intrinsically tied to software-defined vehicles (SDVs). So far, automakers have delivered these features at the lower SDV levels using closed-source, proprietary software. However, an SDV can be defined in six levels, with a true SDV starting at level three.
Moritz Neukirchner explains these six levels and argues that open-source software will be crucial in realizing alternatives to proprietary solutions for SDVs. While acknowledging that design teams have tried and failed to develop safety-centric, Linux-based solutions for automotive, he provides an update on Linux solutions’ recent progress in incorporating safety functionality into SDVs.
Read the full story at EDN’s sister publication, EE Times.
Related content
- Software-defined vehicles (SDVs) come of age
- Redefining Mobility with Software-Defined Vehicles
- CES 2025: Moving toward software-defined vehicles
- Software-defined vehicle (SDV): A technology to watch in 2025
- Understanding the Architecture of Software-Defined Vehicles (SDVs): Key Components and Future Insights
The post Will open-source software come to SDV rescue? appeared first on EDN.
Converting from average to RMS values

We had a requirement to measure the RMS value of a unipolar square wave feeding a resistive load. Our resistive loads were light bulb filaments (Numitrons), so their brightness depended on the applied RMS voltage.
Our digital multimeters did not have an RMS measurement capability, but they could measure the average value of the waveform at hand.
Conversion of a measured average value to the RMS value was accomplished by taking the average value and dividing that by the square root of the waveform’s duty cycle.
The applicable equations are shown in Figure 1.
Figure 1 Equations used to convert a measured average value to RMS value by taking the average value and dividing that by the square root of the waveform’s duty cycle.
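For readers who want to check the arithmetic, the conversion can be sketched in a few lines of Python. For a unipolar square wave of amplitude A and duty cycle D, the average is A·D and the RMS is A·√D, so RMS = average/√D.

```python
import math

# For a unipolar square wave of amplitude A and duty cycle D:
#   average = A * D,   RMS = A * sqrt(D)   =>   RMS = average / sqrt(D)
def avg_to_rms(avg, duty):
    return avg / math.sqrt(duty)

# A 5-V square wave at 25% duty reads 1.25 V average
print(avg_to_rms(1.25, 0.25))  # → 2.5 (i.e., 5 V * sqrt(0.25))
```

At 100% duty (pure DC) the average and RMS values coincide, as the formula predicts; the lower the duty cycle, the more the average reading understates the RMS value.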
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- RMS stands for: Remember, RMS measurements are slippery
- Root mean square versus root sum square
- When do you need true rms?
- Rise Time: The Role of RMS
The post Converting from average to RMS values appeared first on EDN.
Flip ON flop OFF

Toggle, slide, push-pull, push-push, tactile, rotary, etc. The list of available switch styles goes on and on (and off?). Naturally, as mechanical complexity goes up, so (generally) does price. Hence simpler generally translates to cheaper. Figure 1 goes for economy by adding a D-type flip-flop and a few discretes to a minimal SPST momentary pushbutton to implement a classic push-on, push-off switch.
Figure 1 F1a regeneratively debounces S1 so F1b can flip ON and flop OFF reliably.
Wow the engineering world with your unique design: Design Ideas Submission Guide
An (almost) universal truth about mechanical switches, unless they’re the (rare) mercury-wetted type, is contact bounce. When actuated, instead of just one circuit closure, you can expect several, usually separated by a millisecond or two. This is the reason for the RC network and other curious connections surrounding the F1a flip-flop.
When S1 is pushed and the circuit closed, a 10 ms charging cycle of C1 begins and continues until the 0/1 switching threshold of pin 4 is reached. When that happens, poor F1a is simultaneously set to 1 and reset to 0. This contradictory combination is a situation no “bistable” logic element should ever (theoretically) have to tolerate. So, does it self-destruct like standard sci-fi plots always paradoxically predict?
Actually, the 4013 datasheet truth table tells us that nothing so dramatic (and unproductive) is to be expected. According to it, when connected this way, F1a simply acts as a non-inverting buffer with pin 2 following the state of pin 4, snapping high when pin 4 rises above its threshold and popping low when it descends below. Positive feedback through C1 sharpens the transition while ensuring that F1a will ignore the inevitable S1 bounce. Meanwhile, the resulting clean transition delivered to F1b’s clock input (pin 11) causes it to reliably toggle, flipping ON if it was OFF and flopping OFF if it was ON, where it remains until S1 is next released and then pushed again.
Thus, the promised push-ON/push-OFF functionality is delivered!
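The debounce idea translates directly into software terms: accept the first edge, then ignore further edges inside a lockout window, which is the role the RC network plays here. A minimal sketch follows; the 10-ms window mirrors C1's charging time, and the function itself is purely illustrative.

```python
# Software analogy of the debounce-and-toggle action: accept the first
# edge, then ignore further edges inside a lockout window (the RC's job).
def toggle_with_debounce(edge_times_ms, lockout_ms=10):
    state = False            # OFF
    last_accepted = None
    for t in edge_times_ms:
        if last_accepted is None or t - last_accepted >= lockout_ms:
            state = not state
            last_accepted = t
    return state

print(toggle_with_debounce([0, 1, 2]))       # → True (bouncy press, one toggle)
print(toggle_with_debounce([0, 1, 2, 500]))  # → False (second press, back OFF)
```

Without the lockout, the three bounce edges at 0, 1, and 2 ms would toggle the state three times and leave the "switch" in the wrong position; with it, one press yields exactly one toggle.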
The impedance of F1b’s pin 13 is supply-voltage dependent, ranging from 500 Ω at 5 V to 200 Ω at 15 V. If the current demand of the connected load is low enough, power can be taken directly from F1b pin 13 and the Q1 MOSFET is unnecessary. Otherwise, a suitably capable transistor should be chosen; the DMP3099L shown has an Ron of less than 0.1 Ω and can pass 3 A.
But what about that “no switch at all” thing?
The 4013 input current is typically only 10 pA. Therefore, as illustrated in Figure 2, a simple DC touchplate comprising a small circuit board meander can provide adequate drive and allow S1 to be dispensed with altogether. It’s hard to get much cheaper than that.
Figure 2 An increase in RC network resistances allows substitution for S1 with a simple touchplate.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Setup and Hold Time Basics
- Use an op amp as a set/reset flip-flop
- New VFC uses flip-flops as high speed, precision analog switches
- CMOS flip-flop used “off label” implements precision capacitance sensor
The post Flip ON flop OFF appeared first on EDN.
Build ESD protection using JFETs in op amps

Design engineers aiming to protect the input and output of op amps have several options. They can use an electrostatic discharge (ESD) diode or input current-limiting resistor alongside a transient voltage suppressor (TVS) diode. However, both design approaches have limitations. Here is why an op amp with integrated JFET input protection has better design merits.
Read the full article at EDN’s sister publication, Planet Analog.
Related Content
- Op amps: the 10,000-foot view
- Op-Amp Measurements Explained
- New op amps address new—and old—design challenges
- Are you violating your op amp’s input-common-mode range?
- Op Amp Circuitry Ultra-High-Precision Resistor Usage: Matching and Stability Importance
The post Build ESD protection using JFETs in op amps appeared first on EDN.
Investigating injection locking with DSO Bode function

Oscillator injection locking is an interesting subject; however, it seems to be a forgotten circuit concept that can be beneficial in some applications.
Wow the engineering world with your unique design: Design Ideas Submission Guide
This design idea shows an application of the built-in Bode capability within many modern low-cost DSOs such as the Siglent SDS814X HD using the Peltz oscillator as a candidate for investigating injection locking [1], [2], [3].
Figure 1 illustrates the instrument setup and device under test (DUT) oscillator schematic with Q1 and Q2 as 2N3904s, L ≈ 470 µH, C ≈ 10 nF, Rb = 10 kΩ, Ri = 100 kΩ, and Vbias = -1 VDC. This arrangement and component values produce a free-running oscillator frequency of ~75.5 kHz.
Figure 1 Mike Wyatt’s notes on producing a Peltz oscillator and injector locking setup where the arrangement and component values produce a free running oscillator frequency of ~75.5 kHz.
As shown in Figure 2, the analysis from Razavi [2] shows the injection locking range (± Δfo) around the free running oscillator frequency fo. Note the locking range is proportional to the injected current Ii. The component values shown reflect actual measurements from an LCR meter.
Figure 2 Mike Wyatt’s notes on the injection-locked Peltz oscillator showing the injection locking range around the free running oscillator frequency fo.
This analysis predicts a total injecting locking range of 2*Δfo, or 2.7 kHz, which agrees well with the measured response as shown in Figure 3.
Figure 3 The measured response of the circuit shown in Figure 1 showing an injection locking range of roughly 2.7 kHz.
Increasing the injection signal increases the locking range to 3.7 kHz as predicted, and measurement shows 3.6 kHz as shown in the second plot in Figure 4.
Figure 4 The measured response of the circuit shown in Figure 1 where increasing the injection signal increases the locking range to 3.7 kHz.
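Razavi's first-order locking-range expression [2], Δf0 = (f0/2Q)·r/√(1−r²) with r = Iinj/Iosc, is easy to evaluate numerically. Only the 75.5-kHz free-running frequency comes from this article; the Q and current ratio below are assumptions for illustration.

```python
import math

# One-sided locking range per Razavi [2]:
#   Δf0 = (f0 / (2*Q)) * r / sqrt(1 - r**2),  r = I_inj / I_osc
# f0 is from the article; Q and r are illustrative assumptions.
def locking_range_hz(f0_hz, q, r):
    return (f0_hz / (2.0 * q)) * r / math.sqrt(1.0 - r ** 2)

delta = locking_range_hz(75.5e3, q=10.0, r=0.33)
print(round(2 * delta))  # total two-sided range in Hz
```

As the expression shows, the range grows roughly linearly with the injected current for small r, which is why increasing the injection signal widened the measured lock range from 2.7 kHz to 3.6 kHz.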
Note that the measured results show a phase reversal compared to the illustration notes (Figure 2) and the Razavi [2] article. This occurred because the author did not define the initial phase setup (180° reversed) to agree with the article until after the measurements were completed.
Injection locking use case
Injection locking is an interesting subject with some uses even in today’s modern circuitry. For example, I recall an inexpensive arbitrary waveform generator (AWG) that had a relatively large frequency error due to its cheap internal crystal oscillator. I wanted the ability to use a 10-MHz GPS-disciplined signal source to improve the AWG waveform frequency accuracy. Instead of reconfiguring the internal oscillator and butchering up the PCB, a simple series RC from a repurposed rear AWG BNC connector to the right circuit location solved the problem without a single cut to the PCB. The AWG would operate normally with the internal crystal oscillator reference unless an external reference signal was applied, at which point the oscillator would injection lock to the external reference. This happened automatically, with no need for a switch or a firmware parameter: a simple “old school” technique solving a present-day problem.
Michael A. Wyatt is a life member of IEEE and has enjoyed electronics ever since childhood. His long career spans Honeywell, Northrop Grumman, Insyte/ITT/Exelis/Harris, and ViaSat; he is now semi-retired with Wyatt Labs. During his career he accumulated 32 US patents and published several EDN articles, including the Best Idea of the Year in 1989.
References
- “EEVblog Electronics Community Forum.” Injection Locked Peltz Oscillator with Bode Analysis, www.eevblog.com/forum/projects/injection-locked-peltz-oscillator-with-bode-analysis.
- B. Razavi, “A study of injection locking and pulling in oscillators,” in IEEE Journal of Solid-State Circuits, vol. 39, no. 9, pp. 1415-1424, Sept. 2004, doi: 10.1109/JSSC.2004.831608.
- Wyatt, Mike. “Simple 5-Component Oscillator Works below 0.8V.” EDN, 3 Feb. 2025, www.edn.com/simple-5-component-oscillator-works-below-0-8v/.
Related Content
- Simple 5-component oscillator works below 0.8V
- Injection-lock a Wien-bridge oscillator
- Ultra-low distortion oscillator, part 2: the real deal
- The Colpitts oscillator
- Clapp versus Colpitts
The post Investigating injection locking with DSO Bode function appeared first on EDN.
Intel comes down to earth after CPUs and foundry business review

While fine-tuning its products and manufacturing process roadmap, Intel has realized that there are no quick fixes. After a briefing from Intel co-CEOs Michelle Holthaus and David Zinsner on upcoming CPUs and a slowdown in the ramp of the 18A node, Alan Patterson caught up with industry analysts to take a closer look at Intel’s predicament. He spoke with them about delayed CPU launches, the lack of an AI story, and the fate of Intel Foundry.
Read the full story at EDN’s sister publication, EE Times.
Related Content
- Intel outside?
- Intel: Gelsinger’s foundry gamble enters crunch
- Pat Gelsinger: Return of the engineer CEO that wasn’t
- Intel’s Commitments to Europe: From Pride to Prejudice
- Intel Years From Success in Foundry Business, Analysts Say
The post Intel comes down to earth after CPUs and foundry business review appeared first on EDN.