EDN Network

Voice of the Engineer

Mixed-signal scope doubles responsiveness

Fri, 12/08/2023 - 23:19

Tektronix has launched the 4 Series B mixed-signal oscilloscope (MSO), enabling quicker analysis and faster data transfers. The MSO offers the same signal fidelity and touch user interface as the earlier 4 Series, but with an upgraded processor system. According to Tektronix, the user interface on the 4 Series B is twice as responsive and provides accelerated advanced analysis.

Among the MSO’s attributes are bandwidths from 200 MHz to 1.5 GHz, real-time sampling at 6.25 Gsamples/s, and 12-bit vertical resolution (16-bit in high res mode). With up to six analog input channels, the scope is suitable for three-phase power analysis. Tektronix Spectrum View technology provides multichannel spectrum analysis in sync with time domain waveforms.

In addition to accelerating front-panel operation, the upgraded processor system also enhances remote operation. The 4 Series B can be remotely accessed and controlled using a simple web browser, dedicated TekScope PC software, or custom program via a programming interface. More than 25 serial decode packages are available for interchip, automotive, power, and aerospace buses, to name a few.

The 4 Series B mixed-signal oscilloscope is available now, with a base price of $8830.

4 Series B product page

Tektronix


DC/DC converter increases 5G radio head power

Fri, 12/08/2023 - 23:19

The NE070DC58AZ DC/DC converter from OmniOn boosts and regulates the nominal -48-VDC output provided by the Infinity S or Infinity M power system to -58 VDC. This latest addition to the Infinity family of products helps ensure constant boosted voltage output to cellular tower radio heads during normal operation, as well as in the event of an outage requiring battery backup.

The -58-VDC output is provided to an integrated, common system bus structure and is then deployed to loads through pluggable DC breakers. Boosted voltage extends the reach of remote radio heads and reduces current flow in cables. This, in turn, decreases heat loss and improves overall system efficiency. Converters can be connected in parallel for larger loads and with N+1 redundancy.

Medium-voltage Infinity DC power systems are configured in racks or cabinets to ease 5G and wireless infrastructure expansion. Boost converter slots and associated load circuits can be integrated into the Infinity power plants during the build process. Dual-voltage systems already deployed in the field can be retrofit to support the -58-V boost capability.

OmniOn products are available globally through its network of distributors and manufacturer representatives.

NE series product page

Infinity S product page

Infinity M product page

OmniOn Power


ICs hone sensorless BLDC motor design

Fri, 12/08/2023 - 23:19

Three motor driver ICs from Renesas enable full torque at zero speed from brushless DC (BLDC) motors without sensors. The parts enable the creation of sensorless BLDC motor systems with higher horsepower and speed at a given torque. According to the manufacturer, the ICs improve power consumption and reliability, while reducing board space by lowering the number of components needed.

The RAA306012 is a 65-V, 3-phase smart gate driver that pairs with a variety of MCUs from Renesas or from other sources. Two other devices, the RAJ306101 and RAJ306102, save board space by integrating the smart gate driver with a Renesas 32-bit or 16-bit MCU, respectively, in a single package.

Full torque at zero speed without sensors is made possible by two Renesas patent-pending algorithms. Enhanced inductive sensing (EIS) offers stable position detection when the motor is completely stopped. When the motor is operating at extremely low speed, motor rotor position identification (MRI) is used. At higher speeds, the driver ICs use conventional methods.

All three devices are available now. The RAA306012 comes in a 7×7-mm, 48-pin QFN package. The RAJ306101 and RAJ306102 are housed in 8×8-mm QFNs with 56 pins and 64 pins, respectively. Evaluation kits for each device are also available.

RAA306012 product page

RAJ306101 product page

RAJ306102 product page 

Renesas Electronics


Ethernet switch has on-chip neural network

Fri, 12/08/2023 - 23:19

Trident 5-X12, a software-programmable Ethernet switch from Broadcom, packs an on-chip neural-network inference engine—a first according to the company. NetGNT, which stands for networking general-purpose neural network traffic analyzer, improves network efficiency and performance.

NetGNT works in parallel to augment the standard packet-processing pipeline. The standard pipeline is one-packet/one-path. In other words, it looks at one packet as it takes a specific path through the chip’s ports and buffers. NetGNT, in contrast, is an ML inference engine and can be trained to look for different types of traffic patterns that span the entire chip. The enhanced telemetry capabilities of Trident 5 allow deeper real-time insights into bandwidth, which can then be used to train NetGNT.

The Trident 5-X12 provides 16 terabits/s of bandwidth, double that of its predecessor, the Trident 4-X9. Built using a 5-nm manufacturing process, the chip consumes 25% less power per 400G port than the Trident 4-X9. It also adds support for 800G ports, allowing direct connection to Broadcom’s Tomahawk 5, which is used as the spine/fabric in compute and AI/ML data centers.

The Trident 5-X12 BCM78800 Ethernet switch is now shipping to qualified customers.

Trident 5-X12 product page

Broadcom


5G OTA test system gains CTIA approval

Fri, 12/08/2023 - 23:19

R&S announced that its TS8991 over-the-air (OTA) test system now provides CTIA-compliant testing of 5G A-GNSS antenna performance. The company believes this establishes the TS8991 as the first 5G FR1 A-GNSS OTA test platform to be approved by CTIA Certification. In addition to 5G A-GNSS, the TS8991 meets CTIA requirements for 2G/3G A-GPS and 4G LTE A-GNSS.

The single-source turnkey platform comprises the CMX500 5G one-box signaling tester, the SMBV100B satellite simulator, and an anechoic wireless performance test chamber. All components are controlled by system software that enables fully automated OTA measurements in compliance with CTIA Certification test plans.

The CMX500 enables the TS8991 antenna test system to provide signaling to the device-under-test. Supporting the latest 3GPP specifications, the CMX500 measures receive sensitivity and transmit power. Along with the CMX500’s location-based services, the SMBV100B satellite simulator covers all 3GPP-based Assisted GNSS technologies.

Highly integrated and upgradeable, the TS8991 antenna performance test system comes in a variety of configurations and test chamber sizes.

TS8991 product page

Rohde & Schwarz 


Automated IC layout tool integrated into foundry PDK

Fri, 12/08/2023 - 13:06

The Calibre DesignEnhancer software, which Siemens EDA unveiled at the Design Automation Conference (DAC) in July 2023, has been incorporated into a process design kit (PDK) of Samsung Foundry. The software provides multiple use models that perform automatic layout optimizations to improve power robustness and reduce design cycles during IC design implementation.

IC design engineers have traditionally relied on third-party place-and-route tools to incorporate design for manufacturing (DFM) optimizations. However, that often requires multiple time-consuming runs before converging on a “DRC-clean” solution.

Source: Siemens EDA

Calibre DesignEnhancer—which supports automated layout optimization during the IC design and implementation stages—helps IC designers deliver “DRC-clean” designs to tape out faster while improving design manufacturability and circuit reliability. For example, its via insertion use model automatically inserts additional Calibre nmDRC-correct vias into layouts to reduce resistance, which helps minimize both voltage (IR) drop and electrostatic discharge (ESD) events in IC designs.

Siemens EDA claims that its layout tool enables design teams to significantly shorten turnaround time and reduce EM/IR issues while preparing a layout for physical verification. Calibre DesignEnhancer optimizes IC designs by reducing or even eliminating IR drop and electromigration (EM) issues quickly and accurately.

It does that by automatically implementing ‘Calibre correct-by-construction’ design layout modifications much earlier in the IC design and verification process. Luigi Rolandi, senior director for R&D at STMicroelectronics, acknowledged that Calibre DesignEnhancer proved instrumental in addressing and resolving out of specification resistance and IR drop issues.

JoongWon Jeon, a distinguished engineer on the foundry technology development team at Samsung Electronics, also noted that Calibre DesignEnhancer has provided mutual customers with a step-function increase in via insertion efficiency. “That leads to faster turnaround time compared to classic approaches to the problem.” For instance, while traditional place-and-route processes often require design teams to conduct multiple runs, Calibre DesignEnhancer in the Samsung PDK helps IC designers achieve substantial first-pass gains.

Calibre DesignEnhancer integrates with all major design and implementation environments using industry interface standards and comes with kits for all leading foundries, supporting designs from 130-nm to 2-nm process nodes.


Exploring software-defined radio (without the annoying RF) – Part 1

Thu, 12/07/2023 - 17:16

I needed to come up with a communications device to transmit a handful of bytes, every hour, from a small off-grid solar system to my shop about 150 feet away. The first thought was Wi-Fi, but I already have many dozens of devices on my Wi-Fi and keeping them all working is like spinning plates (for those under 50, see this video). I wanted a different solution and thought of using ultrasonic transducers configured in a software-defined radio (SDR) type framework as the transceiver. While working on the design I realized that it would be very useful for anyone who wants to explore, or teach about, the physical (PHY) layer of the SDR. (The PHY layer of the firmware takes the received signal and demodulates it, slices it, checks it, and sends the data packet off for further processing on the next layer of the OSI model. On the transmit end, it builds a modulated signal based on the data to be sent, the baud rate, and the modulation scheme selected, then sends the modulated signal to a transmitter.)
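To make that PHY description a bit more concrete, here is a minimal, hypothetical receive-side sketch in Python. The SDU-X firmware itself runs on the Arduino Nano and is covered in Part 2; the modulation (OOK), sample rate, baud rate, and CRC below are illustrative assumptions, not the project's actual choices.

```python
# Minimal, hypothetical PHY receive pipeline for an on-off-keyed (OOK) burst.
# The sample rate, baud rate, and CRC are illustrative assumptions only; they
# are not taken from the SDU-X firmware described in this article.
import numpy as np

FS = 200_000          # assumed ADC sample rate, Hz
BAUD = 1_000          # assumed symbol rate, bits/s
SPB = FS // BAUD      # samples per bit

def demodulate(samples: np.ndarray) -> np.ndarray:
    """Crude envelope detection of a 40 kHz OOK burst, averaged per bit."""
    env = np.abs(samples - samples.mean())
    nbits = len(env) // SPB
    return env[: nbits * SPB].reshape(nbits, SPB).mean(axis=1)

def slice_bits(bit_energy: np.ndarray) -> np.ndarray:
    """Slice soft bit energies into hard 0/1 decisions at the midpoint."""
    threshold = (bit_energy.max() + bit_energy.min()) / 2
    return (bit_energy > threshold).astype(np.uint8)

def crc8(bits: np.ndarray, poly: int = 0x07) -> int:
    """Toy bitwise CRC-8; real firmware would define its own integrity check."""
    crc = 0
    for b in bits:
        crc ^= int(b) << 7
        crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def receive(samples: np.ndarray):
    """Demodulate, slice, check, then hand the payload to the next OSI layer."""
    bits = slice_bits(demodulate(samples))
    payload, check = bits[:-8], bits[-8:]
    ok = crc8(payload) == int("".join(map(str, check)), 2)
    return payload if ok else None
```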

Wow the engineering world with your unique design: Design Ideas Submission Guide

Why use ultrasonics

As you will see, by using ultrasonics to send and receive the data, we eliminate the need for expensive RF equipment such as high-speed oscilloscopes, spectrum analyzers, and vector network analyzers, not to mention the cost of an SDR transceiver. All signals in this system can be viewed using an inexpensive oscilloscope—a basic 10 MHz bandwidth oscilloscope will work for viewing all signals. The ultrasonic system also eliminates the need to do any FPGA programming, which can be problematic for some embedded firmware engineers. Along with these advantages we also eliminate the pesky issues of RF systems (my apologies to RF engineers) such as parasitics, tricky board routing, and antenna matching, not to mention working with S-parameters and Smith charts.

SDR firmware development

The last reason for exploring an SDR on this ultrasonic system is that the developed firmware is almost exactly the same as the firmware on a high-frequency RF SDR transceiver. So, any knowledge gained will translate to these larger SDR systems. For some years I wrote firmware for an SDR system that transmitted and received in the 900 MHz ISM band. The system was designed to receive 64 channels at 100k baud simultaneously, while also transmitting 16 channels at 100k baud. Although it ran on a system with 3 ARM processors and 160 GFLOPs of DSP, the PHY level code is very much the same as the code used in this ultrasonic system. If you can develop firmware on this ultrasonic system, you can develop SDR firmware on larger RF systems.

SDR systems

As you may know, SDR is one of the hot areas in electrical engineering. It is currently being designed into and used in radar, military communications, cell phone services, satellites, and even car infotainment systems. From a high-level view, it strips away much of the RF design in favor of digitized reception via analog-to-digital conversion, digital signal processing, and digital-to-analog conversion as close to the antenna as possible. There are a number of different architectures of SDR that vary in the amount of RF that is replaced by digital processing. Some of these architectures are superheterodyne, direct conversion, and direct sampling. The holy grail of SDR is direct sampling, as shown in Figure 1.

Figure 1 Direct sampling architecture for an SDR, where the RF is digitized very early in the receive chain and there is no upconversion in the transmit chain, requiring the ADCs and DACs to have very high sampling rates.

The essence of this is that the received RF is almost immediately digitized going in—no LO mixing, no down conversion. Also, the transmit side does no upconversion in frequency. I say this is the holy grail because, in high frequency RF, this is very hard to do due to the speeds involved in the analog-to-digital converters (ADCs) and digital-to-analog converters (DACs), and therefore the data rate that needs to be processed in the FPGA and/or the attached processor.

But direct sampling is doable when dealing with ultrasonic signals; in fact, we’ll do this using a 16 MHz Arduino Nano and no FPGA.
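To show what direct sampling buys us in software terms, here is a small illustrative sketch of digitizing the carrier as-is and doing the downconversion purely numerically. The 200 kHz sample rate and 2 kHz channel bandwidth are assumptions for illustration, not the SDU-X firmware's actual numbers.

```python
# What "direct sampling" means in software: the 40 kHz carrier is digitized
# as-is, and the mixing/downconversion happens numerically afterwards.
# The 200 kHz sample rate and 2 kHz channel bandwidth are assumptions for
# illustration, not the SDU-X firmware's actual numbers.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 200_000      # assumed ADC sample rate
FC = 40_000       # ultrasonic carrier

def direct_sample_demod(adc_record: np.ndarray) -> np.ndarray:
    """Digitally mix a raw ADC record to baseband and low-pass filter it."""
    t = np.arange(len(adc_record)) / FS
    lo = np.exp(-2j * np.pi * FC * t)                # software "local oscillator"
    mixed = adc_record * lo                          # complex mix to 0 Hz
    sos = butter(4, 2_000, fs=FS, output="sos")      # keep ~2 kHz of modulation
    return sosfiltfilt(sos, mixed.real) + 1j * sosfiltfilt(sos, mixed.imag)

# A clean 40 kHz tone comes back as a (nearly) constant I/Q vector.
t = np.arange(0, 0.01, 1 / FS)
iq = direct_sample_demod(np.cos(2 * np.pi * FC * t))
print(abs(iq[len(iq) // 2]))                         # ~0.5 (half the tone amplitude)
```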

The SDU-X system

Figure 2 shows the general design of a system I am calling the Software Designed Ultrasonic Transceiver or the SDU-X. The figure shows a minimum number of parts: a couple of ultrasonic transducers, a receiver amplifier/bandpass filter, an Arduino Nano, a DAC, and a transmit amplifier. Not shown is a handful of things like power supplies and some LEDs.

Figure 2 The SDU-X design with a minimum number of parts excluding power supplies and LEDs.

To assist in signal reception, there is a 3D printable parabolic dish for the receiver transducer. This dish mounts on a 3D printable tower that holds the transmitter transducer and has a set of sighting holes for aiming the dish. The dish gives about a 9 dB improvement in reception.

Current code allows for selection of different modulation types. Some are fully implemented for sending and receiving. Some are only implemented as transmitters and the receiver for that modulation type is something the user can experiment with creating. It is intended as something like a student exercise.

An SDU-X system, as seen in Figure 3, consists of two assemblies: one is called the Requester and the other the Responder. The hardware (PCBA) is the same for both, but they have different compiles of the firmware. When running, the Requester will send out a transmission asking for data from the Responder or for an action to be executed by the Responder. In the current firmware, the requested data can be something like the Responder’s SNR measurements, a value from an onboard analog input, or digital I/O status. This data is transmitted back to the Requester, which can then print it out on a serial port (available in the Arduino IDE). Requested actions can, for example, include having the Responder blink its LEDs or set some digital I/O high or low.

Figure 3 SDU-X system architecture with a Requester and Responder assembly connected to two 3D-printable parabolic dish antennas for the receiver transducer.

The SDU-X schematic

Let’s look at the schematic in Figure 4.

 Figure 4 Schematic of the SDU-X system with the receiver transducer (upper left) and transmitter transducer (upper right).

On the upper left side, we see the receiver transducer connected to an op-amp (U1A) acting as a band-pass filter with an adjustable gain of up to 100 (40 dB). Typically, this gain is just set to the maximum, unless the units are a few feet apart. The -3 dB roll-off frequencies are at about 7 kHz and 90 kHz. The transducer itself is also a good filter centered at 40 kHz. From my testing the 40 kHz acoustic band seems to be pretty quiet, so filtering of the input is not very critical.

Following the first stage is an op-amp (U1B) configured as a Sallen-Key band-pass filter with a gain around 10 (20 dB). When combined with the first stage, the receiver amplifier gives a total gain of 1000 (60 dB). The -3 dB roll-off frequencies of the combined two stages are at 38 kHz and 42 kHz, cleaning up the signal even further. The 0 to +5 V output of this amplifier is then run to an analog input pin of the Arduino Nano which is internally connected to the Nano’s 10-bit ADC.

Moving to the upper right of the schematic, we can see a pair of op-amps. The upper one (IC U6A) is configured as a non-inverting amplifier with a high-pass response rolling off -3 dB at around 1.5 kHz. The lower op-amp (IC U6B) circuit is configured as an inverting amplifier with about the same high-pass response. When the outputs of these circuits are tied to the transmitter transducer they act as a differential driver, allowing the transducer to see a signal of approximately +/-12 V. Low-pass output filtering relies on the filter characteristics of the transmit transducer, which works very well. The transducer response rolls off -3 dB at roughly +/-1 kHz. Note that the two resistors (R37 and R38) on the outputs of the op-amps are to aid in stability, if needed, in driving the highly capacitive transducer. The input to these two op-amps comes from an 8-bit DAC (IC U3) which is driven directly from the Nano. Note that the two dual op-amps are TL082s, selected primarily for their slew rate.

That’s essentially the complete direct-sampling system’s hardware.

A few more parts round out the schematic. The power supply generates three voltages: +8 VDC to power the Nano, +5 VDC to power the receiver op-amps and LEDs, and +15 VDC to power the transmit circuit’s op-amps. There are also a couple of red/green bi-color LEDs on the board, along with a few green LEDs for power indicators. The board runs from 10 to 13 VDC at 100 mA minimum; a simple AC adapter works fine.

There are spare digital I/O and analog inputs and a small proto area for experimentation and new features. Also, on the PCB are several strategic test points to monitor analog in and out as well as for use in monitoring streams of internal data such as received samples, correlation shape, flagging sync times, etc. Some test points are appropriately spaced and sized to allow for use with an oscilloscope probe ground spring for convenient hands-free probing.

Exploring software-defined radio (without the annoying RF) – Part 2

In the next installment, we will look at the firmware in the SDU- X. In the meantime, the PCB, schematic, 3D files for the parabolic towers, design notes, and firmware can be found at: https://www.thingiverse.com/thing:6268613

Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.

Phoenix Bonicatto is a freelance writer.


Control chip preps for cryo temperatures in quantum computers

Thu, 12/07/2023 - 16:37

A consortium funded by Innovate UK and led by sureCore is implementing a cryogenic control chip on the GlobalFoundries 22FDX process, and Agile Analog is working closely with sureCore to implement and verify this cryogenic test ASIC for quantum computers. That will establish the viability of cryogenic ASICs aiming to migrate control electronics into the cryostat in order to be closer to qubits.

According to Paul Wells, sureCore’s CEO, quantum computing technology has been solidly proven by now in terms of the use of qubits. “There are various technologies to implement qubits, but they need to go to as low as 77 K (-196°C) down to the near absolute zero temperatures.” In other words, quantum computing outfits must prove that their qubits work fine at these temperatures.

See full article at Planet Analog, EDN’s sister publication


Using oscilloscope filters for better measurements

Wed, 12/06/2023 - 16:22

Most oscilloscopes are fitted with filters that help improve measurements by reducing overall noise in an acquisition, improving the signal-to-noise ratio (SNR). Even the most basic oscilloscopes include an analog 20 MHz low pass filter in the input channel signal path. High-end oscilloscopes with a GHz or better bandwidth generally offer multiple input low pass filters, some analog and some digital. Noise or enhanced resolution (ERES) digital filters are digital low pass filters used to increase the amplitude resolution of the oscilloscope, trading bandwidth for improved SNR. Beyond these input band-limiting filters, oscilloscopes often include optional digital filter software providing more general filter types for more complex filtering needs.

This article will deal with how to use all of these filtering tools.

 Input band limit and noise filters

Oscilloscopes with bandwidths greater than 100 MHz generally include a 20 MHz band limit filter intended to reduce the oscilloscope’s bandwidth to lessen broadband noise in low frequency measurements. This filter is usually enabled in the input channel setup similar to the one shown in Figure 1.

Figure 1 The input channel setup for an oscilloscope with the bandwidth limit filter selection highlighted in yellow and the noise filter highlighted in orange. Source: Arthur Pini

The oscilloscope in this example has a full bandwidth of 4 GHz and input band limit filters of 20 and 200 MHz. Oscilloscopes with higher bandwidths usually offer more band limit filter selections. It also has the setup for a noise filter with user selected bandwidths.

These filters improve SNR by reducing the measurement bandwidth. The amount of improvement depends on the frequency distribution of the noise signals. If the noise source were spectrally flat Gaussian white noise, the SNR improvement would be proportional to the square root of the bandwidth reduction. So, reducing the bandwidth by a factor of four would cut the noise level in half for a spectrally flat noise signal.
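A quick synthetic-noise check of that square-root rule is easy to run; the 100 MS/s rate and the 20 MHz and 5 MHz cutoffs below are illustrative choices, not tied to any particular oscilloscope.

```python
# Synthetic check of the square-root rule: for spectrally flat (white) noise,
# cutting the bandwidth by a factor of four should roughly halve the RMS noise.
# The 100 MS/s rate and the 20 MHz / 5 MHz cutoffs are illustrative values.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 100e6                                   # assumed sample rate, 100 MS/s
noise = np.random.default_rng(0).normal(0.0, 1.0, 1_000_000)

def rms_after_lowpass(x, cutoff_hz):
    sos = butter(4, cutoff_hz, fs=fs, output="sos")
    return np.std(sosfiltfilt(sos, x))

rms_20m = rms_after_lowpass(noise, 20e6)
rms_5m = rms_after_lowpass(noise, 5e6)
print(f"20 MHz: {rms_20m:.3f}   5 MHz: {rms_5m:.3f}   ratio: {rms_20m / rms_5m:.2f}")
# The ratio comes out close to 2, i.e., sqrt(20 MHz / 5 MHz).
```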

Using the band limit filter

As an example of how the oscilloscope’s filters can be applied, let’s look at the ripple voltage on the 5-volt bus of a circuit card to measure ripple due to circuit loading effects and see the effect of the band limit low pass filters on the measurement. We’ll evaluate a 20 MHz band limit filter (Figure 2).

Figure 2 The measurement made at full bandwidth is shown on top. The lower trace was acquired using the 20 MHz band limit filter with a reduction in high frequency noise. Source: Arthur Pini

The upper trace was acquired at full bandwidth using AC coupling and a high-impedance probe with a ground spring to reduce stray pickup on wire ground leads. The low frequency pulse-like waveform is the ripple caused by circuit loading as various devices on the PC board turn on and off. This desired signal is obscured by higher frequency noise. The waveform acquired using the 20 MHz filter shows a significant reduction in the high frequency noise, but it has not eliminated it. The 20 MHz filter has had little effect on the low frequency ripple components, which have a very low frequency below 20 MHz. Comparing the peak-to-peak amplitudes in the segments between 2 and 4 ms, where the high-frequency noise is the main component, the filter has reduced the peak-to-peak noise from 50.4 down to 13.1 mV. The noise that remains has frequency components that are lower than 20 MHz.

Zoom and the fast Fourier transform (FFT) provide a closer look at the full-bandwidth acquisition and reveal the frequency distribution of the noise. The signal acquired at full bandwidth shows why the 20 MHz filter left a fair amount of noise on the signal (Figure 3).

Figure 3 The zoom expansion and the FFT of the noisy waveform show additional details of the noise components.

 Expanding the time signal horizontally (second trace from the top), the high frequency noise appears as narrow capacitively coupled impulses. The fast edges on these noise components have high frequency components that will be spectrally spread. Looking at the FFT (third trace from the top), we see a broad spectrum caused by these noise elements. Also, the spectral peaks representing periodic signal components have the highest density below 15 MHz. Voltage variations due to circuit loading are rectangular pulse-like low frequency variations. In the FFT spectrum, these appear at the extreme left, below 50 kHz. The bottom trace is the FFT of the low frequency components from 0 to about 1 MHz. The 20 MHz input band limit filter attenuates those noise components above 20 MHz but leaves the other spectral components unattenuated. 

Reducing the noise further requires reducing the measurement bandwidth further. That can be accomplished using the noise filter, allowing the user to select one of six possible reduced bandwidths.

Enhance resolution noise filters

The noise filters are also known as ERES filters because they increase the effective number of bits of resolution of the oscilloscope. These filters are also available via the input channel setup, as is seen in Figure 1, and also as a math function in the oscilloscope used in this example. The ERES filter processes ‘n’ samples at a time from the acquired input and weights them to produce a finite impulse response (FIR) filter with a Gaussian low pass frequency response. The Gaussian low pass filter has no side lobes in the frequency domain, and it never causes overshoot, undershoot, or ringing in the time domain, maintaining signal integrity. The ERES filter uses any of six sample lengths: 2, 5, 11, 25, 52, and 106 taps to achieve resolution enhancement of 0.5 to 3.0 bits in steps of half a bit each. The low pass filter’s cutoff frequencies depend on the acquisition sampling rate. Table 1 shows the noise filter characteristics of all six steps for the sample rate of 100 mega-samples per second (MS/s) which was used during the acquisition.

Number of Bits    Number of Taps    Bandwidth (MHz)
0.5               2                 25.00
1.0               5                 12.05
1.5               11                6.05
2.0               25                2.90
2.5               52                1.45
3.0               106               0.800

Table 1 The number of taps and resulting bandwidth of the noise filter for the six possible filter bandwidth limit settings and a sample rate of 100 MS/s.

The ERES noise filter provides a range of low pass cutoff frequencies that decrease in inverse proportion to the number of taps in the filter. Changing the acquisition sample rate will scale the cutoff frequencies proportionally, providing still greater choice of cutoff frequency. Selecting the 3-bit enhancement uses 106 samples to achieve an 800 kHz bandwidth. The result of applying the 800 kHz low pass filter to the acquired signal is shown in Figure 4.

Figure 4 The 800 kHz low pass noise filter eliminates most of the high frequency noise, allowing a detailed study of the lower frequency ripple components due to circuit loading. Source: Arthur Pini

 The 800 kHz selection of the noise filter has removed much of the high frequency noise, and the voltage variations due to the circuit loading are more clearly visible.
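To get a feel for how an N-tap, Gaussian-weighted FIR trades bandwidth for resolution, the short sketch below builds a generic Gaussian filter for each tap count in Table 1 and reports its -3 dB point and approximate resolution gain. The scope's actual tap weights are proprietary, so treat the printed numbers as ballpark illustrations rather than a reproduction of Table 1.

```python
# A generic Gaussian-weighted FIR, to illustrate how an ERES-style filter trades
# bandwidth for resolution. The scope's actual tap weights are proprietary, so
# the printed numbers are ballpark illustrations, not a reproduction of Table 1.
import numpy as np
from scipy.signal import freqz, windows

fs = 100e6                                    # 100 MS/s acquisition rate

def gaussian_fir(ntaps, std_fraction=0.15):
    taps = windows.gaussian(ntaps, std=std_fraction * ntaps)
    return taps / taps.sum()                  # normalize for unity DC gain

def f_minus3db(taps):
    w, h = freqz(taps, worN=65536, fs=fs)
    return w[np.argmax(np.abs(h) < 1 / np.sqrt(2))]   # first crossing of -3 dB

for ntaps in (2, 5, 11, 25, 52, 106):
    taps = gaussian_fir(ntaps)
    noise_gain = np.sqrt(np.sum(taps ** 2))   # RMS white-noise reduction factor
    extra_bits = -np.log2(noise_gain)         # ~1 extra bit per 6 dB of reduction
    print(f"{ntaps:3d} taps: -3 dB at {f_minus3db(taps) / 1e6:6.2f} MHz, "
          f"~{extra_bits:.1f} extra bits")
```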

Low pass filters can attenuate or eliminate high frequency noise. In some cases, you may want to be able to separate the low and high frequency components and study them independently. That requires the use of both a high and a low pass filter. This oscilloscope includes an optional digital filter package that offers various filter types and a broader range of cutoff frequencies than the standard noise filter.

General purpose digital filters

The digital filter package option broadens the offering of filters. It can create four types of filters: low pass, high pass, band pass, and band stop, as shown in Figure 5.

Figure 5 Examples of the frequency responses of low pass (yellow trace), high pass (red trace), band pass (blue trace), and band stop (green trace) filter types. Source: Arthur Pini

These filters can be created using FIR or infinite impulse response (IIR) topologies. IIR filters allow users to select digital filter types identical in response to well-known analog filters, including Butterworth, Bessel, Chebyshev, or Inverse Chebyshev; examples are shown in Figure 6.

Figure 6 Comparing the amplitude frequency responses of Bessel (red trace), Butterworth (yellow trace), Chebyshev (blue trace), and inverse Chebyshev (green trace) IIR low pass filters. Source: Arthur Pini

These are the most commonly used analog filter types. The Butterworth or ‘maximally flat’ filter has the flattest amplitude response of all the available filters. The Bessel filter is noted for its uniform phase response as a function of frequency. If you need the fastest roll off, the Chebyshev and inverse Chebyshev filters have the narrowest transition region for a given number of stages. On the negative side, the Chebyshev filter has amplitude ripple in the passband, while the inverse Chebyshev filter exhibits a flat passband response but has ripple in the stop band. The filter package provides control of the cutoff frequencies, the filter order, the transition width, and the stop band attenuation of each filter. The filter option package also allows users to use a custom-designed filter.

Two instances of a Butterworth filter are applied to the acquired signal to separate the power rail ripple’s low and high frequency components. The high frequency components are removed by a sixth-order Butterworth low pass filter with a cutoff frequency of 50 kHz. The low frequency components are removed by a sixth-order Butterworth high pass filter with the same 50 kHz cutoff frequency. The results are shown in Figure 7.

Figure 7 Using a low pass and a high pass filter to separate the low and high frequency components of the ripple. Source: Arthur Pini

The 50 kHz cutoff frequency was selected to be below the nominal 61.7 kHz switching frequency of the power source so that the filter would reasonably attenuate signal components at that frequency due to the power switching. The acquired signal is shown in the upper left grid. The extracted low frequency components appear below it in the left center grid. The bottom left grid shows the FFT of the low pass filtered signal. The cursor marks the 61.7 kHz switching frequency, which is attenuated more than 30 dB below the low frequency maxima.

The high pass filter output appears in the upper right grid. Notice that the low frequency ripple due to circuit loading is gone. The zoom of that waveform, shown in the center right grid, shows the familiar noise spikes but without the load related ripple. The bottom right grid shows the FFT of the high pass filtered ripple signal with the cursor marking 61.7 kHz. Note that the spectral components below the high pass cutoff are attenuated.

With the high and low frequency ripple components separated, it is possible to measure them independently. For example, the measurement parameter P1 is set to measure the amplitude of the load related ripple. The ripple amplitude is 16.33 mV. The high pass filtered waveform can also be measured or studied to reveal the sources and effects of the high frequency ripple components.
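The same kind of band-splitting can be reproduced offline with standard DSP tools. The sketch below applies sixth-order Butterworth low pass and high pass filters at 50 kHz, as in Figure 7, but to a made-up stand-in signal rather than the article's actual acquisition.

```python
# Offline version of the Figure 7 band split: sixth-order Butterworth low-pass
# and high-pass filters, both at 50 kHz. The test record below is made up; only
# the filter settings follow the article.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 10e6                                              # assumed sample rate
t = np.arange(0, 2e-3, 1 / fs)
rng = np.random.default_rng(1)

# Made-up stand-in for the 5 V rail: slow load ripple + 61.7 kHz switching
# residue + broadband noise.
rail = (0.016 * (np.sin(2 * np.pi * 1e3 * t) > 0)      # ~16 mV load steps
        + 0.004 * np.sin(2 * np.pi * 61.7e3 * t)       # switching component
        + rng.normal(0.0, 0.003, t.size))              # wideband noise

lp = sosfiltfilt(butter(6, 50e3, "lowpass", fs=fs, output="sos"), rail)
hp = sosfiltfilt(butter(6, 50e3, "highpass", fs=fs, output="sos"), rail)

print(f"low-pass output p-p (load ripple):        {np.ptp(lp) * 1e3:.1f} mV")
print(f"high-pass output p-p (switching + noise): {np.ptp(hp) * 1e3:.1f} mV")
```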

More accurate measurements with filters

Oscilloscope filters provide users with several options to improve SNR to make more accurate measurements. They can reduce high frequency noise components or selectively separate high and low frequency noise mechanisms, allowing measurements of the selected elements. Spectrum analysis tools, like the FFT, help determine how to set the filter parameters to obtain the most accurate results.

Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.


Single supply function generator outputs buffered squares, triangles, and sines

Tue, 12/05/2023 - 17:00

The traditional analog function generator with its customary triple-threat ensemble of square, triangle, and sine waveform outputs is a familiar tool on electronics lab benches. It’s also a classical design exercise. Generally, the square and triangle are easy, so the problem is how to generate an acceptably accurate sine wave.  This usually involves some method of conversion of the triangle. Figure 1’s generator circuit employs the popular integrator solution, but with a useful twist.

Figure 1 Quad high-speed RRIO TLV9064 op-amp performs as comparator, integrator, and clipper while sipping single-digit milliwatts from a single, flexible power source.

A1 and A2 combine to form a conventional multivibrator generating symmetrical (around Vdd/2) squares and triangles. The peak-to-peak amplitude of the latter is fixed by R5 and R6 at 0.909Vdd, and the frequency of both is settable over two decades (and perhaps a bit more) by R1C1.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Conversion of A2’s triangles into a (more or less) serviceable approximation of a sine wave could now be, and popularly would be, accomplished by simple integration of A2’s output. But the downside of unadorned integration is found in that ominous phrase “more or less”. Unfortunately, the resulting approximation, while definitely looking a lot like a sinusoid, would quantitatively differ from a true sine function by the +/-3% of full-scale, mainly third harmonic, error shown in Figure 2.

Figure 2 Simple integration of triangle would result in +/-3%, 3rd harmonic sine error.

 But maybe we can do better.

A little experimentation and simulation revealed that simple truncation of the triangle at +/- 2/3rds of full-scale (Vpp = 0.67Vdd) prior to integration yields a surprising 3x improvement in sinewave accuracy, shown in Figure 3’s plot of the residue error function.

Figure 3 Imposing trapezoidal truncation of triangle at +/- 67% prior to integration reduces peak sine error to less than +/-1% of mainly 5th harmonic.

I say “simple” because we already have an extra amplifier (A3) available. So, it only costs two extra resistors (R7 and R8) to generate the trapezoidal waveform clipped at 67%. This does a better job of approximating the dV/dT of a true sine, reducing error to the +/- 1%, 5th harmonic squiggle shown in Figure 3. It’s interesting to note that 1% sine accuracy is similar to the performance of the famous Intersil ICL8038 function generator chip. But that was achieved only after in-circuit trimming. The circuit in Figure 1 needs none. Not to brag.
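The improvement is easy to sanity-check numerically. The short script below integrates a raw triangle and a triangle clipped at +/- 2/3 of full scale, compares each result to a least-squares-fitted sine, and prints the peak deviation. How the percentages are normalized is my assumption, so treat this as a check of the trend rather than an exact reproduction of Figures 2 and 3.

```python
# Numerical check of the trend: integrate a raw triangle and a triangle clipped
# at +/-2/3 of full scale, then compare each result to its least-squares-fitted
# sine. The normalization (fit amplitude) is my assumption; the article's own
# simulation may normalize the error differently.
import numpy as np

N = 100_000
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
triangle = (2.0 / np.pi) * np.arcsin(np.sin(theta))       # +/-1 triangle wave

def peak_sine_error_pct(waveform):
    integ = np.cumsum(waveform) * (2.0 * np.pi / N)        # integrate one period
    integ -= integ.mean()                                  # remove DC offset
    basis = np.column_stack([np.sin(theta), np.cos(theta)])
    coef, *_ = np.linalg.lstsq(basis, integ, rcond=None)   # best-fit sine
    residue = integ - basis @ coef
    return 100.0 * np.max(np.abs(residue)) / np.hypot(*coef)

print(f"plain triangle:      {peak_sine_error_pct(triangle):.1f} %")
print(f"clipped at +/- 2/3:  {peak_sine_error_pct(np.clip(triangle, -2/3, 2/3)):.1f} %")
```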

Integration now occurs in A4, with the DC restoration R9C3 network providing zero stability, and R2 controlling sine amplitude. 

This last is an important feature: because the sine waveshape results from an integration, its amplitude is inherently inversely proportional to frequency. Since frequency is unaffected by the sine amplitude adjustment but not vice-versa, the most efficient way to set these two parameters is to adjust frequency first, then set sine amplitude as required. This avoids a potentially time-wasting (and frustrating) iteration.

A final comment on Figure 1’s circuit: The TLV9064 is particularly suited for the A1 comparator and the A3 clipper because of its remarkably fast 200 ns overload recovery time. This is unusual performance for an op-amp, particularly such a low power one as the 9064.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.


LPDDR flash: A memory optimized for automotive systems

Tue, 12/05/2023 - 14:07

Next-generation automotive systems are advancing beyond the limits of currently available technologies. The addition of advanced driver assistance systems (ADAS) and other advanced features requires greater processing power and increased connectivity throughout the vehicle. On top of this, automotive OEMs are expanding the user experience (UX) to introduce innovations that improve convenience, efficiency, and safety for drivers and passengers.

This combination of new and advanced features is straining the capacity of traditional automotive E/E architectures (see Figure 1). To address this need, OEMs are consolidating more functions into fewer systems by taking a domain/zonal architectural approach. And these systems often need substantially more non-volatile memory for code storage than is available as embedded flash integrated into processors.

Figure 1 A domain/zonal architecture approach consolidates many safety-critical functions and must be able to process huge amounts of data in real-time as well as store a significantly larger code image. Source: Infineon

Furthermore, many of these consolidated systems are safety critical, and must eliminate both performance and memory access bottlenecks to meet real-time performance deadlines. Finally, automotive systems need to be able to operate in a wide range of harsh environments, including extreme temperatures.

More code storage for software-defined vehicles

The evolution and consolidation in automotive architectures is driving the industry toward software-defined vehicles. The central car computer will connect to the cloud but also needs to have enough centralized processing to be completely autonomous from the cloud for critical driving operations. From a functional standpoint, car functionality will shift to be more service-oriented and maintain a higher level of safety and security.

Part of the challenge OEMs face is the need for more code storage because all this added functionality and connectivity in turn require more complex software. Software is stored in non-volatile memory so it can be updated in the field. Ideally, when there is enough embedded flash on a processor to hold all the software, the processor executes code directly from this on-chip flash in a process known as execute-in-place (XiP). This provides the best execution performance while maintaining system flexibility.

However, when there is not enough embedded on-chip flash, software must be stored in an external flash. But flash bus latency and limited throughput prevent the system-on-chip (SoC) from executing in place from external flash at full speed. For this reason, the software is copied from flash to faster DRAM to achieve the necessary performance, an approach known as ‘shadow code’.

But a shadow code approach comes at the cost of additional DRAM memory—plus associated board space—to store the copy of the software. In addition, system start takes longer, which negatively impacts user experience, or worse, operational safety.

For next-generation automotive systems, neither approach is sufficient. Specifically, to meet the real-time requirements of a consolidated domain or zone, a higher performance, highly integrated SoC processor is required. To achieve such a level of integration, these SoCs must utilize smaller manufacturing process nodes.

This enables SoCs to provide all the integrated capabilities with a single chip in a cost-effective manner. However, as manufacturing process nodes shrink to 22 nm and below, it becomes expensive to integrate embedded flash in the densities required. Thus, an alternative to embedded flash for XiP is needed.

Overcoming external memory bottlenecks

To be able to take full advantage of high-performance SoCs built using smaller process nodes, engineers need to once again turn to external NOR flash. Figure 2a shows one approach to using external flash where the SoC can XiP from an embedded flash while an external NOR flash is used as a memory extension.

Figure 2 A traditional execute from embedded flash architecture uses an external NOR flash as a memory expansion to hold software that is loaded into embedded flash for XiP. This approach is limited by embedded flash capacity as well as xSPI throughput (left). A next-generation execute from external flash architecture combines the flexibility of an embedded flash approach with the performance of DRAM (right). Source: Infineon

The problem with the former approach is the limited bandwidth of the xSPI bus used to interface between the embedded flash and external NOR flash. Standard xSPI throughput is 400 MB/s, and even at 16-bits, xSPI at 200 MHz can only achieve 800 MB/s maximum throughput. This falls substantially short of what is required to support real-time code execution.

In addition, xSPI uses a multiplexed command/address/data bus that negatively impacts effective throughput because code reads cannot be pipelined efficiently. Furthermore, LVCMOS leaves little room for advancement beyond 200 MHz.

Figure 2b shows an alternative approach that directly executes code from external flash. This approach eliminates the need for embedded flash by utilizing a high-performance low power double data rate (LPDDR) interface that enables XiP from external flash.

LPDDR: Proven interface finds a new use case

LPDDR is a memory interface commonly used with DRAM for high-performance data access. The LPDDR interface has been adapted to work with NOR flash and optimized for efficient XiP from external NOR flash. With a throughput potential of many gigabytes per second, LPDDR flash provides the performance of DRAM with the non-volatile reliability and flexibility of embedded NOR flash. It’s the best of both worlds.

One of the factors that makes the LPDDR NOR flash interface a compelling technology is that its physical layer is completely compatible with the LPDDR standard interface. This compatibility reduces the risk associated with adopting a new interface as the signal integrity of LPDDR has already been proven in the market in myriad real-world applications and operating environments.

At a high level, the LPDDR NOR flash interface is focused on code storage and real-time XiP for applications where code is written once and read many, many times. Thus, the interface is optimized to increase read performance over write efficiency. Again, the physical layer is untouched, so optimizations have been implemented in the controller protocol. These optimizations are tailored to meet the requirements of high-performance applications like autonomous vehicles.

The LPDDR advantage

The benefits of LPDDR NOR flash over SDRAM and xSPI-based NOR flash architectures are substantial. Take the example of the LPDDR4-based SEMPER X1 NOR flash, which is specifically optimized for operation in automotive applications. In comparison to standard NOR flash, it provides the throughput of LPDDR to support XiP using an external NOR flash (Figure 3).

Figure 3 SEMPER X1 has been optimized for operation in automotive applications and to support XiP using an external NOR flash. Source: Infineon

The entire memory architecture has been designed for functional safety and reliability to ensure uninterrupted operation in applications where failure is not an option.

To understand the advantages of LPDDR flash over both xSPI NOR flash and DRAM, consider how the SEMPER X1 utilizes the LPDDR4 interface to improve real-time performance (see Figure 4).

Figure 4 The LPDDR4 interface optimized for data read access provides superior performance compared to LPDDR4 SDRAM and xSPI NOR flash, as illustrated by these performance comparison figures. Source: Infineon

XiP from external memory becomes possible through several different optimizations as shown in Figure 4:

  1. Separating read from write

SEMPER X1 NOR flash has multiple ports: a quad SPI for write/read and an LPDDR4 dedicated for read only. Eliminating write operations from the LPDDR4 port enables the interface to be further optimized than if it also supported write operations.

  2. Faster training

Eliminating the write path in LPDDR eliminates the need for write DQ training. This results in 100x faster training compared to LPDDR4 SDRAM.

  3. Faster read commands

When accessing DRAM, it takes four commands to retrieve data. The LPDDR4 interface of SEMPER X1 uses a simpler format, requiring just two commands (Figure 5). Combined with no banking restrictions, row activation, or refresh compared to LPDDR4 DRAM, SEMPER X1 delivers 5x faster random read transactions. It’s also 5x faster than xSPI NOR flash for a 32-byte single read operation from command request to read data.

Figure 5 LPDDR flash requires just two commands—NVR-1 and NVR-2—to complete each read operation. Source: Infineon

  4. Separate command from data

A typical xSPI interface is 8 pins, and both commands and data must share that bus. LPDDR, in contrast, has separate pins for commands and data, enabling the efficiency of command pipelining. The result is 20x better performance for pipelined random read transactions compared to xSPI NOR flash.

  5. Data bus efficiency

The underlying embedded charge trap (eCT) technology in this memory ensures that there is no need for periodic refresh to interrupt throughput. That enables it to achieve up to 99% bus efficiency at 125° C, a 14% improvement compared to LPDDR4 DRAM.

  6. Better determinism

Determinism is a measure of the consistency of performance. Zone controllers often have many cores operating in parallel to provide the necessary processing capacity required for consolidated automotive systems. When multiple cores share memory, accesses by one core can create delays in accesses by all the other cores. Such delays can impact real-time reliability. A memory with multiple banks allows each core to have its own bank. This minimizes memory access interference and interdependence between cores, thus improving overall determinism and reliability.

  7. Zero downtime updates

With multiple banks, firmware-over-the-air (FOTA) updates can be loaded into an alternate memory bank. Once the update is complete and has been authenticated, the system can switch over to the alternate bank, allowing for a seamless transition to the update with zero system downtime.

  8. Better power efficiency

LPDDR optimizations impact more than just performance and reliability. Compared to xSPI NOR flash, SEMPER X1 consumes 8x lower read energy per MB while providing 8x higher throughput performance. For high-performance applications, these savings add up fast.

  9. Outperforms shadow code

The ability of LPDDR flash to provide throughput of 3.2 GB/s puts its XiP performance on par with SDRAM using a shadow code approach. In addition, the LPDDR approach requires fewer memory ICs, has faster setup time, and consumes less energy, making it a compelling alternative to SDRAM.

Scalability for the future

A key advantage of LPDDR is that it is a scalable interface that can support the increasing complexity of automotive applications in the future. xSPI is limited in its ability to scale as LVCMOS has little room for advancement past 200 MHz, capping bandwidth at 400 MB/s (x8) and 800 MB/s (x16). In short, xSPI can no longer keep up.

LPDDR4, on the other hand, allows for frequencies up to 1,600 MHz and can scale throughput from 1,600 MB/s to 12,800 MB/s (Figure 6). With this wide range of capacity, LPDDR4 offers the scalability and performance required for XiP in increasingly advanced systems, with newer LPDDR generations providing even more headroom.

Figure 6 xSPI is limited in its ability to scale, capping performance at 800 MB/s (x16). In contrast, LPDDR4 allows for frequencies up to 1,600 MHz and can scale from 1,600 MB/s to 12,800 MB/s, providing the scalability and performance required for XiP in today’s and tomorrow’s increasingly complex automotive applications. Source: Infineon
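The bandwidth figures above follow from simple peak-throughput arithmetic (clock rate x transfers per clock x bytes per transfer). The clock and bus-width pairings in the snippet below are my reading of how the quoted endpoints are reached, not vendor-published configurations.

```python
# Peak-throughput arithmetic behind the xSPI vs. LPDDR4 comparison. The
# clock/bus-width pairings are my reading of how the quoted endpoints are
# reached, not vendor-published configurations.
def peak_mb_per_s(clock_mhz: float, bus_bits: int, ddr: bool = True) -> float:
    """Clock rate x transfers per clock x bytes per transfer."""
    return clock_mhz * (2 if ddr else 1) * (bus_bits / 8)

print(peak_mb_per_s(200, 8))      #    400 MB/s: standard xSPI (x8 at 200 MHz, DDR)
print(peak_mb_per_s(200, 16))     #    800 MB/s: xSPI ceiling at x16
print(peak_mb_per_s(400, 16))     #  1,600 MB/s: low end of the LPDDR4 range
print(peak_mb_per_s(1600, 32))    # 12,800 MB/s: high end of the LPDDR4 range
```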

As the computational load and software complexity of vehicles increase with more automation, greater convenience and advanced user experience, greater code storage is required. Many of these software-defined functions are mission-critical and need real-time code execution to maintain reliability and safety.

To maintain performance and efficiency, today’s automotive applications need a fast underlying memory array to leverage all possible efficiencies. However, at advanced manufacturing process nodes, automotive-qualified embedded non-volatile memory technology faces high cost (die area) and lack of scalability. An external memory approach is required but the needs of the increasingly software-defined architecture of vehicles exceed the capabilities of today’s most advanced xSPI NOR flash, which simply cannot provide real-time XiP performance.

LPDDR is a key technology for next-generation automotive systems, providing an interface to external flash memory with enough performance to enable real-time computing and XiP capabilities for domain and zone controllers. With efficiencies such as 20x faster random read transactions and 8x lower read energy consumption per megabyte, LPDDR enables next-generation vehicles to provide advanced capabilities with enhanced safety and architectural flexibility.

Sandeep Krishnegowda is VP of marketing and applications for flash solutions at Infineon Technologies.


Google’s Chromecast with Google TV: Dissecting the HD edition

Mon, 12/04/2023 - 21:10

At the end of a few-months-ago writeup, wherein I successfully reassembled a 4K-version Google Chromecast with Google TV:

that I’d taken apart earlier in the year, I wrote:

In closing, a foreshadowing. As I mentioned in my earlier teardown, written and submitted for publication in mid-March:

 “Mine’s one of the original 4K resolution-capable units introduced in September 2020, not the newer (September 2022) and less expensive ($29.99 vs $49.99) albeit “only” 1080p-max “HD” model.”

 Well, I subsequently also picked up one of those newer Google Chromecast with Google TV HD (wow, that’s a mouthful!) devices, in late April when Walmart had them on sale for $19.98:

 It looks just like its “4K” big brother, doesn’t it? Makes you wonder if the hardware is identical, and Google just differentiates the two devices solely via software, doesn’t it? There’s only one way to find out; it’s in my teardown pile, and I’ve promised Aalyia it’s near the top of the stack. Stay tuned for the “full monty” coming soon; until then, share your thoughts in the comments.

That time is now! Some upfront if-necessary clarification: while the earlier 4K “CGTV” (for short) supported output video resolutions of, per the published tech specs, “Up to 4K HDR, 60 FPS”, this version conversely is “only” capable of “Up to 1080p HDR, 60 FPS”. Strictly speaking, leveraging the terminology commonly found with computer monitors, Google maybe should have called it the “FHD” edition, for 1080p (1920×1080 progressive-scan) “full HD”, since 720p (1280×720 pixel progressive-scan) support is all that’s necessary to claim conventional “HD” capabilities. Further to this point, I should also point out that “FHD” (or “4K”, for that matter) doesn’t also require HDR (high dynamic range) or 60 frame-per-second display output capabilities, both of which these device variants support as feature supersets.

The differentiation continues with the devices’ supported audio and video formats. First, here’s the 4K CGTV:

  • Audio: Dolby Digital, Dolby Digital Plus, and Dolby Atmos via HDMI passthrough
  • Video: Dolby Vision, HDR10, HDR10+, HLG (Hybrid Log-Gamma)

Now for the “HD” version:

  • Audio: Dolby Digital, Dolby Digital Plus, and Dolby Atmos via HDMI passthrough
  • Video: HDR10, HDR10+, HLG (note: no Dolby Vision support in this case)

While the “4K” version’s various A/V enhancements could come solely from different software builds installed on a common hardware foundation, I’ve always suspected that the two versions have unique hardware designs. For one thing, the FCC IDs are different, suggestive of two different hardware platform certifications: A4RGZRNL for the 4K version and A4RG454V for the newer HD version (that said, any Bluetooth, Wi-Fi or other RF broadcast differences between the two products might also result solely from different software builds that selectively enable and tweak various wireless subsystems).

The HD version is also nearly half the price of its 4K sibling; any resultant higher sales volumes might not be enough to counterbalance lower unit profit margin on a common hardware foundation, making a bill-of-materials cost reduction potentially also necessary to hit revenue and profit targets. That all said, why might Google consider basing both product proliferations off a common hardware foundation at all? The advantages here would involve manufacturing line and inventory simplification. Obviously, the end retail packaging for the two versions would need to differ, but they’d be identical right up to the point where final firmware is programmed into a particular device (via the USB-C port, for example) and it’s packaged and shipped either to a retail partner or directly to the customer (for products sold from the online Google Store).

So, which is it in reality: one hardware foundation, or two? Let’s find out. I’m not going to repeat the entire suite of images shown for the 4K device back in May, except selectively to reinforce commonality (or not), so I encourage you to back-reference my earlier writeup for the “full picture” (bad pun intended). But I will as-usual start with some packaging shots post-clear plastic shrink-wrap removal (to preclude annoying camera reflections in the shots). Front (note the “HD” mark in the lower left corner this time):

Left side:

Back:

Right side:

Top:

And bottom, again with varying markings from last time reflective of HD-vs-4K differentiation:

Same contents as last time:

The USB-to-USB C power adapters are the same. 4K version:

And HD version:

As are the remote controls. 4K version:

And HD version:

Now we get to the deviations, specifically the players themselves:

although truth be told, everything looks the same as before (including the ever-present accompanying 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes)…

until you flip the HD unit over and peer closely at the markings stamped on its backside:

Time to dive inside:

Things still look pretty much the same at this point, save for an altered number and pattern of thermal paste imprints. 4K version:

and HD version:

Remove four screws:

and pop off the topside heat sink/Faraday cage, however, and the uniqueness of the PCB this time around is unmistakable. 4K version:

and HD version:

The pattern of the inner Faraday cage rim is different in the upper left corner this time, for one thing. Also look at the four quadrants inside the outer Faraday cage. Last time there were notable ICs in the upper and lower right quadrants; that extra blob of thermal paste on the left side was, as I mentioned at the time, to dissipate heat from an IC on the other side of the PCB. This time, three of the four quadrants (also including the upper left) are large-IC-populated.

Clearly, I’m going to need to remove the entire PCB again to get a full comparative picture. Let’s first clean the paste off the ICs and PCB itself:

Now remove the two additional screws holding the HDMI cable in place and attached to the enclosure shell lower half (although I didn’t bother disconnecting the cable itself from the PCB):

And the PCB-plus-cable combo then pops right out of the remainder of the case:

One more heat sink/Faraday cage to go; no screw removals needed this time:

And after some more thermal paste cleanup courtesy of isopropyl alcohol and elbow grease, here’s how the two versions compare from a PCB backside perspective. 4K version:

and HD version:

The types of ICs—system processor and DRAM—are the same. The specifics are where the variance lies. Last time, the system SoC was Amlogic’s S905 and the DRAM was a Hynix H9HCNNNBKUMLHR-NME 16 Gbit LPDDR4-3733 SDRAM. This time, the system SoC is Amlogic’s newer, lower-end S805X2. And the DRAM? Smaller capacity, slightly slower, and from a different supplier: Nanya Technology’s NT5AD256M16E4-HR 4 Gbit standard DDR4-2666 SDRAM.

Back to the PCB front side for one more comparison. 4K version:

and HD version:

As mentioned earlier, the 4K product design includes only two sizeable ICs on this side of the PCB: a Samsung KLM8G1GETF-B041 8 GByte eMMC flash memory and a Broadcom-then-Cypress Semiconductor-now-Infineon Technologies BCM43598 “single-chip IEEE 802.11 b/g/n MAC/baseband/radio with integrated Bluetooth 5.1 compliance”. The nonvolatile storage chip in the upper right quadrant is unchanged for the HD generation, albeit rotated, but the wireless connectivity is now supplied by an NXP Semiconductors 88W8987 in the bottom right, originally intended by its supplier (per Google’s cache) for automotive applications, interestingly. And that “new” IC in the upper left? It’s another SDRAM, this one from Micron Technology, the MT40A512M16TB-062E (identified by a D8BPK code stamped atop it) 8 Gbit DDR4-3200 device.

One DRAM for code execution, and the other for a video-decode buffer? That’s my guess. Let me know your thoughts on this or anything else in the comments! ICs aside, the hardware designs are otherwise largely identical, including (beyond already-mentioned bits) their reset switches, status LEDs and various PCB-embedded antennae. I’ll eventually tackle reassembling this HD device to fully functional “new” condition, as with its 4K sibling, but for now I’ll keep it in pieces so I can answer any other questions you all might have about its insides.

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Google’s Chromecast with Google TV: Dissecting the HD edition appeared first on EDN.

Secure Bluetooth LE adoption on rise in automotive applications

Mon, 12/04/2023 - 10:24

With a developed ecosystem, an ultra-low-power consumption profile, and an established presence in mobile phones, it is understandable why Bluetooth Low Energy (Bluetooth LE) technology has emerged as the preferred wireless protocol for new connectivity use cases in automotive applications.

This article examines the drivers behind the rising use of wireless connectivity in automobiles and reviews some current and potential future use cases for Bluetooth LE.

BLE driving factors in vehicles

The automotive industry is undergoing an unprecedented revolution, with a near-simultaneous convergence in the trends toward electrification, autonomous driving, and vehicle-to-everything (V2X) connectivity. Cars are evolving from providing an essential transport service to providing occupants with a rewarding travel experience. Vehicle occupants will increasingly look to use their smartphones to gain access to their vehicles and customize this experience.

In addition, as the number of sensors, safety and infotainment systems in cars grows, so does the requirement to interconnect them to in-vehicle computers. Here, using cables, which add significant weight and volume to a vehicle, poses challenges for manufacturability, cost, and complexity.

Figure 1 Wireless connectivity in vehicles enhances user experience. Source: onsemi

Bluetooth LE is a low-power and cost-effective alternative to traditional interconnectivity solutions based on controller area networks (CAN) and local interconnect networks (LIN). So, several automotive OEMs are trying to leverage a Bluetooth LE infrastructure to replace these technologies in some use cases.

Bluetooth LE has several advantages over other wireless technologies, which makes it the preferred choice for automotive applications, including:

  • Proven communication with smartphones allays concerns about interoperability
  • Standardized specification and certification
  • Robust performance in electrically noisy and harsh environments
  • Availability of AEC-Q100 automotive qualified parts
  • Low power consumption, which is a critical requirement in electric vehicles
  • Availability of low-cost system-on-chip (SoC) components and antennas

Bluetooth LE automotive use cases

Bluetooth technology in automobiles was first used in vehicle access systems, enabling features like phone-as-a-key for passive entry and passive start. Future developments around Bluetooth LE in this application will see customized user experiences based on individual digital keys and profiles. For example, a vehicle will be able to automatically identify a profile stored in a driver’s or passenger’s mobile phone and then seamlessly adjust the position of mirrors, seats, and the steering wheel to match individual preferences.

Additionally, it will be possible to create shared keys for other vehicle users, eventually making phone-as-a-key a practical solution for the emerging trend of shared autonomous vehicles. However, this will also require profiles to be protected by the highest security levels to prevent them from being copied by unauthorized third parties who could steal the vehicle or alter how it operates.

Here, it’s worth mentioning that low power is crucial in infotainment systems like telematics boxes and head-unit displays. Often, these systems include high-power-consumption connectivity devices like cellular telecommunications modems, Wi-Fi, and other connectivity protocols. These systems have stringent power budgets that must be adhered to so as not to place a drain on a vehicle’s battery when a car is not in use.

Meeting these requirements is driving system developers to look for low-power wireless MCUs that can shut off the higher power consumption components in the vehicle but still wake them up when needed. Bluetooth LE is an excellent option for this purpose, allowing a telematics box or head-unit display to determine if it needs to wake up to handle over-the-air software updates or perform other diagnostic functions, for example.
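
To make the wake-up concept concrete, below is a minimal sketch of the gatekeeper logic such a low-power wireless MCU might run. It is written in generic C against hypothetical helpers (ble_receive_command(), command_is_authenticated(), gpio_write()) rather than any particular vendor's Bluetooth stack or HAL, so treat it as an illustration of the flow, not production code.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helpers; a real design would use the vendor's BLE stack and HAL. */
extern bool ble_receive_command(uint8_t *cmd);      /* sleeps until a BLE write arrives      */
extern bool command_is_authenticated(uint8_t cmd);  /* check against the stored digital key  */
extern void gpio_write(int pin, bool level);        /* drive the host power-enable line      */

#define HOST_POWER_EN_PIN 7  /* assumed wiring: GPIO enabling the telematics box supply */

void wake_gatekeeper_task(void)
{
    uint8_t cmd;

    for (;;) {
        /* The wireless MCU idles in a low-power mode here; only a BLE event wakes it. */
        if (!ble_receive_command(&cmd))
            continue;

        /* Only authenticated requests (e.g., an OTA-update notification) power up the host. */
        if (command_is_authenticated(cmd))
            gpio_write(HOST_POWER_EN_PIN, true);
    }
}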

Apart from vehicle body applications, another emerging trend is to use radios featuring Bluetooth technology in battery-management systems to send periodic temperature and voltage information about battery packs to the main computer. Bluetooth LE can also help OEMs to reduce costs with features like wireless tire pressure monitoring systems (TPMS) that allow drivers to check tire pressure using their phones or even receive notifications when a tire is flat.

Bluetooth LE can also simplify designs for controlling multi-position power seats, mirrors, locks, and sunroofs. Apart from ultra-low-power consumption, a small form factor and the ability to secure data communication within and outside the vehicle are critical requirements when selecting a Bluetooth LE-enabled MCU for use in a car.

Low-power wireless MCUs

Besides connectivity, wireless MCUs also feature embedded security and ultra-low power for automotive applications. The wireless MCU shown below has four low-power modes to reduce power consumption while maintaining system responsiveness. These include sleep, standby, smart sense, and idle. Smart sense mode takes advantage of the low-power capability of sleep mode while allowing some digital and analog peripherals to remain active with minimal processor intervention.

Figure 2 The NCV-RSL15 wireless MCU is designed with a smart sense power mode. Source: onsemi

These features allow wireless MCUs to support applications like vehicle access, tire pressure, and tire monitoring systems for up to 10 years on a single coin cell. Meanwhile, OEMs continue to find ways to exploit Bluetooth LE-enabled MCUs in developing lighter, more scalable battery management systems that are easier to manufacture.

Moreover, the wireless MCU shown above is built around an Arm Cortex-M33 processor core with TrustZone Armv8-M security extensions, which form the basis of its security platform. The MCU also incorporates embedded security with an Arm CryptoCell featuring hardware-based root-of-trust secure boot, many user-accessible hardware-accelerated cryptographic algorithms, and firmware-over-the-air (FOTA) capabilities to support future firmware updates and deployment of security patches.

Such security features make Bluetooth LE-enabled MCUs highly suitable for remote access devices.

Ben Widsten is product manager for Bluetooth Low Energy solutions at onsemi.

Related Content


The post Secure Bluetooth LE adoption on rise in automotive applications appeared first on EDN.

RISC-V’s embedded foray with a 32-bit MCU development

Fri, 12/01/2023 - 17:27

One of the largest vendors of embedded processors has independently developed a CPU core for the 32-bit general-purpose RISC-V market; it can be used as the main CPU or on-chip subsystem and can even be embedded in an application-specific standard product (ASSP).

Renesas Electronics, which has designed and tested a 32-bit CPU core based on the open-standard RISC-V instruction set architecture (ISA), is currently sampling devices based on this new core to select customers. It plans to launch its first RISC-V-based MCU and associated development tools in the first quarter of 2024.

It’s important to note that while several MCU suppliers have announced the development of RISC-V products, Renesas is the first to unveil an MCU architecture built around its own internally developed 32-bit RISC-V CPU core. Also worth noting is that the Japanese chipmaker’s 32-bit MCU portfolio includes its proprietary RX Family as well as the RA Family based on the Arm Cortex-M architecture.

Another important fact in Renesas’s RISC-V foray is that it has already introduced 32-bit ASSP devices for voice control and motor control built on CPU cores developed by Andes Technology. Renesas has also unveiled 64-bit general-purpose microprocessors (MPUs) built on Andes CPU cores.

The high-level block diagram highlights the 32-bit RISC-V MCU architecture development. Source: Renesas

Renesas claims its RISC-V CPU achieves a 3.27 CoreMark/MHz performance, outperforming similar architectures in performance and code size reduction. It’s a versatile CPU that is suitable for different application contexts. For instance, it can serve as a main application controller, a complementary and secondary core in system-on-chips (SoCs), and in on-chip subsystems and deeply embedded ASSPs.

Giancarlo Parodi, principal product marketing engineer at Renesas, also claims in his blog that the CPU’s implementation is very efficient in terms of silicon area. Besides a smaller cost impact, this helps reduce operating current and leakage current during standby. Finally, despite targeting small embedded systems, this RISC-V core provides a high level of computational throughput.

Next, in line with the RISC-V ISA’s provision for several ‘extensions’ that target specific functionality more efficiently, Renesas has included extensions to improve performance and reduce code size. Additionally, the CPU core adds a stack monitor register to enhance the robustness of application software. It helps designers detect and prevent stack memory overflows, a common issue that is difficult to catch through test coverage alone.
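
For readers unfamiliar with stack monitoring, the fragment below is a crude software analogue of what such a hardware register does continuously: a canary word planted at the boundary of a statically allocated task stack and polled for corruption. The names and sizes are illustrative assumptions; a hardware stack monitor performs the equivalent check automatically, with no canary to overwrite and no polling overhead.

#include <stdint.h>
#include <stdbool.h>

#define STACK_WORDS   256           /* assumed task stack size, in 32-bit words */
#define STACK_CANARY  0xDEADBEEFu

/* The stack grows downward toward index 0 on most architectures. */
static uint32_t task_stack[STACK_WORDS];

void stack_canary_init(void)
{
    task_stack[0] = STACK_CANARY;   /* plant the canary at the stack limit */
}

bool stack_overflowed(void)
{
    /* If the canary has been overwritten, the stack grew past its limit. */
    return task_stack[0] != STACK_CANARY;
}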

Parodi’s blog provides more details about the CPU features and capabilities and how they assist developers in benchmarking an application and verifying its behavior. More details about its performance score will be available on the EEMBC website once the first product is unveiled in early 2024.

The RISC-V processors, known for their flexibility, are gradually making inroads into the embedded systems landscape. In this design journey, the availability of a homegrown CPU core from a major MCU supplier lends significant credibility to RISC-V as an embedded processing solution for a broad range of applications.

Related Content


The post RISC-V’s embedded foray with a 32-bit MCU development appeared first on EDN.

D-band power sensor is NMI-traceable

Fri, 12/01/2023 - 16:07

The NRP170TWG(N) thermal waveguide power sensor from Rohde & Schwarz enables power level measurements from 110 GHz to 170 GHz. It provides full traceability to national metrology institute (NMI) standards in this frequency range, an important prerequisite for commercializing the D-band. According to R&S, it is the only NMI-traceable RF power sensor for the D-band.

The plug-and-play device comes in two variants: the NRP170TWG, controlled via a USB connection, and the NRP170TWGN, offering both USB and LAN connections. Both models are calibrated for long-term stability and compensate for environmental temperature changes within the operating range of 0°C to +50°C. Sensors have a dynamic range of -35 dBm to +20 dBm and handle up to 500 measurements/s. 

The thermal power sensors can be used in general R&D for 6G mobile communications, novel sub-THz communications, sensing, and future automotive radar applications. No calibration is required prior to performing measurements, since the sensors are fully characterized over frequency, level, and temperature. All calibration data is stored in the sensor.

The NRP170TWG(N) thermal power sensors are available now from Rohde & Schwarz.

NRP170TWG(N) product page

Rohde & Schwarz 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post D-band power sensor is NMI-traceable appeared first on EDN.

8-bit MCUs strengthen code protection

Fri, 12/01/2023 - 16:07

Microchip’s PIC18-Q24 MCUs employ a programming and debugging interface disable (PDID) function that enhances chip-level security. When enabled, this enhanced code protection feature disables the programming/debugging interface and blocks unauthorized attempts to read, modify, or erase firmware.

The PIC18-Q24 microcontroller family also provides a multi-voltage I/O (MVIO) interface. MVIO allows the MCU to interface with digital inputs or outputs at different operating voltages. This integrated level shifting not only eliminates the need for external level shifters, but also reduces both design area and BOM costs. MVIO pins support a voltage range of 1.62 V through 5.5 V.

With PDID and MVIO, PIC18-Q24 8-bit MCUs are particularly useful as system management processors, performing monitoring and telemetry for a larger processor. These routine tasks are typically most vulnerable to potential hackers as they attempt to gain access to embedded systems.

Other features of the PIC18-Q24 include a 10-bit ADC with computation capable of 300 ksamples/s and an 8-bit signal routing port to interconnect digital peripherals without using external pins. The PIC18-Q24 devices are available in a variety of packages with pin counts ranging from 28 to 48.

To learn more, visit Microchip’s 8-bit PIC MCU webpage. For purchase information, contact a Microchip sales representative, authorized distributor, or visit the Microchip Direct website.

PIC18-Q24 product brief

Microchip Technology 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post 8-bit MCUs strengthen code protection appeared first on EDN.

32-bit MCU packs fast AFE sensor interface

Fri, 12/01/2023 - 16:07

With its precision analog front end, the Renesas RX23E-B 32-bit MCU is well-suited for high-end industrial sensor systems and measuring instruments. The part’s 24-bit delta-sigma ADC achieves a conversion speed of up to 125 ksamples/s, which is eight times faster than the company’s existing RX23E-A MCU. It performs accurate A/D conversion while reducing RMS noise to one-third that of the RX23E-A (0.18 µV RMS at 1 ksample/s).

The RX23E-B microcontroller enables accurate analog signal measurements of critical parameters like strain, temperature, pressure, flow rate, current, and voltage. It also offers sufficient measurement speed to drive force sensors used in industrial robots.

In addition to a 32-MHz RXv2-based CPU with DSP instructions and a floating point unit, the RX23E-B provides a 16-bit DAC to enable measurement adjustments, self-diagnosis, and analog signal output. The MCU’s ±10-V analog input enables ±10-V measurements with a 5-V power supply without requiring external components or an additional power supply.

The RX23E-B is available now, as is a Renesas Solution Starter Kit for the MCU.

RX23E-B product page

Renesas Electronics 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post 32-bit MCU packs fast AFE sensor interface appeared first on EDN.

Silicon nitride light source suits FTIR spectrometry

Fri, 12/01/2023 - 16:06

Kyocera has developed a silicon nitride (SN) light source for Fourier-transform infrared (FTIR) spectrometers using its SN heater and glow plug portfolio. The company’s SN heaters are robust and fast ramping, serving as glow plugs for diesel engines and igniters for furnaces. Applied to spectrometry, Kyocera’s SN technology delivers high spectral emissivity to enable more accurate material identification.

The heater structure embeds a printed heating element in silicon nitride ceramic. Each heater pattern can be customized to meet application requirements, including such parameters as wattage, output temperature, and heating area.

The durability of the SN material results in lower failure rates and an extended duty cycle compared to conventional light source solutions. Its fracture toughness is more than twice that of silicon carbide, providing enhanced resistance to cracking and chipping during handling and installation. According to Kyocera, its SN heaters maintain consistent performance across more than 150,000 cycles without significant degradation.

For more information about Kyocera’s SN light source, click here.

Kyocera

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Silicon nitride light source suits FTIR spectrometry appeared first on EDN.

Hall sensors minimize stray-field impact

Fri, 12/01/2023 - 16:06

Hall-effect position sensors from TDK-Micronas reduce interference from stray magnetic fields in automotive and industrial applications. The HAL 3930-4100 (single die) and HAR 3930-4100 (dual die) sensors offer robust stray-field compensation and user-configurable PWM or SENT digital output interfaces. Single-die devices are ISO 26262 ASIL C-ready for integration into automotive safety-related systems.

The sensors offer a range of measurement capabilities, including 360° angular measurements, linear movement tracking, and 3D position information of a magnet. A modulo function—primarily for chassis position sensing—allows the partitioning of the 360° measurement range into smaller, more precise segments like 90°, 120°, and 180°.
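
As a quick illustration of what the modulo output does to a raw angle reading, the snippet below folds a full-rotation measurement into one of those smaller segments. The sensor performs the equivalent operation internally per its configuration; this C fragment is only a numerical illustration.

#include <math.h>

/* Fold a 0-360 degree angle into a repeating segment, e.g. 90, 120, or 180 degrees. */
double fold_angle(double angle_deg, double segment_deg)
{
    return fmod(angle_deg, segment_deg);
}

/* Example: fold_angle(250.0, 120.0) returns 10.0 degrees. */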

Both sensors conduct self-tests when starting up and during regular operation to enhance reliability. In addition to chassis position sensing, the devices can be used to detect steering angle, transmission, gear shifter, accelerator, and brake pedal positions.

The HAL 3930-4100 is available in an SOIC8 package, while the HAR 3930-4100 is housed in an SSOP16. For more information on the TDK-Micronas lineup of 3D position sensors, click here.

TDK-Micronas

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Hall sensors minimize stray-field impact appeared first on EDN.

Developing a spectrophotometer with integrated analog peripherals

Thu, 11/30/2023 - 17:18

One of the biggest selling points of a microcontroller (MCU) is the peripherals—integrated blocks of specialized hardware that offload a task from the central processing unit (CPU) or integrate new functionality into the device. One of the most common examples of this is the integrated analog-to-digital converter (ADC). But more sophisticated MCUs can have other analog peripherals on-board, such as digital-to-analog converter(s) (DAC), analog comparator(s) (CMP), fixed voltage reference (FVR), and operational amplifier(s) (op-amp). These analog peripherals can be used in conjunction with the digital logic of the MCU to create intelligent analog solutions.

As an example of an intelligent analog design, this article will discuss the author’s current (work in progress) home project, and how intelligent analog can help to increase functionality and minimize the bill of materials (BOM).

Background of the project

In August of 2020, I started to work on building a spectrophotometer for fun. A spectrophotometer is an instrument that measures the intensity of light at specific wavelength(s). There are multiple ways to design this type of instrument; my implementation uses a monochromator to select a single wavelength from a white light source, then passes this wavelength through a sample and into a detector. The detector measures the intensity of the light at that specific wavelength, which can be used to measure transmittance of the unknown material. A simplified diagram of this is shown in Figure 1, and the current (but incomplete) prototype in Figure 2.

Figure 1 A simplified diagram of the spectrophotometer where the detector measures the intensity of the light at that specific wavelength, which can be used to measure transmittance of the unknown material. Source: Robert Perkel

 

Figure 2 The current prototype of the monochromator with a light source, spherical mirrors, diffraction grating, and optical slit. Source: Robert Perkel

Design Considerations

Since this is a one-off build, the primary concerns are performance and assemblability. Performance in this context refers to the sensitivity of the detector, the purity of the monochromator output, and the implemented feature set. Assemblability refers to my ability to construct and implement this device. Physical assemblability is a big issue, but most of the physical parts can be purchased, machined, or 3D printed at a reasonable cost or effort. Electronic assemblability is mostly about reducing the number of parts, when possible, and avoiding packages that are difficult to solder by hand, like QFN and BGA. Additionally, the parts used in the design must be in stock and obtainable.

Project Modules

There are three planned modules for this system (as of the time of writing):

  • Analog front end (AFE)
  • Smart LED power supply
  • Data acquisition and control (DAC)

The AFE, the main subject of this article, contains the photodiode and converts the photocurrent to a voltage for acquisition on external boards or hardware. Beyond that, there are a couple of secondary outputs and other I/O signals that are used in this module. Adding an MCU simplifies implementing these secondary outputs and other side-band signals.

The smart LED power supply is a specially designed linear power supply for the main light source. The current is regulated by a custom analog-feedback loop, with support for blanking (off time) and linear intensity control. This solution was developed because the standard variable intensity regulators for LEDs work on pulse width modulation (PWM), but the ripple from the PWM dimming may become visible to the sensitive detector downstream. While this board will likely contain MCUs, it must be connected to the supervising MCU for power-up and monitoring of the main light source, which is hazardous to directly view. (Author’s note: There are multiple other integral safety circuits located on this board to prevent power-up of the high-current stage).

Finally, there is the DAC board. The primary objective of this board is to measure the output of the AFE and report it back to the user. It connects to the smart LED power supply to power-up and monitor the main light source. This board will also contain a high-resolution ADC along with an MCU that oversees the entire system.

Mixing MCUs and analog

In the AFE, there are three features that the MCU helps to integrate:

  • Clipping detection
  • Self-zeroing
  • Relative output control

The MCU was selected by looking at new device families with analog peripherals. However, the analog peripherals are not used directly as part of the signal chain—they are used for auxiliary signals that aren’t as noise sensitive as the main signal chain. For the main signal chain, high-end (precision, low noise, etc.) parts are used to minimize noise and increase the likelihood of success. The simplified diagram in Figure 3 shows how the MCU fits into the design.

Figure 3 Simplified block diagram of the AFE where high-end parts are used in the main signal chain to minimize noise. Source: Robert Perkel

Clipping detection

During normal operation, light from the monochromator passes through a sample and into the photodiode detector, generating a photocurrent. However, if the photocurrent exceeds the allowable output range, it will be clamped to the maximum output value. When this occurs, an error indicator should be illuminated on the exterior and a signal should be sent to the main controller, in the event automatic current control was enabled by the user.

As shown in Figure 4, this is implemented on the MCU using the comparator and a setpoint signal from one of three sources:

  • Internal DAC
  • Internal FVR
  • External source

Using the DAC is the most flexible approach but requires the dedicated use of one of the DAC peripherals. In some situations, this is acceptable, but in others the DAC is needed elsewhere. The internal FVR (voltage reference) is another option on some devices. The same reference as the DAC can be used with the comparator, but this requires the resistor divider to be set up to match this reference, and the setpoint cannot be modified at runtime. Finally, there is the external source option. External sources can take many forms—external DAC outputs, resistor ladders, voltage references, etc. The disadvantage is the use of extra components and I/O pins.

Figure 4 Implementation of a clipping detector on an MCU with a set point signal from either an internal DAC, internal FVR, or an external source. Source: Robert Perkel
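
Sketched below is the DAC-driven variant of Figure 4 in rough C form. The peripheral accesses are hidden behind hypothetical helpers (dac_set_code(), comparator_enable(), gpio_write()) because register names differ from one MCU family to the next; the logic is the part that matters: once the divided AFE output crosses the DAC-defined threshold, the comparator interrupt lights the error indicator and flags the controller.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical peripheral helpers, standing in for vendor register accesses. */
extern void dac_set_code(uint16_t code);   /* program the internal DAC setpoint     */
extern void comparator_enable(void);       /* AFE divider on +IN, DAC output on -IN */
extern void gpio_write(int pin, bool level);

#define ERROR_LED_PIN   2
#define CLIP_FLAG_PIN   3      /* side-band signal to the data acquisition and control board */
#define CLIP_THRESHOLD  4000u  /* DAC code just below the clamp level (assumes a 12-bit DAC) */

void clipping_detector_init(void)
{
    dac_set_code(CLIP_THRESHOLD);
    comparator_enable();
}

/* Comparator interrupt: fires when the divided AFE output exceeds the setpoint. */
void comparator_isr(void)
{
    gpio_write(ERROR_LED_PIN, true);   /* exterior clipping indicator                   */
    gpio_write(CLIP_FLAG_PIN, true);   /* lets the controller reduce the source current */
}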

Self-zeroing

Due to dark current from the photodiodes and small offsets from the op-amps, the system will have an output above zero, even when dark. To null this error out, an external buffered DAC is connected to the error correction stage and controlled by the MCU.

To trigger the self-zeroing operation, the user can either press a physical button or connect a blanking signal from the power supply. The blanking signal is an off time when the light source is not powered. While in the middle of a blanking interval, the system can recalibrate itself, like a chopper-stabilized amplifier, although care must be taken to ensure the emitters and samples have stopped fluorescing when performing this operation. This can be achieved by adding a small time delay after the rising edge. Figure 5 shows a simplified timing diagram.

Figure 5 A timing diagram showing the zeroing points: the blanking signal is an off time when the light source is not powered, and in the middle of the blanking interval the system can recalibrate itself. Source: Robert Perkel
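
A rough sketch of the zeroing sequence implied by Figure 5 follows, written against hypothetical adc_read(), dac_write(), and delay_us() helpers (the real calls depend on the MCU's ADC driver and on the external correction DAC's interface). After the blanking rising edge and a short settling delay, the routine walks the correction DAC until the dark output nulls out; a production version would likely use a smarter search than this simple ramp.

#include <stdint.h>

/* Hypothetical helpers for the MCU ADC, the external buffered correction DAC, and a delay. */
extern uint16_t adc_read(void);
extern void     dac_write(uint16_t code);
extern void     delay_us(uint32_t us);

#define SETTLE_DELAY_US  500u  /* assumed wait for emitter/sample fluorescence to stop */
#define ZERO_TOLERANCE   4u    /* acceptable residual offset, in ADC counts            */

/* Called from the blanking-signal rising-edge interrupt, or on a button press. */
void self_zero(void)
{
    uint16_t code = 0;

    delay_us(SETTLE_DELAY_US);

    /* Ramp the correction code until the dark output reads near zero. */
    while (adc_read() > ZERO_TOLERANCE && code < 0xFFFFu) {
        dac_write(++code);
        delay_us(10u);         /* let the correction stage settle between steps */
    }
}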

In the case of the user-accessible button, debouncing can be performed by the configurable logic cells (CLC) and a timer on the MCU. This combination operates independently of the CPU, allowing it to focus on other tasks, as shown in Figure 6.

Figure 6 A logic diagram showing how to implement a debouncer using hardware peripherals. Source: Robert Perkel
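
For comparison, here is what the same debounce looks like when done in software: sample the button on a fixed timer tick and accept a state change only after several consecutive identical samples. The CLC-plus-timer arrangement of Figure 6 performs this filtering entirely in hardware, which is precisely why no CPU cycles are spent on it; the constants below are illustrative assumptions.

#include <stdint.h>
#include <stdbool.h>

extern bool button_raw(void);   /* hypothetical: raw (bouncy) level of the zeroing button */

#define DEBOUNCE_SAMPLES 5      /* consecutive identical samples required, e.g. 5 x 2 ms  */

/* Call once per timer tick; returns true exactly once per debounced press. */
bool button_pressed_debounced(void)
{
    static uint8_t count  = 0;
    static bool    stable = false;

    bool raw = button_raw();

    if (raw != stable) {
        if (++count >= DEBOUNCE_SAMPLES) {
            stable = raw;
            count  = 0;
            return stable;       /* reports the press edge only, not the release */
        }
    } else {
        count = 0;
    }
    return false;
}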

Relative Output Control

The relative output mode is a secondary output that indicates the % transmission of light through a sample when compared to a reference. In other words, if the instrument records an output of 500 mV without a filled sample vial and 250 mV with a loaded sample vial, then only 50% of the light was transmitted at the wavelength of interest.
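
The arithmetic behind that example is shown below, with absorbance included since it is often the quantity of ultimate interest in spectrophotometry. This is simply the standard relationship expressed in C, not code from the project.

#include <math.h>

/* Percent transmission from a reference (blank) reading and a sample reading.
   Example from the text: percent_transmission(500.0, 250.0) returns 50.0. */
double percent_transmission(double reference_mv, double sample_mv)
{
    return 100.0 * (sample_mv / reference_mv);
}

/* Absorbance follows directly: A = -log10(T), with T expressed as a fraction. */
double absorbance(double reference_mv, double sample_mv)
{
    return -log10(sample_mv / reference_mv);
}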

For the highest resolution in this mode, the dedicated measurement board produces the best results, due to the high-resolution ADC. But, if the output is above the clipping threshold, the MCU will have a higher measurement range, as its input is divided, not clipped.

To improve sample acquisition time with the resistor network, an internal op-amp can be used to buffer the input signal and to increase the gain of the signal into the ADC, as shown in Figure 7. This improves the performance of the system at lower signal levels.

Figure 7 Block diagram of the relative output mode where an internal op-amp is used to buffer the input signal and increase the gain of the signal into the ADC. Source: Robert Perkel

Challenges of Integrating

There are a couple things to keep in mind when mixing an MCU with analog circuits. Firstly, the signal range of the analog section can go much higher/lower than the absolute maximum ratings of the MCU. So, appropriate limiting of the signal range is a must. Often, this increases the impedance of the signal, but this can be easily solved by buffering the network with one of the internal op-amps, as shown below in Figure 8. The output impedance of the resistor network is equal to R1 in parallel with R2.

Figure 8 Buffering a signal with an integrated op-amp; the output impedance of the resistor network is equal to R1 // R2. Source: Robert Perkel
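
For reference, the two relationships Figure 8 relies on are captured below; these are the standard voltage-divider results, not project-specific code.

/* Divided (range-limited) voltage presented to the op-amp's non-inverting input. */
double divider_vout(double vin, double r1, double r2)
{
    return vin * r2 / (r1 + r2);
}

/* Output impedance of the divider: R1 in parallel with R2 (R1 // R2). */
double divider_zout(double r1, double r2)
{
    return (r1 * r2) / (r1 + r2);
}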

Another challenge is to prevent electromagnetic interference (EMI) from the MCU from influencing the analog circuits. This is a complex challenge that is application specific. But there are a few common methods:

  • Physically separate the analog and digital circuits
  • Do not interrupt ground planes
  • Limit the slew rate of the digital signals
  • Use separate power supplies for analog and digital electronics
  • Decouple digital circuits appropriately

Other methods are more application specific. For instance, in this design, the photodiode and transimpedance amplifier (TIA) will be shielded to reduce the influence of other signals and environmental conditions on this stage. 

Concluding thoughts

Returning to the original topic of this article, what do integrated analog peripherals help with? The answer is that they are an invaluable tool for creating smarter analog systems. Integrated analog peripherals can be used to improve measurement resolution or to alert the system when certain events occur. This reduces the part count and the PCB layout complexity in the application.

Additional photos of the project are also available on my LinkedIn profile.

Robert Perkel is an applications engineer focusing on embedded and mixed-signal systems at Microchip. He graduated in 2019 with a B.S. in Computer Engineering from Virginia Tech.  

Related Content


The post Developing a spectrophotometer with integrated analog peripherals appeared first on EDN.
