EDN Network

Voice of the Engineer

Efficient digitally regulated bipolar voltage rail booster

Thu, 04/18/2024 - 16:41

The challenge of improving analog/digital accuracy by preventing amplifier saturation in systems supplied from only a single logic-level power rail has attracted a great deal of design activity and creativity recently. Voltage inverters that generate negative rails to keep RRIO amplifier output circuitry “live” at zero have received most of the attention. But frequent and ingenious contributor Christopher Paul points out that precision rail-to-rail analog signals need a similar extension on the positive side, for exactly the same reason. He presents several interesting and innovative circuits to achieve this in his design idea “Parsing PWM (DAC) performance: Part 2—Rail-to-rail outputs”.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The design idea presented here addresses the same topic but offers a variation on the theme. It regulates inverter output through momentary (on the order of tens of microseconds) digital shutdown of the capacitive charge pumps instead of post-pump linear regulation of pump output. This yields a very low quiescent, no-load current draw (<50 µA) and good current efficiency (~95% at 1 mA load current, 99% at 5 mA).

Figure 1 shows how it works.

Figure 1 Direct charge pump control yields efficient generation and regulation of bipolar beyond-the-rails voltages.

Schmitt trigger oscillator U1a provides a continuous ~100 kHz clock signal to charge pump drivers U1b (positive rail pump) and U1c (negative rail pump). When enabled, these drivers can supply up to 24 mA of output current via the corresponding capacitor-diode charge pumps and associated filters: C4 + C5 for the positive rail, C7 + C8 for the negative. Peak-to-peak output ripple is ~10 mV.

Output regulation is provided by charge pump control from the temperature-compensated discrete-transistor comparators: Q1:Q2 for U1c on the negative rail and Q3:Q4 for U1b on the positive. Average current draw of each comparator is ~4 µA, which helps achieve the low power consumption figures mentioned earlier. Comparator voltage gain is ~40 dB (100:1).

The comparators set the beyond-the-rails voltage setpoints in ratio to the +5 V rail:

V- = -5 V * R4/R5 = -250 mV for the values shown (negative rail)
V+ = +5 V * R2/R5 = +250 mV for the values shown (positive rail)
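
The setpoint arithmetic is easy to sanity-check. A minimal Python sketch, assuming hypothetical resistor values (the schematic's actual values aren't reproduced here; the ±250 mV setpoints only fix the ratios R4/R5 = R2/R5 = 0.05):

```python
# Setpoint calculation for the comparator-regulated rails. Resistor
# values here are hypothetical; the +/-250 mV setpoints only fix the
# ratios R4/R5 = R2/R5 = 0.05.
V_RAIL = 5.0        # logic supply rail, volts
R5 = 100_000.0      # assumed reference resistor, ohms
R4 = 5_000.0        # assumed, sets the negative-rail setpoint
R2 = 5_000.0        # assumed, sets the positive-rail setpoint

v_neg = -V_RAIL * R4 / R5   # setpoint below ground
v_pos = +V_RAIL * R2 / R5   # setpoint above the +5 V rail

print(f"V- = {v_neg * 1e3:.0f} mV, V+ = +{v_pos * 1e3:.0f} mV")
```

Any resistor pair with the same ratio produces the same setpoints; the absolute values trade off against comparator bias current.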

Note that the output of the Q1:Q2 comparator is opposite in polarity to the logic required for correct U1c control; handy inverter U1d fixes this.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.


The post Efficient digitally regulated bipolar voltage rail booster appeared first on EDN.

EPR spectrometer and its AWG and digitizer building blocks

Thu, 04/18/2024 - 08:35

A new electron paramagnetic resonance (EPR) spectrometer aims to open the technology to a larger pool of scientists by making it cheaper, lighter, and easier to use without needing an experienced operator. Its control software—designed to be intuitive with several automated features—makes the set-up straightforward and doesn’t require an expert in EPR spectroscopy to obtain results.

EPR or electron spin resonance (ESR) spectroscopy, while quite similar to nuclear magnetic resonance (NMR) spectroscopy, examines the nature of unpaired electrons instead of nuclei such as protons. It’s commonly used in chemistry, biology, material science, and physics to study the electronic structure of metal complexes or organic radicals.

Figure 1 The new EPR spectrometer is modular in design and is smaller, lighter, and cheaper than traditional solutions. Source: Spectrum Instrumentation

However, EPR spectrometers are commonly built around massive electromagnets, so they can weigh over a ton and are usually placed in basements. Bridge12, a startup located near Boston, Massachusetts, claims to have produced an EPR spectrometer that is about half the cost of current instruments and a tenth of the size and weight so that it can be placed on any floor of a building (Figure 1).

The new EPR spectrometer is built around two basic building blocks: an arbitrary waveform generator (AWG) to generate the pulses and a digitizer to capture the returning signal. These building blocks are implemented as cards supplied by German firm Spectrum Instrumentation, making the design modular and flexible for end users.

First, an AWG generates 10 to 100-ns long pulses in the 200 to 500 MHz range, as required by the experiment. These are first up-converted to the 10-GHz X-band range using an RF I/Q mixer, and then up-converted again to the Q-band range. The microwave pulses are then fed into a 100-W solid-state amplifier before being sent to the EPR resonator.
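
The I/Q up-conversion step can be sketched numerically. A minimal simulation at scaled-down frequencies (the sample rate, IF, and LO below are illustrative stand-ins, not the spectrometer's actual 10-GHz values):

```python
import numpy as np

# Sketch of single-sideband up-conversion with an I/Q mixer, at
# scaled-down frequencies so the simulation stays small. All values
# here are illustrative, not the instrument's.
fs = 1e9            # simulation sample rate, Hz
f_if = 50e6         # stand-in for the 200-500 MHz pulse frequency
f_lo = 300e6        # stand-in for the X-band LO
t = np.arange(2048) / fs

# Complex baseband pulse: I + jQ at the IF frequency
baseband = np.exp(2j * np.pi * f_if * t)

# I/Q mixer output: I*cos(wt) - Q*sin(wt) = Re{baseband * e^{jwt}}
rf = np.real(baseband * np.exp(2j * np.pi * f_lo * t))

# The spectrum peaks at f_lo + f_if (350 MHz here); the image at
# f_lo - f_if is suppressed by the I/Q structure.
spec = np.abs(np.fft.rfft(rf))
f_peak = np.fft.rfftfreq(len(rf), 1 / fs)[np.argmax(spec)]
print(f"peak at {f_peak / 1e6:.1f} MHz")
```

The same multiply-by-complex-exponential structure applies at the real LO frequencies; only the sample rate requirements change.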

Next, the reflected signal is down-converted to an IF in the 200 to 500 MHz range and sent to the digitizer. Unlike traditional EPR spectroscopy, where the signal is down-converted to DC, this approach drastically reduces noise and artifacts.

Figure 2 shows an example of AWG-generated pulses used in an EPR experiment: WURST (Wideband, Uniform Rate, Smooth Truncation) pulses, broadband microwave pulses whose excitation bandwidth and profile far exceed those of a simple rectangular pulse. These pulses enable broadband excitation in EPR spectroscopy while relying heavily on the performance of the AWG.
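
A WURST pulse of this kind combines a smoothed amplitude envelope with a linear frequency sweep. A sketch as it might be synthesized for an AWG (sample rate, pulse length, sweep range, and smoothing index are assumed, illustrative values):

```python
import numpy as np

# Sketch of a WURST pulse: amplitude envelope 1 - |cos(pi*t/Tp)|^n
# with a linear frequency sweep. All parameter values are illustrative.
fs = 2e9            # assumed AWG sample rate, Hz
Tp = 100e-9         # pulse length: 100 ns
bw = 300e6          # sweep bandwidth (covers a 200-500 MHz IF band)
f0 = 200e6          # sweep start frequency
n = 20              # WURST smoothing index

t = np.arange(int(Tp * fs)) / fs
envelope = 1.0 - np.abs(np.cos(np.pi * t / Tp)) ** n

# Linear chirp: instantaneous frequency f0 + (bw/Tp)*t, so the phase
# is its integral, 2*pi*(f0*t + bw*t^2/(2*Tp)).
phase = 2 * np.pi * (f0 * t + 0.5 * (bw / Tp) * t ** 2)
pulse = envelope * np.cos(phase)

# The envelope is zero at the edges and ~1 in the middle
print(envelope[0], envelope[len(t) // 2])
```

The smooth edges (controlled by n) are what distinguish WURST from a plain chirp; they keep the excitation profile flat across the swept band.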

Figure 2 The AWG-generated WURST pulses are displayed in an EPR spectroscopy experiment. Source: Spectrum Instrumentation

The modular design of this EPR spectrometer, built around AWG and digitizer cards, is integrated into Netboxes that connect to a PC through Ethernet. A compact PC can thus replace a system big enough to house plug-in cards, which inevitably leads to a bulky rack solution. As a result, it's much easier to service the EPR spectrometer and replace components in the field.

Another noteworthy design feature of this new EPR spectrometer is its much smaller superconducting magnet, which produces the required magnetic field strength. EPR spectrometers usually use huge, heavy electromagnets to generate intense magnetic fields on the order of 1 to 1.5 Tesla.



Practical tips for automotive ethernet physical layer debug

Wed, 04/17/2024 - 17:36

Automotive ethernet is increasingly utilized in in-vehicle electronics to transmit high-speed serial data between interconnected devices and components. Due to the relatively fast data rates and the complexity and variation of the networked devices, signal integrity issues can often arise. This article outlines several real-world challenges and provides insight into how to identify and debug automotive ethernet physical layer signal integrity problems using an oscilloscope. The following is a case study of automotive ethernet debugging performed at Inspectron, a company that designs and manufactures borescopes, embedded Linux systems, and camera inspection tools.

Automotive ethernet hardware debug configuration

The automotive ethernet signal path is bi-directional (full duplex on a single twisted pair), so hardware transceivers must be able to discern incoming data by subtracting their own outbound data contributions from the composite signal. If one were to directly probe an automotive ethernet data line, a jumbled superposition resembling a bus collision would be acquired. To make sense of the individual signals being sent, bi-directional couplers can be used.

Figure 1 shows the hardware configuration used to debug an automotive ethernet setup. The two automotive ethernet devices under test (DUTs) are a ROCAM mini-HD display and a Raspberry Pi (with a 100Base-TX to 100Base-T1 bridge). The Raspberry Pi is used to simulate an ethernet camera. The twisted pairs from the DUTs are attached to adapter boards which break out the single 100 Ω differential pair into two 50 Ω single-ended SMA connectors. Each DUT has its pair of SMA cables connected to a calibrated active breakout fixture (Teledyne LeCroy TF-AUTO-ENET). The breakout fixture maintains an uninterrupted communication link, while two calibrated, software-enhanced hardware directional couplers tap the traffic into separate streams, isolating each direction of automotive ethernet traffic for analysis on the oscilloscope.

Figure 1 (a) The hardware configuration used to debug an automotive ethernet setup involves two DUTs, passive fixtures to adapt from automotive ethernet to SMA, and a calibrated active breakout fixture with bi-directional couplers to isolate traffic from each direction. The oscilloscope will analyze both upstream and downstream traffic. (b) The block diagram of the test setup. Source: Teledyne LeCroy

Identifying where signal loss occurs

Intermittent signal loss occurred between the ROCAM mini-HD display and the Raspberry Pi. One method to capture an intermittent loss of data transmission is a hardware Dropout trigger. In Figure 2, a Dropout trigger is armed to trigger the oscilloscope if no signal edge crosses the threshold voltage within 200 nanoseconds (ns). The two Zoom traces scaled at 200 ns/div show the trigger point one division to the right of the previous automotive ethernet edge. A loss of signal occurred for approximately 800 ns before data transmission recommenced. Note that since automotive ethernet 100Base-T1 is a three-intensity level (+1, 0, -1) PAM3 signal, the eye pattern with over 192,000 bits in the eye still shows good signal integrity (data dropout blends in with “0” symbols), but the Zoom traces at the Dropout trigger location reveal the location of signal loss.
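
The Dropout trigger's decision logic can be mimicked in software on a captured record: flag any interval longer than the timeout in which no edge crosses the threshold. A sketch on synthetic data (the 66 MHz square-wave stand-in and the injected ~800 ns gap are illustrative, not the actual 100Base-T1 capture):

```python
import numpy as np

# Software sketch of a dropout trigger: report gaps longer than the
# timeout with no threshold crossing. Signal parameters are synthetic
# stand-ins for a PAM3 stream.
fs = 1e9                   # sample rate, Hz
timeout = 200e-9           # 200 ns dropout timeout, as in the article

def find_dropouts(signal, threshold, fs, timeout):
    """Return (start_time, duration) of gaps with no threshold crossing."""
    above = signal > threshold
    # sample indices where the signal crosses the threshold
    crossings = np.flatnonzero(np.diff(above.astype(int)) != 0)
    gaps = []
    for i0, i1 in zip(crossings[:-1], crossings[1:]):
        dt = (i1 - i0) / fs
        if dt > timeout:
            gaps.append((i0 / fs, dt))
    return gaps

# Synthetic test signal: a 66 MHz square-like tone with a dead spot
t = np.arange(4000) / fs
sig = np.sign(np.sin(2 * np.pi * 66e6 * t))
sig[2000:2800] = 0.0       # ~800 ns dropout, as captured in Figure 2
gaps = find_dropouts(sig, threshold=0.5, fs=fs, timeout=timeout)
print(gaps)
```

The hardware trigger does the same comparison in real time, which is what lets it catch an intermittent event without storing hours of traffic.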

Figure 2 The eye pattern shows a clean automotive ethernet 100Base-T1 signal, while the Dropout trigger identifies and locates a signal loss event. Source: Teledyne LeCroy

Amplitude modulation of serial data

Anomalous amplitude modulation or baseline wander issues can often be caught by triggering at a high threshold, slightly above the logic +1 voltage level (for the non-inverting input from the split differential signal). Intermittent anomalous amplitude modulation occurred on the automotive ethernet signal, and an instance was captured with the edge trigger set slightly above the highest expected voltage level, shown in Figure 3. The red histogram with three peaks, taken from a vertical slice through the eye diagram at the center of the symbol slot, shows an asymmetry in the statistical distribution of the lowest and highest of the three voltage levels; this is due to the intermittent anomalous amplitude modulation of the signal. There is also an asymmetry of the eye width between the upper and lower eyes, identified in the eye measurement parameter table below the waveforms.

Figure 3 The three red histograms in the lower right grid show an asymmetry in the eye pattern due to intermittent anomalous amplitude modulation. The edge trigger, raised to a high voltage threshold, catches an instance of the anomalous amplitude modulation. Source: Teledyne LeCroy

Intermittent amplitude reduction of signal

During the debug process, a malfunction was detected in which the amplitude of the signal would drop to 50% of the expected level. This problem was initially detected with the eye pattern, in which there was a collapse of the eye. To detect the location in time where the problem occurred, a Dropout trigger was set with a threshold level at approximately 80% of the amplitude of the automotive ethernet signal. When the signal dropped to half amplitude, the Dropout trigger caught the event, showing the amplitude reduction at the point of occurrence. Zoom traces superimposed over the original waveform captures show poor signal integrity in the time domain, which is also indicated by the collapsed eye.

Figure 4 The location of occurrence of the automotive ethernet amplitude reduction is caught using the Dropout trigger with a threshold set to approximately 80% of the waveform amplitude. The poor signal integrity of the reduced amplitude signal is shown in both the eye pattern and in the time synchronized Zoom traces. Source: Teledyne LeCroy

Addressing real-world automotive ethernet scenarios

Physical layer problems in automotive ethernet designs can be elusive and difficult to detect. This article outlined several real-world scenarios which occurred during the implementation of an automotive ethernet network with specific techniques used to identify each type of problem and where in time it occurred. This was accomplished using a combination of triggering, Zooms, eye patterns, statistical distributions, and measurement parameters.

Dave Van Kainen is a Founding Partner of Superior Measurement Solutions and holds a BSEE from Lawrence Tech.

Mike Hertz is a Field Applications Engineer at Teledyne LeCroy and holds a BSEE from Iowa State and an MSEE from Univ. Arizona.

Patrick Caputo is Chief Product Architect at Inspectron, Inc., and holds dual BSs in EE and Physics and an MS in ECE from Georgia Tech.



Challenges in designing automotive radar systems

Wed, 04/17/2024 - 04:39

Radar is cropping up everywhere in new car designs: sensing around the car to detect hazards and feed into decision making for braking, steering, and parking, and in the cabin for driver and occupancy monitoring systems. Effective under all weather conditions, high-definition radar can now front-end AI-based object detection, complementing other sensor channels to further enhance accuracy and safety.

There’s plenty of potential for builders of high value embedded radar systems. However, competitively exploiting that potential can be challenging. Here we explore some of those challenges.

Full system challenges

Automotive OEMs aren’t simply adding more electronic features to new vehicles; they are driving unified system architectures for their product lines to manage cost, simplify software development and maintenance, and enhance safety and security.

So, more compute and intelligence are moving into consolidated zonal controllers, communicating on one side between relatively small sensor units and processors within a small zone of the car, and on the other side, between zonal controllers and a central controller, managing overall decision making.

Suppliers aiming at automotive radar system markets must track their solution architectures with these changes, providing scalability between relatively simple processing for edge functions and more extensive capability for zonal or central controllers, while being flexible to adapt to different OEM partitioning choices.

One important implication is that however a solution might be partitioned, it must allow for significant amounts of data to be exchanged between edge, zonal, and central compute, which raises the importance of data compression during transmission to manage latency and power.

In addition to performance, power, and cost constraints, automotive systems must also factor in longevity and reliability. The full lifetime of a car may be 10, 20, or more years, during which time software and AI model upgrades may be required to fix detected problems or to meet changing regulatory requirements.

Those constraints dictate a careful balance in radar system design between the performance/low power of hardware and the flexibility of software to adapt to changes. Nothing new there, but radar pipelines present some unique demands when compared to vision pipelines.

Pipeline challenges

A full radar system flow is shown in the figure below, from transmit and receive antennae all the way to target tracking and classification. Antennae configurations may run from 4×4 (Tx/Rx) for low-end detection up to 48×64 for high-definition radars. In the system pipeline following the radar front-end are FFTs for computing first range information and then Doppler information. Next is a digital beamforming stage to manage digital streams from multiple radar antennae.

A complete radar system pipeline spans from transmit/receive antennae all the way to target tracking and classification. Source: Ceva

Up to this point, the data is still somewhat a “raw signal”. A constant false alarm rate (CFAR) stage is the first step in separating real targets from noise. Angle of Arrival (AoA) calculations complete the positioning of a target in 3D space, with Doppler velocity calculation adding a 4th dimension. The pipeline rounds out with target tracking, using for example an Extended Kalman Filter (EKF), and object classification, typically using an OEM-defined AI model.

OK, that’s a lot of steps, but what makes these complex? First, the radar system must support significant parallelism in the front-end to handle large antennae arrays pushing multiple image streams simultaneously through the pipeline while delivering throughput of between 25 and 50 frames per second.

Data volumes aren’t just governed by the number of antennae. These feed multiple FFTs, each of which can be quite large, up to 1K bins. Those conversions stream data ultimately to a point cloud, and the point cloud itself can easily run to half a megabyte.

Clever memory management is critical to maximizing throughput. Take the range and Doppler FFT stages. Data written to memory from the range FFT is 1-dimensional, written row-wise. The Doppler FFT needs to access this data column-wise; without special support, the address jumps implied by column accesses require many burst-reads per column, dramatically dropping feasible frame rates.
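
The two FFT stages and the corner turn between them can be sketched in a few lines. Array sizes below are illustrative (256 range bins by 64 chirps):

```python
import numpy as np

# Sketch of the range/Doppler FFT stages and the "corner turn" between
# them. Sizes are illustrative.
n_range, n_doppler = 256, 64
rng = np.random.default_rng(0)

# One coherent processing interval of ADC data:
# rows = chirps (slow time), columns = fast-time samples.
adc = rng.standard_normal((n_doppler, n_range))

# Stage 1: range FFT along each chirp (row-wise, contiguous access)
range_fft = np.fft.fft(adc, axis=1)

# Stage 2: Doppler FFT along each range bin (column-wise). In hardware
# this is the costly step: column access defeats burst reads unless the
# buffer is transposed ("corner-turned") first.
corner_turned = np.ascontiguousarray(range_fft.T)
range_doppler = np.fft.fft(corner_turned, axis=1).T

# Same result as FFT-ing straight down the columns
assert np.allclose(range_doppler, np.fft.fft(range_fft, axis=0))
print(range_doppler.shape)
```

In numpy the transpose is cheap bookkeeping; in a DMA-fed accelerator the corner turn is a real data movement step that dedicated hardware support addresses.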

CFAR is another challenge. There are multiple algorithms for CFAR, some easier to implement than others. The state-of-the-art option today is OS-CFAR, or ordered statistics CFAR, which is especially strong when there are multiple targets (common for auto radar applications). Unfortunately, OS-CFAR is also the most difficult algorithm to implement, requiring ordered-statistics (sorting) operations in addition to linear ones. Nevertheless, a truly competitive radar system today should be using OS-CFAR.
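
For illustration, a minimal 1-D OS-CFAR pass might look like the following. The window sizes, ordered-statistic index k, and scaling factor alpha are illustrative choices, not values from any particular product:

```python
import numpy as np

def os_cfar(x, n_train=16, n_guard=2, k=28, alpha=6.0):
    """1-D OS-CFAR: compare each cell against the k-th ordered
    statistic of its training cells, scaled by alpha."""
    n = len(x)
    detections = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = max(0, i - n_guard - n_train)
        hi = min(n, i + n_guard + n_train + 1)
        # training cells: the window minus the guard cells and the CUT
        window = np.concatenate((x[lo:max(0, i - n_guard)],
                                 x[min(n, i + n_guard + 1):hi]))
        if len(window) <= k:        # not enough cells near the edges
            continue
        noise_est = np.sort(window)[k]          # ordered statistic
        detections[i] = x[i] > alpha * noise_est
    return detections

# Exponential noise floor with two closely spaced targets: the
# multi-target case where OS-CFAR beats cell-averaging CFAR, because
# the k-th order statistic ignores a neighboring target's energy.
rng = np.random.default_rng(1)
x = rng.exponential(1.0, 512)
x[100] += 60.0
x[104] += 55.0
hits = np.flatnonzero(os_cfar(x))
print(hits)
```

The sort per cell is exactly the non-linear work that makes OS-CFAR harder to accelerate than cell-averaging variants.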

In the tracking stage, both location and velocity are important. Each of these is 3-dimensional (X,Y,Z for location and Vx,Vy,Vz for velocity). Some EKF algorithms drop a dimension, typically elevation, to simplify the problem; this is known as 4D EKF. In contrast, a high-quality algorithm will use all 6 dimensions (6D EKF). A major consideration for any EKF algorithm is how many targets it can track.
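
As a sketch of what the 6D state looks like in practice, here is the constant-velocity predict step of a 6-state Kalman tracker (state and noise values are illustrative; a real EKF adds a nonlinear measurement update per detection):

```python
import numpy as np

# Sketch of the 6-state (x, y, z, vx, vy, vz) constant-velocity
# predict step used by a "6D" EKF tracker. All values are illustrative.
dt = 0.02                       # 50 frames per second -> 20 ms updates

# State transition: each position advances by its velocity * dt
F = np.eye(6)
F[0, 3] = F[1, 4] = F[2, 5] = dt

state = np.array([10.0, 2.0, 0.5, -1.0, 0.0, 0.0])   # m and m/s
P = np.eye(6)                   # state covariance
Q = np.eye(6) * 1e-3            # assumed process noise

# Predict step, applied once per radar frame for every tracked target
state = F @ state
P = F @ P @ F.T + Q

print(state[:3])
```

Multiplying this per-frame cost by thousands of tracked targets is what drives the compute budget of the tracking stage.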

While an aircraft radar may only need to track a few targets, high-end automotive radars are now able to track thousands of targets. That's worth remembering when considering architectures for high-end and (somewhat scaled-down) mid-range radar systems.

Any challenges in the classification stage are AI-model centric, and thus out of scope for this radar system discussion. These AI models will typically run on a dedicated NPU.

Implementation challenges

An obvious question is what kind of platform will best serve all these radar system needs? It must be very strong at signal processing and must meet throughput goals (25-50 fps) at low power, while also being software programmable for adaptability over a long lifetime. That argues for a DSP.

However, it also must handle many simultaneous input streams, arguing for a high degree of parallelism. Some DSP architectures support parallel cores, but the number of cores needed may be overkill for many of the signal processing functions (FFTs for example), where hardware accelerators may be more appropriate.

At the same time, the solution must be scalable across zonal car architectures: a low-end system for edge applications, feeding a higher end system in zonal or central applications. It should provide a common product architecture for each application and common software stack, while being simply scalable to fit each level from the edge to the central controller.

Tomer Yablonka is director of cellular technology at Ceva’s mobile broadband business unit.



Measuring pulsed RF signals with an oscilloscope

Tue, 04/16/2024 - 16:10

RF signals have historically been measured using spectrum analyzers; at least, that was the case before oscilloscopes offered sufficient bandwidth for those measurements. With oscilloscope bandwidths now exceeding 100 GHz, RF measurements are no longer the exclusive domain of the spectrum analyzer. This is especially true for pulsed RF measurements, where the time-domain capabilities of an oscilloscope have several advantages. This article will focus on the time measurements of pulsed RF signals.

Many devices use pulsed RF signals. The obvious ones are echo-ranging systems like radar. Additionally, nuclear magnetic resonance (NMR) spectrometers and magnetic resonance imaging (MRI) systems use pulsed RF. Even automotive keyless entry systems use pulse-modulated RF signals. 

Pulsed RF signals

Pulsed RF signals are created by gating a continuous wave (CW) RF source, as shown in Figure 1.

Figure 1 Pulsed RF signals can be generated by gating a CW RF source using a switch controlled by a gate signal pulse train. Source: Arthur Pini

The carrier source is a continuous wave oscillator. It is gated by a switch driven by the gating signal pulse train. This is a multiplication operation with the carrier multiplied by the gate signal. When the gating signal is high, the switch outputs RF; when low, the output is zero. The 350 MHz carrier is shown in the upper left grid. A horizontally expanded zoom view (left center grid) shows the details of the carrier waveform. The gating signal (lower left grid) is a logic signal with a zero state at 0 volts and a 1 state of 1 volt. The gate output (upper right grid) shows the RF bursts at periodic intervals related to the gate signal state. A zoom view of one burst (center right grid) provides greater detail of a single burst. Another view with a greater zoom magnification (lower right grid) shows the turn-on details of the pulsed RF signal.
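
Because gating is simply multiplication, the pulsed RF waveform is easy to reproduce numerically. A sketch using the article's 350 MHz carrier and 50 kHz gate (the sample rate is an assumed simulation value):

```python
import numpy as np

# The gating operation is just multiplication: carrier x gate.
fs = 5e9                              # assumed simulation sample rate
t = np.arange(200_000) / fs           # 40 us of signal
carrier = np.sin(2 * np.pi * 350e6 * t)

# 50 kHz gate (PRF) with a ~3.5 us positive width
prf, width = 50e3, 3.52e-6
gate = ((t % (1 / prf)) < width).astype(float)

gated = carrier * gate                # pulsed RF output

# While the gate is low, the output is exactly zero
print(np.max(np.abs(gated[gate == 0])))
```

The same array, fed through an FFT, reproduces the sideband structure discussed in the next section.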

Measurement parameters, just under the display grids, read the frequency (P1) and amplitude (P2) of the carrier as well as the frequency (P3) and pulse width (P4) of the gating signal.

The frequency spectra of pulsed RF signals

Looking at the carrier, gate signal, and gated carrier in the frequency domain provides insight into the modulation process. Oscilloscopes view the frequency domain using the fast Fourier transform (FFT) providing tools similar to a traditional spectrum analyzer. The signals and the FFTs of the three signals are shown in Figure 2.

Figure 2 The three component signals carrier, gating pulse train, and pulse RF output and their FFTs provide insights into the modulation process. Source: Arthur Pini

The carrier (upper left grid), being a sine wave, has an FFT (upper right grid) consisting of a single spectral line at the frequency of 350 MHz. The gate signal (center left grid) is a train of rectangular pulses. The FFT of the gate signal takes the form of a sin(x)/x spectrum. The maximum amplitude occurs at zero Hz, making this a baseband spectrum anchored at 0 Hz or DC. The peaks in the spectrum are spaced at the pulse repetition frequency (PRF) of 50 kHz, measured using the relative cursors on the FFT of the gate signal. The cursor readout, under the Timebase annotation box, reads the absolute cursor positions and the frequency difference of 50 kHz. The sin(x)/x response has a periodic lobe pattern in which the nulls occur at intervals equal to the reciprocal of the gate pulse's positive width. Since the positive width of the gate pulse is 3.52 µs, the nulls occur about every 284 kHz. These nulls are a little harder to measure with cursors because the spectral peaks every 50 kHz (of which 284 kHz is not an integral multiple) tend to obscure them.
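
The null spacing can be checked numerically by taking the FFT of a single rectangular pulse of the gate's positive width (the sample rate and record length below are simulation choices):

```python
import numpy as np

# The nulls of a rectangular gate's sin(x)/x spectrum fall at
# multiples of 1/(positive width). Numeric check with a single
# rectangular pulse of the width given in the article.
fs = 100e6
width = 3.52e-6
n = 1 << 16                                # ~655 us record
pulse = np.zeros(n)
pulse[: int(width * fs)] = 1.0

spec = np.abs(np.fft.rfft(pulse))
freqs = np.fft.rfftfreq(n, 1 / fs)

# First spectral null: the minimum of the spectrum below 500 kHz,
# which should land near 1/width ~= 284 kHz
k_max = int(500e3 / (fs / n))
first_null = freqs[np.argmin(spec[:k_max])]
print(f"first null near {first_null / 1e3:.0f} kHz "
      f"(1/width = {1 / width / 1e3:.0f} kHz)")
```

A single pulse is used here so the nulls aren't obscured by the 50 kHz PRF lines, the same difficulty noted for the cursor measurement.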

The gated RF carrier results from multiplying the carrier by the gate signal. The state of the gate signal determines the output of the gated RF carrier signal. When the gate signal is one, the carrier appears at the gated carrier output. Multiplication in the time domain corresponds to convolution in the frequency domain. The result of the convolution of the carrier and gate signal spectra is shown in the FFT of the gated carrier: the baseband sin(x)/x function of the gate signal is mirrored above and below the carrier line as the upper and lower sidebands of the carrier frequency.

Pulsed RF timing measurements

The timing measurement of pulsed RF signals begins with the pulse bursts. In most of the applications cited, the PRF, pulse width, and duty cycle are of interest. The characteristics of the burst envelope, including the rise time, overshoot, and flatness, may also be desired. These measurements can't be made directly on the pulsed RF signal. To make measurements on the gated carrier, the signal has to be demodulated to extract the modulation envelope and remove the carrier. The demodulation process varies from oscilloscope to oscilloscope, depending on the math processes available. This example used a Teledyne LeCroy oscilloscope, which offers three ways to demodulate the gated carrier signal. The first method is to create a peak detector using the absolute value math function and a low pass filter. The second method is to use the optional demodulation function; this math function provides demodulation of AM, FM, and PM signals. The final technique is to use the oscilloscope's ability to embed a MATLAB script into the math processing chain and use one of the MATLAB demodulation tools. This is also an optional feature in the oscilloscope.

Comparing demodulation processes

Comparing the results of these three methods is interesting. The first method is the most broadly available, since it can be implemented on most oscilloscopes that offer an absolute value math function and low pass filtering. This peak detector method was used first in this example, and the results are shown in Figure 3.

Figure 3 Comparison of the amplitude demodulated signal of the gated carrier and the gated carrier, with measurements of the demodulated envelope from the peak detector based on the absolute math function. Source: Arthur Pini

Using the dual math function, the absolute value of the gated carrier was calculated, and a second math function applied a low pass filter. The low pass filter cutoff frequency has to be less than the 350 MHz carrier frequency, and the filter roll-off has to be sharp enough to suppress the carrier. In this example, a 6th-order Butterworth low pass filter with a cutoff frequency of 125 MHz and a transition width of 100 kHz was used. This oscilloscope has low pass filters available as enhanced resolution (ERES), used for noise suppression, as well as a digital filter option. Either low pass filter source can be used. The goal of this operation is to have the demodulated envelope track the peaks of the gated carrier.
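
The absolute-value-plus-filter stage is straightforward to reproduce offline. A sketch using SciPy's 6th-order Butterworth design at the article's 125 MHz cutoff (other signal parameters are illustrative, and the simple IIR design here doesn't model the oscilloscope filter's specified transition width; note that rectify-and-filter scales the envelope by the mean of |sin|, 2/π):

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Envelope demodulation via absolute value + low pass filter.
fs = 5e9
t = np.arange(100_000) / fs                     # 20 us record
gate = ((t % 20e-6) < 3.52e-6).astype(float)    # one 3.52 us burst
gated = gate * np.sin(2 * np.pi * 350e6 * t)

rectified = np.abs(gated)                       # absolute value stage
b, a = butter(6, 125e6, btype="low", fs=fs)     # 6th-order Butterworth
envelope = filtfilt(b, a, rectified)            # zero-phase filtering

# Inside the burst the filtered envelope sits near (2/pi) * amplitude;
# outside the burst it sits near zero.
mid = int(1.7e-6 * fs)
print(envelope[mid], envelope[-1])
```

The relative-amplitude parameters (width, duty cycle, rise time) are unaffected by the 2/π scale factor, which matches the article's note that the vertical scales of the three methods differ.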

The detected envelope of the RF pulse is shown as trace F3 in the lower left grid. Horizontal zoom displays in the upper and lower right grids show the match of the demodulated envelope (blue trace) to the RF burst at two different horizontal scales. The overlaid traces in the lower right grid provide the best view for evaluating the performance of the demodulator. Adjust the low pass filter cutoff to obtain the best fit.

Measurement parameters P6 through P10 read the PRF, width, duty cycle, positive overshoot, and rise time of the demodulated envelope.

The same measurement made using the oscilloscope’s demodulation function is shown in Figure 4.

Figure 4 Measurement of the pulsed RF modulation envelope using the oscilloscope’s optional demodulation math function and comparison with the pulsed RF signal. Source: Arthur Pini

The demodulation function was set up for AM demodulation. The carrier frequency and measurement bandwidth have to be entered. The result shown here is for a bandwidth of 100 MHz. 

The same measurements are performed with very good agreement with the peak detector method. Vertical scales differ due to the different processing operations. Since the parameters being measured use relative amplitude measurements, no effort has been made to rescale the vertical data to a common scale. 

The third method mentioned was the use of a MATLAB script in the oscilloscope’s signal path to demodulate the RF pulse signal. This is shown in Figure 5.

Figure 5 Example of using a MATLAB script to demodulate the Pulsed RF signal.  The MATLAB script used is shown in the popup. Source: Arthur Pini

The MATLAB demod function, available in the MATLAB signal processing toolbox, is used to demodulate the pulsed RF. It is a very simple two-line script requiring the entry of the carrier frequency and oscilloscope sampling rate. The results are consistent with the other methods; the primary difference, in the rise time measurement, is due to the different filters used in each process. Comparing the rise time of the demodulated envelope to that of the gate signal, the maximum variation is about 1%. The variation among the three demodulation methods is about 0.2 ns on the nominal 22.67 ns rise time. All three demodulation methods produce nearly identical results in reading the timing parameters of a pulsed RF signal.

Characterizing pulsed RF signals

The oscilloscope is well matched to the task of characterizing pulsed RF signals. It can render the signals in either the time or frequency domain permitting analysis in both domains. The ability to accurately demodulate the pulsed RF signals enables measurement of the timing characteristics of the pulsed RF signals.

Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.



Microchip’s acquisition meshes AI content into FPGA fabric

Tue, 04/16/2024 - 10:51

Field programmable gate arrays (FPGAs), once a territory of highly specialized designs, are steadily gaining prominence in the era of artificial intelligence (AI), and Microchip’s acquisition of Neuronix AI Labs once more asserts this technology premise.

The Chandler, Arizona-based semiconductor outfit, long known for highly strategic acquisitions, has announced that it will acquire Neuronix, a supplier of neural network sparsity optimization technology that enables a reduction in power, size, and calculations for tasks such as image classification, object detection, and semantic segmentation.

The deal aims to bolster the AI/ML processing horsepower on the company’s low- and mid-range FPGAs and make them more robust for edge deployments in computer vision applications. Microchip will combine Neuronix’s neural network sparsity optimization technology with its VectorBlox design flow to boost neural network performance efficiency and GOPS/watt performance in low-power PolarFire FPGAs.

Neuronix AI Labs has been laser-focused on neural network acceleration architectures and algorithms, and Microchip aims to incorporate Neuronix’s AI frameworks in its FPGA design flow. The combination of Neuronix AI intellectual property and Microchip’s existing compilers and software design kits will allow AI/ML algorithms to be implemented on customizable FPGA logic without a need for RTL expertise or intimate knowledge of the underlying FPGA fabric.

Microchip stuck to its FPGA guns even when the Altera-Xilinx duo took over the market before being acquired by Intel and AMD, respectively. Microchip executives maintained all along that FPGAs were a strategic part of its embedded system business. Now, when a plethora of applications continue to populate the edge, Microchip’s vision of embedded systems incorporating low-power FPGA fabrics looks more real than ever.

In short, the acquisition will help Microchip bolster its neural network capabilities and enhance its edge solutions with AI-enabled IPs. It will also enable non-FPGA designers to harness parallel processing capabilities using industry-standard AI frameworks without requiring in-depth knowledge of the FPGA design flow.

Related Content


The post Microchip’s acquisition meshes AI content into FPGA fabric appeared first on EDN.

The Godox V1 camera flash: Well-“rounded” with multiple-identity panache

Mon, 04/15/2024 - 19:33

As regular readers already know, “for parts only” discount-priced eBay postings suggestive of devices that are (for one reason or another) no longer functional, are often fruitful teardown candidates as supplements to products that have died on me personally. So, when I recently saw a no-longer-working Godox V1 camera flash, which sells new for $259.99, listed on eBay for $66, I jumped on the deal. For teardown purposes, yes. But also, for reuse of its still-functional accessories elsewhere. And, as it turns out, to solve a mystery, too.

I’d long wanted to get inside the V1 for a look around (although its formidable price tag had acted as a deterrent), in part because of its robust feature set, which includes:

  • High 76 Ws peak power (5600K color temperature)
  • Fast (~1.5 sec) recycle time, and 480 full-power illuminations per battery charge cycle
  • Supplemental 2 W “modeling lamp” (3300K color temperature)
  • 28-105 mm zoom head (both manual and auto-sync to camera lens focal length setting options)
  • 0° to 330° horizontal pan and -7° to +120° vertical tilt head
  • Multiple camera shutter sync modes
  • Multiple exposure control modes
  • Auto (camera sync) and manual exposure compensation modes
  • Camera autofocus-assist beam, and
  • Last, but definitely not least, multi-flash master and slave sync options
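As a sanity check on those spec-sheet numbers, the 480 full-power flashes per charge can be compared against the battery's stored energy. The VB26 pack rating of 7.2 V/2.6 Ah used below is an assumption on my part, so treat the result as a rough estimate:

```python
# Rough energy budget for full-power flashes per battery charge.
# Godox quotes 76 Ws per full-power flash; the VB26 battery rating
# of 7.2 V / 2.6 Ah is an assumed figure, not a measured one.
battery_wh = 7.2 * 2.6            # ~18.7 Wh stored in the pack
battery_ws = battery_wh * 3600    # convert to watt-seconds (joules)
flash_ws = 76                     # energy per full-power flash
flashes_ideal = battery_ws / flash_ws
efficiency = 480 / flashes_ideal  # implied end-to-end efficiency
print(f"Ideal flashes: {flashes_ideal:.0f}, "
      f"implied efficiency: {efficiency:.0%}")
```

Under those assumptions a lossless system would manage nearly 900 flashes, so 480 real-world flashes implies a quite respectable charge-pump-plus-tube efficiency in the 50-60% range.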

And partly because this device, like many of the flash units from both Godox and other third-party flash manufacturers such as Neewer, comes in various options that support multiple manufacturers’ cameras. In the case of the V1, these include (differentiated via single-character suffixes in the otherwise identical product name):

  • C: Canon
  • N: Nikon
  • S: Sony
  • F: Fujifilm
  • O: Olympus/Panasonic, and
  • P: Pentax

That all aside, what probably caught your eye first in the earlier “stock” photo was the V1’s atypical round head, versus the more common rectangular configuration found in units such as Godox’s V860III (several examples of which, for various cameras, I also own):

The fundamental rationale for both products is their varying output-light coverage patterns:

Now, about those earlier-mentioned accessories:

The VB26-series battery used by the V1 is also conveniently used by Godox’s V850III and V860III flash units, as well as the company’s RING72 ring light (optionally, along with the four-AA battery power-source default), and with Adorama’s Flashpoint-branded equivalents for all of these Godox devices, several of which I also own:

Here’s the capper. Shortly after buying this initial “for parts” Godox V1, for which the flash unit itself was the only thing nonfunctional, I came across another heavily discounted V1 that, as it turned out, worked fine but was missing the battery and charging cable. Guess what I did? 😉

About that charging cable…readers with long memories may recall me mentioning the VB26 before. The earlier discussion was in the context of the Olympus/Panasonic version of the V1 (i.e., the V1O), which had come with the original VB26 battery, and which I learned couldn’t be charged from a USB-C power source even though the battery charging dock had a USB-C input; a USB-A to USB-C adapter cable (along with a USB-A power source) was instead necessary. Well, in testing out the battery this time, I absentmindedly plugged it and its companion dock into a handy USB-C power source (and USB-C to USB-C cable) that normally finds use in charging my Google Pixel Buds Pro earbuds…and everything worked fine.

In retrospect, I remembered the earlier failure, and in striving to figure out what was different, I noticed that the battery this time was the more recent VB26A variant. I’d known that both it and its even newer VB26B successor held a bit more charge than the original, but Godox presumably fixed the initial USB-PD (Power Delivery) shortcoming in the evolutionary process, too (the charging circuitry is contained within the battery itself, apparently, with the dock acting solely as a “dummy” wiring translator between the USB-C connector and the battery terminals).
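A plausible (though unconfirmed) explanation for the original VB26's C-to-C failure: a Type-C source only enables VBUS after detecting the sink's Rd pull-down (nominally 5.1 kΩ) on a CC pin, while a legacy USB-A port supplies 5 V unconditionally, so a sink that omits Rd charges from A-to-C cables yet looks absent to a true Type-C source. The sketch below summarizes the Type-C current-advertisement scheme a compliant sink sees on CC; the voltage windows are the commonly cited detection thresholds, quoted from memory rather than the specification itself:

```python
# Illustrative Type-C current advertisement as seen by a sink with Rd
# fitted. The source's Rp pull-up and the sink's Rd divider set the CC
# voltage; the windows below are approximate, commonly cited values.
def advertised_current_amps(cc_volts):
    """Map the CC-pin voltage a sink sees to the source's current offer."""
    if 0.25 <= cc_volts <= 0.61:
        return 0.5   # default USB power
    if 0.70 <= cc_volts <= 1.16:
        return 1.5
    if 1.31 <= cc_volts <= 2.04:
        return 3.0
    return 0.0       # no valid source detected / no Rd presented

print(advertised_current_amps(1.7))  # 3.0 A offer
```

If Godox simply added the Rd resistors somewhere between the VB26 and VB26A revisions, that alone would explain the changed charging behavior.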

Enough of the prep discussion, let’s get to the tearing down. What we’re looking at today is the V1C, i.e., the Canon variant of the V1 (here’s a user manual):

I’ve long assumed that the various “flavors” of the V1 (and flash units like it) were essentially identical, save for different hot shoe modules and different firmware builds running inside. Although I won’t be dissecting multiple V1 variants today, the fact that they share a common 2ABYN001 FCC certification ID is a “bit” of a tipoff. I hope that this teardown will also shed at least a bit of added light on the accuracy-or-not of this hypothesis.

Open the box, and the goodies inside come into initial view. The cone-shaped white thing (silver on the other side) at top is a reflector, a retailer bundle adder intended for “bounce” uses:

As-usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes are the primary accessories: the standard USB-A to USB-C charging cable below the coin, and to the right, top-to-bottom, the battery, AC-to-DC converter (“wall wart”) and charging dock:

A closeup of the wall wart, complete with specs:

The underside of the battery, this time (as previously noted) the “A” version of the VB26:

And the charging dock, common to all VB26 battery variants:

Lift out the case containing the V1, and several other accessories come into view below it. At bottom right is a mini stand to which you mount the hot shoe when the flash unit isn’t being directly installed on/controlled by the camera (i.e., when the V1 is in wireless sync “slave” mode). And above it is another retailer adder, a goodie bag containing a lens cleaning cloth, a brush (useful when, for example, carefully brushing dust off the image sensor or, for a DSLR, the mirror) and a set of soft gloves.

Flip up the case top flap, and our victim comes into initial view:

Here’s a view of the backside, with the flash head near-vertical. The V1 has dimensions of 76x93x197 mm and weighs 420 g without the battery (530 g with it):

Here’s one (operating mode-dependent) example of what that LCD panel looks like with a turned-on functional V1:

Flip the V1 around for the front view, with the head at the same near-vertical orientation:

A closeup of the label (note, too, the small circular “hole” below the right corner of the label; file it away in your memory for later, when it’ll be important):

And of the translucent front panel, alluding to some of what’s inside:

The circular section at the bottom is for the focus assist beam, and to its left you can faintly see the wireless sensor used to sync the V1 (in either master or slave mode) with other flash units that support Godox’s 2.4 GHz “X” protocol as well as standalone transmitters and receivers:

Now’s as good a time as any, by the way, to show you Neewer’s reminiscently named Z1:

The V1 and Z1 look the same, are similarly featured, and both use the 2.4 GHz ISM band for wireless sync purposes. Just don’t try to sync them to each other because the protocols differ.

Here’s a straight-on closeup of the V1 flash head:

That circular area at the top, which is toward the ground in normal operation (when the flash head isn’t pointed toward the sky, that is), is the modeling lamp, which stays constantly on when activated, versus a traditional momentary “flash”. Here’s what it looks like on, again with an alternative functional V1:

And here are examples of the modeling lamp in use.

The ring around the outside of the flash head lens is metal, by the way, affording an opportunity for easy attachment of various magnet-augmented accessories:

Finally, some side views; first the left (when viewed from the front), containing the compartment “hole” into which the battery is inserted:

And now the right, containing the battery latch, release button and contacts:

The flash head at both extremes of its tilt range:

And a closeup of the QR code sticker on this side of the flash head:

Back to the right-side battery compartment closeup. In the earlier photo, you might have noticed what looked like a protective “flap” to the right of the cavity, and above the battery-release button. If so, you’d be right:

The round female connector at the top is not for headphones. It’s a 2.5 mm sync cord jack, for mating to a camera or transmitter as an alternative to a hot shoe or wireless connection. Below it is a USB-C connector used to connect to a computer for updating the flash unit firmware. On a hunch, I mated this supposedly “dead” V1 to my Mac and was surprised to find that the flash unit was recognized. I could even update its firmware, in fact, and all without a battery installed:

Even though this V1’s all-important illumination subsystem is DOA, it’s apparently not all-dead!

Last, but not least, let’s have a look at the hot shoe:

As previously mentioned, my working theory is that this (along with the software running inside the device) is the key differentiator between the V1 variants. It’s (perhaps unsurprisingly) also the most common thing that breaks on V1s:

So, I’ll be holding onto this part of the device long-term, both for just-in-case repair purposes and for another experimental project that I’ll tell you about later…

Did you notice the four screws holding the hot shoe assembly in place? Let’s see if their removal enables us to get inside:

Here’s the removed hot shoe assembly, both in the “loose” and “latched” positions (controlled by rotation of that grey button you see in the photos):

And here’s what’s inside:

Next step, remove the four “corner” screws whose heads were obscured by white paste in previous photos:

The outer bracket piece now lifts away:

Leaving an assemblage that, for already mentioned reasons, I’m not going to further disassemble, in order to preserve it for potential future use:

Unfortunately, although this initial disassembly step gave me a teaser peek at the insides, I seemingly wasn’t yet able to proceed further from this end:

So, I returned my attention to the flash head (the other end), around which I’d remembered seeing a set of screws that held the plastic cover and metal ring in place:

Underneath it was a Fresnel lens.

From Wikipedia:

A Fresnel lens…is a type of composite compact lens which reduces the amount of material required compared to a conventional lens by dividing the lens into a set of concentric annular sections…The design allows the construction of lenses of large aperture and short focal length without the mass and volume of material that would be required by a lens of conventional design. A Fresnel lens can be made much thinner than a comparable conventional lens, in some cases taking the form of a flat sheet.
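To make the “concentric annular sections” idea concrete, here’s a small sketch, using arbitrary example numbers rather than the V1’s actual optics, of where the grooves fall when a conventional spherical surface is collapsed every time its sag exceeds a fixed step depth:

```python
import math

# Illustrative only: groove locations for a Fresnel approximation of a
# plano-convex lens with surface curvature radius R, collapsed whenever
# the sag exceeds step depth d. R and d are arbitrary example values.
R = 50.0   # mm, curvature radius of the equivalent conventional lens
d = 0.5    # mm, maximum groove depth before the profile is reset

def sag(r):
    """Surface height of a spherical lens surface at radius r from the axis."""
    return R - math.sqrt(R * R - r * r)

# Zone boundaries: radii where the sag reaches n*d for n = 1, 2, ...
boundaries = []
n = 1
while n * d < R:
    s = n * d
    r = math.sqrt(R * R - (R - s) ** 2)
    if r > 20.0:        # stop at a 20 mm aperture radius
        break
    boundaries.append(round(r, 2))
    n += 1
print(boundaries)
```

Note how the computed zones get progressively narrower toward the edge: that is exactly why a Fresnel lens can hold its overall thickness to roughly the step depth regardless of aperture.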

With the Fresnel lens removed, the xenon tube assembly comes into clear view:

If you look at the bottom, you’ll see a two-rail “track” on which it moves forwards and backwards to implement, in conjunction with the fixed-position Fresnel lens, the zoom function.

I was able to unclip the brackets holding the fronts of both halves of the head assembly together, but further progress eluded me:

So, I next tried peeling away the round rubberized pieces covering both ends of the “tilt” hinge:

A-ha! Screws!

Now for the other side…

You know what comes next…

And now, one half (the lower half, to be precise) of the flash head enclosure lifts right off:

I initially thought that this mysterious red paste-covered doodad might be a piezoelectric speaker, for generating “beep” tones and the like, and its location coincides with the “hole” below the label that I showed you earlier, but…again, hold that thought:

We now get our first clear views of the flash head insides. Check out, for example, that sizeable heatsink for the modeling lamp LED!

Four screws hold the assembly in place within the other half-enclosure. Let’s get rid of these:


Here’s our first glimpse of one side of this particular PCB. Look at that massive inductor coil!

Disconnect a couple of ribbon cables:

Tilt the assembly to the side:

Next, let’s remove the modeling lamp LED-plus-heatsink assemblage:

The two are sturdily glued together, so I won’t proceed further in trying to pry them apart:

Now let’s remove the PCB from the white plastic piece it’s normally attached to:

Let’s look first at the now-revealed PCB backside. First off, unsurprising mind you given the high current flow involved but still…look at those thick traces:

See those two switches? The motor position-controlled xenon tube bumps up against them at the far end of its zoom travel range, seemingly disabling further motion in that direction (why there aren’t similar switch contacts at the rails’ other ends isn’t clear to me, however):

Finally, note the red-colored, white paste-capped device in the upper right corner. Its “TB” PCB marking, along with the wire running from it to the xenon tube, suggests to me that it may be a thermal breaker intended to temporarily disable the flash unit if it gets too hot. Ideas, readers?

Let’s now flip the PCB back over to the side we glimpsed earlier:

Time for a brief divergence into flash unit operation basics. In the “recharge” interval between flash activations, a sizeable capacitor (which we haven’t yet seen) gets “filled” by the battery electron flow. At least some of that stored capacitive charge then gets “dumped” into the xenon tube. But here’s the trick…the xenon tube’s illumination time and intensity vary depending on the camera’s desired exposure characteristics. So where does any “extra” current go, if not needed by the xenon tube?

Initially, the excess electrons were instead shunted off to something called the quench tube, a wasteful approach that both limited battery life and unnecessarily lengthened recharge time. Nowadays, either gate turn-off (GTO) thyristors or insulated-gate bipolar transistors (IGBTs) instead find use in cutting off the current flow from the capacitor, saving the remaining charge for the next xenon tube activation. I’m admittedly no power electronics design expert, so I can’t confidently say which approach is in use here. To assist the more-knowledgeable-than-me readers among you (numerous, I know), note that the two devices above the coil are S6008D half-wave, unidirectional, gate-controlled rectifiers; the IC above them has the following marks:


Again, I say: further insights, readers?
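While we’re on the subject of that (not-yet-seen) storage capacitor, some back-of-the-envelope math is illuminating. The 330 V charge voltage below is a typical photoflash-capacitor figure I’m assuming, not a measurement from this V1:

```python
# Back-of-the-envelope sizing of the flash's main storage capacitor.
# Assumptions (not measured from the V1): photoflash capacitors are
# typically charged to roughly 330 V, and Godox quotes 76 Ws per
# full-power flash.  E = 0.5 * C * V^2  ->  C = 2E / V^2
energy_j = 76.0
volts = 330.0
cap_farads = 2 * energy_j / volts ** 2
print(f"~{cap_farads * 1e6:.0f} uF")

# Average charging power needed to refill it within the quoted 1.5 s
# recycle time (ignoring converter losses):
print(f"~{energy_j / 1.5:.0f} W drawn from the battery during recycle")
```

A capacitor on the order of 1400 µF at 330 V, and a boost converter moving ~50 W from a 7.2 V pack, are entirely consistent with the beefy inductor and thick PCB traces seen above.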

Before moving on, let’s take a closer look at that zoom motor:

And now, let’s figure out how to get inside that hinge (where, I suspect, we’ll find that aforementioned sizeable capacitor). Looking closely at the ends I’d previously exposed, I noticed two more screws on each, but removing them didn’t seemingly get me any further along:

In the process of unscrewing them, however, I realized that I hadn’t yet showed you the pan range supported by the head:

And in the process of doing that, I noticed more screws underneath the pan hinge:

That’s more like it (although I’m now inside the main flash body, not yet the hinge above it)!

Let’s start with the now-detached back panel:

The LCD behind it is visible through the clear section, obviously, but don’t forget about the ribbon cable-fed multi-button-and-switch array below it:

That same panel piece from below, with another look at the ribbon cable:

And finally, that same panel piece from above:

Let’s return to that earlier inside view and get those four screws off:

The multi-button/switch assembly now lifts away straightaway:

And that black piece then pops right off, too:

Here’s a cross-section view of the circular multi-switch structure:

And with that, let’s return to the multi-sided structure we saw earlier, inside the main body:

Next are a series of sequential wiring disconnection shots; there are multiple ribbon cable harnesses, as you’ll see, some of them terminating in the tilt hinge above and some passing through the tilt hinge to the flash head above it:


With the front half of the main body shell now free and clear, let’s look at what’s inside:

That thing toward the bottom center, with a blue/black wire combo coming out of it, is the aforementioned focus assist beam. But what about the one in the upper left, with red and black wires coming out of it? Here’s a top view of the front-half piece; note the “hole” at bottom right at the corresponding external location:

Remember the mystery device inside the flash head, with a reminiscent red-and-black wire harness and external “hole”, that I initially thought was a speaker and asked you to remember?

I’d originally realized it wasn’t a speaker when I took my functional V1, activated its “beep” function and discerned that the sound wasn’t coming from there. But when I saw the second similar device-and-hole, I grabbed my functional (and fully assembled) V1 again and realized that when (and only when) the flash head was pointed horizontal and forward, the two “holes” lined up. My working theory is that one of the devices is an IR transmitter with the other an IR receiver, and that this alignment is how the flash figures out when the user has both the pan and tilt settings at their “normal” default positions. For what reason, I can’t yet precisely sort out; there’s no indication I can find in the user manual that the V1 operates any differently when pan and/or tilt are otherwise oriented. But conceptually, I could imagine that the flash’s integrated controller and/or connected camera might be interested in knowing whether the unit is being used for conventional or “bounce” purposes from an operating mode, exposure setting and/or other standpoint. Once again, readers: ideas?

At this point, by the way (and speaking of flash heads), the top half of this part of the case spontaneously disconnected from the pan-and-tilt hinge assembly:

Returning to the main body, let’s see what’s inside. Back, complete with the LCD (the on/off switch is in the lower right corner):

Right side:

Left side (note the battery latch, contacts, etc. initially highlighted before):

Front, with an initial “reveal” of the primary “power” PCB (although there’s plenty of analog stuff in the earlier flash head-located PCB too!):


And bottom, revealing a secondary “digital” PCB that we’ll discuss further shortly:

There’s one more PCB of note, actually, which isn’t visible until after you remove two screws and disconnect the LCD assembly, then flip it around:

Here’s where the main system controller can be found, which is why I refer to this as the primary “digital” PCB. It’s the APM32F072VBT6 (PDF), from a Chinese company called Geehy Semiconductor. The entire product family, as you’ll see from the PDF, contains dozens of members, based both on the Arm Cortex-M0+ and Cortex-M3. This particular SoC proliferation (at the top of the table labeled “APM32 MCU-ARM Cortex-M0+” in the PDF, for your ease of locating it) integrates a Cortex-M0+ running at 48 MHz along with 128 Kbytes of flash memory and 16 Kbytes of RAM. I can’t find a discrete flash memory chip for code storage on the PCB; the IC in the lower right corner is an LMV339 quad-channel comparator, and pretty much everything else here is connectors and passives. Oh, and the speaker’s to the left of the comparator 😉.

Here’s a side view, showing the USB-C and 2.5 mm sync connectors:

And flipping the assembly back over, as well as flipping the LCD upside-down, you’ll find that this side of the PCB is effectively blank, save for the earlier-noted power switch:

Next, continuing with the “digital” theme, let’s look more closely at the bottom-mounted PCB:

This one requires a bit of background explanation.

I’ve already told you that the primary 2.4 GHz transceiver system for multi-unit sync purposes is upfront behind the red translucent panel, and you’ll see it again shortly. But there’s another 2.4 GHz transceiver system in the V1, this one Bluetooth-based and designed to enable flash unit configuration and control from a wirelessly tethered smartphone or tablet in conjunction with a Godox (or Adorama) app. That’s why, unsurprisingly now that you know the background, the two dominant ICs on this side of the PCB are Texas Instruments’ CC2500 low-power 2.4 GHz RF transceiver and, to its right, TI’s CC2592 front-end RF IC. Flip the PCB over:

and again, unsurprisingly, you’ll find the embedded Bluetooth antenna.

Finally, let’s look more closely at what I referred to earlier as the primary “power” PCB:

Many of the ICs here are similar to the ones we saw in the earlier flash head-located PCB, such as two more of those mysterious ones labeled “EIC” but now with slightly different second- and third-line marks:


And on the other side:

is more analog and power circuitry, including a sizeable capacitor at the bottom (albeit not as sizeable as I suspect we’ll see shortly!).

Speaking of which, let’s close by looking closely at that tilt hinge assembly. Here it is from the front:


and back:

All are fairly unmemorable. The left side is not much less boring:

At least until I tilt it slightly, revealing a green tint indicative of a PCB inside:

The right side is quite a bit busier, with wiring harnesses formerly running up to the flash head:

Even more titillating when I again tilt it, as well as moving wiring to the sides:

And speaking of wiring (and titillating relocation of same), here’s the bottom:

Cautiously, both because I don’t know exactly what’s on the other side and, if I’m right and it’s an enormous capacitor, whether it’s fully discharged, I proceed:

Enormous capacitor, indeed!

Refilling this sizeable “electron gas tank”, folks, explains the 1.5 second recycle time between flash activations, and makes the 480 activations per battery recharge all the more remarkable:

And with that, slightly more than 4,000 words in, I’m done! Not quite “in a flash”, but I still hope you found this teardown as interesting as I did. Sound off with your thoughts in the comments! And in closing, enjoy these two insides-revealing repair videos that I found during my research:

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content


The post The Godox V1 camera flash: Well-“rounded” with multiple-identity panache appeared first on EDN.

A sneak peek at HBM cold war between Samsung and SK hynix

Mon, 04/15/2024 - 19:17

As high-bandwidth memory (HBM) moves from HBM3 to its extended version HBM3e, a fierce competition kicks off between Samsung and SK hynix. Micron, the third largest memory maker, has also tagged along to claim stakes in this memory nirvana that is strategically critical in artificial intelligence (AI) designs.

HBM is a high-value, high-performance memory that vertically interconnects multiple DRAM chips to dramatically increase data processing speed compared to conventional DRAM products. HBM3e is the fifth generation of HBM following HBM, HBM2, HBM2E and HBM3 memory devices.

HBM helps package numerous AI processors and memories in a multi-connected fashion to build a successful AI system that can process a huge amount of data quickly. “HBM memory is very complicated, and the value added is very high,” Jensen Huang, Nvidia co-founder and CEO, said at a media briefing during the GPU Technology Conference (GTC) held in March 2024 at San Jose, California. “We are spending a lot of money on HBM.”

Take Nvidia’s A100 and H100 processors, which commanded 80% of the entire AI processor market in 2023; SK hynix is the sole supplier of HBM3 chips for these GPUs. SK hynix currently dominates the market with a first-mover advantage. It launched the first HBM chip in partnership with AMD in 2014 and the first HBM2 chip in 2015.

Figure 1 SK hynix currently dominates the HBM market with nearly 90% of the market share.

Last month, SK hynix made waves by announcing the start of mass production of the industry’s first HBM3e chip. So, is the HBM market and its intrinsic pairing with AI processors a case of winner-takes-all? Not really. Enter Samsung with a 12-layer HBM3e chip.

Samsung’s HBM surprise

Samsung’s crosstown memory rival SK hynix has been considered the unrivalled HBM champion since it unveiled the first HBM memory chip in 2014. It’s also known as the sole HBM supplier of AI kingpin Nvidia while Samsung has been widely reported to be lagging in HBM3e sample submission and validation.

Then came Nvidia’s four-day annual conference, GTC 2024, where the GPU supplier unveiled its H200 and B100 processors for AI applications. Samsung, known for its quiet determination, once more outpaced its rivals by displaying 12-layer HBM3e chips with 36 GB capacity and 1.28 TB/s bandwidth.

Figure 2 Samsung startled the market by announcing 12-layer HBM3e devices compared to 8-layer HBM3e chips from Micron and SK hynix.

Samsung’s HBM3e chips are currently going through a verification process at Nvidia, and CEO Jensen Huang’s note “Jensen Approved” next to Samsung’s 12-layer HBM3e device on display at GTC 2024 hints that the validation process is a done deal. South Korean media outlet Alpha Biz has reported that Samsung will begin supplying Nvidia with its 12-layer HBM3e chips as early as September 2024.

These HBM3e chips stack 12 DRAM dies, each with 24-Gb density, for a total capacity of 36 GB and a peak memory bandwidth of 1.28 TB/s. Samsung also claims its 12-layer HBM3e device maintains the same height as the 8-layer HBM3e while offering 50% more capacity.
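The quoted numbers are straightforward to sanity-check. The 1024-bit data interface width is standard for HBM stacks; the 10 Gb/s per-pin rate used below is inferred from the 1.28 TB/s figure rather than taken from Samsung's datasheet:

```python
# Sanity-checking the quoted 12-layer HBM3e figures.
dies = 12
die_gbit = 24                 # 24-Gb DRAM dies
capacity_gbyte = dies * die_gbit / 8
print(capacity_gbyte)         # 36.0 GB, matching Samsung's claim

bus_bits = 1024               # standard HBM stack interface width
pin_gbps = 10.0               # assumed per-pin rate, inferred from 1.28 TB/s
bandwidth_tbps = bus_bits * pin_gbps / 8 / 1000
print(bandwidth_tbps)         # 1.28 TB/s
```

The same arithmetic shows the capacity delta: an 8-layer stack of the same dies holds 24 GB, so 36 GB is indeed 50% more.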

It’s important to note that SK hynix began supplying 8-layer HBM3e devices to Nvidia in March 2024 while its 12-layer devices, though displayed at GTC 2024, are reportedly encountering process issues. Likewise, Micron, the world’s third largest manufacturer of memory chips, following Samsung and SK hynix, announced the production of 8-layer HBM3e chips in February 2024.

Micron’s window of opportunity

Micron, seeing the popularity of HBM devices in AI applications, is also catching up with its Korean rivals. Market research firm TrendForce, which valued the HBM market at approximately 8.4% of the overall DRAM industry in 2023, projects that this share could expand to 20.1% by the end of 2024.

Micron’s first HBM3e product stacks 8 DRAM layers, offering 24 GB capacity and 1.2 TB/s bandwidth. The Boise, Idaho-based memory supplier calls its HBM3e chip “HBM3 Gen2” and claims it consumes 30% less power than rival offerings.

Figure 3 Micron’s HBM3e chip has reportedly been qualified for pairing with Nvidia’s H200 Tensor Core GPU.

Besides technical merits like lower power consumption, market dynamics are helping the U.S. memory chip supplier to catch up with its Korean rivals Samsung and SK hynix. As noted by Anshel Sag, an analyst at Moor Insights & Strategy, SK hynix already having sold out its 2024 inventory could position rivals like Micron as a reliable second source.

It’s worth mentioning that Micron has already qualified as a primary HBM3e supplier for Nvidia’s H200 processors. Shipments of Micron’s 8-layer HBM3e chips are set to begin in the second quarter of 2024. And like SK hynix, Micron claims to have sold all its HBM3e inventory for 2024.

HBM a market to watch

The HBM market will continue to remain competitive in 2024 and beyond. While HBM3e is being positioned as the new mainstream memory device, both Samsung and SK hynix aim to mass-produce HBM4 devices in 2026.

SK hynix is employing hybrid bonding technology to stack 16 DRAM layers and achieve 48 GB of capacity; compared to HBM3e chips, HBM4 is expected to boost bandwidth by 40% and lower power consumption by 70%.

At the International Solid-State Circuits Conference (ISSCC 2024) held in San Francisco on February 18-21, where SK hynix showcased its 16-layer HBM devices, Samsung also demonstrated its HBM4 device boasting a bandwidth of 2 TB/s, a whopping 66% increase from HBM3e. The device also doubled the number of I/Os.

HBM is no longer the unsung hero of the AI revolution, and all eyes are on the uptake of this remarkable memory technology.

Related Content


The post A sneak peek at HBM cold war between Samsung and SK hynix appeared first on EDN.

8-bit MCUs tout 15-W USB power delivery

Sat, 04/13/2024 - 00:07

Microchip’s AVR DU 8-bit MCUs integrate a USB 2.0 full-speed interface that supports power delivery up to 15 W, enabling USB-C charging at up to 3 A at 5 V. According to the manufacturer, this capability, not commonly found in other USB microcontrollers in this class, allows embedded designers to implement USB functionality across a wide range of systems.

In addition to higher power delivery than previous devices, AVR DU microcontrollers also feature improved code protection. To defend against malicious attacks, the devices employ Microchip’s Program and Debug Interface Disable (PDID) function. When enabled, the PDID function locks out access to the programming/debugging interface and blocks unauthorized attempts to read, modify, or erase firmware.

To enable secure firmware updates, the MCUs provide read-while-write flash memory in combination with a secure bootloader. This allows designers to use the USB interface for in-field updates without disrupting product operation.

The AVR DU family of MCUs is suitable for a range of embedded applications, from fitness wearables and home appliances to agricultural and industrial applications. A virtual demonstration of the MCU’s USB bridge is available here.

AVR DU series product page

Microchip Technology 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post 8-bit MCUs tout 15-W USB power delivery appeared first on EDN.

Renesas expands general-purpose MCU choices

Sat, 04/13/2024 - 00:06

RA0 microcontrollers from Renesas are low-cost devices that offer low power consumption and a feature set optimized for cost-sensitive applications. The MCUs can be used in such applications as consumer electronics, system control for small appliances, building automation, and industrial control systems.

Based on an Arm Cortex-M23 core, the 32-bit MCUs consume 84.3 µA/MHz in active mode, dropping to just 0.82 mA in sleep mode. A software standby mode cuts current consumption even further, allowing the device to sip just 0.2 µA. These features, coupled with a high-speed on-chip oscillator for fast wakeup, make the MCUs particularly well-suited for battery-operated products.
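To see why these figures matter for battery-operated products, consider a rough lifetime estimate for a heavily duty-cycled design. The clock speed, duty cycle, and coin-cell capacity below are illustrative assumptions, not Renesas specifications:

```python
# Illustrative battery-life estimate for a duty-cycled RA0-class MCU,
# using the article's figures: 84.3 uA/MHz active, 0.2 uA in software
# standby. Clock speed, duty cycle, and battery capacity are assumptions.
mhz = 24.0
active_ua = 84.3 * mhz        # current while the core is running
standby_ua = 0.2
duty = 0.001                  # awake 0.1% of the time
avg_ua = duty * active_ua + (1 - duty) * standby_ua
battery_uah = 220_000         # CR2032-class coin cell, ~220 mAh assumed
hours = battery_uah / avg_ua
print(f"Average draw: {avg_ua:.2f} uA -> ~{hours / 24 / 365:.1f} years")
```

With numbers like these, the 0.2-µA standby floor, not the active current, dominates the average draw, which is exactly the regime where fast oscillator wakeup pays off.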

The first devices in the RA0 series, the RA0E1 group, operate from a supply voltage of 1.6 V to 5.5 V. This means there is no need for a level shifter/regulator in 5-V systems. An on-chip oscillator improves baud rate accuracy and maintains ±1.0% precision over a temperature range of -40°C to +105°C.

Other features of the RA0E1 group of MCUs include: 

  • Memory: Up to 64 kbytes of code flash and 12 kbytes of SRAM
  • Analog Peripherals: 12-bit ADC, temperature sensor, internal reference voltage
  • Communications Peripherals: 3 UARTs, 1 Async UART, 3 Simplified SPIs, 1 IIC, 3 Simplified IICs
  • Safety: SRAM parity check, invalid memory access detection, frequency detection, A/D test, immutable storage, CRC calculator, register write protection
  • Security: Unique ID, TRNG, flash read protection

RA0E1 microcontrollers are shipping now. Package options include 20-pin LSSOP, 32-pin LQFP, and QFN with 16, 24, or 32 leads.

RA0E1 product page

Renesas Electronics 


The post Renesas expands general-purpose MCU choices appeared first on EDN.

Hi-rel GaN load switch ships off-the-shelf

Sat, 04/13/2024 - 00:06

The first entry in Teledyne’s 650-V power module family, the TDGM650LS60 integrates a 650-V, 60-A GaN transistor and isolated driver in a single package. The module, which is now available off-the-shelf, acts as a load switch or solid-state switch. Fast switching time and the absence of moving parts make the TDGM650LS60 useful for high-reliability applications in the space, avionics, and military sectors.

The TDGM650LS60 tolerates up to 100 krads of total ionizing dose (TID) radiation and operates over a temperature range of -55°C to +125°C. Its enhancement-mode GaN transistor has a minimum breakdown voltage of 650 V and a stable on-resistance of 25 mΩ. Coupled with the driver’s 5-kV isolation, the TDGM650LS60 ensures robust and reliable operation in challenging environments.

Occupying a 21.5×21.5-mm footprint, the TDGM650LS60 module has solder-down castellation for surface-mount style mounting. A preliminary datasheet can be accessed by using the link to the product page below.

TDGM650LS60 product page

Teledyne e2v HiRel Electronics    


The post Hi-rel GaN load switch ships off-the-shelf appeared first on EDN.

Bluetooth module taps Cortex-M33 processor

Sat, 04/13/2024 - 00:05

A Bluetooth LE 5.4 module, the HCM511S from Quectel, leverages the power of an Arm Cortex-M33 core, along with 352 or 512 kbytes of flash memory. The module, which also provides 32 kbytes of RAM, brings efficient performance to compact connected devices such as digital keys, portable medical devices, and battery-operated motion sensors.

According to Quectel, the Bluetooth module’s transmit power of +6 dBm achieves long-distance transmission, allowing low-power devices to connect cost effectively. Optional support for Bluetooth mesh nodes increases network scalability and allows greater device density over a mesh topology. The HCM511S also offers up to 18 GPIOs, which can be multiplexed for various interfaces, including ADC, USART, I2C, I2S, PDM, SPI, and PWM.

The MCU Bluetooth module comes in a 16.6×11.2×2.1-mm LCC package and weighs just 0.57 g. It operates over a temperature range of -40°C to +85°C. In addition to being certified by the Bluetooth Special Interest Group, the HCM511S is also certified for use in Europe, America, Canada, China, Australia, and New Zealand.

Engineering samples of the HCM511S MCU Bluetooth module are available now.

HCM511S product page

Quectel Wireless Solutions  


The post Bluetooth module taps Cortex-M33 processor appeared first on EDN.

FPGA integrates hard RISC-V cores

Sat, 04/13/2024 - 00:04

Enabling high compute performance at the edge, the Titanium Ti375 FPGA from Efinix packs a quad-core hardened RISC-V block and 370,000 logic elements. It employs the company’s high-density, low-power Quantum compute fabric wrapped with an I/O interface.

The 32-bit hardened RISC-V block (RISCV321 with M, A, C, F, and D extensions and six pipeline stages) offers a Linux-capable MMU, FPU, and custom instruction capability. Paired with an Efinix Sapphire SoC, the Ti375 FPGA helps designers turn a tiny chip into an accelerated embedded compute system.

The Ti375 is manufactured on a 16-nm process and comes in a fine-pitch BGA package with a choice of 529, 676, 900, or 1156 balls. Its full-duplex serializer/deserializer transceiver operates at data rates from 1.25 Gbps to 16 Gbps and supports multiple protocols, including PCIe 4.0, Ethernet SGMII, and Ethernet 10GBase-KR. The FPGA also features an LPDDR4 DRAM controller and MIPI D-PHY.

Samples of the Titanium Ti375 FPGA are shipping now to early access customers.

Ti375 product page

Efinix

The post FPGA integrates hard RISC-V cores appeared first on EDN.

Adaptable pullup

Thu, 04/11/2024 - 16:17

It’s common for I2C systems to have both standard and fast devices on the same bus.

For I2C systems, both speed and power consumption depend on the values of the pullup resistors: their values must be low enough to ensure fast charging of the bus capacitance.

Wow the engineering world with your unique design: Design Ideas Submission Guide

But low values increase power consumption, and they can also present too heavy a load for the transmitter.

The variable topology of the bus can make the situation somewhat more complicated.

Hence when your system is power-restricted and you need to use several I2C chips at different I2C modes, you have to compromise between these chips. Or you can use the adaptable pullup, which is shown in Figure 1.

Figure 1: The adaptable pullup where a closed transistor connects additional resistors R5 and R6 in parallel to the main pullup resistors R1 and R2

The circuit is rather simple: a closed transistor connects additional resistors R5 and R6 in parallel to the main pullup resistors R1 and R2. 

The connection can be controlled by a GPIO, for example, as shown in Figure 1, and should be made before the fast data exchange takes place.
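The sizing tradeoff behind this idea can be sketched numerically. The 0.8473 factor and the rise-time limits come from the I2C-bus specification (rise time is measured between 30% and 70% of VDD: 1000 ns maximum in standard mode, 300 ns in fast mode); the bus capacitance and resistor values below are assumed examples, not values from the article.

```python
# Sketch of the pullup-sizing tradeoff behind the adaptable pullup.
# Rise-time model from the I2C spec: t_r = 0.8473 * Rp * Cb.
# C_BUS and the two resistor values are assumed example numbers.

C_BUS = 150e-12            # assumed total bus capacitance, farads

def rp_max(t_rise_max):
    # largest pullup that still meets the given rise-time limit
    return t_rise_max / (0.8473 * C_BUS)

rp_std = rp_max(1000e-9)   # standard mode: t_r <= 1000 ns
rp_fast = rp_max(300e-9)   # fast mode: t_r <= 300 ns

# The "adaptable" trick: run with a large, low-power pullup normally,
# and switch a second resistor in parallel only for fast-mode transfers.
r_main = 7800.0            # example main pullup, near the standard-mode limit
r_extra = 3300.0           # example switched-in resistor (R5/R6 in Figure 1)
r_parallel = 1 / (1 / r_main + 1 / r_extra)

print(f"Rp max: standard {rp_std:.0f} ohm, fast {rp_fast:.0f} ohm")
print(f"parallel combination: {r_parallel:.0f} ohm")
```

With these example values the parallel combination lands just under the fast-mode limit, while the main pullup alone keeps quiescent bus current low during standard-mode operation.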

Another solution is shown in Figure 2, which represents one half of the whole circuit (the second half, for SDA, is omitted for brevity). The circuit uses an analog switch (for instance, TI’s TS5A3159) to disconnect the “fast” part of the bus. While it is disconnected, resistor R5 maintains a high (idle) voltage level on the bus. Note that the capacitance of the switch, which can be significant (20 to 100 pF), should be taken into account.

Figure 2: Alternative adaptable pullup solution that uses an analog switch to disconnect the “fast” part of the bus.

Peter Demchenko studied math at the University of Vilnius and has worked in software development.

Related Content


The post Adaptable pullup appeared first on EDN.

GaN vs SiC: A look at two popular WBG semiconductors in power

Wed, 04/10/2024 - 15:30

Wide bandgap (WBG) semiconductors have taken both power electronics and high-frequency circuits by storm, displacing silicon-based devices in many applications they once dominated, e.g., LDMOS HPAs in base stations, IGBTs in high-voltage DC/DC conversion, etc. Within power electronics specifically, it is no secret that certain applications demand power-dense solutions that operate at high switching frequencies while keeping switching losses in check. From traction inverters, onboard chargers, and high-voltage DC-DC converters in EVs to uninterruptible power supplies (UPSs) and solar power converters in industrial/commercial applications, WBG semiconductors have carved out an extensive niche in many next-generation electronics. 

The SiC substrate has established itself for EV and some industrial applications. However, a bit more recently GaN has surfaced as a strong option for many overlapping applications. Understanding the major differences between these substrates in the context of high power circuits and their respective manufacturing considerations might shed light on the future of these two popular compound semiconductors. 

WBG benefits

WBG materials are inherently able to operate at higher switching frequencies and with higher electric fields than the conventional Si substrate. When a semiconductor is heated, its resistance tends to drop because thermally excited carriers become more abundant at higher temperatures, increasing conduction. Semiconductors with a wider bandgap require more energy (higher temperatures) to excite electrons across the gap from the valence band to the conduction band. This translates directly into greater power-handling capability and higher device efficiency. 

This can be seen in Table 1, where SiC and GaN exhibit much higher breakdown electric field, electron mobility, saturation velocity, and thermal conductivity than Si, all factors that enhance switching frequency and power density. However, high switching frequencies lead to more losses and a lower-efficiency FET; this is where optimizing the power-device figure of merit (FoM), Rds(on) x Qg, i.e., minimizing channel resistance and gate charge for lower conduction and switching losses, becomes critical.  
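A toy loss model shows why this figure of merit matters. The device parameters below are hypothetical round numbers, not taken from any specific datasheet, and the model ignores overlap (switching) loss for simplicity.

```python
# Illustrative conduction-vs-gate-drive loss tradeoff behind the
# Rds(on) x Qg figure of merit. All device parameters are hypothetical.

def fet_losses(i_rms, rds_on, qg, v_drive, f_sw):
    p_cond = i_rms ** 2 * rds_on   # conduction loss, W
    p_gate = qg * v_drive * f_sw   # gate-drive loss, W (grows with f_sw)
    return p_cond, p_gate

# a "low-Rds, slow" device vs a "fast, higher-Rds" device, 10 A, 500 kHz
for name, rds, qg in [("device A", 0.025, 60e-9), ("device B", 0.050, 12e-9)]:
    pc, pg = fet_losses(10, rds, qg, 12, 500e3)
    fom = rds * qg
    print(f"{name}: FoM={fom:.2e} ohm*C, conduction={pc:.2f} W, gate={pg:.3f} W")
```

The lower-FoM device is not automatically the lower-loss one; the best choice depends on how the application splits time between conducting and switching, which is exactly the balance the FoM is meant to capture.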





Property                                        Si             SiC           GaN
Band Gap (eV)                                   1.12           3.2           3.4
Critical Breakdown Electric Field (V/cm) x10^6  0.3            2 to 4        3.3
Electron Mobility (cm^2/Vs)                     1000 to 1400   650 to 950    1500 to 2000
Saturation Velocity (cm/s) x10^7                1              2             2.5
Thermal Conductivity (W/cm K)                   1.5            3.7 to 4.9    1.3 to 2.2

Table 1 Typical properties of Si, SiC, and GaN.

Generally, GaN FETs max out at around 650 V, with power applications around 10 kW, while 750-V and 1200-V SiC FETs are not unusual and applications can range from 1 kW up to the megawatts (Figure 1). SiC’s excellent thermal conductivity allows for similar power ratings in significantly smaller packages. However, GaN devices are able to switch faster (note the significantly higher electron mobility), which in turn can translate to a higher dV/dt, potentially allowing for better converter efficiency. 

Figure 1: Power versus frequency plot for various power devices. Source: Texas Instruments

Manufacturing considerations

SiC, the recent golden child of power electronics, gained massive traction after Tesla’s announcement, back in March of last year, about using SiC exclusively in the Model 3. Since SiC MOSFETs were commercialized by Cree in 2010, demand for SiC has steadily ramped up, with key players taking advantage of available tax credits from the CHIPS Act to grow operations and drive down the cost per wafer. Wolfspeed (formerly Cree), for instance, recently invested a total of $5 billion in a new production facility, the John Palmour (JP) manufacturing center, to develop 200-mm (~8-inch) wafers. 

However, it isn’t that simple: getting a foothold in SiC fabrication requires expensive equipment that is exclusively used for SiC. SiC boules are grown  at  temperatures in excess of 2700℃ at a rate at least 200 times slower than Si, which requires a large amount of energy. GaN on the other hand can largely use the same equipment as Si semiconductor processing where GaN epitaxial wafers can be grown on its respective substrate (often Si, SiC, or sapphire) at a temperature of 1000 to 1200℃—less than half that of SiC. SiC wafers are also nearly 50% thinner than Si wafers (up to 500 μm), leading to a fairly brittle material that is prone to cracking and chipping—another quality that requires specialized processing equipment. 

According to Gregg Lowe, CEO at Wolfspeed, 6-inch SiC wafers cost ~$3,000 in 2018, a cost that has been trimmed down to ~$850 for a 7-inch wafer just 6 years later in 2024. And, as SiC power devices continue to mature, costs per wafer will continue to go down. A major lever for optimizing costs is growing wafer sizes to increase the number of devices per wafer. For GaN-on-Si, this is relatively simple: large-diameter fabs can produce thousands of 8-inch wafers per week with excellent line yields (98%) afforded by CMOS process control. However, similar economies of scale can be applied to SiC wafer production as companies now advance toward 8-inch wafers, where just ten years ago mass production of 150-mm (~6-inch) wafers was really just on the horizon. And, while SiC devices themselves may be more expensive than their Si and GaN counterparts, far fewer power devices are required to achieve the same performance. At the system level, this means fewer gate drivers, magnetics, and other peripheral devices that might otherwise be used in a Si-based design. 
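The die-count argument behind larger wafers is simple geometry; the sketch below ignores edge exclusion and kerf, and the 25 mm² die size is an assumed example, so treat the absolute counts as rough.

```python
# Back-of-envelope gross-die scaling from 6-inch (150 mm) to
# 8-inch (200 mm) wafers. Idealized: no edge exclusion, no kerf.
import math

DIE_MM2 = 25.0                       # assumed example die size

def gross_die(wafer_mm, die_mm2=DIE_MM2):
    wafer_area = math.pi * (wafer_mm / 2) ** 2
    return wafer_area / die_mm2

die_150 = gross_die(150)
die_200 = gross_die(200)
print(f"~{die_150:.0f} vs ~{die_200:.0f} dies, ratio {die_200 / die_150:.2f}x")
```

The area ratio (200/150)² ≈ 1.78 holds regardless of die size, which is why the 6-inch-to-8-inch transition alone buys nearly 80% more gross dies per wafer before yield effects.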

GaN moving beyond 700 V

Because of its excellent high-frequency characteristics, GaN has already established itself as a suitable III-V semiconductor for high-frequency circuits such as MMICs and hybrid microwave circuits, along with other compound semiconductors such as gallium arsenide (GaAs) and indium phosphide (InP). GaN is particularly relevant for high power amplifiers (HPAs) in the transmit signal chain. Many of the GaN foundry services currently available generally address high-frequency applications with GaN-on-SiC; however, more recently, foundries have been shifting their focus toward GaN-on-Si for both RF and power applications. Table 2 highlights some of the GaN process technologies from different companies globally. Note the table does not include all GaN foundries, such as GlobalFoundries or UMC, which will likely be major contenders in GaN-on-Si technologies.

Company name Foundry location Technology name Substrate Wafer Size Gate length Cutoff frequency Power Density Wafer thickness Breakdown voltage
Wolfspeed RF business (now MACOM) US G28V5, G28V4, G40V4, G28V3, G50V3, G50V3, G50V4 SiC 0.15 µm, 0.25 µm, 0.4 µm Up to 40 GHz Up to 8.5 W/mm Up to 100 um > 84 V, >120 V, >150 V
HRL Laboratories US T3 SiC 40 nm Up to 150 GHz > 50 V
NXP US SiC 6 inches
MACOM/ OMMIC US GSiC140 SiC 140 nm Up to 30 GHz 5.5 W/mm > 70 V
Northrop Grumman US GAN20 SiC or Si 4 inches 0.2 µm Up to 200 GHz 100 µm
BAE systems US 0.14 µm GaN, 0.18 µm GaN SiC 4 to 6 inches 0.14 µm, 0.18 µm Up to 155 GHz 55 and 100 um > 80 V
Qorvo US QGaN25, QGaN15, QGaN25HV, QGaN50 SiC 4 inch Up to 50 GHz <28V, <40V, < 50 V, <65 V
WIN Semiconductors Taiwan NP12-01, NP25-20 SiC 4 inches 0.12 µm, 0.25 µm Up to 50 GHz 4 W/mm, 10 W/mm
TSMC Taiwan Si 6 inches
X-FAB Germany and US Si 6 to 8 inches 0.35 µm
Infineon/GaN systems Austria and Malaysia Gen1 (CoolGaN), Gen2 Si Up to 8 inches
UMS Germany GH15, GH25 SiC 4 inches 0.15 µm, 0.25 µm Up to 35 GHz Up to 4.5 W/mm 70 to 100 um > 70 V, > 100 V
GCS China 0.15 µm, 0.25µm, 0.4µm, 0.5µm GaN HEMT Processes Si and SiC 4 to 6 inches 0.15 µm, 0.25µm, 0.4µm, and 0.5µm Up to 23 GHz Up to 13.5 W/mm > 150 V, > 200 V
Innoscience China Si Up to 8 inches 0.5 µm

Table 2: Select GaN foundries and specifications on their technology.  

SiC and GaN serve very distinct parts of the power spectrum; however, can higher-voltage GaN devices be designed to creep up the spectrum and contend with SiC? The GaN pHEMTs that dominate GaN fabrication exhibit breakdown fields (~0.6 to 1.5 MV/cm) well below the material’s intrinsic limit, which generally caps device ratings at around 650 V [1-2]. Methods of reaching the intrinsic limit of 3 MV/cm are being explored in research to improve the breakdown characteristics of GaN devices. 

More and more manufacturers are showcasing their 700-V GaN solutions. There has been talk of a 1200-V GaN FET; Transphorm released a virtual design of its 1200-V GaN-on-sapphire FET in May of last year. Outside of this, much of the talk of GaN moving up the power spectrum has remained in the R&D space. 1200-V vertical GaN (GaN-on-GaN) transistors are also being researched by NexGen Power Systems with its Fin-JFET technology [3], a success that has allowed the company to receive funding from the U.S. Department of Energy (DOE) to develop GaN-based electric drive systems. However, many of these solutions are not GaN-on-Si.

GaN-on-Si may simply have the major advantage of riding on the silicon industry’s established technology maturity; however, using the Si substrate comes with design challenges. There are two major constraints: a large lattice mismatch and an even larger thermal mismatch between the GaN epitaxial layer and the host substrate, which cause tensile and compressive strains that result in dislocations and higher defect densities (Table 3). Other substrates are being researched to overcome this issue; Qromis, for instance, has recently engineered a ceramic poly-aluminum nitride (AlN) layer that is CMOS-fab compatible and CTE-matched to GaN. 


Table 3 Lattice and thermal mismatch between GaN and Si, sapphire, and SiC. Source: [4] 

Access to Gallium

While GaN wafers are generally more convenient to manufacture, they do require a precious metal that is, by nature, in limited supply. The gallium supply came under strain when the 2019 tariffs on Chinese imports were ratcheted up significantly; gallium metal imported from China increased 300% compared to 2018, with the surplus likely stockpiled. China’s restrictions on gallium exports in August of last year further diminished the already small amount imported from China. The restrictions could have signaled a problem, as China produces nearly 98% of the world’s low-purity gallium. 

However, the issue has not truly disrupted gallium-based wafer production (GaAs or GaN), largely due to the stockpiling and shifting to other sources for the rare metal (Table 4). Many countries now have the incentive to scale up the operations that, over a decade ago, were shut down due to China’s overproduction. Still, this may be something to consider if China further restricts its exports in the short term. It may also be important to note that since GaN wafers are produced by growing GaN crystals on top of a variety of substrates, relatively small amounts of gallium are used per device as compared to GaAs pHEMTs that are grown on semi-insulating GaAs wafers. So, while this may have been something to consider given the recent history of restricted gallium supplies, it has not really impacted GaN production and likely won’t in the future.

Table 4: US imports for consumption of unwrought gallium and gallium powders, 2017 to 2021, by country or locality (including Hong Kong, the Republic of Korea, the United Kingdom, and South Africa), with quantities reported in kg, according to USGS [5].

SiC and GaN

As it stands, SiC and GaN dominate distinct parts of the power spectrum and therefore distinct applications, with only some overlap. However, if GaN FETs can successfully increase in drain-source voltage rating without stifling their current massive manufacturing advantage, GaN may very well break out of its present niche, largely consumer electronics (e.g., USB chargers, AC adapters, etc.), into the higher-power applications that SiC power devices currently dominate. SiC manufacturing has not stagnated though, and steady progress is being made in wafer size and yield to drive down the cost of SiC. 

Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for seven years. She holds a Bachelor’s degree in electrical engineering, and has published works in major EE journals.

Related Content


  1. Tian Z, Ji X, Yang D, Liu P. Research Progress in Breakdown Enhancement for GaN-Based High-Electron-Mobility Transistors. Electronics. 2023; 12(21):4435. https://doi.org/10.3390/electronics12214435
  2. Exploring an Approach toward the Intrinsic Limits of GaN Electronics. Sheng Jiang, Yuefei Cai, Peng Feng, Shuoheng Shen, Xuanming Zhao, Peter Fletcher, Volkan Esendag, Kean-Boon Lee, and Tao Wang. ACS Applied Materials & Interfaces 2020 12 (11), 12949-12954. DOI: 10.1021/acsami.9b19697
  3. R. Zhang et al., “Vertical GaN Fin JFET: A Power Device with Short Circuit Robustness at Avalanche Breakdown Voltage,” 2022 IEEE International Reliability Physics Symposium (IRPS), Dallas, TX, USA, 2022, pp. 1-8, doi: 10.1109/IRPS48227.2022.9764569.
  4. Kaminski, Nando, and Oliver Hilt. “SiC and GaN Devices – Wide Bandgap Is Not All the Same.” IET Circuits, Devices & Systems, vol. 8, no. 3, 2014, pp. 227-236. https://doi.org/10.1049/iet-cds.2013.0223. 
  5. “Gallium Statistics and Information.” U.S. Geological Survey, [last modified August 29, 2023], usgs.gov/centers/national-minerals-information-center/gallium-statistics-and-information.  [accessed on  2023-10-26].

The post GaN vs SiC: A look at two popular WBG semiconductors in power appeared first on EDN.

Samsung’s advanced packaging pivot with Nvidia production win

Wed, 04/10/2024 - 14:47

The news about Samsung snapping an advanced packaging order for Nvidia’s AI chips paired with high-bandwidth memory (HBM) chips underscores the strategic importance of next-generation packaging solutions. According to a report published in South Korean media outlet The Elec, Samsung’s advanced packaging team will provide interposer and 2.5D packaging technology for Nvidia’s AI processors.

It’s important to note that the GPU and HBM building blocks in these AI processors are supplied by other companies—most likely Nvidia’s GPUs manufactured on a TSMC process node and HBM chips designed and produced by Samsung’s archrival SK hynix.

What’s more important is how industry watchers relate this development to the insufficient capacity of TSMC’s chip-on-wafer-on-substrate (CoWoS) technology, which stacks chips and packages them onto a substrate. However, this supply shortage connected with the recent earthquake in Taiwan doesn’t hold much weight, and it’s most likely related to supply and demand issues.

Samsung calls its 2.5D packaging technology iCube; it places one or more logic dies such as CPUs and GPUs and several HBM dies on top of a silicon interposer, making multiple dies operate as a single chip in one package. It deploys parallel and horizontal chip placement to boost performance and combat heat buildup.

Figure 1 The iCube technology offers warpage control even with large interposers, and its ultra-low signal loss is paired with high memory density. Source: Samsung

Samsung’s advanced packaging pivot

Trade media has been abuzz with reports about Samsung beefing up its advanced packaging division by hiring more engineers and developing its own interposer technology. The company reportedly procured a large amount of 2.5D packaging equipment from Japanese semiconductor equipment supplier Shinkawa.

Another report published in The Elec claims that Applied Materials and Besi Semiconductor are installing hybrid bonding equipment at Samsung’s Cheonan Campus. Hybrid bonding increases I/O density and shortens wiring lengths compared to existing bonding methods. TSMC offers hybrid bonding in its 3D packaging service called System on Integrated Chip (SoIC). Intel has also implemented hybrid bonding in its 3D packaging technology called Foveros Direct.

Media reports suggest that Samsung has recently ramped up production capacity at its key advanced-packaging site in Cheonan to full utilization ahead of Nvidia’s advanced packaging orders. Industry observers also expect that this advanced packaging deal with Nvidia could pave the way for Samsung to win the supply of HBM chips for pairing with the GPU maker’s AI devices.

SK hynix is currently the major supplier of HBM chips for Nvidia’s AI processors, and Samsung is frantically working to close the gap. In fact, when Samsung established the advanced packaging business team in December 2023, the company’s co-CEO Kye-Hyun Kyung hinted about seeing the results of this investment in the second half of 2024.

Advanced packaging in Samsung’s roadmap

Kyung also pinned his hopes on a competitive advantage with Samsung’s memory chips, chip fabrication, and chip design businesses under one roof. Advanced packaging stands out in this semiconductor technology portfolio due to its intrinsic link to large and powerful AI chips and system-in-package (SiP) devices.

Figure 2 Next-generation packaging technologies are in the limelight due to the massive demand for AI chips. Source: Samsung

Like TSMC and Intel Foundry, Samsung is aggressively investing in advanced packaging technologies like silicon interposers while also steadily expanding its production capacity. Interesting times are ahead for next-generation packaging solutions.

Related Content


The post Samsung’s advanced packaging pivot with Nvidia production win appeared first on EDN.

Driving CMOS totem poles with logic signals, AC coupling, and grounded gates

Tue, 04/09/2024 - 17:24

Despite massive, large-scale integration being ubiquitous in contemporary electronic design, discrete MOSFETs in the classic CMOS totem pole topology are still sometimes indispensable. This makes tips and tricks for driving them efficiently with logic level signals likewise useful, because it can be a “bit” tricky, especially if other than standard logic voltage levels are involved. 

If (happily) they are not, we have Figure 1.

Figure 1 The simplest case of logic signal totem pole drive—direct connection works if V++ <= VL.

Wow the engineering world with your unique design: Design Ideas Submission Guide

In the lucky circumstance that the totem FET source pins are connected to positive and negative rails that match the logic levels, a simple direct connection (a wire) will suffice. All that’s needed for success then is that:

  1. The FET ON/OFF gate-source voltage level lies within the logic signal excursion, and
  2. The logic signal source has sufficient drive to cope with the paralleled FET input capacitances.

Item 2 is particularly important, because it affects the archenemy of totem pole efficiency, cross-conduction. 

It often happens that, during the transition from the state where Q1 conducts and Q2 doesn’t to the opposite state, there is an interval of overlap when both transistors conduct. This is “cross-conduction”, and it wastes power, sometimes a lot. The longer its duration, the greater the waste. The duration of cross-conduction depends on the time required for the logic signal to complete the 0/1 or 1/0 transition, which in turn depends on how long it takes to charge and discharge the respective gate input capacitances. The cross-conduction gremlin is somewhat mitigated by the fact that the capacitance that delays one FET’s turn-off also delays its complementary partner’s turn-on, but speed is still vital.

Now suppose Q1’s V++ source voltage is higher than VL. What now? Figure 2 shows a simple solution: AC coupling.

Figure 2 AC coupling can solve the problem of positive rail voltage mismatch if the control signal runs continuously.

Of course, this simple fix will only work if the logic signal can be relied upon to always have an AC component. That is to say, its duty cycle must never be 0% (always OFF) nor 100% (always ON): 0% < DC < 100%. C1 should have at least an order of magnitude greater capacitance than Q1’s gate capacitance (e.g., 1 nF). While D1 can usually be an ordinary junction diode (e.g., 1N4148), a Schottky type can be a better choice if a few hundred extra mV of gate drive are needed.

AC coupling can also come to the rescue if the totem’s negative rail is below ground, as shown in Figure 3. The same duty-cycle limitation applies, of course.

Figure 3 Ditto for AC coupling and negative rail mismatch, too.

So, what to do if DC doesn’t obey the rules, and we can’t rely on a simple diode to define signal levels? See Figure 4.

Figure 4 “Grounded” gate Q3 maintains C1 charge when logic signal stops.

Small-signal transistor Q3’s configuration as a common-gate, non-inverting high-speed amplifier transfers necessary steady-state current to Q1. Choose R2 to be a low enough resistance to source Q2’s maximum expected source-to-gate leakage current (R2 = 10k will typically be a very conservative choice), then R1 = R2(V++/VL – 1).
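A quick numeric check of that resistor rule, using the article's conservative R2 = 10k and assumed example rails (V++ = 12 V, VL = 3.3 V; neither rail value is from the article):

```python
# Resistor selection rule from the text: pick R2 low enough to handle
# the FET's gate leakage, then R1 = R2 * (V++/VL - 1).
# The 12 V rail and 3.3 V logic level are assumed example values.

def r1_for_divider(r2, v_pp, v_logic):
    return r2 * (v_pp / v_logic - 1)

R2 = 10e3                              # "typically very conservative" 10k
r1 = r1_for_divider(R2, 12.0, 3.3)     # assumed V++ = 12 V, VL = 3.3 V
print(f"R1 ~= {r1 / 1e3:.1f} k-ohm")
```

With these values R1 works out to roughly 26 kΩ; in practice the nearest standard value (e.g., 27k) would do, since the ratio only needs to hold the steady-state gate bias.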

And of course, as illustrated in Figure 5, the same trick works for a negative totem rail.

Figure 5 Grounded gate Q4 shifts logic signal to negative rail referred C2 and Q2.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content


The post Driving CMOS totem poles with logic signals, AC coupling, and grounded gates appeared first on EDN.

Fairly evaluating HDD reliability

Mon, 04/08/2024 - 16:19

A few months back, LA Computer Company (whose website is still up as I write these words, although it may no longer be when you read them), a retailer from whom I’d purchased a number of products over the years, announced that it was closing up shop and “fire sale-ing” its remaining inventory. I subsequently purchased several items from the company, one of which was a “Refurbished 4-Bay Portable Tower Enclosure 12TB (4x3TB)” further described as a “Refurbished 4 bay Thunderbolt 2 enclosure with 4 x 3TB hard drives installed.” The product photo (no longer available, alas) was generic, but the price was compelling, so I took a chance.

What arrived was a cosmetically imperfect but still functional AKiTiO Thunder2 Quad enclosure:

This last stock photo is particularly apropos, as the initial computer I intend to tether the enclosure (and HDDs inside) to is my own “trash can” Mac Pro:

And after the Mac Pro exits Apple’s supported-products stable, I’ll still be able to use the AKiTiO external storage device with newer Macs (along with Thunderbolt-supportive Windows systems) in conjunction with an Apple adapter:

What of those HDDs inside the enclosure? They’re 7,200 RPM Seagate ST3000DM001 3.5” drives (here’s a PDF spec sheet for the entire Barracuda product family generation, code-named “Grenada”), with 6 Gbps SATA interfaces and 64 Mbyte RAM caches onboard. This particular variant integrated three 1 TByte platters, each with two associated read/write heads (one on either side), and also came in fewer-platter and lower-capacity versions.

I was initially surprised when Google search results on the product code revealed a Wikipedia page dedicated to the ST3000DM001, but all became clear when I started reading it. Suffice it to say that going with the “industry’s first 1TB-per-disk hard drive technology” more than a decade ago may have incurred at least some long-term usage risk for Seagate and its customers, in contrast to the product family’s generally positive initial reviews. Specifically, Backblaze, a well-known cloud storage company that uses lots of mass storage devices (both rotating and solid-state) and regularly publishes data on various drives’ reliability, found the ST3000DM001 exhibiting atypically high failure rates. Quoting from the company’s April 2015 report:

Beginning in January 2012, Backblaze deployed 4,829 Seagate 3TB hard drives, model ST3000DM001, into Backblaze Storage Pods. In our experience, 80% of the hard drives we deploy will function at least four years. As of March 31, 2015, just 10% of the Seagate 3TB drives deployed in 2012 are still in service.
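Taking those quoted numbers at face value, the implied failure rate is startling. The sketch below crudely assumes a single January 2012 deployment date and a constant hazard rate, neither of which is strictly true (Backblaze deployed drives over time, and real drive failures follow a bathtub curve), so treat the result as order-of-magnitude only.

```python
# Rough annualized failure rate implied by the Backblaze figures quoted
# above: 10% of drives still in service after ~3.25 years (Jan 2012 to
# Mar 31, 2015). Constant-hazard assumption; illustrative only.

survival = 0.10
years = 3.25
annual_survival = survival ** (1 / years)   # per-year survival fraction
afr = 1 - annual_survival                   # annualized failure rate
print(f"implied annualized failure rate: {afr:.0%}")
```

That works out to an annualized failure rate around 50%, versus the single-digit rates Backblaze typically reports for well-behaved drive models, which is why this particular model earned its own Wikipedia page.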

Root cause? Here’s one working theory, according to German data recovery company Datenrettung (who was specifically discussing the drives’ usage in Apple’s 5th-gen Time Capsule):

The parking ramp of this hard drive consists of two different materials. Sooner or later, the parking ramp will break on this hard drive model when installed in a rather poorly ventilated Time Capsule. The damage to the parking ramp then causes the read/write unit to be destroyed and severely deformed the next time it is parked. When the Time Capsule is turned on again or wakes from hibernation, the data platters of the Seagate hard drive are destroyed because the deformed read/write unit drags across them.

Is Datenrettung right? Maybe. Some of my skepticism comes from the brutally honest “rather poorly ventilated Time Capsule” observation in the company’s comments. Apple has long been all about sleek, svelte, quiet, and otherwise boundary-pushing system design, and this isn’t the first time that a propensity for overheating has been the end result. Take my G4 Cube, for example. Or my first-generation MacBook Air. Or, more germane to this particular conversation, my own 3rd-gen Time Capsule, which also exhibited overheating-induced functional compromise but used an older, lower-capacity drive from an unknown manufacturer.

My skepticism further increased when I came across an excellent dissection at Tom’s Hardware:

By its own admission, Backblaze employed consumer-class drives in a high-volume enterprise-class environment that far exceeded the warranty conditions of the HDDs. Backblaze installed consumer drives into a number of revisions of its own internally developed chassis, many of which utilized a rubber band to “reduce the vibration” of a vertically mounted HDD.

 The first revision of the pods had no fasteners for securing the drive into the chassis. As shown, a heavy HDD is mounted vertically on top of a thin multiplexer PCB. The SATA connectors are bearing the full weight of the drive, and factoring the vibration of a normal HDD into the non-supported equation creates the almost perfect recipe for device failure.

 Backblaze has confirmed it still has all revisions of its chassis installed in its datacenters and that it replaced failed drives into the same chassis the original drive failed in. This could create a scenario where replacement drives are repeatedly installed into defective chassis, thus magnifying the failure ratio.

 Backblaze developed several revisions of the custom chassis due to its admitted vibration problems with the early models, and the company shared the designs with the public. However, Backblaze did not indicate which type of enclosures each drive failed within, leaving speculation that the chassis may be the real root of the problem (among others).

The bolded emphasis in this next paragraph is mine:

The Backblaze environment employed more drives per chassis and featured much heavier workloads (both of which accelerate failure rates tremendously) than the vendors designed the client-class HDDs for. This ultimately helped Backblaze save money on their infrastructure. The Seagate 3 TB models failed at a higher rate than other drives during the Backblaze deployment, but in fairness, the Seagate drives were the only models that did not feature RV (Rotational Vibration) sensors that counteract excessive vibration in heavy usage models — specifically because Seagate did not design the drives for that use case.

So, to save cost, Backblaze went with HDDs that weren’t designed for this particularly demanding application. And when those HDDs failed at higher rates than those that were designed for that particularly demanding application, the company questioned the reliability of the HDDs instead of questioning its own procurement criteria (which, as Tom’s Hardware noted in February 2016, “was borne of necessity; it began during the Thailand floods when HDDs were excessively high priced”).

Supposedly, said Tom’s Hardware, “Backblaze issued numerous disclaimers about the applicability of the findings outside of its own unique (and questionable) use case.” Candidly, I’m not sure where those disclaimers appeared; I sure don’t see them within the report itself. Regardless, “the damage from the information dealt Seagate an almost immeasurable blow in the eyes of many consumers.” And that, I’ll frankly proffer, is profoundly unfair. The courts, who tossed out a class-action lawsuit subsequently filed by one complainant, apparently concurred.

For what it’s worth, all four of my Seagate 3TB HDDs are seemingly working just fine so far. They came pre-configured, formatted HFS+ and in a clever performance-plus-reliability RAID combo:

  • Each pair configured RAID 0 “striped” (for performance), with
  • Both pairs then combined via RAID 1 “mirrored” (for reliability)

Undoing all this upfront configuration (which admittedly did have the advantage of relying solely on the software RAID 0/1 facilities already built into macOS) was a bit tricky, but I accomplished it. I’ve now got an APFS-formatted, RAID 5-configured array via SoftRAID (now owned by Other World Computing, who coincidentally also acquired AKiTiO a few years ago). And although the intermediary Thunderbolt-to-quad-SATA translation hardware would normally make it infeasible to assess HDD health via ongoing S.M.A.R.T. monitoring, SoftRAID neatly manages this bit (or maybe, more accurately, “these bits”?), too.
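
For a sense of the capacity tradeoff involved in that reconfiguration, here’s a quick sketch of the arithmetic (SoftRAID itself handles the actual array management, of course):

```python
# Compare usable capacity of the shipped RAID 0+1 layout (striped pairs,
# then mirrored) against the RAID 5 layout the array was rebuilt into.
DRIVES = 4
DRIVE_TB = 3

# RAID 0+1: mirroring halves the raw capacity.
raid01_usable_tb = DRIVES * DRIVE_TB // 2

# RAID 5: one drive's worth of capacity is consumed by distributed parity.
raid5_usable_tb = (DRIVES - 1) * DRIVE_TB

print(f"RAID 0+1 usable capacity: {raid01_usable_tb} TB")  # 6 TB
print(f"RAID 5 usable capacity:   {raid5_usable_tb} TB")   # 9 TB
```

Both layouts survive any single-drive failure; RAID 5 simply trades some rebuild-time exposure for 3 TB of additional usable space.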

HDDs are, as my own teardown showcases, complicated pieces of hardware-plus-software. That they work at all, let alone reliably for many years, validates my August 2022 observation that they’re “amazing engineering accomplishments”:

  • One or (usually) multiple platters, spinning at speeds up to 15,000 RPM. Each platter mated to one or (usually) two read/write heads, hovering over one or both sides of the rapidly rotating platter only a few nanometers away, and tasked with quickly accessing the desired track- and sector-stored details.
  • Low-as-possible power consumption and high-as-possible ruggedness and reliability, in contrast to other contending design considerations.
  • And ever-more data squeezed onto each platter, thanks to PRML (partial-response maximum-likelihood) sensing and decoding and now-mainstream PMR (perpendicular magnetic recording), next-generation SMR (shingled magnetic recording) and emerging successor HAMR (heat-assisted magnetic recording) storage techniques.

But, in order for them to work reliably for many years, they need to be used as intended. Backblaze seemingly didn’t do so. Was an inherent compromise in Seagate’s design at least partly to blame? Maybe. Reiterating what I said earlier, the ST3000DM001 and its product-family siblings marked Seagate’s initial entry into the 1 TByte-per-platter domain. Ironically, the Hitachi HUS724030ALE641 HDD I tore apart nearly two years ago, which dated from April 2013, was also a 1 TByte/platter design.

But that wasn’t the Hitachi HDD that Backblaze compared the Seagate ST3000DM001 against. It was the much older HDS5C3030ALA630, which not only required 5 platters (and 10 read/write heads) to achieve that same total-capacity metric, but also ran at only 5,940 RPM rotational speeds. When you unwisely try to compare apples and oranges, you undoubtedly encounter variances. And in summary, I guess that’s my guidance to all of you: be wise. Don’t be fooled by sensationalist clickbait, whether related to technology, politics, or anything else, that presents you with a cherry-picked subset of the total applicable dataset in attempting to persuade you to accept a distorted conclusion. Question your own assumptions? Yes. But also question others’ assumptions. As well as their underlying motivations. I welcome thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Fairly evaluating HDD reliability appeared first on EDN.

Authentication IC ties up with IoT SaaS for in-field provisioning

Mon, 04/08/2024 - 12:10

An off-the-shelf secure authentication IC combined with cloud-based security software-as-a-service (SaaS) claims to manage and update embedded security credentials in the field instead of being limited to a static certificate chain implemented during manufacturing.

Microchip’s ECC608 TrustMANAGER authentication ICs are paired with Kudelski IoT’s keySTREAM device-to-cloud solution for securing key assets end-to-end in an IoT ecosystem throughout a product’s lifecycle. The combo enables custom cryptographic credentials to be accurately provisioned at the endpoint without requiring supply chain customization and can be managed by the end user.

Figure 1 Here is how a security silicon component (left) works with IoT cloud software for in-field provisioning. Source: Microchip

ECC608 TrustMANAGER, a secure authentication IC designed to store and protect cryptographic keys and certificates, is managed by the keySTREAM SaaS. Their combination allows end users to set up a self-serve root Certificate Authority (root CA). Next, the associated public key infrastructure (PKI) secured by Kudelski IoT creates and manages a dynamic certificate chain and provisions devices in the field the first time they are connected.

Once claimed in the SaaS account, the IoT devices are automatically activated in the user’s keySTREAM service via in-field provisioning. In other words, security ICs like ECC608 TrustMANAGER come with a pre-provisioned set of keys that will be controlled by keySTREAM at the time the IoT device connects for the first time.

This operation, called in-field provisioning of the PKI, happens after deployment: the fleet of devices containing the ECC608 TrustMANAGER is first claimed and then activated in the user’s keySTREAM account.

An IoT device is “claimed” when the purchased batch of security ICs shows up in the keySTREAM account but not connected yet. It’s “activated” when the purchased batch of security ICs is connected to keySTREAM and the in-field provisioning takes place.
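
The claimed-versus-activated distinction amounts to a simple device-lifecycle state machine. The sketch below is purely illustrative; the class and state names are hypothetical stand-ins, not part of the Microchip or Kudelski APIs:

```python
# Illustrative model of the device lifecycle described above:
# manufactured (pre-provisioned keys on the ECC608) -> claimed -> activated.
class ProvisionableDevice:
    def __init__(self, serial):
        self.serial = serial
        self.state = "manufactured"  # keys burned in at manufacture

    def claim(self):
        # Purchased batch shows up in the keySTREAM account; not yet connected.
        if self.state != "manufactured":
            raise RuntimeError(f"cannot claim from state {self.state!r}")
        self.state = "claimed"

    def activate(self):
        # First connection: in-field provisioning of the PKI takes place.
        if self.state != "claimed":
            raise RuntimeError(f"cannot activate from state {self.state!r}")
        self.state = "activated"

device = ProvisionableDevice("SN-0001")
device.claim()
device.activate()
print(device.state)  # -> activated
```

The point of the ordering is that activation (and hence provisioning) can only follow a successful claim, mirroring the batch-level flow the service enforces.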

Figure 2 Specialized authentication semiconductors tie up with IoT security services for reliable cybersecurity on embedded systems. Source: Microchip

It’s a pivotal moment in the industry’s quest to secure the IoT landscape and make provisioning easier, especially as the volume of connected devices rapidly increases and security standards and regulations steadily tighten.

Moreover, security standards and upcoming regulations increasingly require the upgradability of security infrastructure for IoT devices. This poses a dilemma for traditionally static IoT security implementations, which require physical upgrades like changing out the security ICs in each device to stay in compliance.

The combo of silicon components and key management SaaS automates provisioning and facilitates easy device ownership management without changing hardware. It also streamlines the supply chain processes for distribution partners.

Related Content


The post Authentication IC ties up with IoT SaaS for in-field provisioning appeared first on EDN.

Security IC teams with key-management SaaS

Fri, 04/05/2024 - 16:01

Microchip has added its ECC608 TrustMANAGER with Kudelski IoT’s keySTREAM software as a service (SaaS) to its Trust Platform of devices, services, and tools. The cloud-based key-management SaaS integrates with the ECC608 secure authentication IC to increase the security of IoT network-connected products. It also simplifies setup and lifecycle management.

The ECC608 TrustMANAGER IC stores and protects cryptographic keys and certificates, which are then managed and updated in the field via keySTREAM. This combination allows the setup of a self-serve root Certificate Authority and associated public key infrastructure (PKI). Users can create and manage a dynamic certificate chain and provision devices in the field the first time they are connected.

The ECC608 is the first security IC in the TrustMANAGER series. To get started, download the Trust Platform Design Suite and test the keySTREAM use case under the ECC608.

Prices for the ECC608 TrustMANAGER start at $0.75 each in lots of 10,000 units. An activation fee is applied only after the device has been connected for the first time.

TrustMANAGER product page

Microchip Technology

Kudelski IoT

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Security IC teams with key-management SaaS appeared first on EDN.