EDN Network

Voice of the Engineer

Sidereal time versus solar time

Wed, 04/24/2024 - 15:56

In timekeeping, the definition of “sidereal” is “of or with respect to the distant stars (i.e., the constellations or the fixed stars, not the sun or planets)”, per a quick Google search. With that in mind, Figure 1 offers a quick and admittedly simplistic look at keeping earthly time.

 

Figure 1 A comparison between solar days of 24 hours and sidereal days of 23 hours, 56 minutes, and 4.091 seconds.

As the earth moves around the sun in its yearly orbit, the absolute direction in which a fixed point on the earth’s surface must face to aim directly at the sun changes. Sitting at some particular place on the earth, maybe in our backyards, we see the sun reach its peak in the sky once every twenty-four hours, and we call that interval a solar day. Disregarding leap years, leap seconds, and the like, our wristwatches, tabletop alarm clocks, and other timepieces keep track of our personal time on a solar-day basis.

Measured with respect to absolute space, however, the solar day requires an earth rotation per “day” of slightly more than 360°. Thus, taking the universe at large as a fixed entity, the earth completes an exact 360° rotation in an average time of only 23 hours, 56 minutes, 4.091 seconds.

That shorter time span is a sidereal day, which differs from and is slightly less than a solar day.
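The solar/sidereal arithmetic above is easy to verify with a back-of-the-envelope sketch: in one year the earth completes about 365.2422 solar days but one extra full rotation relative to the fixed stars (roughly 366.2422 sidereal days). The ~365.2422-day year is the only added assumption here.

```python
# Back-of-the-envelope check: in one year the earth completes one more
# rotation relative to the fixed stars than it does relative to the sun.
SOLAR_DAY_S = 24 * 3600                  # 86,400 seconds
SOLAR_DAYS_PER_YEAR = 365.2422           # tropical year (assumed value)
SIDEREAL_DAYS_PER_YEAR = SOLAR_DAYS_PER_YEAR + 1

sidereal_day_s = SOLAR_DAY_S * SOLAR_DAYS_PER_YEAR / SIDEREAL_DAYS_PER_YEAR

h, rem = divmod(sidereal_day_s, 3600)
m, s = divmod(rem, 60)
print(f"Sidereal day ≈ {int(h)} h {int(m)} min {s:.3f} s")  # roughly 23 h 56 min 4.09 s
```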

Have you checked your watch lately?

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

 Related Content


The post Sidereal time versus solar time appeared first on EDN.

Power Tips #128: Designing a high voltage DC-link capacitor active precharge circuit

Tue, 04/23/2024 - 18:41

Introduction

Electric vehicles (EVs) typically feature a large DC link capacitor (CDC LINK) to minimize voltage ripple at the input of the traction inverter. When powering up an EV, the purpose of precharging is to safely charge up CDC LINK before operating the vehicle. Charging CDC LINK up to the battery stack voltage (VBATT) prevents arcing on the contactor terminals, which can lead to catastrophic failures over time.

The conventional precharge method places a power resistor in series with CDC LINK to create a resistor-capacitor (RC) network. However, as the total CDC LINK capacitance and VBATT increase, the required power dissipation grows rapidly (the resistor must absorb ½CV² of energy every precharge cycle). In this article, we’ll present a straightforward approach to designing an efficient, active precharge circuit using a spreadsheet calculator.

Understanding active precharge

While passive precharge employs a power resistor to create an RC circuit that charges the capacitor asymptotically, active precharge can employ a switching converter with a buck topology that uses hysteretic inductor current control to deliver a constant charge current to the capacitor (Figure 1).

Figure 1 The active precharge circuit where a buck converter uses a hysteretic inductor current control to deliver a constant charge current to the capacitor to enable the linear charge of the capacitor voltage (VCAP) up to the same voltage potential as the battery (VBATT). Source: Texas Instruments

This constant current enables linear charging of the capacitor voltage (VCAP) up to the same voltage potential as that of the battery. Figure 2 and Equation 1 characterize this linear behavior.
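To see why the active approach wins, a quick sketch compares the passive RC charge time and resistor energy against the linear constant-current ramp. The component values below are illustrative placeholders, not the reference design’s.

```python
import math

# Illustrative values only (not TI's reference design numbers)
C_DC_LINK = 500e-6   # F, DC-link capacitance
V_BATT = 400.0       # V, battery stack voltage
R_PRE = 50.0         # ohm, passive precharge resistor
I_CHARGE = 2.0       # A, active constant charge current

# Passive RC: v(t) = VBATT*(1 - exp(-t/RC)); time to reach 95% of VBATT
t_passive = -R_PRE * C_DC_LINK * math.log(1 - 0.95)

# Passive RC dissipates 1/2*C*V^2 in the resistor, regardless of R
e_resistor = 0.5 * C_DC_LINK * V_BATT**2      # 40 J for these values

# Active: constant current gives a linear ramp, t = C*V/I
t_active = C_DC_LINK * V_BATT / I_CHARGE      # 0.1 s for these values
```

The passive resistor must absorb the full ½CV² (40 J here) in a fraction of a second, while the active buck transfers most of that energy into the capacitor instead.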

Figure 2 Active precharge linear behavior using a buck topology with hysteretic inductor current control. Source: Texas Instruments

The first step is to determine the required charge current (ICHARGE). ICHARGE is the quotient of the total DC link charge (QDC LINK) and the required precharge time (tCHARGE) shown in Equation 2.

QDC LINK is the product of CDC LINK and VBATT, as shown in Equation 3.
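Equations 2 and 3 as described reduce to two lines of arithmetic. The component values below are illustrative assumptions, not the reference design’s.

```python
# Equations 2 and 3 as described in the text; illustrative values only
C_DC_LINK = 500e-6                 # F, total DC-link capacitance
V_BATT = 400.0                     # V, battery stack voltage
T_CHARGE = 0.1                     # s, required precharge time

Q_DC_LINK = C_DC_LINK * V_BATT     # Eq. 3: total charge = 0.2 C
I_CHARGE = Q_DC_LINK / T_CHARGE    # Eq. 2: required current = 2 A
```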

Calculator overview

This active hysteretic buck circuit has a floating ground potential riding on the switch node, so powering the control system requires an isolated bias supply. The calculator tool will ensure that the power consumption of this control circuitry stays within the sourcing capability of the isolated bias supply, or else the voltage will collapse.

The High-Voltage Solid-State Relay Active Precharge Reference Design from Texas Instruments (TI) introduces an active solution that enhances energy transfer efficiency and reduces practical charge time. TI’s TPSI3052-Q1 is a fully integrated isolated bias supply used in the active precharge reference design, which can source and supply up to 83 mW of power to the isolated secondary. Gate drive current, device quiescent currents, and resistor dividers are the primary contributors to power consumption. Equation 4 characterizes the gate drive power (PGATE DRIVE) as the product of the gate drive current (IGATE DRIVE) and gate drive voltage (VS GATE DRIVER) which is 15 V, in the case of the reference design.

Equation 5 characterizes gate drive current as the product of the metal-oxide semiconductor field-effect transistor (MOSFET) total gate charge (QG) and switching frequency (FSW).

Equation 6 expresses how FSW varies according to VCAP throughout the charging period, creating the upside-down parabola in the FSW versus VCAP curve in Figure 3. As shown in the figure, the gate drive current peaks at the maximum switching frequency (FSW_MAX), which occurs when VCAP reaches half of VBATT. Equation 7 expresses the relationship between FSW_MAX, VBATT, inductance (L), and peak-to-peak inductor current (dI).

Figure 3 Calculator curve showing FSW versus VCAP and FSW LIMIT. Source: Texas Instruments
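The article’s Equations 4 through 7 are not reproduced here, but the behavior described follows from standard hysteretic-buck timing analysis. The sketch below reconstructs the FSW-versus-VCAP parabola and the resulting worst-case gate-drive power under that assumption, with illustrative component values (not the TI reference design’s).

```python
# Hedged reconstruction from standard hysteretic-buck timing:
# ton = L*dI/(VBATT - VCAP), toff = L*dI/VCAP, FSW = 1/(ton + toff).
# Component values are illustrative assumptions.
V_BATT = 400.0     # V
L = 1e-3           # H, inductance
D_I = 1.0          # A, peak-to-peak inductor ripple current (dI)
Q_G = 20e-9        # C, MOSFET total gate charge (assumed)
V_GATE = 15.0      # V, gate drive voltage per the article

def f_sw(v_cap):
    """Switching frequency vs. capacitor voltage (the parabola of Figure 3)."""
    return v_cap * (V_BATT - v_cap) / (L * D_I * V_BATT)

f_sw_max = V_BATT / (4 * L * D_I)   # peaks at VCAP = VBATT/2 (Eq. 7's form)
i_gate_max = Q_G * f_sw_max         # Eq. 5: gate drive current at the peak
p_gate_max = i_gate_max * V_GATE    # Eq. 4: worst-case gate drive power
```

For these numbers, the frequency peaks at 100 kHz and the worst-case gate-drive power is 30 mW, comfortably inside the 83 mW bias-supply budget cited above.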

Using the calculator tool

The calculator prompts you to input various design parameters. The yellow cells are required inputs, while gray cells signify optional inputs. The default values in the gray cells reflect the parameters of the reference design; users can change them as needed. The white cells show the calculated values as outputs. A red triangle in the upper-right corner of a cell indicates an error; hovering over the cell shows pop-up text on how to fix it. The objective is to achieve a successful configuration with no red cells. This can be an iterative process, and users can hover over each of the unit cells to read explanatory information.

Precharge system requirements

The first section of the calculator, shown in Figure 4, computes the required charge current
(ICHARGE REQUIRED) based on the VBATT, tCHARGE, and CDC LINK system parameters.

Figure 4 The required charge current (ICHARGE REQUIRED) based on the VBATT, tCHARGE, and CDC LINK system parameters. Source: Texas Instruments

Inductance and charge current programming

The section of the calculator shown in Figure 5 calculates the actual average charging current (ICHARGE) and FSW_MAX. The average inductor current essentially equates to ICHARGE, which must be equal to or greater than the ICHARGE REQUIRED calculated in the previous section in order to meet the desired tCHARGE.

Be mindful of the relationship between L, dI, and FSW_MAX as expressed in Equation 7. L and dI are each inversely proportional to FSW, so it is important to select values that do not exceed the maximum switching frequency limit (FSW LIMIT). Your inductor selection should accommodate adequate root-mean-square current (IRMS > ICHARGE), saturation current (ISAT > IL PEAK), and voltage ratings, with enough headroom as a buffer for each.

Figure 5 Inductance and charge current programming parameters. Source: Texas Instruments

Current sensing and comparator setpoints

The section of the calculator shown in Figure 6 calculates the bottom resistance (RB), top resistance (RT), and hysteresis resistance (RH) around the hysteresis circuit needed to meet the peak (IL PEAK) and valley (IL VALLEY) inductor current thresholds specified in the previous section. Input the current sense resistance (RSENSE) and RB. These are flexible and can be changed as needed. Make sure that the comparator supply voltage (VS COMPARATOR) is correct.

Figure 6 Section that calculates the bottom resistance (RB), top resistance (RT), and hysteresis resistance (RH) around the hysteresis circuit needed to meet the peak (IL PEAK) and valley (IL VALLEY) inductor current thresholds. Source: Texas Instruments

Bias supply and switching frequency limitations

The section of the calculator shown in Figure 7 calculates the power available for switching the MOSFET (PREMAINING FOR FET DRIVE), by first calculating the total power draw (PTOTAL) associated with the hysteresis circuit resistors (PCOMP. RESISTORS), the gate driver integrated circuit (IC) (PGATE DRIVER IC), and the comparator IC (PCOMPARATOR IC), and subtracting it from the maximum available power of the TPSI3052-Q1 (PMAX_ISOLATED BIAS SUPPLY). Input the MOSFET total gate charge (QG TOTAL), device quiescent currents (IS GATE DRIVER IC and ISUPPLY COMP IC), and gate driver IC supply voltage (VS GATE DRIVER IC). The tool uses these inputs to calculate FSW LIMIT displayed as a red line in Figure 3.

Figure 7 Isolated bias supply and switching frequency limitations parameters. Source: Texas Instruments
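The bookkeeping this section performs can be sketched as follows. Aside from the 83 mW supply capability stated in the article, all power and charge numbers below are illustrative assumptions.

```python
# Power budget per the article's description; assumed values are marked
P_MAX_BIAS = 83e-3          # W, TPSI3052-Q1 sourcing capability (article)
P_COMP_RESISTORS = 5e-3     # W, hysteresis resistor draw (assumed)
P_GATE_DRIVER_IC = 10e-3    # W, gate driver IC quiescent (assumed)
P_COMPARATOR_IC = 3e-3      # W, comparator IC quiescent (assumed)
Q_G_TOTAL = 20e-9           # C, MOSFET total gate charge (assumed)
V_S_GATE_DRIVER = 15.0      # V, gate driver supply

p_remaining = P_MAX_BIAS - (P_COMP_RESISTORS + P_GATE_DRIVER_IC
                            + P_COMPARATOR_IC)

# FSW_LIMIT: the frequency at which gate-drive power alone would use up
# the remaining budget (the red line in Figure 3)
f_sw_limit = p_remaining / (Q_G_TOTAL * V_S_GATE_DRIVER)
```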

The calculator tool makes certain assumptions and does not account for factors such as comparator delays and power losses in the MOSFET and the freewheeling diode. The tool assumes the use of rail-to-rail input and output comparators. Make sure to select a MOSFET with appropriate voltage rating, RDSON, and parasitic capacitance parameters. Ensure that the power losses in both the MOSFET and freewheeling diode are within acceptable limits. Finally, select a comparator with low offset and low hysteresis voltages with respect to the current-sense peak and valley-level voltages. Simulating the circuit with the final calculator values confirms the intended operation.

Achieving the desired charge profile

Adopting an active hysteretic buck circuit significantly improves efficiency and reduces the size of the charging circuitry for the high-voltage DC-link capacitors found in EVs, potentially lowering the size, cost, and thermal burden of a precharge solution.

This article presents the design process to calculate the appropriate component values that help achieve the desired charge profile.

By embracing these techniques and tools, engineers can effectively improve the precharge functionality in EVs, leading to improved power management systems to meet the increasing demands of the automotive industry.

Tilden Chen works as an Applications Engineer for the Texas Instruments Solid State Relays team, where he provides product support and generates technical collateral to help win business opportunities. Tilden joined TI in 2021 after graduating from Iowa State University with a B.S. in Electrical Engineering. Outside of work, Tilden enjoys participating in Brazilian jiu-jitsu and shuffle dancing.

 Hrag Kasparian, who joined Texas Instruments over 10 years ago, currently serves as a power applications engineer, designing custom dc-dc switch-mode power supplies. Previously, he worked on development of battery packs, chargers, and electric vehicle (EV) battery management systems at a startup company in Silicon Valley. Hrag graduated from San Jose State University with a bachelor of science in electrical engineering.




Palladium emulation: Nvidia’s Jensen Huang is a fan

Mon, 04/22/2024 - 17:39

Nvidia CEO Jensen Huang calls Palladium the only appliance more important to him than the refrigerator. At Cadence Design Systems’ annual event in Santa Clara, California, he also acknowledged that Nvidia has the largest installation of Palladium emulation systems.

Earlier, during a fireside chat, Huang said that the Blackwell AI processor would not exist without Palladium. So, what’s Palladium, and why is it making waves for large and powerful chip designs? It’s an emulation tool built around Cadence’s custom processors, and it’s used for pre-silicon hardware debugging.

Figure 1 Palladium Z3 and Protium X3 deliver fast pre-silicon verification and validation of the large, complex chip designs. Source: Cadence

At CadenceLive, held on 17 April 2024, the EDA toolmaker unveiled Palladium Z3 alongside Protium X3. “The supercharged Palladium Z3 and Protium X3 are built to deliver fast pre-silicon verification and validation of the largest and most complex devices,” said Dhiraj Goswami, corporate VP of hardware system verification R&D at Cadence.

Palladium Z3, which offers approximately 1.5 times the performance of its predecessor Palladium Z2, can scale from 16 million gates all the way to 48 billion gates. It also features specialized apps for tasks such as 4-state emulation, mixed-signal emulation, safety emulation, and fine-grained power analysis.

Next, the Protium X3 system, built around AMD Epyc processors paired with AMD Versal Premium VP1902 adaptive system-on-chips (SoCs), provides physical prototyping to accelerate bring-up times for pre-silicon software validation of complex, multi-billion-gate chip designs. It’s also 1.5 times faster than its predecessor, Protium X2.

Palladium and Protium create a virtual version of a chip so that engineers can start writing software while waiting for the physical chip to return from the fab. That’s how chip design emulation accelerates time to market. However, though Palladium Z3 and Protium X3 work in tandem, as explained above, they facilitate different types of workloads.

Figure 2 Palladium Z3 and Protium X3 feature a unified compiler and common virtual and physical interfaces. Source: Cadence

Nvidia, which used Palladium Z3 and Protium X3 predecessors in designing just-announced Blackwell AI processors, is already testing these upgraded systems in some of its AI processor designs. “The next-generation Palladium and Protium systems push the boundaries of capacity and performance to help enable a new era of generative AI computing,” said Scot Schultz, senior director for networking at Nvidia.

Today’s large chip designs serving applications like AI and high-performance computing (HPC) increasingly demand emulation solutions that offer higher performance along with faster and more predictable compile and debug cycles. New emulation systems such as Palladium fill that need across early software development, hardware-software verification, and debug tasks.




USB activation dongles: Counterfeit woes

Mon, 04/22/2024 - 17:27

My Blackmagic Design video cameras are compatible with numerous editing software packages, but the company’s own DaVinci Resolve is a particularly compelling option. For one thing, there’s a close-knit synergy—or at least the natural potential for one—whenever the hardware and software come from the same company. DaVinci Resolve, for example, is able to access the camera’s gyroscope-generated metadata in order to implement post-capture stabilization of video footage: an admittedly inferior alternative to the in-body image stabilization (IBIS) offered by competitors’ cameras, which operates during initial footage capture, but better than nothing.

The baseline DaVinci Resolve suite is also completely free, and robustly featured to boot. That said, Blackmagic’s cameras come bundled (at least from the factory; the keys are often stripped out of used units offered for resale) with license keys for the paid DaVinci Resolve Studio variant, which offers some key enhancements for more advanced videography use cases. Each license key allows for two concurrent-use “seats”, and deactivating (at least temporarily) one installation associated with your key and account in order to activate another is straightforward…but it requires a “live” Internet connection to Blackmagic’s server farm, which may not be feasible if you’re “in the field” at the time.

Alternatively, therefore, the company also sells (through its various retail partners) USB activation dongles. Here’s an example, from Sweetwater’s site:

As you can see, they cost the same as a software license key: $295, which is a bit “salty”, both absolutely and relative to “free”. But last May, shortly after buying my two cameras (only one of which came with a key), I came across a claimed “used” one on eBay for just over $100. For the flexibility of two additional concurrently active “seats”, if for no other reason, I took the plunge.

When the dongle arrived (from a Vietnam-based seller, it turned out, contrary to the upfront claim of U.S. sourcing, which in retrospect should have been my first warning sign), the packaging was admittedly a bit sketchy:

But the dongle itself looked legit, at least at initial quick glance (as-usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes):

And it was recognized by both macOS and Windows systems and (at least at the time; keep reading) correctly activated DaVinci Resolve Studio installs on both OSes:

(keen-eyed readers will notice that I’ve gone ahead and added the Satechi hub to the Mac mini “stack” covered in one of last month’s blog posts, to give me easy front-panel access to various interface and expansion connectors)

More recently, however, I decided to update my various DaVinci Resolve Studio “seats” to the latest version, 18.6. Afterwards, my ability to continue activating them via the dongle abruptly ceased. Jumping on Google, I learned that I wasn’t alone in my dismay (or in identifying its root cause):

Like some previous releases we also have blocked some dongle key ids that are counterfeit. If you purchased second hand or not from an authorized reseller you may have one of those.

Unsurprisingly, the original seller (whose eBay account is still active as I write these words; note, too, that seemingly the same person or people were also selling “used” dongles on Amazon when I bought mine on eBay) hasn’t responded to my outreach. That said, I give eBay plenty of kudos; the rep to whom I reported the issue promptly issued me a future-purchase coupon for the full amount.

But I was still curious to see what else I could find out about this forgery. I’ll probably eventually do a proper teardown, so stay tuned for that, although note that I’m not going to also drop $300 on a legit one for comparison purposes (!!!). For now, I’ll share some screenshots of how the dongle self-identifies to both macOS:


and Windows:

I’m pretty sure the first time I heard the adage “if it sounds too good to be true, it probably is” was as a child, and came from my parents. Decades later, the wisdom still applies. Caveat emptor, folks! Sound off with your thoughts in the comments.

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.




Actuation gear aids aircraft electrification

Fri, 04/19/2024 - 18:09

Microchip has combined gate driver boards with its Hybrid Power Drive (HPD) modules to ease the aviation industry’s transition to electric actuation systems. Configured with silicon or silicon carbide switches, the HPD modules cover a range of 5 kVA to 20 kVA and maintain the same footprint regardless of the power output.

These integrated actuation power bundles provide a plug-and-play motor drive solution for the electrification of such systems as flight controls, braking, and landing gear. Power components are designed to scale based on application requirements. They can be used to create actuation systems for drones, small planes, electric vertical take-off and landing aircraft, More Electric Aircraft (MEA), and all-electric aircraft.

Gate driver boards are driven with external PWM signals. They provide differential outputs for telemetry signals like DC bus current, phase current, and solenoid current by taking feedback from shunts in the HPD modules and DC bus voltage. The isolated boards operate over a temperature range of -55°C to +110°C and require a single 15-VDC input for the control and drive circuit. Additional required voltages can be generated on the card.

The gate driver boards and accompanying HPD modules are available now in production quantities. To learn more about these integrated actuation power components, click here.

Microchip Technology 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.



7-A switch protects USB-C connections

Fri, 04/19/2024 - 18:08

The AOZ1377DI current-limiting switch from Alpha & Omega improves USB-C safety by providing a maximum load current of 7 A at up to 23 V. In addition to programmable current limiting, the switch offers true reverse current blocking, which prevents unwanted reverse current from flowing from VOUT to VIN.

Useful for both sink and source applications, the AOZ1377DI has an input operating voltage range of 3.4 V to 23 V. Both VIN and VOUT terminals are rated at an absolute maximum of 28 V. Integrated back-to-back N-channel MOSFETs provide a typical on-resistance of 19 mΩ and a high safe operating area. In addition, an internal soft-start circuit controls inrush current from high capacitive loads, and an external capacitor can adjust the slew rate.

The protection switch comes in two variants. The AOZ1377DI-01 automatically restarts once fault conditions are cleared. The AOZ1377DI-02 latches the power switch off, and the enable-input pin must be reset to restart the device.

The AOZ1377DI-01 and AOZ1377DI-02 switches cost $1.356 each in lots of 1000 units. They are available in production quantities with a lead time of 16 weeks.

AOZ1377DI datasheet

Alpha & Omega Semiconductor 




32-bit MCUs pack ample memory and resources

Fri, 04/19/2024 - 18:08

Microcontrollers in the GD32F5 series from GigaDevice are equipped with 7.5 Mbytes of on-chip flash and 1 Mbyte of SRAM, both supporting ECC verification. Up to 2 Mbytes of code flash can be configured for zero-wait-state operation, improving processing speed and efficiency. A maximum of 2 Mbytes of memory is also available for read-while-write OTA updating. Additionally, the MCUs accommodate a variety of external memories.

The GD32F5 series of MCUs is based on an Arm Cortex-M33 processor. This 32-bit core operates at up to 200 MHz and delivers a performance rating of 3.31 CoreMark/MHz. An integrated 12-bit successive approximation ADC module can sample analog signals from 16 external channels and 2 internal channels, plus the battery voltage channel. MCUs also feature two DACs and multiple timers. Connectivity resources include UART/USART, I2C, SPI, I2S, SDIO, USB, CAN-FD, and Ethernet.

A set of built-in system security features offers protection for both firmware and device private data, as well as service execution assurance. The general-purpose MCUs operate with a supply voltage of 1.71 V to 3.6 V and have 5-V tolerant I/Os. The lineup comprises 10 variants in a choice of packages, including BGA and LQFP.

GD32F5 series product page 

GigaDevice




AI-powered processors drive business PCs

Fri, 04/19/2024 - 18:07

AMD has rolled out the Ryzen Pro 8040, a series of advanced x86 processors with built-in AI for business laptops and mobile workstations. By leveraging the CPU, GPU, and dedicated on-chip neural processing unit (NPU), the Ryzen AI-enabled processors are able to provide more dedicated AI processing power than previous generations.

According to AMD, these devices are capable of delivering up to 16 dedicated NPU TOPS, or trillions of operations per second, and up to 39 total system TOPS. The premier model in the Ryzen Pro 8040 lineup is the Ryzen 9 Pro 8945H. This powerful processor is outfitted with 8 cores, 16 threads, 24 Mbytes of cache, and a Radeon 780M GPU.

Ryzen Pro technology gives IT managers access to enterprise-grade manageability, as well as chip-to-cloud security. Further, PCs powered by the Ryzen Pro 8040 series will be among the first to incorporate Wi-Fi 7 connectivity.

Along with the Ryzen Pro 8040 series, AMD also announced the Ryzen Pro 8000 series of AI-powered desktop processors. Like their mobile counterparts, these desktop processors are built on a 4-nm FinFET process and feature the same Zen 4 architecture.

Both the Ryzen Pro 8040 mobile series and Ryzen Pro 8000 desktop series are expected to be available in PCs from OEM partners, including HP and Lenovo, starting in Q2 2024.

Ryzen Pro mobile product page 

Ryzen Pro desktop product page 

Advanced Micro Devices 




Snap-in capacitors simplify PCB mounting

Fri, 04/19/2024 - 18:07

Aluminum electrolytic capacitors in the SNA and SNL series from Kyocera AVX come in insulated cases with snap-in terminals that ease installation. The devices provide high reliability and high CV performance for use in a wide range of commercial and industrial applications requiring operation at temperatures of up to 105°C.

Well-suited for use in solar inverters, frequency converters, and power supplies, the SNA series of capacitors offers rated DC voltages of 250 V, 420 V, and 450 V. Capacitance values extend from 82 µF to 1500 µF with ±20% tolerance. The devices exhibit an endurance of 3000 hours at 105°C and come in 24 case sizes spanning 22×25 mm to 35×50 mm.

SNL series capacitors provide rated DC voltages of 160 V, 200 V, 250 V, 350 V, 400 V, 450 V, 500 V, and 550 V. Capacitance values range from 68 µF to 2200 µF with ±20% tolerance. SNL capacitors also exhibit an endurance of 3000 hours at 105°C. They are available in 36 case sizes ranging from 22×20 mm to 35×60 mm.

Lead time for the SNA and SNL series of snap-in aluminum electrolytic capacitors is 20 to 24 weeks. The parts are shipped in bulk packaging.

SNA series product page 

SNL series product page 

Kyocera AVX 




Efficient digitally regulated bipolar voltage rail booster

Thu, 04/18/2024 - 16:41

The challenge of improving analog/digital accuracy by preventing amplifier saturation in systems supplied with only a single logic-level power rail has recently attracted a lot of activity and design creativity. Voltage inverters generating negative rails to keep RRIO amplifier output circuitry “live” at zero have received most of the attention. But frequent and ingenious contributor Christopher Paul points out that precision rail-to-rail analog signals need a similar extension on the positive side, for exactly the same reason. He presents several interesting and innovative circuits to achieve this in his design idea “Parsing PWM (DAC) performance: Part 2—Rail-to-rail outputs”.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The design idea presented here addresses the same topic but offers a variation on the theme. It regulates inverter output through momentary (on the order of tens of microseconds) digital shutdown of the capacitive charge pumps instead of post-pump linear regulation of the pump output. This yields a very low quiescent, no-load current draw (<50 µA) and achieves good current efficiency (~95% at 1 mA load current, 99% at 5 mA).

Figure 1 shows how it works.

Figure 1 Direct charge pump control yields efficient generation and regulation of bipolar beyond-the-rails voltages.

Schmitt trigger oscillator U1a provides a continuous ~100 kHz clock signal to charge pump drivers U1b (positive rail pump) and U1c (negative rail). When enabled, these drivers can supply up to 24 mA of output current via the corresponding capacitor-diode charge pumps and associated filters: C4 + C5 for the positive rail, C7 + C8 for the negative. Peak-to-peak output ripple is ~10 mV.

Output regulation is provided by charge pump control from the temperature-compensated discrete-transistor comparators: Q1:Q2 for U1c on the negative rail and Q3:Q4 for U1b on the positive. Average current draw of each comparator is ~4 µA, which helps achieve those low power consumption figures mentioned earlier. Comparator voltage gain is ~40 dB = 100:1.

The comparators set the beyond-the-rails voltage setpoints, in ratio to the +5 V rail, at:

V− = −5 V × R4/R5 = −250 mV for the values shown (negative rail)
V+ = +5 V × R2/R5 = +250 mV for the values shown (positive rail)

Note that the output of the Q1:Q2 comparator is opposite in logic polarity to that required for correct U1c control, a problem handily fixed by inverter U1d.
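For reference, the setpoint ratios work out as below. The absolute resistor values are assumptions chosen only to reproduce the stated ±250 mV results; only the R4/R5 and R2/R5 ratios matter.

```python
# Setpoint math from the text; resistor values are assumed, ratios matter
V_RAIL = 5.0            # V, logic supply
R5 = 100e3              # ohm (assumed)
R4 = 5e3                # ohm (assumed, R4/R5 = 1/20)
R2 = 5e3                # ohm (assumed, R2/R5 = 1/20)

v_neg_setpoint = -V_RAIL * R4 / R5   # -250 mV beyond the 0 V rail
v_pos_setpoint = +V_RAIL * R2 / R5   # +250 mV beyond the +5 V rail
```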

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.




EPR spectrometer and its AWG and digitizer building blocks

Thu, 04/18/2024 - 08:35

A new electron paramagnetic resonance (EPR) spectrometer aims to open the technology to a larger pool of scientists by making it cheaper, lighter, and easier to use without needing an experienced operator. Its control software—designed to be intuitive with several automated features—makes the set-up straightforward and doesn’t require an expert in EPR spectroscopy to obtain results.

EPR or electron spin resonance (ESR) spectroscopy, while quite similar to nuclear magnetic resonance (NMR) spectroscopy, examines the nature of unpaired electrons instead of nuclei such as protons. It’s commonly used in chemistry, biology, materials science, and physics to study the electronic structure of metal complexes or organic radicals.

Figure 1 The new EPR spectrometer is modular in design and is smaller, lighter, and cheaper than traditional solutions. Source: Spectrum Instrumentation

However, EPR spectrometers are commonly built around massive electromagnets, so they can weigh over a ton and are usually placed in basements. Bridge12, a startup located near Boston, Massachusetts, claims to have produced an EPR spectrometer that is about half the cost of current instruments and a tenth of the size and weight so that it can be placed on any floor of a building (Figure 1).

The new EPR spectrometer is built around two basic building blocks: an arbitrary waveform generator (AWG) to generate the pulses and a digitizer to capture the returning signal. These building blocks are implemented as cards supplied by German firm Spectrum Instrumentation, making the design modular and flexible for end users.

First, an AWG generates 10- to 100-ns-long pulses in the 200 to 500 MHz range, as required by the experiment. These pulses are first up-converted to the 10-GHz X band using an RF I/Q mixer and then up-converted again to the Q band. The microwave pulses are then fed into a 100-W solid-state amplifier before being sent to the EPR resonator.

Next, the reflected signal is down-converted to an IF in the 200 to 500 MHz range and sent to the digitizer. Unlike traditional EPR spectroscopy, in which the signal is down-converted to DC, this approach drastically reduces noise and artifacts.

Figure 2 shows an example of AWG-generated pulses used in an EPR experiment: WURST (Wideband, Uniform Rate, Smooth Truncation) pulses, which are broadband microwave pulses with an excitation bandwidth and profile that far exceed those of a simple rectangular pulse. These pulses facilitate broadband excitation in EPR spectroscopy while relying heavily on the performance of the AWG.

Figure 2 The AWG-generated WURST pulses are displayed in an EPR spectroscopy experiment. Source: Spectrum Instrumentation

The modular design of this EPR spectrometer, built around AWG and digitizer cards integrated into Netboxes, connects to a PC through Ethernet. A compact PC can thus replace a system big enough to accept plug-in cards, which would inevitably lead to a bulky rack solution. As a result, it is much easier to service the EPR spectrometer and replace components in the field.

Another noteworthy design feature of this new EPR spectrometer is its much smaller superconducting magnet, which produces the required magnetic field strength. EPR spectrometers usually use huge, heavy electromagnets to generate intense magnetic fields on the order of 1 to 1.5 tesla.

Related Content


The post EPR spectrometer and its AWG and digitizer building blocks appeared first on EDN.

Practical tips for automotive ethernet physical layer debug

Wed, 04/17/2024 - 17:36

Automotive ethernet is increasingly utilized for in-vehicle electronics to transmit high speed serial data between interconnected devices and components. Due to the relatively fast data rates, and the complexity and variation of the networked devices, signal integrity issues can often arise. This article outlines several real-world challenges and provides insight into how to identify and debug automotive ethernet physical layer signal integrity problems using an oscilloscope. The following is a case study of automotive ethernet debugging performed at Inspectron, a company that designs and manufactures borescopes, embedded Linux systems, and camera inspection tools.

Automotive ethernet hardware debug configuration

The automotive ethernet signal path is bi-directional (full duplex on a single twisted pair), so hardware transceivers must be able to discern incoming data by subtracting their own outbound data contributions from the composite signal. If one were to directly probe an automotive ethernet data line, a jumbled superposition resembling a bus collision would be acquired. To make sense of the individual signals being sent, bi-directional couplers can be used.

Figure 1 shows the hardware configuration used to debug an automotive ethernet setup. The two automotive ethernet devices under test (DUTs) are a ROCAM mini-HD display and a Raspberry Pi (with a 100Base-TX to 100Base-T1 bridge). The Raspberry Pi is used to simulate an ethernet camera. The twisted pairs from the DUTs are attached to adapter boards which break out the single 100 Ω differential pair into two 50 Ω single-ended SMA connectors. Each DUT has its pair of SMA cables connected to a calibrated active breakout fixture (Teledyne LeCroy TF-AUTO-ENET). The breakout fixture maintains an uninterrupted communication link, while two calibrated and software-enhanced hardware directional couplers tap off the traffic from each direction into separate streams which isolate the automotive ethernet traffic from each direction for analysis on the oscilloscope.

Figure 1 (a) The hardware configuration used to debug an automotive ethernet setup involves two DUTs, passive fixtures to adapt from automotive ethernet to SMA, and a calibrated active breakout fixture with bi-directional couplers to isolate traffic from each direction. The oscilloscope will analyze both upstream and downstream traffic. (b) The block diagram of the test setup. Source: Teledyne LeCroy

Identifying where signal loss occurs

Intermittent signal loss occurred between the ROCAM mini-HD display and the Raspberry Pi. One method to capture an intermittent loss of data transmission is a hardware Dropout trigger. In Figure 2, a Dropout trigger is armed to trigger the oscilloscope if no signal edge crosses the threshold voltage within 200 nanoseconds (ns). The two Zoom traces scaled at 200 ns/div show the trigger point one division to the right of the previous automotive ethernet edge. A loss of signal occurred for approximately 800 ns before data transmission recommenced. Note that since automotive ethernet 100Base-T1 is a three-intensity level (+1, 0, -1) PAM3 signal, the eye pattern with over 192,000 bits in the eye still shows good signal integrity (data dropout blends in with “0” symbols), but the Zoom traces at the Dropout trigger location reveal the location of signal loss.

Figure 2 The eye pattern shows a clean automotive ethernet 100Base-T1 signal, while the Dropout trigger identifies and locates a signal loss event. Source: Teledyne LeCroy

Amplitude modulation of serial data

Anomalous amplitude modulation or baseline wander issues can often be caught by triggering at a high threshold, slightly above the logic +1 voltage level (for the non-inverting input of the split differential signal). Intermittent anomalous amplitude modulation occurred on the automotive ethernet signal, and an instance was captured with the edge trigger set slightly above the highest expected voltage level, as shown in Figure 3. The red histogram with three peaks, taken from a vertical slice through the eye diagram at the center of the symbol slot, shows an asymmetry in the statistical distribution of the lowest and highest of the three voltage levels; this is due to the intermittent anomalous amplitude modulation of the signal. There is also an asymmetry of the eye width between the upper and lower eyes, identified in the eye measurement parameter table below the waveforms.

Figure 3 The three red histograms in the lower right grid show an asymmetry in the eye pattern due to intermittent anomalous amplitude modulation. The edge trigger, raised to a high voltage threshold, catches an instance of the anomalous amplitude modulation. Source: Teledyne LeCroy

Intermittent amplitude reduction of signal

During the debug process, a malfunction was detected in which the amplitude of the signal would drop to 50% of the expected level. This problem was initially detected as a collapse of the eye in the eye pattern. To find the location in time where the problem occurred, a Dropout trigger was set with a threshold level at approximately 80% of the amplitude of the automotive ethernet signal. When the signal dropped to half amplitude, the Dropout trigger caught the event, showing the amplitude reduction at the point of occurrence. Zoom traces superimposed over the original waveform captures show poor signal integrity in the time domain, which is also indicated by the collapsed eye.

Figure 4 The location of occurrence of the automotive ethernet amplitude reduction is caught using the Dropout trigger with a threshold set to approximately 80% of the waveform amplitude. The poor signal integrity of the reduced amplitude signal is shown in both the eye pattern and in the time synchronized Zoom traces. Source: Teledyne LeCroy

Addressing real-world automotive ethernet scenarios

Physical layer problems in automotive ethernet designs can be elusive and difficult to detect. This article outlined several real-world scenarios which occurred during the implementation of an automotive ethernet network with specific techniques used to identify each type of problem and where in time it occurred. This was accomplished using a combination of triggering, Zooms, eye patterns, statistical distributions, and measurement parameters.

Dave Van Kainen is a Founding Partner of Superior Measurement Solutions and holds a BSEE from Lawrence Tech.

Mike Hertz is a Field Applications Engineer at Teledyne LeCroy and holds a BSEE from Iowa State and an MSEE from Univ. Arizona.

Patrick Caputo is Chief Product Architect at Inspectron, Inc., and holds dual BSs in EE and Physics and an MS in ECE from Georgia Tech.

Related Content


The post Practical tips for automotive ethernet physical layer debug appeared first on EDN.

Challenges in designing automotive radar systems

Wed, 04/17/2024 - 04:39

Radar is cropping up everywhere in new car designs: sensing around the car to detect hazards and feed into decision making for braking, steering, and parking and in the cabin for driver and occupancy monitoring systems. Effective under all weather conditions, now high-definition radar can front-end AI-based object detection, complementing other sensor channels to further enhance accuracy and safety.

There’s plenty of potential for builders of high value embedded radar systems. However, competitively exploiting that potential can be challenging. Here we explore some of those challenges.

Full system challenges

Automotive OEMs aren’t simply adding more electronic features to new vehicles; they are driving unified system architectures for their product lines to manage cost, simplify software development and maintenance, and enhance safety and security.

So, more compute and intelligence are moving into consolidated zonal controllers, communicating on one side between relatively small sensor units and processors within a small zone of the car, and on the other side, between zonal controllers and a central controller, managing overall decision making.

Suppliers aiming at automotive radar system markets must track their solution architectures with these changes, providing scalability between relatively simple processing for edge functions and more extensive capability for zonal or central controllers, while being flexible to adapt to different OEM partitioning choices.

One important implication is that however a solution might be partitioned, it must allow for significant amounts of data to be exchanged between edge, zonal, and central compute, which raises the importance of data compression during transmission to manage latency and power.

In addition to performance, power and cost constraints, automotive systems must also factor in longevity and reliability. The full lifetime of a car may be 10, 20 or more years during which time software and AI model upgrades may be required to fix detected problems or to meet changing regulatory requirements.

Those constraints dictate a careful balance in radar system design between the performance/low power of hardware and the flexibility of software to adapt to changes. Nothing new there, but radar pipelines present some unique demands when compared to vision pipelines.

Pipeline challenges

A full radar system flow is shown in the figure below, from transmit and receive antennae all the way to target tracking and classification. Antennae configurations may run from 4×4 (Tx/Rx) for low-end detection up to 48×64 for high-definition radars. In the system pipeline following the radar front-end are FFTs for computing first range information and then Doppler information. Next is a digital beamforming stage to manage digital streams from multiple radar antennae.

A complete radar system pipeline spans from transmit/receive antennae all the way to target tracking and classification. Source: Ceva

Up to this point, data is still somewhat a “raw signal”. A constant false alarm rate (CFAR) stage is the first step in separating real targets from noise. Angle of Arrival (AoA) calculations complete positioning a target in 3D space, with Doppler velocity calculation adding a 4th dimension. The pipeline rounds out with target tracking, using for example an Extended Kalman Filter (EKF), and object classification typically using an OEM-defined AI model.

OK, that’s a lot of steps, but what makes these complex? First, the radar system must support significant parallelism in the front-end to handle large antennae arrays pushing multiple image streams simultaneously through the pipeline while delivering throughput of between 25 and 50 frames per second.

Data volumes aren’t just governed by the number of antennae. These feed multiple FFTs, each of which can be quite large, up to 1K bins. Those conversions stream data ultimately to a point cloud, and the point cloud itself can easily run to half a megabyte.

Clever memory management is critical to maximizing throughput. Take the range and Doppler FFT stages. Data written to memory from the range FFT is 1-dimensional, written row-wise. The Doppler FFT needs to access this data column-wise; without special support, the address jumps implied by column accesses require many burst-reads per column, dramatically dropping feasible frame rates.
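The corner-turn fix for this row-write/column-read conflict can be sketched in a few lines. A NumPy illustration follows, with hypothetical cube dimensions (128 chirps × 1024 range bins) that are not taken from any particular radar:

```python
import numpy as np

# Hypothetical data-cube slice: the range FFT writes one row per chirp,
# so rows are chirps (slow time) and columns are range bins.
n_chirps, n_range = 128, 1024
rng = np.random.default_rng(1)
range_fft_out = (rng.standard_normal((n_chirps, n_range))
                 + 1j * rng.standard_normal((n_chirps, n_range)))

# The Doppler FFT needs each range bin's chirp sequence, i.e., a column.
# Reading columns of a row-major array costs one memory burst per element;
# a "corner turn" (transpose into a contiguous buffer) turns each Doppler
# input into a sequential read instead.
corner_turned = np.ascontiguousarray(range_fft_out.T)   # (n_range, n_chirps)

# The Doppler FFT now runs over contiguous rows, one row per range bin.
doppler_map = np.fft.fft(corner_turned, axis=1)
```

Hardware pipelines achieve the same effect with tiled transposes sized to the memory burst length, so both the write and read sides stay burst-friendly.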

CFAR is another challenge. There are multiple algorithms for CFAR, some easier to implement than others. The state-of-the-art option today is OS-CFAR—or ordered statistics CFAR—which is especially strong when there are multiple targets (common for auto radar applications). Unfortunately, OS-CFAR is also the most difficult algorithm to implement, requiring statistics analysis in addition to linear analysis. Nevertheless, a truly competitive radar system today should be using OS-CFAR.
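To illustrate why ordered statistics help with closely spaced targets, here is a minimal 1-D OS-CFAR sketch in Python; the window sizes, ordered-statistic index, and scale factor are illustrative choices, not values from the article:

```python
import numpy as np

def os_cfar(x, guard=2, train=8, k=None, scale=4.0):
    """1-D ordered-statistics CFAR: threshold each cell against the
    k-th smallest value in its training window (guard cells excluded)."""
    n = len(x)
    if k is None:
        k = int(0.75 * 2 * train)   # common choice: ~3/4 of the window
    detections = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        window = np.r_[x[i - guard - train:i - guard],
                       x[i + guard + 1:i + guard + train + 1]]
        noise_est = np.sort(window)[k]   # order statistic is robust to a
                                         # neighboring target in the window
        detections[i] = x[i] > scale * noise_est
    return detections

# Noise floor with two closely spaced targets: OS-CFAR finds both, where
# cell-averaging CFAR risks masking one with the other's energy.
rng = np.random.default_rng(0)
power = rng.exponential(1.0, 200)
power[100] += 50.0
power[104] += 40.0
hits = np.flatnonzero(os_cfar(power))
```

Because the noise estimate is an order statistic rather than a mean, a strong neighboring target inside the training window barely moves the threshold, which is what lets both targets survive detection.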

In the tracking stage, both location and velocity are important. Each of these is 3-dimensional (X,Y,Z for location and Vx,Vy,Vz for velocity). Some EKF algorithms drop a dimension, typically elevation, to simplify the problem; this is known as 4D EKF. In contrast, a high-quality algorithm will use all 6 dimensions (6D EKF). A major consideration for any EKF algorithm is how many targets it can track.
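The tracking step can be pictured with the standard predict/update skeleton over a 6D state. The sketch below is a plain linear Kalman filter with a position-only measurement and made-up noise values; a real EKF would linearize a nonlinear range/azimuth/elevation/Doppler measurement around the current estimate, but the structure is the same:

```python
import numpy as np

dt = 0.04                                  # frame interval at 25 fps
# 6D state: [x, y, z, vx, vy, vz] (position in m, velocity in m/s)
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)                 # constant-velocity transition

x = np.array([10.0, 2.0, 0.5, -5.0, 0.0, 0.0])   # inbound target, 5 m/s
P = np.eye(6)                              # state covariance (illustrative)
Q = 0.01 * np.eye(6)                       # process noise (illustrative)

# Predict: propagate state and covariance through the motion model.
x = F @ x
P = F @ P @ F.T + Q

# Update: here H simply picks off position; an EKF would replace it with
# the Jacobian of the nonlinear measurement function.
H = np.hstack([np.eye(3), np.zeros((3, 3))])
R = 0.1 * np.eye(3)                        # measurement noise (illustrative)
z = np.array([9.81, 2.02, 0.49])           # simulated position measurement

innovation = z - H @ x
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
x = x + K @ innovation
P = (np.eye(6) - K @ H) @ P
```

Tracking thousands of targets means running this predict/update cycle, plus data association, per target per frame, which is what drives the compute budget at the high end.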

While aircraft may only need to track a few targets, high-end automotive radars are now able to track thousands of targets. That’s worth remembering when considering architectures for high-end and (somewhat scaled down) mid-range radar systems.

Any challenges in the classification stage are AI-model centric and thus outside the scope of this radar system discussion. These AI models will typically run on a dedicated NPU.

Implementation challenges

An obvious question is what kind of platform will best serve all these radar system needs? It must be very strong at signal processing and must meet throughput goals (25-50 fps) at low power, while also being software programmable for adaptability over a long lifetime. That argues for a DSP.

However, it also must handle many simultaneous input streams, arguing for a high degree of parallelism. Some DSP architectures support parallel cores, but the number of cores needed may be overkill for many of the signal processing functions (FFTs for example), where hardware accelerators may be more appropriate.

At the same time, the solution must be scalable across zonal car architectures: a low-end system for edge applications, feeding a higher end system in zonal or central applications. It should provide a common product architecture for each application and common software stack, while being simply scalable to fit each level from the edge to the central controller.

Tomer Yablonka is director of cellular technology at Ceva’s mobile broadband business unit.

Related Content


The post Challenges in designing automotive radar systems appeared first on EDN.

Measuring pulsed RF signals with an oscilloscope

Tue, 04/16/2024 - 16:10

Historically, RF signals were measured using spectrum analyzers; that was before oscilloscopes offered sufficient bandwidth for those measurements. With oscilloscope bandwidths now exceeding 100 GHz, RF measurements are no longer the exclusive domain of the spectrum analyzer. This is especially true for pulsed RF measurements, where the time domain measurements of an oscilloscope have several advantages. This article will focus on the time measurements of pulsed RF signals.

Many devices use pulsed RF signals. The obvious ones are echo-ranging systems like radar. Additionally, nuclear magnetic resonance (NMR) spectrometers and magnetic resonance imaging (MRI) systems use pulsed RF. Even automotive keyless entry systems use pulse-modulated RF signals. 

Pulsed RF signals

Pulsed RF signals are created by gating a continuous wave (CW) RF source, as shown in Figure 1.

Figure 1 Pulsed RF signals can be generated by gating a CW RF source using a switch controlled by a gate signal pulse train. Source: Arthur Pini

The carrier source is a continuous wave oscillator. It is gated by a switch driven by the gating signal pulse train. This is a multiplication operation with the carrier multiplied by the gate signal. When the gating signal is high, the switch outputs RF; when low, the output is zero. The 350 MHz carrier is shown in the upper left grid. A horizontally expanded zoom view (left center grid) shows the details of the carrier waveform. The gating signal (lower left grid) is a logic signal with a zero state at 0 volts and a 1 state of 1 volt. The gate output (upper right grid) shows the RF bursts at periodic intervals related to the gate signal state. A zoom view of one burst (center right grid) provides greater detail of a single burst. Another view with a greater zoom magnification (lower right grid) shows the turn-on details of the pulsed RF signal.
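This gating-as-multiplication model is easy to reproduce numerically. A NumPy sketch follows, using the article's 350-MHz carrier, 50-kHz gate, and 3.52-µs gate width; the 5-GHz sample rate is an assumption:

```python
import numpy as np

fs = 5e9                      # assumed oscilloscope-like sample rate
fc = 350e6                    # carrier frequency from the article
prf = 50e3                    # gate pulse repetition frequency
width = 3.52e-6               # gate positive width

n = np.arange(4 * int(fs / prf))             # four gate periods of samples
t = n / fs
carrier = np.sin(2 * np.pi * fc * t)

# Logic-level gate: high for `width` at the start of each PRF period.
gate = ((n % int(fs / prf)) < int(width * fs)).astype(float)

# Gating is multiplication: RF appears only while the gate is high.
pulsed_rf = carrier * gate
```

When the gate is 0 the product is exactly zero, and when it is 1 the carrier passes through unchanged, mirroring the switch behavior in Figure 1.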

Measurement parameters, just under the display grids, read the frequency (P1) and amplitude (P2) of the carrier as well as the frequency (P3) and pulse width (P4) of the gating signal.

The frequency spectra of pulsed RF signals

Looking at the carrier, gate signal, and gated carrier in the frequency domain provides insight into the modulation process. Oscilloscopes view the frequency domain using the fast Fourier transform (FFT) providing tools similar to a traditional spectrum analyzer. The signals and the FFTs of the three signals are shown in Figure 2.

Figure 2 The three component signals carrier, gating pulse train, and pulse RF output and their FFTs provide insights into the modulation process. Source: Arthur Pini

The carrier (upper left grid), being a sine wave, has an FFT (upper right grid) consisting of a single spectral line at the frequency of 350 MHz. The gate signal (center left grid) is a train of rectangular pulses. The FFT of the gate signal takes the form of a sin(x)/x spectrum. The maximum amplitude occurs at zero Hz, making this a baseband spectrum anchored at 0 Hz or DC. The peaks in the spectrum are spaced at the pulse repetition frequency (PRF) of 50 kHz, measured using the relative cursors on the FFT of the gate signal. The cursor readout, under the Timebase annotation box, reads the absolute cursor positions and the frequency difference of 50 kHz. The sin(x)/x response has a periodic lobe pattern where the nulls of the lobes occur at intervals equal to the reciprocal of the gate pulse positive width. Since the positive width of the gate pulse is 3.52 µs, the nulls occur approximately every 284 kHz. These nulls are a little harder to measure with cursors because the null spacing is not an integral multiple of the 50-kHz peak spacing, so nearby spectral peaks tend to obscure the nulls.

The gated RF carrier results from multiplying the carrier by the gate signal.  The state of the gate signal determines the output of the gated RF carrier signal. When the gate signal is one, the carrier appears at the gated carrier output. Multiplication in the time domain has a corresponding mathematical operation of convolution in the frequency domain. The result of the convolution operation on the spectra of the carrier and gate signal is shown in the FFT of the gated carrier. The baseband sin(x)/x function of the gate signal is mirrored above and below the carrier spectrum as the upper and lower sidebands of the carrier frequency.
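The spectral structure described above, lines at PRF spacing under a sin(x)/x envelope that nulls at the reciprocal of the pulse width, can be checked numerically. A sketch with the same 50-kHz PRF and 3.52-µs width, computed over 100 gate periods so the lines are sharp:

```python
import numpy as np

fs = 100e6                                   # sample rate (illustrative)
prf = 50e3
width = 3.52e-6
samples_per_period = int(fs / prf)           # 2000 samples per gate period
n = np.arange(100 * samples_per_period)      # exactly 100 gate periods

gate = ((n % samples_per_period) < int(width * fs)).astype(float)

spec = np.abs(np.fft.rfft(gate)) / len(gate)   # |Fourier coefficients|
# Bin spacing is fs/N = 500 Hz, so harmonic k of the 50-kHz PRF
# lands exactly on bin k*100.
line = lambda k: spec[k * 100]

# The DC term equals the duty cycle (3.52 us * 50 kHz = 0.176). The 6th
# harmonic (300 kHz) sits near the first sin(x)/x null at 1/width ~ 284 kHz
# and is strongly attenuated relative to the first harmonic.
```

This reproduces the cursor measurement in the article: strong lines every 50 kHz, with the sin(x)/x envelope suppressing the lines that fall near multiples of 1/width.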

Pulsed RF timing measurements

The timing measurement of pulsed RF signals begins with the pulse bursts. In most of the applications cited, the PRF, pulse width, and duty cycle are of interest. The characteristics of the burst envelope, including the rise time, overshoot, and flatness, may also be desired. These measurements can’t be made directly on the pulse RF signal. To make measurements on the gated carrier, the signal has to be demodulated to extract the modulation envelope and remove the carrier. The demodulation process varies from oscilloscope to oscilloscope, depending on the math processes available. This example used a Teledyne LeCroy oscilloscope which offers three ways to demodulate the gated carrier signal. The first method is to create a peak detector using the absolute value math function and a low pass filter. The second method is to use the optional demodulation function. This math function provides demodulation of AM, FM, and PM signals. The final technique is to use the oscilloscope’s ability to embed a MATLAB script into the math processing chain and use one of the MATLAB demodulation tools. This is also an optional feature in the oscilloscope.

Comparing demodulation processes

Comparing the results of these three methods is interesting. The first method can be implemented on most oscilloscopes that offer an absolute value math function and low pass filtering; it was used in this example, and the results are shown in Figure 3.

Figure 3 Comparison of the amplitude demodulated signal of the gated carrier and the gated carrier, with measurements of the demodulated envelope from the peak detector based on the absolute math function. Source: Arthur Pini

Using the dual math function, the absolute value of the gated carrier was calculated. The second math function is a low pass filter. The low pass filter cutoff frequency has to be less than the 350 MHz carrier frequency, and the filter roll-off has to be sharp enough to suppress the carrier. In this example, a 6th-order Butterworth low pass filter with a cutoff frequency of 125 MHz and a transition width of 100 kHz was used. This oscilloscope offers low pass filters both as enhanced resolution (ERES) processing, used for noise suppression, and as a digital filter option; either low pass filter source can be used. The goal of this operation is to have the demodulated envelope track the peaks of the gated carrier.
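A software sketch of this peak-detector chain (absolute value followed by a sharp low-pass below the carrier) in Python/SciPy; the pulsed-RF test signal and sample rate are synthetic stand-ins for the acquired waveform:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 5e9                       # assumed sample rate
fc, prf, width = 350e6, 50e3, 3.52e-6

n = np.arange(2 * int(fs / prf))              # two gate periods
t = n / fs
gate = ((n % int(fs / prf)) < int(width * fs)).astype(float)
pulsed_rf = gate * np.sin(2 * np.pi * fc * t)

# Peak detector: rectify, then a 6th-order Butterworth low-pass with a
# 125-MHz cutoff (as in the text) to strip the carrier and its harmonics.
rectified = np.abs(pulsed_rf)
sos = butter(6, 125e6, fs=fs, output='sos')
envelope = sosfiltfilt(sos, rectified)

# Inside a burst the envelope settles near the mean of |sin|, 2/pi ~ 0.64;
# between bursts it returns toward zero, tracking the modulation envelope.
```

Because rectification shifts the carrier ripple up to twice the carrier frequency, any cutoff comfortably below the carrier removes it while preserving the burst envelope, which is the fit the article tunes for in the zoom traces.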

The detected envelope of the RF pulse is shown as trace F3 in the lower left grid. Horizontal zoom displays in the upper and lower right grids show the match of the demodulated envelope (blue trace) to the RF burst at two different horizontal scales. The overlaid traces in the lower right grid provide the best view for evaluating the performance of the demodulator. Adjust the low pass filter cutoff to obtain the best fit.

Measurement parameters P6 through P10 read the PRF, width, duty cycle, positive overshoot, and rise time of the demodulated envelope.

The same measurement made using the oscilloscope’s demodulation function is shown in Figure 4.

Figure 4 Measurement of the pulsed RF modulation envelope using the oscilloscope’s optional demodulation math function and comparison with the pulsed RF signal. Source: Arthur Pini

The demodulation function was set up for AM demodulation. The carrier frequency and measurement bandwidth have to be entered. The result shown here is for a bandwidth of 100 MHz. 

The same measurements are performed with very good agreement with the peak detector method. Vertical scales differ due to the different processing operations. Since the parameters being measured use relative amplitude measurements, no effort has been made to rescale the vertical data to a common scale. 

The third method mentioned was the use of a MATLAB script in the oscilloscope’s signal path to demodulate the RF pulse signal. This is shown in Figure 5.

Figure 5 Example of using a MATLAB script to demodulate the Pulsed RF signal.  The MATLAB script used is shown in the popup. Source: Arthur Pini

The MATLAB demod function, available in the MATLAB Signal Processing Toolbox, is used to demodulate the pulsed RF. It is a very simple two-line script requiring entry of the carrier frequency and oscilloscope sampling rate. The results are consistent with the other methods; the primary difference occurs in the rise time measurement and is due to the different filters used in each process. Comparing the rise time measurements of the demodulated envelope to the rise time of the gate signal, the maximum variation is about 1% from the gate signal's rise time. The variation among the three demodulation methods is about 0.2 ns of the nominal 22.67-ns rise time. All three demodulation methods produce nearly identical results when reading the timing parameters of a pulsed RF signal.

Characterizing pulsed RF signals

The oscilloscope is well matched to the task of characterizing pulsed RF signals. It can render the signals in either the time or frequency domain permitting analysis in both domains. The ability to accurately demodulate the pulsed RF signals enables measurement of the timing characteristics of the pulsed RF signals.

Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.

Related Content


The post Measuring pulsed RF signals with an oscilloscope appeared first on EDN.

Microchip’s acquisition meshes AI content into FPGA fabric

Tue, 04/16/2024 - 10:51

Field programmable gate arrays (FPGAs), once a territory of highly specialized designs, are steadily gaining prominence in the era of artificial intelligence (AI), and Microchip’s acquisition of Neuronix AI Labs once more asserts this technology premise.

The Chandler, Arizona-based semiconductor outfit, long known for highly strategic acquisitions, has announced that it will acquire Neuronix, a supplier of neural network sparsity optimization technology that enables a reduction in power, size, and calculations for tasks such as image classification, object detection, and semantic segmentation.

The deal aims to bolster the AI/ML processing horsepower on the company’s low- and mid-range FPGAs and make them more robust for edge deployments in computer vision applications. Microchip will combine Neuronix’s neural network sparsity optimization technology with its VectorBlox design flow to boost neural network performance efficiency and GOPS/watt performance in low-power PolarFire FPGAs.

Neuronix AI Labs has been laser-focused on neural network acceleration architectures and algorithms, and Microchip aims to incorporate Neuronix’s AI frameworks in its FPGA design flow. The combination of Neuronix AI intellectual property and Microchip’s existing compilers and software design kits will allow AI/ML algorithms to be implemented on customizable FPGA logic without a need for RTL expertise or intimate knowledge of the underlying FPGA fabric.

Microchip stuck to its FPGA guns even when the Altera-Xilinx duo took over the market before being acquired by Intel and AMD, respectively. Microchip executives maintained all along that FPGAs were a strategic part of its embedded system business. Now, when a plethora of applications continue to populate the edge, Microchip’s vision of embedded systems incorporating low-power FPGA fabrics looks more real than ever.

In short, the acquisition will help Microchip to bolster neural network capabilities and enhance its edge solutions with AI-enabled IPs. It will also enable non-FPGA designers to harness parallel processing capabilities using industry-standard AI frameworks without requiring in-depth knowledge of FPGA design flow.

Related Content


The post Microchip’s acquisition meshes AI content into FPGA fabric appeared first on EDN.

The Godox V1 camera flash: Well-“rounded” with multiple-identity panache

Mon, 04/15/2024 - 19:33

As regular readers already know, “for parts only” discount-priced eBay postings suggestive of devices that are (for one reason or another) no longer functional, are often fruitful teardown candidates as supplements to products that have died on me personally. So, when I recently saw a no-longer-working Godox V1 camera flash, which sells new for $259.99, listed on eBay for $66, I jumped on the deal. For teardown purposes, yes. But also, for reuse of its still-functional accessories elsewhere. And, as it turns out, to solve a mystery, too.

I’d long wanted to get inside the V1 for a look around (although its formidable price tag had acted as a deterrent), in part because of its robust feature set, which includes:

  • High 76 Ws peak power (5600K color temperature)
  • Fast (~1.5 sec) recycle time, and 480 full-power illuminations per battery charge cycle
  • Supplemental 2 W “modeling lamp” (3300K color temperature)
  • 28-105 mm zoom head (both manual and auto-sync to camera lens focal length setting options)
  • 0°-330° horizontal pan and -7°-120° vertical tilt head
  • Multiple camera shutter sync modes
  • Multiple exposure control modes
  • Auto (camera sync) and manual exposure compensation modes
  • Camera autofocus-assist beam, and
  • Last, but definitely not least, multi-flash master and slave sync options

And partly because this device, like many of the flash units from both Godox and other third-party flash manufacturers such as Neewer, comes in various options that support multiple manufacturers’ cameras. In the case of the V1, these include (differentiated via single-character suffixes in the otherwise identical product name):

  • C: Canon
  • N: Nikon
  • S: Sony
  • F: Fujifilm
  • O: Olympus/Panasonic, and
  • P: Pentax

That all aside, what probably caught your eye first in the earlier “stock” photo was the V1’s atypical round head, versus the more common rectangular configuration found in units such as Godox’s V860III (several examples of which, for various cameras, I also own):

The fundamental rationale for both products is their varying output-light coverage patterns:

Now, about those earlier-mentioned accessories:

The VB26-series battery used by the V1 is conveniently also used by Godox's V850III and V860III flash units, as well as the company's RING72 ring light (optionally, along with the four-AA battery power-source default), and with Adorama's Flashpoint-branded equivalents for all of these Godox devices, several of which I also own:

Here’s the capper. Shortly after buying this initial “for parts” Godox V1, for which the flash unit itself was the only thing nonfunctional, I came across another heavily discounted V1 that, as it turned out, worked fine but was missing the battery and charging cable. Guess what I did? 😉

About that battery cable…readers with long memories may recall me mentioning the VB26 before. The earlier discussion was in the context of the Olympus/Panasonic version of the V1 (i.e., the V1O), which had come with the original VB26 battery, and which I learned couldn’t be charged from a USB-C power source even though the battery charging dock had a USB-C input; a USB-A to USB-C adapter cable (along with a USB-A power source) was instead necessary. Well, in testing out the battery this time, I absentmindedly plugged it and its companion dock into a handy USB-C power source (and USB-C to USB-C cable) that normally finds use in charging my Google Pixel Buds Pro earbuds…and everything worked fine.

In retrospect, I remembered the earlier failure, and in striving to figure out what was different, I noticed that the battery this time was the more recent VB26A variant. I’d known that both it and its even newer VB26B successor held a bit more charge than the original, but Godox presumably fixed the initial USB-PD (Power Delivery) shortcoming in the evolutionary process, too (the charging circuitry is contained within the battery itself, apparently, with the dock acting solely as a “dummy” wiring translator between the USB-C connector and the battery terminals).

Enough of the prep discussion, let’s get to the tearing down. What we’re looking at today is the V1C, i.e., the Canon variant of the V1 (here’s a user manual):

I’ve long assumed that the various “flavors” of the V1 (and flash units like it) were essentially identical, save for different hot shoe modules and different firmware builds running inside. Although I won’t be dissecting multiple V1 variants today, the fact that they share a common 2ABYN001 FCC certification ID is a “bit” of a tipoff. I hope that this teardown will also shed at least a bit of added light on the accuracy-or-not of this hypothesis.

Open the box, and the goodies inside come into initial view. The cone-shaped white thing (silver on the other side) at top is a reflector, a retailer bundle adder intended for “bounce” uses:

As-usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes are the primary accessories: the standard USB-A to USB-C charging cable below the coin, and to the right, top-to-bottom, the battery, AC-to-DC converter (“wall wart”) and charging dock:

A closeup of the wall wart, complete with specs:

The underside of the battery, this time (as previously noted) the “A” version of the VB26:

And the charging dock, common to all VB26 battery variants:

Lift out the case containing the V1, and several other accessories come into view below it. At bottom right is a mini stand to which you mount the hot shoe when the flash unit isn’t being directly installed on/controlled by the camera (i.e., when the V1 is in wireless sync “slave” mode). And above it is another retailer adder, a goodie bag containing a lens cleaning cloth, a brush (useful when, for example, carefully brushing dust off the image sensor or, for a DSLR, the mirror) and a set of soft gloves.

Flip up the case top flap, and our victim comes into initial view:

Here’s a view of the backside, with the flash head near-vertical. The V1 has dimensions of 76x93x197 mm and weighs 420 g without the battery (530 g with it):

Here’s one (operating mode-dependent) example of what that LCD panel looks like with a turned-on functional V1:

Flip the V1 around for the front view, with the head at the same near-vertical orientation:

A closeup of the label (note, too, the small circular “hole” below the right corner of the label; file it away in your memory for later, when it’ll be important):

And of the translucent front panel, alluding to some of what’s inside:

The circular section at the bottom is for the focus assist beam, and to its left you can faintly see the wireless sensor used to sync the V1 (in either master or slave mode) with other flash units that support Godox’s 2.4 GHz “X” protocol as well as standalone transmitters and receivers:

Now’s as good a time as any, by the way, to show you Neewer’s reminiscently named Z1:

The V1 and Z1 look the same, are similarly featured, and both use the 2.4 GHz ISM band for wireless sync purposes. Just don’t try to sync them to each other because the protocols differ.

Here’s a straight-on closeup of the V1 flash head:

That circular area at the top, which faces the ground in normal operation (when the flash head isn’t pointed toward the sky, that is), is the modeling lamp: constantly on when activated, versus a traditional momentary “flash”. Here’s what it looks like on, again with an alternative functional V1:

And here are examples of the modeling lamp in use.

The ring around the outside of the flash head lens is metal, by the way, affording an opportunity for easy attachment of various magnet-augmented accessories:

Finally, some side views; first the left (when viewed from the front), containing the compartment “hole” into which the battery is inserted:

And now the right, containing the battery latch, release button and contacts:

The flash head at both extremes of its tilt range:

And a closeup of the QR code sticker on this side of the flash head:

Back to the right-side battery compartment closeup. In the earlier photo, you might have noticed what looked like a protective “flap” to the right of the cavity, and above the battery-release button. If so, you’d be right:

The round female connector at the top is not for headphones. It’s a 2.5 mm sync cord jack, for mating to a camera or transmitter as an alternative to a hot shoe or wireless connection. Below it is a USB-C connector used to connect to a computer for updating the flash unit firmware. On a hunch, I mated this supposedly “dead” V1 to my Mac and was surprised to find that the flash unit was recognized. I could even update its firmware, in fact, and all without a battery installed:

Even though this V1’s all-important illumination subsystem is DOA, it’s apparently not all-dead!

Last, but not least, let’s have a look at the hot shoe:

As previously mentioned, my working theory is that this (along with the software running inside the device) is the key differentiator between the V1 variants. It’s (perhaps unsurprisingly) also the most common thing that breaks on V1s:

So, I’ll be holding onto this part of the device long-term, both for just-in-case repair purposes and for another experimental project that I’ll tell you about later…

Did you notice the four screws holding the hot shoe assembly in place? Let’s see if their removal enables us to get inside:

Here’s the removed hot shoe assembly, both in the “loose” and “latched” positions (controlled by rotation of that grey button you see in the photos):

And here’s what’s inside:

Next step, remove the four “corner” screws whose heads were obscured by white paste in previous photos:

The outer bracket piece now lifts away:

Leaving an assemblage that, for already mentioned reasons, I’m not going to further disassemble, in order to preserve it for potential future use:

Unfortunately, although this initial disassembly step gave me a teaser peek at the insides, I seemingly wasn’t yet able to proceed further from this end:

So, I returned my attention to the flash head (the other end), around which I’d remembered seeing a set of screws that held the plastic cover and metal ring in place:

Underneath it was a Fresnel lens.

From Wikipedia:

A Fresnel lens…is a type of composite compact lens which reduces the amount of material required compared to a conventional lens by dividing the lens into a set of concentric annular sections…The design allows the construction of lenses of large aperture and short focal length without the mass and volume of material that would be required by a lens of conventional design. A Fresnel lens can be made much thinner than a comparable conventional lens, in some cases taking the form of a flat sheet.

With the Fresnel lens removed, the xenon tube assembly comes into clear view:

If you look at the bottom, you’ll see a two-rail “track” on which it moves forwards and backwards to implement, in conjunction with the fixed-position Fresnel lens, the zoom function.

I was able to unclip the brackets holding the fronts of both halves of the head assembly together, but further progress eluded me:

So, I next tried peeling away the round rubberized pieces covering both ends of the “tilt” hinge:

A-ha! Screws!

Now for the other side…

You know what comes next…

And now, one half (the lower half, to be precise) of the flash head enclosure lifts right off:

I initially thought that this mysterious red paste-covered doodad might be a piezoelectric speaker, for generating “beep” tones and the like, and its location coincides with the “hole” below the label that I showed you earlier, but…again, hold that thought:

We now get our first clear views of the flash head insides. Check out, for example, that sizeable heatsink for the modeling lamp LED!

Four screws hold the assembly in place within the other half-enclosure. Let’s get rid of these:

Liftoff!

Here’s our first glimpse of one side of this particular PCB. Look at that massive inductor coil!

Disconnect a couple of ribbon cables:

Tilt the assembly to the side:

Next, let’s remove the modeling lamp LED-plus-heatsink assemblage:

The two are sturdily glued together, so I won’t proceed further in trying to pry them apart:

Now let’s remove the PCB from the white plastic piece it’s normally attached to:

Let’s look first at the now-revealed PCB backside. First off: unsurprising, mind you, given the high current flow involved, but still…look at those thick traces:

See those two switches? The motor-positioned xenon tube bumps up against them at the far end of its zoom travel range, seemingly disabling further motion in that direction (why there aren’t similar switch contacts at the rails’ other ends isn’t clear to me, however):
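One plausible answer to that parenthetical question: firmware often needs only a single reference (“home”) end-stop. It drives the carriage to the switch once to zero a position counter, then counts motor steps everywhere else, clamping in software at the far end. A minimal sketch of that general technique (purely illustrative; this is a guess, not Godox’s actual firmware):

```python
# Illustrative zoom-carriage control: one end-stop switch plus step counting.
# The travel length is an assumption for the sketch, not a measured V1 value.
class ZoomCarriage:
    TRAVEL_STEPS = 200                 # assumed full travel of the xenon-tube track

    def __init__(self):
        self.position = None           # unknown until homed

    def home(self):
        """Drive toward the end-stop until it closes, then zero the counter."""
        self.position = 0              # switch closed: we're at the reference end

    def move_to(self, target):
        if self.position is None:
            self.home()
        # Clamp in software at the far end -- no second switch needed there.
        self.position = max(0, min(self.TRAVEL_STEPS, target))
        return self.position

z = ZoomCarriage()
print(z.move_to(150))
print(z.move_to(999))   # requests past the far end get clamped
```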

Finally, note the red, white-paste-capped device in the upper right corner. Its “TB” PCB marking, along with the wire running from it to the xenon tube, suggests to me that it may be a thermal breaker intended to temporarily disable the flash unit if it gets too hot. Ideas, readers?

Let’s now flip the PCB back over to the side we glimpsed earlier:

Time for a brief divergence into flash unit operation basics. In the “recharge” interval between flash activations, a sizeable capacitor (which we haven’t yet seen) gets “filled” by the battery electron flow. At least some of that stored capacitive charge then gets “dumped” into the xenon tube. But here’s the trick…the xenon tube’s illumination time and intensity vary depending on the camera’s desired exposure characteristics. So where does any “extra” charge go, if not needed by the xenon tube?

Initially, the excess electrons were instead shunted off to something called the quench tube, a wasteful approach that both limited battery life and unnecessarily lengthened recharge time. Nowadays, either gate turn-off (GTO) thyristors or insulated-gate bipolar transistors (IGBTs) instead find use in cutting off the current flow from the capacitor, saving remaining charge for the next xenon tube activation. I’m admittedly no power electronics design expert, so I can’t confidently say which approach is in use here. To assist the more knowledgeable-than-me readers among you (numerous, I know), note that the two devices above the coil are S6008D half-wave, unidirectional, gate-controlled rectifiers; the IC above them has the following marks:

EIC
SN
5M

Again, I say: further insights, readers?
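To put some numbers on that charge-saving trick, here’s a back-of-envelope sketch of how much energy an early cutoff preserves. The capacitance and voltages below are assumptions typical of compact flash units, not measurements from this V1:

```python
# Illustration of thyristor/IGBT flash-duration control via capacitor energy.
# All component values are assumptions for the sketch, not V1 measurements.
C = 1200e-6          # main capacitor, farads (assumed)
V_full = 330.0       # fully charged voltage, volts (typical for flash units)
V_cutoff = 250.0     # voltage remaining when the switch cuts off the tube (assumed)

def cap_energy(c, v):
    """Energy stored in a capacitor, E = 0.5 * C * V^2, in joules."""
    return 0.5 * c * v * v

E_full = cap_energy(C, V_full)
E_left = cap_energy(C, V_cutoff)
E_used = E_full - E_left

print(f"full charge : {E_full:5.1f} J")
print(f"after cutoff: {E_left:5.1f} J saved for the next flash")
print(f"delivered   : {E_used:5.1f} J to the tube")
```

Because energy goes as the square of voltage, cutting off at 250 V still leaves well over half of a 330-V charge in the capacitor for the next activation; a quench tube would simply have burned it off.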

Before moving on, let’s take a closer look at that zoom motor:

And now, let’s figure out how to get inside that hinge (where, I suspect, we’ll find that aforementioned sizeable capacitor). Looking closely at the ends I’d previously exposed, I noticed two more screws on each, but removing them didn’t seemingly get me any further along:

In the process of unscrewing them, however, I realized that I hadn’t yet showed you the pan range supported by the head:

And in the process of doing that, I noticed more screws underneath the pan hinge:

That’s more like it (although I’m now inside the main flash body, not yet the hinge above it)!

Let’s start with the now-detached back panel:

The LCD behind it is visible through the clear section, obviously, but don’t forget about the ribbon cable-fed multi-button-and-switch array below it:

That same panel piece from below, with another look at the ribbon cable:

And finally, that same panel piece from above:

Let’s return to that earlier inside view and get those four screws off:

The multi-button/switch assembly now lifts away straightaway:

And that black piece then pops right off, too:

Here’s a cross-section view of the circular multi-switch structure:

And with that, let’s return to the multi-sided structure we saw earlier, inside the main body:

Next are a series of sequential wiring disconnection shots; there are multiple ribbon cable harnesses, as you’ll see, some of them terminating in the tilt hinge above and some passing through the tilt hinge to the flash head above it:

 

With the front half of the main body shell now free and clear, let’s look at what’s inside:

That thing toward the bottom center, with a blue/black wire combo coming out of it, is the aforementioned focus assist beam. But what about the one in the upper left, with red and black wires coming out of it? Here’s a top view of the front-half piece; note the “hole” at bottom right at the corresponding external location:

Remember the mystery device inside the flash head, with a reminiscent red-and-black wire harness and external “hole”, that I initially thought was a speaker and asked you to remember?

I’d originally realized it wasn’t a speaker when I took my functional V1, activated its “beep” function and discerned that the sound wasn’t coming from there. But when I saw the second similar device-and-hole, I grabbed my functional (and fully assembled) V1 again and realized that when (and only when) the flash head was pointed horizontal and forward, the two “holes” lined up. My working theory is that one of the devices is an IR transmitter with the other an IR receiver, and that this alignment is how the flash figures out when the user has both the pan and tilt settings at their “normal” default positions. For what reason, I can’t yet precisely sort out; there’s no indication I can find in the user manual that the V1 operates any differently when pan and/or tilt are otherwise oriented. But conceptually, I could imagine that the flash’s integrated controller and/or connected camera might be interested in knowing whether the unit is being used for conventional or “bounce” purposes from an operating mode, exposure setting and/or other standpoint. Once again, readers: ideas?

At this point, by the way (and speaking of flash heads), the top half of this part of the case spontaneously disconnected from the pan-and-tilt hinge assembly:

Returning to the main body, let’s see what’s inside. Back, complete with the LCD (the on/off switch is in the lower right corner):

Right side:

Left side (note the battery latch, contacts, etc. initially highlighted before):

Front, with an initial “reveal” of the primary “power” PCB (although there’s plenty of analog stuff in the earlier flash head-located PCB too!):

Top:

And bottom, revealing a secondary “digital” PCB that we’ll discuss further shortly:

There’s one more PCB of note, actually, which isn’t visible until after you remove two screws and disconnect the LCD assembly, then flip it around:

Here’s where the main system controller can be found, which is why I refer to it as the primary “digital” PCB. It’s the APM32F072VBT6 (PDF), from a Chinese company called Geehy Semiconductor. The entire product family, as you’ll see from the PDF, contains dozens of members, based both on the Arm Cortex-M0+ and Cortex-M3. This particular SoC proliferation (at the top of the table labeled “APM32 MCU-ARM Cortex -M0+” in the PDF, for your ease of locating it) integrates a Cortex-M0+ running at 48 MHz along with 128 Kbytes of flash memory and 16 Kbytes of RAM. I can’t find a discrete flash memory chip for code storage on the PCB; the IC in the lower right corner is an LMV339 quad-channel comparator, and pretty much everything else here is connectors and passives. Oh, and the speaker’s to the left of the comparator 😉.

Here’s a side view, showing the USB-C and 2.5 mm sync connectors:

And flipping the assembly back over, as well as flipping the LCD upside-down, you’ll find that this side of the PCB is effectively blank, save for the earlier-noted power switch:

Next, continuing with the “digital” theme, let’s look more closely at the bottom-mounted PCB:

This one requires a bit of background explanation.

I’ve already told you that the primary 2.4 GHz transceiver system for multi-unit sync purposes is upfront behind the red translucent panel, and you’ll see it again shortly. But there’s another 2.4 GHz transceiver system in the V1, this one Bluetooth-based and designed to enable flash unit configuration and control from a wirelessly tethered smartphone or tablet in conjunction with a Godox (or Adorama) app. That’s why, unsurprisingly now that you know the background, the two dominant ICs on this side of the PCB are Texas Instruments’ CC2500 low-power 2.4 GHz RF transceiver and, to its right, TI’s CC2592 front-end RF IC. Flip the PCB over:

and again, unsurprisingly, you’ll find the embedded Bluetooth antenna.

Finally, let’s look more closely at what I referred to earlier as the primary “power” PCB:

Many of the ICs here are similar to the ones we saw in the earlier flash head-located PCB, such as two more of those mysterious ones labeled “EIC” but now with slightly different second- and third-line marks:

EIC
SK
5B

And on the other side:

is more analog and power circuitry, including a sizeable capacitor at the bottom (albeit not as sizeable as I suspect we’ll see shortly!).

Speaking of which, let’s close by looking closely at that tilt hinge assembly. Here it is from the front:

Top:

and back:

All are fairly unmemorable. The left side is not much less boring:

At least until I tilt it slightly, revealing a green tint indicative of a PCB inside:

The right side is quite a bit busier, with wiring harnesses formerly running up to the flash head:

Even more titillating when I again tilt it, as well as moving wiring to the sides:

And speaking of wiring (and titillating relocation of same), here’s the bottom:

Cautiously, both because I don’t know exactly what’s on the other side and, if I’m right and it’s an enormous capacitor, whether it’s fully discharged, I proceed:

Enormous capacitor, indeed!

Refilling this sizeable “electron gas tank”, folks, explains the 1.5 second recycle time between flash activations, and makes the 480 activations per battery recharge all the more remarkable:
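Those two published figures, 1.5-second recycle and roughly 480 full-power activations per charge, can be sanity-checked against the VB26’s battery rating. The full-power flash energy below is my assumption for the sketch, not a Godox specification:

```python
# Back-of-envelope check of recycle time and shots-per-charge.
# The battery rating is the published VB26 spec; flash energy is assumed.
V_batt, mAh = 7.2, 2600                 # VB26: 7.2 V, 2600 mAh
E_batt = V_batt * mAh / 1000 * 3600     # joules available per full charge
E_flash = 65.0                          # assumed full-power flash energy, J
t_recycle = 1.5                         # published recycle time, s

P_charge = E_flash / t_recycle          # average power refilling the capacitor
shots = 480                             # published activations per charge
efficiency = shots * E_flash / E_batt   # fraction of battery energy reaching the cap

print(f"battery energy : {E_batt/1000:.1f} kJ")
print(f"charge power   : {P_charge:.0f} W average during recycle")
print(f"implied end-to-end efficiency over {shots} shots: {efficiency:.0%}")
```

The implied conversion efficiency lands in a plausible range for a boost converter charging a high-voltage capacitor, which is why the 480-shot figure is believable rather than marketing fluff.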

And with that, slightly more than 4,000 words in, I’m done! Not quite “in a flash”, but I still hope you found this teardown as interesting as I did. Sound off with your thoughts in the comments! And in closing, enjoy these two insides-revealing repair videos that I found during my research:

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content


The post The Godox V1 camera flash: Well-“rounded” with multiple-identity panache appeared first on EDN.

A sneak peek at HBM cold war between Samsung and SK hynix

Mon, 04/15/2024 - 19:17

As high-bandwidth memory (HBM) moves from HBM3 to its extended version HBM3e, a fierce competition kicks off between Samsung and SK hynix. Micron, the third largest memory maker, has also tagged along to claim stakes in this memory nirvana that is strategically critical in artificial intelligence (AI) designs.

HBM is a high-value, high-performance memory that vertically interconnects multiple DRAM chips to dramatically increase data processing speed compared to conventional DRAM products. HBM3e is the fifth generation of HBM following HBM, HBM2, HBM2E and HBM3 memory devices.

HBM helps package numerous AI processors and memories in a multi-connected fashion to build a successful AI system that can process a huge amount of data quickly. “HBM memory is very complicated, and the value added is very high,” Jensen Huang, Nvidia co-founder and CEO, said at a media briefing during the GPU Technology Conference (GTC) held in March 2024 at San Jose, California. “We are spending a lot of money on HBM.”

Take Nvidia’s A100 and H100 processors, which commanded 80% of the entire AI processor market in 2023; SK hynix is the sole supplier of HBM3 chips for these GPUs. SK hynix currently dominates the market with a first-mover advantage. It launched the first HBM chip in partnership with AMD in 2014 and the first HBM2 chip in 2015.

Figure 1 SK hynix currently dominates the HBM market with nearly 90% of the market share.

Last month, SK hynix made waves by announcing the start of mass production of the industry’s first HBM3e chip. So, is the HBM market and its intrinsic pairing with AI processors a case of winner-takes-all? Not really. Enter Samsung with a 12-layer HBM3e chip.

Samsung’s HBM surprise

Samsung’s crosstown memory rival SK hynix has been considered the unrivalled HBM champion since it unveiled the first HBM memory chip in 2014. It’s also known as the sole HBM supplier of AI kingpin Nvidia while Samsung has been widely reported to be lagging in HBM3e sample submission and validation.

Then came Nvidia’s four-day annual conference, GTC 2024, where the GPU supplier unveiled its H200 and B100 processors for AI applications. Samsung, known for its quiet determination, once more outpaced its rivals by displaying 12-layer HBM3e chips with 36 GB capacity and 1.28 TB/s bandwidth.

Figure 2 Samsung startled the market by announcing 12-layer HBM3e devices compared to 8-layer HBM3e chips from Micron and SK hynix.

Samsung’s HBM3e chips are currently going through a verification process at Nvidia, and CEO Jensen Huang’s note “Jensen Approved” next to Samsung’s 12-layer HBM3e device on display at GTC 2024 hints that the validation process is a done deal. South Korean media outlet Alpha Biz has reported that Samsung will begin supplying Nvidia with its 12-layer HBM3e chips as early as September 2024.

These HBM3e chips stack 12 DRAM dies of 24-Gb (3-GB) density each, for 36 GB of total capacity and a peak memory bandwidth of 1.28 TB/s. Samsung also claims its 12-layer HBM3e device maintains the same height as the 8-layer HBM3e while offering 50% more capacity.
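The stack arithmetic behind those capacity figures is straightforward; the per-die density of 24 Gb (3 GB) is the assumption underpinning it:

```python
# Stack capacity = layer count x per-die density (Gb) / 8 bits-per-byte -> GB.
DIE_GBIT = 24                        # per-die density assumed from published specs
for layers in (8, 12):
    gbytes = layers * DIE_GBIT / 8
    print(f"{layers:2d} layers -> {gbytes:.0f} GB")
```

The jump from 8 to 12 layers at fixed die density is exactly the 50% capacity increase both vendors advertise.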

It’s important to note that SK hynix began supplying 8-layer HBM3e devices to Nvidia in March 2024 while its 12-layer devices, though displayed at GTC 2024, are reportedly encountering process issues. Likewise, Micron, the world’s third largest manufacturer of memory chips, following Samsung and SK hynix, announced the production of 8-layer HBM3e chips in February 2024.

Micron’s window of opportunity

Micron, seeing the popularity of HBM devices in AI applications, is also catching up with its Korean rivals. Market research firm TrendForce, which valued the HBM market at approximately 8.4% of the overall DRAM industry in 2023, projects that this share could expand to 20.1% by the end of 2024.

Micron’s first HBM3e product stacks 8 DRAM layers, offering 24 GB capacity and 1.2 TB/s bandwidth. The Boise, Idaho-based memory supplier calls its HBM3e chip “HBM3 Gen2” and claims it consumes 30% less power than rival offerings.

Figure 3 Micron’s HBM3e chip has reportedly been qualified for pairing with Nvidia’s H200 Tensor Core GPU.

Besides technical merits like lower power consumption, market dynamics are helping the U.S. memory chip supplier to catch up with its Korean rivals Samsung and SK hynix. As noted by Anshel Sag, an analyst at Moor Insights & Strategy, SK hynix already having sold out its 2024 inventory could position rivals like Micron as a reliable second source.

It’s worth mentioning that Micron has already qualified as a primary HBM3e supplier for Nvidia’s H200 processors. Shipments of Micron’s 8-layer HBM3e chips are set to begin in the second quarter of 2024. And like SK hynix, Micron claims to have sold all its HBM3e inventory for 2024.

HBM a market to watch

The HBM market will remain competitive in 2024 and beyond. While HBM3e is positioned as the new mainstream memory device, both Samsung and SK hynix aim to mass-produce HBM4 devices in 2026.

SK hynix is employing hybrid bonding technology to stack 16 layers of DRAM and achieve 48 GB of capacity; compared to HBM3e chips, HBM4 is expected to boost bandwidth by 40% and lower power consumption by 70%.

At the International Solid-State Circuits Conference (ISSCC 2024) held in San Francisco on February 18-21, where SK hynix showcased its 16-layer HBM devices, Samsung also demonstrated its HBM4 device boasting a bandwidth of 2 TB/s, a whopping 66% increase from HBM3e. The device also doubled the number of I/Os.
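Those bandwidth headlines fall straight out of interface width times per-pin data rate. The per-pin rates below are inferred from the quoted bandwidths rather than taken from a published specification:

```python
# Peak HBM bandwidth (GB/s) = I/O width (bits) x per-pin rate (Gb/s) / 8.
# Per-pin rates are inferred from the bandwidths quoted in the article.
def hbm_bw_gbps(io_bits, pin_rate_gbit):
    return io_bits * pin_rate_gbit / 8

hbm3e = hbm_bw_gbps(1024, 10.0)   # 1,024-bit interface at ~10 Gb/s/pin
hbm4 = hbm_bw_gbps(2048, 8.0)     # doubled I/O count at ~8 Gb/s/pin (assumed)

print(f"HBM3e: {hbm3e:.0f} GB/s (~1.28 TB/s)")
print(f"HBM4 : {hbm4:.0f} GB/s (2 TB/s)")
```

Note that doubling the I/O count gets HBM4 to 2 TB/s even at a *lower* per-pin rate than HBM3e, which is exactly the design lever the doubled-I/O claim points at.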

HBM is no longer the unsung hero of the AI revolution, and all eyes are on the uptake of this remarkable memory technology.

Related Content


The post A sneak peek at HBM cold war between Samsung and SK hynix appeared first on EDN.

8-bit MCUs tout 15-W USB power delivery

Sat, 04/13/2024 - 00:07

Microchip’s AVR DU 8-bit MCUs integrate a USB 2.0 full-speed interface that supports power delivery up to 15 W, enabling USB-C charging at up to 3 A at 5 V. According to the manufacturer, this capability, not commonly found in other USB microcontrollers in this class, allows embedded designers to implement USB functionality across a wide range of systems.

In addition to higher power delivery than previous devices, AVR DU microcontrollers also feature improved code protection. To defend against malicious attacks, the devices employ Microchip’s Program and Debug Interface Disable (PDID) function. When enabled, the PDID function locks out access to the programming/debugging interface and blocks unauthorized attempts to read, modify, or erase firmware.

To enable secure firmware updates, the MCUs provide read-while-write flash memory in combination with a secure bootloader. This allows designers to use the USB interface for in-field updates without disrupting product operation.

The AVR DU family of MCUs is suitable for a range of embedded applications, from fitness wearables and home appliances to agricultural and industrial applications. A virtual demonstration of the MCU’s USB bridge is available here.

AVR DU series product page

Microchip Technology 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post 8-bit MCUs tout 15-W USB power delivery appeared first on EDN.

Renesas expands general-purpose MCU choices

Sat, 04/13/2024 - 00:06

RA0 microcontrollers from Renesas are low-cost devices that offer low power consumption and a feature set optimized for cost-sensitive applications. The MCUs can be used in such applications as consumer electronics, system control for small appliances, building automation, and industrial control systems.

Based on an Arm Cortex-M23 core, the 32-bit MCUs consume 84.3 µA/MHz in active mode, dropping to just 0.82 mA in sleep mode. A software standby mode cuts current consumption even further, allowing the device to sip just 0.2 µA. These features, coupled with a high-speed on-chip oscillator for fast wakeup, make the MCUs particularly well-suited for battery-operated products.
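To see why those three figures matter for battery-operated products, here’s a duty-cycle battery-life estimate. The current figures come from the text; the clock frequency, duty cycle, and battery capacity are assumptions for the example:

```python
# Average-current estimate for a duty-cycled RA0-class design.
# Current figures are from the article; the rest are example assumptions.
f_mhz = 32                   # assumed CPU clock, MHz
i_active = 84.3e-6 * f_mhz   # A: 84.3 uA/MHz in active mode
i_standby = 0.2e-6           # A: software standby
duty = 0.01                  # assumed: CPU active 1% of the time

i_avg = duty * i_active + (1 - duty) * i_standby
battery_mah = 220            # assumed coin-cell-class capacity
life_hours = battery_mah / (i_avg * 1000)
print(f"average draw: {i_avg * 1e6:.1f} uA")
print(f"runtime     : ~{life_hours:.0f} hours on {battery_mah} mAh")
```

The point of the exercise: with a 0.2-µA standby floor, the average draw is dominated almost entirely by how often (and how fast) the core wakes, which is why the fast-wakeup oscillator matters as much as the headline currents.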

The first devices in the RA0 series, the RA0E1 group, operate from a supply voltage of 1.6 V to 5.5 V. This means there is no need for a level shifter/regulator in 5-V systems. An on-chip oscillator improves baud rate accuracy and maintains ±1.0% precision over a temperature range of -40°C to +105°C.

Other features of the RA0E1 group of MCUs include: 

  • Memory: Up to 64 kbytes of code flash and 12 kbytes of SRAM
  • Analog Peripherals: 12-bit ADC, temperature sensor, internal reference voltage
  • Communications Peripherals: 3 UARTs, 1 Async UART, 3 Simplified SPIs, 1 IIC, 3 Simplified IICs
  • Safety: SRAM parity check, invalid memory access detection, frequency detection, A/D test, immutable storage, CRC calculator, register write protection
  • Security: Unique ID, TRNG, flash read protection

RA0E1 microcontrollers are shipping now. Package options include 20-pin LSSOP, 32-pin LQFP, and QFN with 16, 24, or 32 leads.

RA0E1 product page

Renesas Electronics 



The post Renesas expands general-purpose MCU choices appeared first on EDN.

Hi-rel GaN load switch ships off-the-shelf

Sat, 04/13/2024 - 00:06

The first entry in Teledyne’s 650-V power module family, the TDGM650LS60 integrates a 650-V, 60-A GaN transistor and isolated driver in a single package. The module, which is now available off-the-shelf, acts as a load switch or solid-state switch. Fast switching time and the absence of moving parts make the TDGM650LS60 useful for high-reliability applications in the space, avionics, and military sectors.

The TDGM650LS60 tolerates up to 100 krad of total ionizing dose (TID) radiation and operates over a temperature range of -55°C to +125°C. Its enhancement-mode GaN transistor has a minimum breakdown voltage of 650 V and a stable on-resistance of 25 mΩ. Coupled with the driver’s 5-kV isolation, the TDGM650LS60 ensures robust and reliable operation in challenging environments.
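As a quick sanity check on those ratings, conduction loss in the switch scales with the square of load current at the quoted 25-mΩ on-resistance (the currents below are chosen for illustration, not taken from the datasheet):

```python
# Conduction loss in the GaN switch: P = I^2 * R_on.
R_ON = 0.025                   # ohms, quoted on-resistance
for amps in (10, 30, 60):      # illustrative load currents up to the 60-A rating
    print(f"{amps:2d} A -> {amps * amps * R_ON:5.1f} W dissipated")
```

The square-law growth is why the full 60-A rating implies serious heatsinking even with a seemingly small 25-mΩ channel, and why the wide -55°C to +125°C operating range matters in practice.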

Occupying a 21.5×21.5-mm footprint, the TDGM650LS60 module has solder-down castellation for surface-mount style mounting. A preliminary datasheet can be accessed by using the link to the product page below.

TDGM650LS60 product page

Teledyne e2v HiRel Electronics    



The post Hi-rel GaN load switch ships off-the-shelf appeared first on EDN.
