EDN Network

Voice of the Engineer

FPGA furnishes built-in USB interface

Thu, 09/28/2023 - 20:29

Lattice Semiconductor claims that its CrossLinkU-NX FPGA is the first in its class to integrate hardened USB device functionality. Aimed at AI and embedded vision applications, the FPGA packs a USB controller and physical layer (PHY) capable of USB 2.0 transfer rates up to 480 Mbps and USB 3.2 transfer rates up to 5 Gbps.

CrossLinkU-NX not only reduces the total cost of ownership and the board area needed for discrete PHY components, but also the FPGA fabric resources required for a USB device controller. Additionally, it provides a low-power standby mode with an always-on block to extend battery life and simplify thermal management. Instant-on configuration enables the I/O to be configured in 3 ms and the entire device in 8 ms. Current consumption is less than 70 µA in typical standby mode.

Offered in commercial and industrial temperature grades, the CrossLinkU-NX FPGA has 33k logic cells and 64 18×18 multipliers. A Lattice Propel template, host driver, and example host utilities for USB to I/O bridging and MIPI CSI-2 to USB bridging applications help accelerate USB device implementation with the FPGA.

CrossLinkU-NX FPGAs are sampling now and are supported by Lattice Radiant design software.

CrossLinkU-NX product page

Lattice Semiconductor 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post FPGA furnishes built-in USB interface appeared first on EDN.

EDA software tools gain shift left updates

Thu, 09/28/2023 - 20:29

EDA 2024 from Keysight is a tightly integrated suite of EDA software tools that facilitates a shift left approach to increase engineering productivity. Shift left moves design validation forward in the development cycle to accelerate time to market. Key to building accurate models and faster simulations is the incorporation of actual measurement data into the design and validation process in the virtual space.

Keysight EDA 2024 software includes new system and circuit design workflow integration, power amplifier modeling and simulation optimization, and satcom design evolution enhancements. Powered by Keysight measurement science, the tools provide engineers with a comprehensive solution that speeds virtual prototype creation with accurate validation prior to building physical prototypes and starting volume production manufacturing.

Shift left updates can be found in the following EDA 2024 products:

  • RF System Explorer streamlines system and circuit level design workflows for early exploration of system architectures in the Advanced Design System (ADS).
  • Digital Pre-Distortion Explorer and Digital Pre-Distortion Designer accelerate wide bandgap power amplifier design and validation using the Dynamic Gain Model.
  • SystemVue delivers complete satcom modeling and simulation solutions for 5G non-terrestrial network, DVB-S2X, and phased array product development.

“The breadth of improvements we’ve packed into Keysight EDA 2024 is aimed squarely at our customers’ major pain points—faster time-to-market; first pass success; automated, integrated, and open workflows; and high speed and high frequency performance,” said Niels Faché, Vice President and General Manager, Keysight. “It’s not just plain vanilla shift left, but rather shift left powered by measurement science and application domain expertise that customers count on Keysight EDA to deliver.”

To learn more about EDA 2024, click here to attend the product launch event on October 10, 2023.

Keysight Technologies 



Samsung unveils LPDDR-based LPCAMM

Thu, 09/28/2023 - 20:29

Samsung’s Low Power Compression Attached Memory Module (LPCAMM) leverages LPDDR5X memory devices in a faster, smaller form factor. Expected to impact the DRAM market for PCs and laptops, the LPDDR5X-enabled LPCAMM delivers a throughput of 7.5 Gbps while occupying up to 60% less space on the motherboard than SO-DIMM memory modules.

Unlike LPDDR DRAM devices, which are permanently attached to the motherboard, LPCAMM is a detachable module that not only allows upgrading, but affords flexibility during the production process. And while DDR-based SO-DIMMs can also be attached and detached easily, LPCAMM permits more efficient use of a product’s internal space. According to Samsung, LPCAMM also improves performance by up to 50% and power efficiency by up to 70% compared to SO-DIMMs.

LPCAMM scales to 128 GB and includes onboard serial presence detect (SPD) and power management ICs. It has completed system verification with Intel’s platform and is set to be tested using next-generation systems from major customers this year. Commercialization of the LPCAMM is planned for 2024.

A datasheet for the LPCAMM was unavailable at the time of this announcement. To learn more about Samsung’s LPDDR DRAM devices, click here.




An efficient and simple regulator for heating/lighting purposes

Thu, 09/28/2023 - 16:09

This switching regulator is highly efficient, works with both AC and DC inputs, and requires no reactive L/C components. It can provide a power factor very close to 1.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The circuit (see Figure 1) can be used to regulate a heating process with some thermal inertia such as a soldering iron, hot wire cutter, heater, and so on. The circuit is easily scalable for many more purposes from clothing irons to industrial heating and drying processes. It can be used as a dimmer for incandescent lamps as well.

Figure 1 A simple regulator for heating or lighting purposes; the circuit can be used to regulate heating processes with thermal inertia such as a soldering iron, hot wire cutter, heaters, etc.

Due to its low thermal inertia, an incandescent lamp is a special case: the load must tolerate voltage pulses at full input amplitude for periods of several milliseconds, which may be too harsh for a lamp to survive. So, the nominal voltage of the lamp should not be lower than Vin.

The circuit has a capacitor-less rectifier at the input, so the reverse voltage on the diodes is half of what it would be with a smoothing capacitor. This facilitates the use of current-effective Schottky diodes in the bridge.

The output voltage is a train of monopolar pulses; thus, the circuit regulates the effective voltage on the load. This resembles pulse width modulation (PWM); the difference is the non-constant amplitude of the pulses at the input in the case of AC.

The monopolar pulses the circuit produces are averaged by R1C1, and a fraction of the result (set by R2/R8, R9, R10) is compared by the TL431 (Q3) with its internal Vref. If this fraction is lower than the internal Vref (2.5 V), the transistor Q2 is closed, so the switch Q1 is closed as well, and the load is connected. Conversely, when the input of the TL431 is higher than 2.5 V, both Q2 and Q1 are open, and the load is disconnected.

When the input of the TL431 is higher than 2.5 V, its cathode voltage (Vka) is not well documented; it is only known to be about 2 V. The diodes D3 reduce the gate voltage of the opened Q2 to a value lower than 0.2 V. Diode D4 protects the circuit from overvoltage caused by load inductance. The transistor Q1 may need no heatsink.

If the circuit uses AC as Vin, the time constant R1C1 must be greater than the AC period. The values shown are for 50-Hz AC. Potentiometer R8 can have a scale graduated in volts. The simple circuit in Figure 1 is not well-suited for very heavy loads (~100 W).
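The regulation loop described above can be sketched behaviorally in a few lines of Python. This is only an illustrative model, not the author's circuit: all component values (mains voltage, divider setpoint, R1C1) are assumed, since the article gives none beyond the 50-Hz constraint.

```python
# Behavioral sketch of the Figure 1 bang-bang loop: a bridge-rectified AC
# input is gated onto the load, averaged by R1C1, and compared against a
# TL431-style 2.5-V reference through a divider. All values are assumed.
import math

VIN_PEAK = 325.0        # rectified peak of 230-V RMS mains (assumed)
VREF = 2.5              # TL431 internal reference
DIV = 2.5 / 150.0       # divider ratio for a 150-V average setpoint (assumed)
R1C1 = 0.050            # averaging time constant, 50 ms > 20-ms AC period
DT = 1e-4               # simulation step, s

v_avg = 0.0
load_on = True
on_time = 0.0
T = 1.0                 # simulate 1 s
for n in range(int(T / DT)):
    t = n * DT
    v_rect = VIN_PEAK * abs(math.sin(2 * math.pi * 50 * t))  # bridge output
    v_load = v_rect if load_on else 0.0                      # Q1 gates the load
    v_avg += (v_load - v_avg) * DT / R1C1                    # R1C1 low-pass
    # Divider sample below Vref -> Q1 closed, load connected; else open
    load_on = (v_avg * DIV) < VREF
    on_time += DT if load_on else 0.0

duty = on_time / T
```

After the R1C1 transient settles, `v_avg` hovers near the 150-V setpoint, showing how the effective load voltage is regulated without any reactive power components.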

The circuit in Figure 2 is more complex; it is intended for heavier loads.

Figure 2 Another regulator that is well-suited for heavier loads on the order of 100 W.

It has several distinctions from the previous circuit including:  

  • A “half-driver” Q5 which accelerates the opening of Q1,
  • A more effective source of auxiliary voltage (Q6, D1),
  • A green LED D2 (~2 V) used as an interface between the TL431 and Q2 (instead of the diodes D3 in Figure 1), and
  • A faster diode D4.

Peter Demchenko studied math at the University of Vilnius and has worked in software development.

Related Content



Malaysia’s semiconductor journey spanning half a century

Thu, 09/28/2023 - 15:59

In 1972, Intel opened a 5-acre assembly plant in Penang, the trading hub close to Malaysia’s northern tip. The assembly plant employed nearly a thousand people and soon became crucial to Intel’s semiconductor supply chain. By 1975, it accounted for more than half of Intel’s assembly capacity.

Soon, AMD, Hitachi and HP followed suit, and by the early 1980s, 14 semiconductor firms were operating in Malaysia. According to a Harvard Business School paper by Goh Pek Chen, it all began when the Malaysian government established the first free trade zone in 1972.

Figure 1 The four-year-old Intel was the first semiconductor company to benefit from Malaysia’s free trade zones. Source: Intel

The free trade zones offered companies tariff exemptions on imports and exports, tax holidays, tighter controls on labor organization, and streamlined regulatory processes. Moreover, these zones were strategically located along well-linked highways and railway systems and offered easy access to well-equipped seaports and Kuala Lumpur International Airport.

Fast forward to 2023: Infineon announced plans to significantly expand its fab in Kulim, Malaysia, which it built in 2006 to manufacture power semiconductor products like MOSFETs and IGBTs. Infineon will invest €5 billion in the Kulim fab to build what it claims will be the world’s largest 200-mm-wafer silicon carbide (SiC) power fab. The Kulim fab will be critical to Infineon’s goal of winning a 30% share of the SiC market by 2030.

The journey from an assembly plant to a SiC semiconductor foundry marks a significant milestone for Malaysia’s semiconductor ecosystem. Taiwan-based contract manufacturer Foxconn has also announced plans to build a 300-mm wafer fab in Malaysia. It will operate on 28-nm to 40-nm process nodes and will have the capacity to produce 40,000 wafers per month.

New fab buildup aside, Malaysia’s semiconductor industry is known to comprise three main groups: outsourced semiconductor assembly and testing (OSAT), automated test equipment (ATE) suppliers, and designers and manufacturers of high-performance test sockets.

Take, for example, Bosch’s recent announcement that it will invest €65 million in an 18,000-square-meter test center in Penang. It will carry out final testing of Bosch’s semiconductors fabricated in Reutlingen, Germany; Suzhou, China; and Hatvan, Hungary. The test center will include clean rooms, office space, and laboratories for quality assurance and manufacturing.

Malaysia has captured nearly 13% of the global chip assembly and testing market; it’s also the world’s seventh-largest exporter of semiconductors. Furthermore, it’s consistently listed among the top 10 semiconductor manufacturing countries, usually ranking seventh or eighth, between the UK and the Netherlands.

Figure 2 Malaysia’s move from back-end to front-end chip manufacturing will raise its profile as a semiconductor industry hub. Source: UOB Group

The news about the new fab buildup hints at the country’s ambition to move up the semiconductor value chain, especially as the semiconductor trade war between China and the United States promises new opportunities for countries already engaged in semiconductor technologies.

So, while semiconductor fabs are nothing new in Malaysia’s semiconductor ecosystem, the renewed interest in moving up the chip value chain is worth noting. Also, with the SiC power fab in Kulim, Malaysia’s semiconductor industry can claim an almost complete ecosystem to attract new investments.

Related Content



The other fusion challenge: harvesting the power

Thu, 09/28/2023 - 15:33

You undoubtedly saw the impressive news back in December 2022 that, using what is called “inertial confinement”, scientists and engineering teams at the National Ignition Facility (NIF) of Lawrence Livermore National Laboratory (LLNL) achieved fusion “success” by producing more energy than the energy delivered by its 192 lasers converging on a target. The project and its precursors have consumed over 60 years of research and development in lasers, optics, diagnostics, target fabrication, computer modeling and simulation, and experimental design, plus many tens of billions of dollars (Figure 1). Note that the lasers themselves were only about 1% efficient, so the electrical power required was about 100× the laser output, and the entire arrangement was a huge net-energy loss.

Figure 1 The almost unimaginably complex NIF/LLNL achieved brief fusion energy “gain”—if you ignore the 100× power needed to drive its 192 lasers. Source: NIF/LLNL via typeset.io

This fusion approach is not the only path being explored. There’s a huge European-centric effort called ITER and various smaller-scale approaches using different physics and principles.

Impressive as the LLNL/NIF and other efforts are, there’s another half to the story of the “nearly endless, pollution-free power” that so many are touting as viable. The thus-far unanswered question is this: how do you manage and extract—let’s be flexible and call it “harvest”—this enormous power and transform it into useful electricity?

Engineers are, of course, familiar with the concept and reality of harvesting at various scales and types, whether it is on a small scale such as a piezo-driven transducer, a medium or larger scale photovoltaic farm, or at a large scale via a fossil-fuel megawatt station. Each of these has been used extensively and there are mature techniques supported by components, systems, and structures for each one.

For fusion-based power sources, it’s a different story: whether for the huge laser-driven reactor of the LLNL-NIF project or anything else, there are no precedents for this type of power conversion. Unlike a conventional boiler, where the source heat is used to directly develop high-pressure steam that then drives a turbine and generator (Figure 2), nuclear fusion faces very different steam-creation challenges, even when compared to that “other” nuclear source, fission.

Figure 2 In contrast to a fusion-based source, a conventional oil/coal/gas-fired electric-generating station is conceptually simple, even if an actual installation has industrial complexities. Source: Chegg, Inc.

One approach is somewhat conventional. In the design being pursued by ITER, neutrons will be absorbed by the surrounding walls of its huge tokamak—a fusion-enabling construct that uses a powerful magnetic field to confine plasma in the shape of a torus—where their kinetic energy will be transferred to the walls as heat. The heat will be captured by cooling water circulating in the vessel walls and eventually dispersed through cooling towers (Figure 3).

Figure 3 In contrast to the inertial confinement fusion of the NIF/LLNL approach, but with a similarly huge size and scope, the European-led ITER project uses the magnetic field surrounding a tokamak to confine the plasma. Source: ITER

In contrast, start-up Commonwealth Fusion Systems (CFS), a spinout of MIT’s Plasma Science and Fusion Center, is developing a high-performance tokamak that is much smaller and less expensive than the ITER machine, Figure 4 (note the humans standing nearby for scale).

Figure 4 Smaller start-ups such as Commonwealth Fusion Systems are striving to achieve fusion with much smaller tokamaks and other topologies, in sharp contrast to the enormous NIF/LLNL and ITER projects. Source: Commonwealth Fusion Systems

Their plan for creating the useful steam output is to use a continuously flowing blanket of molten salt. A loop of this salt will be pumped into a tank surrounding the plasma chamber, where it absorbs radiated neutrons. This molten salt is then pumped outside the tokamak, where its heat energy is transferred into a more-conventional fluid that drives a turbine to generate electricity.

Although molten salt is already used in some heat-concentrating, non-photovoltaic solar installations, this molten salt is not ordinary at all. Instead, it will likely be a mixture of lithium fluoride and beryllium fluoride (FLiBe). In this combination, the salt also acts as a “breeding” medium in which some of the fusion neutrons interact with lithium atoms and change them into tritium, a rare hydrogen isotope used to fuel magnetic-confinement reactors. The tritium is then filtered out of the blanket and recycled as fusion fuel.
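For reference, the principal tritium-breeding reaction in a lithium-bearing blanket (standard nuclear data, not given in the article) is:

$$ n + {}^{6}\mathrm{Li} \rightarrow {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + 4.8\ \mathrm{MeV} $$

The heavier isotope, lithium-7, also breeds tritium, but that reaction is endothermic, which is why enrichment in lithium-6 is usually assumed for breeding blankets.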

Helion Energy is going an entirely different way. According to their web site, their device “directly recaptures electricity; it does not use heat to create steam to turn a turbine, nor does it require the immense energy input of cryogenic superconducting magnets. Our technical approach reduces efficiency loss, which is key to our ability to commercialize electricity from fusion at very low costs. The FRC [field reversed configuration] plasmas in our device are high-beta and, due to their internal electrical current, produce their own magnetic field, which pushes on the magnetic field from the coils around the machine.”

It continues, “The FRCs collide in the fusion chamber and are compressed by magnets around the machine. That compression causes the plasma to become denser and hotter, initiating fusion reactions that cause the plasma to expand, resulting in a change in the plasma’s magnetic field. This change in magnetic field interacts with the magnets around the machine, increasing their magnetic field, initiating a flow of newly generated electricity through the coils. This process is explained by Faraday’s Law of Induction.” I’m not going to pass judgement on this, that’s for sure.

The large-scale LLNL-NIF and ITER projects as well as the smaller ones such as at CFS, Helion, and others are literally and figuratively focused on creating a controlled, self-sustaining fusion reaction to produce the power with the hoped-for attributes of being limitless and pollution free. How and when that will happen is anyone’s educated guess, ranging from at least several decades to perhaps—and let’s be brutally honest here—maybe never.

Still, the fusion challenge is only half of the overall problem. The second half is converting the enormous heat energy into electrical energy, and it is still another major project with its own known and unknown technical problems and scaling issues.

All of this complexity makes me wonder how the Sun and other stars do their fusion so easily, without need of all this hardware and organization. Perhaps a truly “out of the box” idea is needed, such as running a very long pipe filled with some suitable phase-change material from Earth towards the Sun and back in a long loop? As they say, “never say never”.

Until then, what’s your view on approaches to transforming the heat energy of a sustained fusion reaction—assuming that day comes—into useable, controllable electricity?

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.

 Related Content




Power Tips #121: Improving phase-shifted full-bridge efficiency using an active snubber

Wed, 09/27/2023 - 15:23

The phase-shifted full bridge (PSFB) shown in Figure 1 is popular in applications >500 W because it can achieve soft switching on its input switches for high converter efficiency. Although switching losses are greatly reduced, you can still expect to see high-voltage stress on the output rectifier, as its parasitic capacitance resonates with the transformer leakage inductance—modeled as Lr in Figure 1. The voltage stress of the output rectifier could be as high as 2·VIN·NS/NP, where NP and NS are the transformer primary and secondary turns, respectively.

Limiting the maximum voltage stress on the output rectifier traditionally requires a passive snubber [1] such as a resistor-capacitor-diode (RCD) snubber, but the use of a passive snubber will dissipate power, resulting in an efficiency penalty.

Figure 1 A PSFB power stage with a passive clamp and waveforms; the passive clamp dissipates power, which leads to an efficiency penalty. Source: Texas Instruments

Alternatively, you could apply an active snubber to clamp the rectifier voltage stress without dissipating any power in the snubber circuit (assuming an ideal switch) [2]. Figure 2 shows the insertion of an active clamp leg (ACL) formed by a capacitor (CCL) and a MOSFET (QCL) before the output inductor. When the output winding voltage becomes non-zero, energy will transfer from the primary winding to the secondary winding to energize the output inductor while also conducting current through the QCL body diode to charge CCL, even if QCL isn’t turned on. You can turn on QCL after its body diode has already conducted current to ensure zero voltage switching (ZVS) on QCL.

Figure 2 A PSFB power stage with an active clamp and waveforms; unlike the passive snubber, the active snubber doesn’t dissipate the ringing energy in a power resistor but circulates the energy in the LC resonant tank as a lossless snubber. Source: Texas Instruments

It’s important to turn on QCL before the current in the active clamp MOSFET (iCL) changes polarity, to allow the current-second balance on CCL to be complete by the beginning of the effective duty cycle (DeffTS). In other words, QCL only needs to be turned on long enough for the current-second balance of the active snubber to work as intended, clamping the output rectifier voltage to the CCL voltage (VCL); QCL doesn’t need to conduct throughout the full DeffTS, but only for a relatively short period of time. As such, QCL can have a fixed on-time—that is, the QCL on-time (DACLTS) is constant—while keeping DeffTS always greater than the duration (DCSBTS) over which the current-second balance completes.

This approach addresses one of the challenges when using an active snubber in that the transformer winding current does not rise monotonically—which is an issue if you are using peak current-mode control. This happens because the active snubber capacitor energy also participates in energizing the output inductor, rather than solely relying on energy transfer from the primary side. Since DeffTS is larger than DCSBTS, peak current detection can occur when the transformer current is rising monotonically. And because you can expect higher efficiency for a PSFB with a larger Deff, you can design the PSFB to have a larger Deff at mid to heavy loads, where Deff >> DCSB. At light loads, the converter should operate in discontinuous conduction mode, where Deff will be smaller than Deff in continuous conduction mode at the same input/output voltage condition. In order to keep DeffTS greater than DCSBTS even at light loads, you can use frequency-reduction control or burst-mode control.

Because the CCL ripple voltage affects the total voltage stress on the output rectifier, you must select a large-enough CCL for a low capacitor ripple voltage. You must also select CCL such that the inductor-capacitor (LC) resonant period formed by Lr and CCL is much longer than the switching period [3], expressed by Equation 1:
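The equation image did not survive into this text. Restated from the description above (and with Lr reflected to the secondary side through the turns ratio, an assumption here, not the article's exact form), the selection criterion is:

$$ 2\pi\sqrt{L_r\left(\tfrac{N_S}{N_P}\right)^{2} C_{CL}} \;\gg\; T_S $$

where TS is the switching period of the converter.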

The rectifier voltage stress will clamp to around VIN·NS/NP with the active snubber, which is half of the voltage stress without any clamp circuit. Unlike the passive snubber in [1], the active snubber doesn’t dissipate the ringing energy in a power resistor but circulates the energy in the LC resonant tank as a lossless snubber. Therefore, you can expect higher converter efficiency on a PSFB with an active snubber than on a PSFB with a passive snubber built to an identical specification.
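These stress formulas are easy to sanity-check numerically. The sketch below plugs in the turns ratio and input voltage of the TI reference design described later in this article (Np:Ns = 16:3, VIN = 400 V); the clamp and ringing levels are the article's formulas, not new results.

```python
# Rectifier voltage stress with and without the active clamp, using the
# reference-design values from the article (Np:Ns = 16:3, VIN = 400 V).
VIN = 400.0
NP, NS = 16.0, 3.0

v_reflected = VIN * NS / NP            # ideal secondary winding voltage, 75 V
v_stress_unclamped = 2 * v_reflected   # ringing can reach 2*VIN*NS/NP = 150 V
v_stress_clamped = v_reflected         # active snubber clamps to ~VIN*NS/NP
```

The measured 80-V stress in Figure 4 sits slightly above the ideal 75-V clamp level, consistent with the CCL ripple voltage the article says must be kept small.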

To understand the factors that determine the ACL current level, you’ll need to calculate the current flow through the ACL itself. Figure 3 illustrates waveforms around the ACL conduction period.

Figure 3 Waveforms during an ACL current conduction period. Source: Texas Instruments

Assuming that VCL is constant and Lm = ∞, Equation 2 gives the current in one side of the output rectifier (iSR2) as the drain-to-source voltage rises:

By assuming the iSR2 current decreases at a constant rate, Equation 3 gives the duration t2-t1 as:

Since CCL needs to maintain current-second balance, the sum of areas A1 and A3 will equal area A2. With all of this information, it is possible to calculate the root-mean-square (RMS) value of iCL. As Equation 3 shows, the synchronous rectifier (SR) output capacitance (Coss) controls the peak current on the ACL. If you select a lower Coss SR FET, the ACL RMS current will be lower and thus help improve converter efficiency.

Figure 4 shows waveforms of the Texas Instruments (TI) 54-V, 3-kW phase-shifted full bridge with active clamp reference design, which is a 400-V input, 54-V output, 3-kW PSFB converter using an active clamp realized with TI’s C2000™ microcontroller. In this design, the transformer turns ratio is Np:Ns = 16:3. With the ACL FET turned on only for 300 ns within the output inductor energizing period, the output rectifier voltage stress (Ch1 in Figure 4) is limited to 80 V, even at a 3-kW load. The lower voltage stress enables the use of SR FETs with a lower voltage rating and a better figure of merit to further improve the efficiency of the PSFB.

Figure 4 A 54-V, 3-kW phase-shifted full bridge with active clamp reference design steady-state waveforms. Source: Texas Instruments

This control method isn’t limited to a full-bridge rectifier with one ACL; you can also apply it to an active snubber with other types of rectifiers such as a current doubler [4] or a center-tapped rectifier. TI’s 3-kW phase-shifted full bridge with active clamp reference design with >270-W/in3 power density has a 400-V input, 12-V output, 3-kW PSFB converter with an active clamp where the secondary side uses a center-tapped rectifier. The output rectifier stress (Ch1 in Figure 5) is limited to 40 V at a 3-kW load.

Figure 5 A 3-kW phase-shifted full bridge with active clamp reference design with >270-W/in3 power density steady-state waveforms. Source: Texas Instruments

The merit of an active clamp in a PSFB converter

The implementation of an active snubber in a PSFB converter significantly reduces the maximum voltage stress on the output rectifiers. This reduction in voltage stress enables the use of an SR FET with a lower drain-to-source voltage rating, which can have a better figure of merit. While an active clamp can create challenges with the implementation of peak current-mode control, proper implementation enables the use of an active clamp and peak current-mode control in harmony. This combination achieves higher power density and higher efficiency compared to traditional PSFB implementations.

Ben Lough received his M.S. in electrical engineering at the Ohio State University in 2016. He joined TI in 2016 working on AC/DC power conversion, power factor correction and isolated DC/DC design. He has authored over 15 technical articles at TI and external publications. He currently works as a systems engineer in the Power Design Services team at TI.

Related Content


  1. Lin, Song-Yi, and Chern-Lin Chen. “Analysis and Design for RCD Clamped Snubber Used in Output Rectifier of Phase-Shift Full-Bridge ZVS Converters.” Published in IEEE Transactions on Industrial Electronics 45, no. 2 (April 1998): pp. 358-359.
  2. Sabate, J.A., V. Vlatkovic, R.B. Ridley, and F.C. Lee. “High-Voltage, High-Power, ZVS, Full-Bridge PWM Converter Employing an Active Snubber.” Published in Sixth Annual Applied Power Electronics Conference and Exhibition (APEC), March 10-15, 1991, pp. 158-163.
  3. Nene. “Digital Control of a Bidirectional DC-DC Converter for Automotive Applications.” Published in 28th Annual Applied Power Electronics Conference and Exposition (APEC), March 17-21, 2013, pp. 1360-1365.
  4. Balogh, Laszlo. “Design Review: 100 W, 400 kHz, DC/DC Converter with Current Doubler Synchronous Rectification Achieves 92% Efficiency.” Texas Instruments Power Supply Design Seminar SEM100, literature No. SLUP111, 1996.


Intel’s next-generation CPUs hide chiplets inside*

Tue, 09/26/2023 - 17:05

Last week, Intel held its third annual two-day Innovation event, a resurrection of the previous Intel Developer Forums (a “few” of which I attended back in the old days). The day-one keynote, focusing on silicon, was delivered by CEO Pat Gelsinger:

while Greg Lavender, Intel’s chief technology officer, handled the day-two keynote duties, centering on software:

The big news coming out of the show was the public unveiling of Intel’s chiplet-implemented Meteor Lake CPU architecture, now referred to as Core Ultra in its high-end configurations (the company is deprecating the longstanding “i” from the Core 3/5/7/9 differentiation scheme):

Chiplets are, as any of you who’ve been following tech news lately know, one of the “hottest” things in semiconductors right now. And for good reason, as I wrote (for example) in my 2021 retrospective coverage on processor architectures, specifically about AMD in that case:

Some of AMD’s success is due to the company’s “chiplet” packaging innovations, which have enabled it to cost-effectively stitch together multiple die on a unified package substrate to achieve a given aggregate core count, cache amount and (in some cases) embedded graphics capability, versus squeezing everything onto one much larger, lower-yielding sliver of silicon.

The thing is, the chiplet concept isn’t particularly new. Multi-chip and multi-die modules under a single package lid, whether arranged side-by-side and/or vertically stacked, have been around for a long time. The chiplet implementation has only come to the fore now because:

  • Leading-edge processes have become incredibly difficult and costly to develop and ramp into high-volume production,
  • That struggle and expense, coupled with the exponentially growing transistor counts on modern ICs, have negatively (and significantly so) impacted large-die manufacturing yields not only during initial semiconductor process ramps but also long-term, and
  • Desirable variability in process technology (DRAM versus logic, for example), process optimization (low power consumption versus high performance), and IC sourcing (internal fab versus foundry), not to mention the attractiveness of being able to rapidly mix-and-match various feature set combinations to address different (and evolving) market needs, also enhances the appeal of a multi- vs monolithic-die IC implementation.
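The yield argument in the second bullet can be made concrete with the standard first-order Poisson defect model; the defect density and die areas below are illustrative assumptions, not figures from the article or from Intel.

```python
# First-order Poisson yield model: why one large die yields worse than the
# same silicon split into chiplets that are tested before packaging.
import math

defect_density = 0.1   # killer defects per cm^2 (assumed)
area_mono = 8.0        # cm^2, one large monolithic die (assumed)
area_chiplet = 2.0     # cm^2, each of four equivalent chiplets (assumed)

def poisson_yield(area_cm2, d0):
    """Probability that a die of the given area has zero killer defects."""
    return math.exp(-area_cm2 * d0)

y_mono = poisson_yield(area_mono, defect_density)        # ~0.45
y_chiplet = poisson_yield(area_chiplet, defect_density)  # ~0.82
# Chiplets are binned as known-good die before packaging, so the scrapped
# silicon fraction tracks the per-chiplet yield, not the monolithic one.
waste_mono = 1 - y_mono        # ~55% of large-die silicon scrapped
waste_chiplet = 1 - y_chiplet  # ~18% of chiplet silicon scrapped
```

The probability of four independently good chiplets equals the monolithic yield here, so the economic win comes entirely from discarding bad die before assembly rather than scrapping whole large die.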

Chiplets are “old news” at this point for Intel’s competitors. As previously mentioned, AMD’s been doing them with its CPUs, GPUs and APUs (CPU-plus-GPU hybrids) since 2019’s Zen 2 microarchitecture-based Ryzen 3000 series. Similarly, Apple’s first homegrown silicon for computers, 2020’s M1 SoC, integrated DRAM alongside the processor die:

The belatedly-but-ultimately unveiled highest-transistor-count M1 Ultra variant further stretched the concept by stitching together two distinct M1 Max die via a silicon interposer:

And (cue irony) it’s not even a new concept to Intel itself. Way back in early 2005 (speaking of IDFs), Intel was playing catch-up with AMD, which was first to release a true single-die dual-core CPU, the Athlon 64 X2. Intel’s counterpunch, the Pentium D, stitched together two single-core CPU die, in this case interacting via a northbridge intermediary vs directly. Still, what’s old is new again, eh? Intel also leveraged multi-die, single package techniques in 2010’s “Arrandale” CPU architecture, for example, and more recently in the 47-“tile” Ponte Vecchio datacenter GPU.

Although at a high level the “song remains the same”, different chiplet implementations vary in key factors such as the inherent cost of the technology, the performance latency and power consumption of the interconnect, and the ability (or lack thereof) to pack together multiple die tightly both horizontally and vertically. Intel, for example, has branded its latest approaches as EMIB (the Embedded Multi-Die Interconnect Bridge, for 2D multi-die interconnect) and Foveros (for vertical multi-die stacking purposes). Here’s a brief video on the latter:

And all that commonality-or-not aside, Intel’s mixing-and-matching of different slivers of silicon from different fab sources using different process lithographies, not to mention the partitioning of functions among those various silicon slivers, is also intriguing. Meteor Lake comprises four main die, each with its own power management subsystem:

  • The Compute tile, fabricated on the company’s own Intel 4 (7 nm EUV) process and integrating a varying mix of “P” (performance) and “E” (efficiency) processing cores. It’s reminiscent of the initial “hybrid” combinations in the company’s 12th generation “Alder Lake” CPUs, but these cores are generationally improved in metrics such as average and peak clock speed, power consumption in various modes, and IPC (the average number of instructions per clock cycle, for both single- and multi-threaded code).
  • The SoC tile, fabricated on TSMC’s N6 (6 nm) process. It integrates a network-on-chip processor, thereby acting as the conductor for communication between the other tiles. It also integrates cores for silicon and system security, and for AI inference (I’m guessing the latter derives from Intel’s 2016 acquisition of Movidius, although that’s just an uninformed hunch). And interestingly, it also contains its own “E” processor cores, acting as a lowest-power-consumption compute tile alternative for relevant usage scenarios.
  • The GPU tile, whose purpose is likely self-explanatory, is fabricated on TSMC’s N5 (5 nm) process and derived from the technology in the company’s latest Arc Xe discrete graphics processors. That said, the media codec and display controller functions normally found in a GPU aren’t included in this tile. Instead, they’re also in the aforementioned SoC tile.
  • And, last but not least, the I/O tile, the smallest (area-wise) of the four, and the one most likely to be omitted from low-end Meteor Lake implementations. As its name implies, it implements “boutique” functions such as Thunderbolt 4. And at least initially, it’ll also be fabricated at TSMC, specifically (as with the SoC tile) on the N6 process.

Early rumors suggested that initial Meteor Lake products, targeting mobile computing implementations, might show up in October. Whether or not that month was ever the hoped-for target inside Intel, the official “due date” for the CPUs (and presumably also systems based on them) is now December 14, which pushes them out beyond both the holiday 2023 shopping cycle (for consumers) and the 2024 budgetary cycle (for companies whose fiscal and calendar years coincide).

Why mobile first, versus desktop (or for that matter, server)? Mobile CPUs tend to prioritize low power consumption over peak performance and are also typically “kitted” with lower core counts than their desktop siblings, both attributes resonant with suppliers ramping up new die and packaged-chip manufacturing processes. That said, Intel promises that desktop variants of Meteor Lake are also under development for production shipments beginning sometime next year. And as presumed reassurance for skeptics, the company was already demoing its two-generations-subsequent client CPU, 2025’s Lunar Lake, last week. As for servers, Intel has a next-generation 144-core (“E”-only) monolithic Xeon CPU also coming out on December 14, with a dual-chiplet 288-core version to follow next year.

One final thing, returning once again to mobile. Not announced last week but sneak-peeked (then quickly yanked) a few weeks prior at a packaging event Intel put on was a Meteor Lake derivative with 16 GBytes of LPDDR5 SDRAM in the package, alongside the logic “tiles”.

If you’re thinking “Apple Silicon”, you’re conceptually spot-on, an association which Intel management is seemingly happy to encourage. 2024 and beyond should be very interesting…

*I realize, by the way, that I may be dating myself with the title of this piece. How many of you are, like me, old enough to remember the Intel Inside branding program, now deprecated (as of, gulp, nearly 20 years ago) but apparently still with a pulse?

Thoughts as always are welcomed in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content


The post Intel’s next-generation CPUs hide chiplets inside* appeared first on EDN.

Make gain-control multiplexer Ron and Roff errors (mostly) disappear

Mon, 09/25/2023 - 16:42

The use of multiplexer chips (typically various members of the 405x family) to control gain in transimpedance (current-to-voltage conversion) signal-processing and data-acquisition op-amp circuits is an established practice. Using a multiplexer (mux) to select the desired feedback resistor generally works well, but mux non-ideality can sometimes introduce significant, or even intolerable, conversion error. This is mostly due, of course, to the fact that the Ron (on-resistance) of the switches that comprise muxes is always greater than zero, and their Roff (off-resistance) is less than infinity.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Happily, simple circuit topologies exist that can reduce (or even eradicate) the worst of these erroneous effects of these pesky switch parameters, the choice depending on which (Ron or Roff) is the more serious problem in a given application. Figure 1 illustrates a trick that cancels Ron error.

Figure 1 Canceling mux Ron error by taking the output signal from the gain-resistor side of the switches.

Typical mux Ron values run from 100 to 1000 Ω and tend to vary significantly with fabrication process (e.g., polysilicon gate vs metal), signal level, supply voltage, and temperature. Because these resistances are electrically in series with the gain resistors they switch in and out of the circuit, they create a corresponding feedback gain error at the amplifier output. These errors aren’t too serious so long as the gain-setting resistor is at least a couple of orders of magnitude larger than Ron, as Figure 1’s R4 is, but they can become intolerable for lower values like R1’s.
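To put numbers on that “couple of orders of magnitude” rule of thumb, a quick calculation of the fractional gain error helps. The gain-resistor values and the 300-Ω Ron below are hypothetical assumptions for illustration, not values taken from the schematic:

```python
def ron_gain_error(r_gain: float, r_on: float) -> float:
    """Fractional transimpedance gain error when a mux switch's Ron
    appears in series with the selected feedback resistor."""
    return r_on / r_gain

R_ON = 300.0  # ohms; a mid-range assumption for 405x-family switches

for r_gain in (1e3, 10e3, 100e3, 1e6):  # hypothetical gain-resistor values
    print(f"Rf = {r_gain:>9.0f} ohm -> gain error {ron_gain_error(r_gain, R_ON):.3%}")
```

A 1-kΩ feedback resistor suffers a 30% error, while a 1-MΩ resistor sees only 0.03%, which is exactly why the low-gain settings are the problem cases.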

The Figure 1 circuit entirely evades these issues by employing an additional mux (U1a) to pick the output signal, not directly from A1’s output on U1b pin 13, but from the U1b pin that drives the selected gain resistor (R1 thru R4), thus bypassing and eliminating Ron error.

So much for Ron, but what about the other end of the mux switch error spectrum: Roff?

Mux Roff resistances are also highly dependent on temperature and fabrication process, and usually range from ones to hundreds of megaohms. Because they are effectively in parallel with the gain-setting resistors, they can cause troublesome errors when gain resistors are larger than a few tens of kiloohms. Figure 2 suggests a work-around when this is the situation.
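A quick calculation shows why large gain resistors are the vulnerable ones here. The Roff and gain-resistor values below are hypothetical illustrations, not values from the article:

```python
def roff_parallel_error(r_gain: float, r_off: float) -> float:
    """Fractional gain reduction when a mux switch's Roff appears in
    parallel with the selected gain resistor (no Rz mitigation)."""
    r_eff = r_gain * r_off / (r_gain + r_off)  # parallel combination
    return (r_gain - r_eff) / r_gain

R_OFF = 10e6  # ohms; a hypothetical mid-range mux off-resistance

for r_gain in (10e3, 100e3, 1e6):  # hypothetical gain-resistor values
    print(f"Rgain = {r_gain:>9.0f} ohm -> gain error {roff_parallel_error(r_gain, R_OFF):.2%}")
```

With a 10-MΩ Roff, a 10-kΩ gain resistor sees only about 0.1% error, but a 1-MΩ resistor loses roughly 9%, the kind of error Figure 2's Rz trick is designed to eliminate.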

Figure 2 Canceling mux Roff error by routing the leakage current to ground with Rz.

Figure 2’s Rz resistors serve to route the Vout/Roff leakage currents of U1b to ground, leaving only millivolts to be blocked by U1a. This trick reduces Roff error by orders of magnitude.

Of course, this ploy effectively places two mux Ron resistances in series with the gain-set resistors (R1 through R4), and so it must be used with caution lest it cause more error than it cures.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Nearly 100 submissions have been accepted since his first contribution back in 1974.

Related Content


The post Make gain-control multiplexer Ron and Roff errors (mostly) disappear appeared first on EDN.

Predicting VDS switching spike with SPICE simulation

Mon, 09/25/2023 - 08:29

One of the primary goals for the power supply industry is to bring higher power conversion efficiency and power density to power devices in applications such as data centers and 5G. Integrating a driver circuit and power MOSFET—known as a DrMOS—into an IC increases power density and efficiency when compared to a conventional, discrete MOSFET with an individual driver IC.

Moreover, DrMOS’s flip-chip technology further optimizes the voltage regulator’s performance by reducing response time and the inductance between the die and package (Figure 1).

Figure 1 Here is a comparison between conventional wire bond and flip-chip technology. Source: Monolithic Power Systems

However, parasitic inductance on the substrate and PCB significantly impacts the drain-to-source voltage (VDS) spike, due to resonance between the parasitic inductance and the MOSFET’s output capacitance (COSS). A high VDS spike can cause a MOSFET avalanche, which leads to device degradation and reliability issues. To prevent an avalanche breakdown on the MOSFET, there are several methods to alleviate voltage stress.

The first method is to apply a higher-voltage, double-diffused MOSFET (DMOS) process on the DrMOS. If this process is adopted in the power MOSFET design, it results in a higher on-resistance (RDS(ON)) for the DrMOS due to a reduced number of paralleled DMOS within the same space.

The second method is to use a snubber circuit to suppress the voltage spike. However, this method leads to extra loss in the snubber circuit. Furthermore, adding a snubber circuit may not effectively lower the MOSFET’s VDS spike, since the stray inductance that causes the resonant behavior is mainly integrated in the DrMOS’s package.

When trying to increase voltage regulator efficiency and reduce the MOSFET’s voltage spikes, the tradeoffs described above can make it difficult to quantify and optimize the effects of parasitic inductance on the PCB and substrate.

This article will first discuss parasitic inductance modeling. Next, the equivalent parasitic circuit model is applied in a SPICE simulation tool to predict the VDS switching spike. Experimental results will be presented to verify the feasibility of the parasitic model.

Parasitic inductance modeling on a DrMOS

To model parasitic inductance, 3D structures of both the DrMOS and PCB were built for a simulation analysis (Figure 2). Parameters such as the materials, stack-up information, and PCB and package layer thicknesses are crucial for modeling accuracy.

Figure 2 DrMOS and PCB’s 3D-modeling structure can be used to obtain parasitic inductance. Source: Monolithic Power Systems

After 3D-modeling the PCB and DrMOS, the parasitic inductance can be characterized and obtained via ANSYS Q3D extractor. Since this article focuses on the MOSFET’s VDS spike, the main simulation settings of interest are the parasitic parameters on the power nets and driver nets.

When considering the parasitic component obtained from Q3D extractor, the parasitic inductance matrix—including the self and mutual terms of each net on the DrMOS—can be selected under different frequency conditions. Since the resonant frequency for VDS on the high-side MOSFET (HS-FET) and low-side MOSFET (LS-FET) is between 300 MHz and 500 MHz, the parasitic inductance matrix under 300 MHz condition is adopted for further behavior model simulation.
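The quoted 300 MHz to 500 MHz ring frequency is consistent with sub-nanohenry loop inductances resonating with COSS, per f = 1/(2π√(LC)). A sanity-check sketch follows; the inductance and capacitance values are hypothetical assumptions, not figures from the article:

```python
import math

def resonant_freq_hz(l_parasitic_h: float, c_oss_f: float) -> float:
    """Resonant frequency of the loop formed by parasitic inductance and
    the MOSFET output capacitance: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_parasitic_h * c_oss_f))

# Hypothetical values: 0.5 nH loop inductance and 300 pF of Coss
f = resonant_freq_hz(0.5e-9, 300e-12)
print(f"Estimated VDS ring frequency: {f / 1e6:.0f} MHz")  # ~411 MHz
```

This back-of-the-envelope check is also a handy way to verify that the inductance matrix was extracted at a frequency representative of the actual ringing.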

Behavior model simulation on SPICE

After the equivalent parasitic component model is exported from Q3D, the effects of different types of decoupling capacitors on the PCB are taken into account. Because a multi-layer ceramic capacitor’s (MLCC) capacitance decays under an applied DC voltage, it’s important to model each individual MLCC with the equivalent circuit corresponding to the DC bias at its operating voltage. Figure 3 shows the circuit configuration for the behavior model simulation on SPICE.

Figure 3 A circuit can be configured with a behavior model simulation. Source: Monolithic Power Systems

Table 1 shows the simulation and measurement conditions based on the schematic shown in Figure 3.

Table 1 Simulation and measurement conditions for the experimental test bench. Source: Monolithic Power Systems

Optimizing parasitic inductance

To suppress VDS spike without compromising efficiency, it’s vital to optimize parasitic inductance on the PCB and package. With advanced package technology, input capacitors can be integrated in the package to shorten the decoupling path (Figure 4). Paralleling the embedded capacitors in the package can effectively reduce the equivalent parasitic inductance on the DrMOS.

Figure 4 A 3D DrMOS structure with embedded capacitors optimizes the VDS spike. Source: Monolithic Power Systems

Table 2 shows the equivalent parasitic inductance and VDS spike when utilizing different decoupling capacitor configurations on DrMOS.

Table 2 Equivalent parasitic inductance and VDS spike are shown with different capacitor configurations. Source: Monolithic Power Systems

As the simulation results in Table 2 show, not only is the equivalent parasitic inductance lowered, but the VDS spike on the MOSFET is also suppressed. Moreover, thanks to the MLCC’s low-ESR characteristics, no additional power loss is generated on the embedded input capacitors. Therefore, it’s possible to add different embedded input capacitors to reduce parasitic inductance in DrMOS applications.

DrMOS with embedded capacitors

This article has explained the effect of parasitic inductance on the VDS switching spike, as well as several methods to prevent an avalanche breakdown on the MOSFET due to the VDS switching spike. To quantify the effects of parasitic inductance on the VDS switching spike, parasitic inductance modeling is first introduced, and then behavior modeling on SPICE is proposed.

The results obtained via SPICE closely matched the experimental results for DrMOS solutions such as the MP87000-L, which means the behavior model can accurately predict the risk of an avalanche breakdown on the MOSFET.

To effectively suppress the VDS spike without any tradeoffs, embedded capacitors in the package were introduced. The behavior model simulation confirmed that these capacitors can reduce the equivalent parasitic inductance, and thus lower the VDS spike without additional loss.

Andrew Cheng is applications engineer at Monolithic Power Systems (MPS).

Lion Huang is senior staff applications engineer at Monolithic Power Systems (MPS).

Related Content


The post Predicting VDS switching spike with SPICE simulation appeared first on EDN.

Software tool simplifies MCU-based device-to-cloud connectivity

Fri, 09/22/2023 - 18:12

Microcontroller suppliers are striving to bolster device-to-cloud connectivity by adding software tools that simplify the Internet of Things (IoT) links to major cloud platforms. Case in point: STMicroelectronics has released software that simplifies connectivity for devices built around its STM32H5 microcontrollers to AWS IoT Core and Microsoft Azure IoT Hub.

The X-CUBE-AWS-H5 and X-CUBE-AZURE-H5 expansion packages feature a set of libraries and application examples for STM32H5 microcontrollers to facilitate secure connection to the AWS and Azure cloud platforms, respectively.

Figure 1 The new software for STM32H5 MCUs leverages ST’s Secure Manager for protected connection to cloud IoT platforms. Source: STMicroelectronics

The sample applications illustrate device-to-cloud connectivity along with network configuration and data publishing. For instance, the application example in the X-CUBE-AZURE-H5 expansion package handles Azure messages, methods, and twin update commands.

These expansion packages also leverage ST’s embedded security software, Secure Manager, to securely connect STM32H5 microcontrollers to AWS and Azure cloud platforms. Secure Manager provides isolation properties that facilitate protecting the intellectual property of multiple owners on the same platform. This multitenant IP protection is part of a comprehensive set of services that protect the confidentiality and integrity of assets through development, manufacturing, and in the field.

STM32H5 microcontrollers, based on Arm Cortex-M33, are programmed with their own immutable identity at the ST factory. That, combined with ST’s Secure Manager, simplifies registering smart devices to AWS and Azure cloud platforms and removes the need for costly infrastructure otherwise necessary to keep the identities of IoT objects secret during their production.

Figure 2 STM32H5 Discovery kit lets developers easily and securely connect their STM32H5-based prototypes to cloud platforms like AWS IoT Core. Source: STMicroelectronics

Moreover, devices in production and those in the field can also benefit from remote provisioning and administration of credentials via third-party service providers. That’s because Secure Manager stores the credentials needed to connect to AWS and Azure IoT Hub, as well as other device secrets and assets, within the STM32H5 microcontroller.

The STM32H5 microcontroller, introduced in March 2023, is the first to support Secure Manager and target PSA Certified level 3 and SESIP3 certifications. The MCU’s security features simplify the process for developers to securely provision and manage high-performance IoT devices.

Developers can also take advantage of the powerful services offered by AWS and Azure cloud platforms to transform the data collected from their devices into actionable insights.

Related Content


The post Software tool simplifies MCU-based device-to-cloud connectivity appeared first on EDN.

MAC-PHY connects MCUs to 10Base-T1S networks

Thu, 09/21/2023 - 20:52

Microchip’s LAN8650/1 combines a media access controller (MAC) and Ethernet PHY to enable low-cost MCUs to access 10Base-T1S automotive networks. A standard serial peripheral interface (SPI) allows the LAN8650/1 to interface with nearly any 8-bit, 16-bit, or 32-bit MCU, including those that do not have a built-in Ethernet MAC. The 10Base-T1S Ethernet PHY transmits 10 Mbps over a single balanced pair and conforms to the IEEE 802.3cg-2019 standard.

The LAN8650/1 is equipped with time-sensitive networking (TSN) support, which permits synchronized timing across far-reaching Ethernet networks. Time synchronization is critical for many automotive applications, such as advanced driving assistance systems (ADAS).

Compliant with AEC-Q100 Grade 1 requirements for enhanced robustness in harsh environments, the LAN8650/1 operates over the extended temperature range of -40°C to +125°C. It is also functional safety-ready and intended for use in ISO 26262 applications.

The part is housed in a 32-pin, 5×5-mm VQFN package. Samples of the LAN8650/1 can be ordered online by using the product page link below.

LAN8650/1 product page

Microchip Technology 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post MAC-PHY connects MCUs to 10Base-T1S networks appeared first on EDN.

GaN-on-SiC ISM amplifier boosts efficiency

Thu, 09/21/2023 - 20:51

Gallium Semiconductor offers its first ISM CW amplifier, the GTH2e-2425300P, operating from 2.4 GHz to 2.5 GHz with peak efficiency of 76% (pulsed). Measured data for the 300-W prematched GaN-on-SiC HEMT shows a drain efficiency of over 72% under continuous-wave operation.

The GTH2e-2425300P brings high levels of efficiency to a wide range of industrial, scientific, and medical (ISM), RF energy, and continuous-wave (CW) applications, including semiconductor plasma sources and microwave plasma chemical vapor deposition. Powered by a 50-V supply rail, the part has a small-signal gain of 17 dB typical at saturated power. Drain-source breakdown voltage is 150 V.

The GTH2e-2425300P comes in an ACP-800 4-lead air-cavity plastic package with a ceramic matrix composite (CMC) flange, enhancing reliability and thermal performance (0.67°C/W). This standard air-cavity package also simplifies integration into various RF systems.

A fixed-tune demonstration board is available for qualified customers. Contact sales@galliumsemi.com for pricing details and ordering information.

GTH2e-2425300P product page

Gallium Semiconductor



The post GaN-on-SiC ISM amplifier boosts efficiency appeared first on EDN.

TVS device targets USB4 and Thunderbolt 4

Thu, 09/21/2023 - 20:51

A single-channel transient voltage suppressor (TVS) from AOS, the AOZ8S207BLS-01, provides ESD protection for high-speed interfaces like USB4 and Thunderbolt 4. The device packs two unidirectional TVS diodes in a single 0.6×0.3×0.3-mm leadless surface-mount package, enabling it to meet the small footprint requirements of USB Type-C connectors.

Key specifications for the AOZ8S207BLS-01 include a reverse working voltage (VRWM) of 1 V maximum, reverse breakdown voltage (VBR) of 1.5 V typical, peak pulse current (IPP) of 5 A maximum, and junction capacitance (CJ) of 0.15 pF typical. The ESD rating per IEC 61000-4-2 is ±16 kV for both contact and air discharge, along with a ±8-kV human body model (HBM) rating.

The AOZ8S207BLS-01 transient voltage suppressor costs $66 each in lots of 1000 units. It is immediately available in production quantities with a lead time of 16 weeks.

AOZ8S207BLS-01 product page

Alpha & Omega Semiconductor 



The post TVS device targets USB4 and Thunderbolt 4 appeared first on EDN.

Automotive Hall ICs provide dual outputs

Thu, 09/21/2023 - 20:50

Hall-effect sensors in the AH39xxQ series from Diodes provide accurate speed and directional data or two independent outputs. Targeting automotive applications, the parts are AEC-Q100 Grade 0 qualified and support PPAP documentation. Self-diagnostic features also make the Hall ICs suitable for ISO 26262-compliant systems.

In line with automotive battery requirements, the devices operate over a wide supply voltage range of 2.7 V to 27 V. They have a 40-V absolute maximum rating, enabling them to safely handle 40-V load dumps. Three operating-point and release-point (BOP/BRP) options are offered, with typical values of 10/-10 gauss, 25/-25 gauss, and 75/-75 gauss. A narrow operating window ensures accurate and reliable switching points.
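The BOP/BRP pair defines a hysteresis window: the output asserts only above the operate point and releases only below the release point, which rejects noise near the threshold. A minimal behavioral model of the 25-gauss option (those thresholds are from the article; the code itself is an illustrative sketch):

```python
def hall_latch(b_sequence, b_op=25.0, b_rp=-25.0):
    """Model a Hall latch with operate point B_OP and release point B_RP
    (in gauss). The output turns on at or above B_OP, off at or below
    B_RP, and holds its previous state in between (hysteresis)."""
    state = False
    outputs = []
    for b in b_sequence:
        if b >= b_op:
            state = True
        elif b <= b_rp:
            state = False
        outputs.append(state)
    return outputs

# A field sweeping up then down: the output toggles only at +/-25 gauss
fields = [0, 10, 30, 10, 0, -10, -30, -10]
print(hall_latch(fields))
# [False, False, True, True, True, True, False, False]
```

The hold region between BRP and BOP is what the "narrow operating window" trades against: a narrower window gives more accurate switch points but less noise immunity.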

Dual-channel operation allows one Hall IC to replace two latch switches, saving PCB space and overall component costs. In addition, a chopper stabilized design minimizes switch point drift and ensures accurate measurements over a broad temperature range.

Hall sensors in the AH396xQ series cost $0.40 each in lots of 1000 units. Hall sensors with self-diagnostics in the AH397xQ series cost $0.44 each in like quantities. All of the devices come in TSOT25 packages.

AH396xQ series product page

AH397xQ series product page




The post Automotive Hall ICs provide dual outputs appeared first on EDN.

Kit accelerates STM32H5 MCU development

Thu, 09/21/2023 - 20:50

The STM32H5731-DK Discovery kit from ST allows developers to build secure, connected IoT devices based on the STM32H5 microcontroller. Users can explore all the integrated features of the STM32H5, including analog peripherals, the adaptive real-time accelerator (ART), media interfaces, and mathematical accelerators, as well as core security services certified and maintained by ST.

Along with the STM32H5 MCU, the development board furnishes a color touch display, digital microphone, and USB, Ethernet, and Wi-Fi interfaces. An audio codec, flash memory, and headers for connecting expansion shields and daughterboards are also provided.

To simplify the development process, the STM32CubeH5 MCU software package supplies the components required to develop an application on the STM32H5 microcontroller, including examples and application code. ST also offers the STM32CubeMX tool for configuring and initializing the MCU.

The STM32H5 employs an Arm Cortex-M33 MCU core running at 250 MHz and is the first to support ST’s Secure Manager system-on-chip security features. It combines Arm TrustZone security with ST’s STM32Trust framework to comply with PSA Certified Level 3 and Global Platform SESIP3 security specifications.

The Discovery kit board costs $98.75 and is available through ST’s eStore and authorized distributors.

STM32H5731-DK product page




The post Kit accelerates STM32H5 MCU development appeared first on EDN.

Accelerating RISC-V development with network-on-chip IP

Thu, 09/21/2023 - 09:33

In the world of system-on-chip (SoC) devices, architects encounter many options when configuring the processor subsystem. Choices range from single processor cores to clusters to multiple core clusters that are predominantly heterogeneous but occasionally homogeneous.

A recent trend is the widespread adoption of RISC-V cores, which are built upon the open standard RISC-V instruction set architecture (ISA), available through royalty-free open-source licenses.

Here, the utilization of network-on-chip (NoC) technologies’ plug-and-play capabilities has emerged as an effective strategy to accelerate the integration of RISC-V-based systems. This approach facilitates seamless connections between processor cores or clusters and intellectual property (IP) blocks from multiple vendors.


Network-on-chip basics

Using a NoC interconnect IP offers several advantages. The NoC can extend across the whole device, with each IP block connecting to it through one or more of its own interfaces. These interfaces have their own data widths, operate at varying clock frequencies, and utilize diverse protocols, such as OCP, APB, AHB, AXI, STBus, and DTL, commonly adopted by SoC designers. Each of these interfaces links to a corresponding network interface unit (NIU), also referred to as a socket.

The NIU’s role is to receive data from a transmitting IP and then organize and serialize this data into a standardized format suitable for network transmission. Multiple packets can be in transit simultaneously. Upon arrival at its destination, the associated socket performs the reverse action by deserializing and undoing the packetization before presenting the data to the relevant IP. This process is done in accordance with the protocol and interface specifications linked to that particular IP.
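The packetize/serialize round trip that an NIU pair performs can be sketched in miniature. The flit format below is purely illustrative (an address-only header followed by fixed-size payload flits); real NoC packets also carry routing, QoS, and protocol information:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    """A simplified transaction as a transmitting IP might present it to
    its NIU (fields are illustrative, not tied to any specific protocol)."""
    address: int
    data: bytes

def niu_packetize(txn: Transaction, flit_bytes: int = 4) -> list:
    """NIU transmit side: serialize a transaction into fixed-size flits,
    a header flit (here just the address) followed by payload flits."""
    header = txn.address.to_bytes(flit_bytes, "big")
    payload = txn.data + b"\x00" * ((-len(txn.data)) % flit_bytes)  # pad to flit size
    return [header] + [payload[i:i + flit_bytes] for i in range(0, len(payload), flit_bytes)]

def niu_depacketize(flits: list, flit_bytes: int = 4):
    """Destination socket: undo the packetization before presenting the
    data to the receiving IP."""
    return int.from_bytes(flits[0], "big"), b"".join(flits[1:])

flits = niu_packetize(Transaction(address=0x1000, data=b"abcdef"))
addr, data = niu_depacketize(flits)
print(hex(addr), data)  # payload comes back with its pad bytes attached
```

In a real implementation, the destination socket would also strip the padding and re-present the transaction per the receiving IP's protocol and interface specification.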

As a straightforward illustration, the IP blocks can be visualized as solid logic blocks, and an SoC usually utilizes a single NoC. Figure 1 illustrates a basic NoC configuration.

Figure 1 A very simple NoC representation shows basic design configuration. Source: Arteris

The NoC itself can be implemented using a variety of topologies, including 1D star, 1D ring, 1D tree, 2D mesh, 2D torus and full mesh, as illustrated in Figure 2.

Figure 2 The above examples show a variety of NoC topologies. Source: Arteris

Some SoC design teams may want to develop their own proprietary NoCs, a process that is resource- and time-intensive. This approach requires teams of several specialized engineers to work for two or more years. To make matters more challenging, designers often invest nearly as much time debugging and verifying an in-house developed NoC as they do for the rest of the entire design.

As design cycles shorten and time-to-revenue pressures increase, SoC development teams are considering commercially available NoC IP. This IP enables the customization required in an internally developed NoC IP but is available from third-party vendors.

Another challenge of the growing SoC complexity is the practice of utilizing multiple NoCs and various NoC topologies within a single device (Figure 3). For instance, one section of the chip might adopt a hierarchical tree topology, while another area could opt for a 2D mesh configuration.

Figure 3 The illustration highlights sub-system blocks with internal NoCs. Source: Arteris

In many cases, the IP blocks in today’s SoCs are the equivalent of entire SoCs of only a few years ago, making them sub-systems. Thus, the creators of these sub-system blocks will often choose to employ industry-standard NoC IP provided by a third-party vendor.

In instances requiring high levels of customizability and co-optimization of compute and data transport, such as a processor cluster or a neural network accelerator, the IP development team may opt for a custom implementation of the transport mechanisms. Alternatively, they might decide to utilize one of the lesser adopted, highly specialized protocols to achieve their design goals.

RISC-V and NoC integration

For a standalone RISC-V processor core, IPs are available with AXI interfaces for designers who don’t need coherency and with CHI interfaces for those who do. This allows these cores to plug-and-play with an industry-standard NoC at the SoC level.

Likewise, if design teams select one of the less commonly adopted protocols for inter-cluster communication in a RISC-V design, that cluster can also feature ACE, AXI or CHI interfaces toward external connections. This method allows for quick connection to the SoC’s NoC.

Figure 4 below features both non-coherent and cache coherent options. Besides their usage in IPs and SoCs, these NoCs can also function as super NoCs within multi-die systems.

Figure 4 A NoC interconnect IP is shown in the context of a multi-die system. Source: Arteris

NoC IP in RISC-V processors

The industry is experiencing a dramatic upsurge in SoC designs featuring processor cores and clusters based on the open standard RISC-V instruction set architecture.

The development and adoption of RISC-V-based systems, including multi-die systems, can be accelerated by leveraging the plug-and-play capabilities offered by NoC technologies. This enables quick, seamless and efficient connections between RISC-V processor cores or clusters and IP functional blocks provided by multiple vendors.

Frank Schirrmeister, VP solutions and business development at Arteris, leads activities in the automotive, data center, 5G/6G communications, mobile, and aerospace industry verticals. Before Arteris, Frank held various senior leadership positions at Cadence Design Systems, Synopsys and Imperas, focusing on product marketing and management, solutions, strategic ecosystem partner initiatives and customer engagement.

Related Content


The post Accelerating RISC-V development with network-on-chip IP appeared first on EDN.

Exploring the superior capabilities of Wi-Fi 7 over Wi-Fi 6

Wed, 09/20/2023 - 17:27

In recent years, applications such as video conferences, ultra high-definition streaming services, cloud services, gaming, and advanced industrial Internet of Things (IIoT) have significantly raised the bar for wireless technology. Wi-Fi 6 (including Wi-Fi 6E) and dual-band Wi-Fi were promising solutions to the rising wireless demands. However, the real-world improvements and noticeable benefits of Wi-Fi 6 have been underwhelming.

Now, a new standard is on the horizon, bringing significant technical changes to the Wi-Fi industry. Wi-Fi 7 will be a giant leap forward for residential and enterprise users. This article provides insights into the latest progress of Wi-Fi 7, helping engineers better understand its full capabilities and the technical challenges that come with the new features, and assisting those working on smooth Wi-Fi 7 adoption and on potential applications of advanced wireless technologies.

Expected Wi-Fi 7 performance vs Wi-Fi 6, 6E and 5

From the last column of Table 1, you can see the performance Wi-Fi 7 will deliver: a 4.8-fold connection-speed gain over Wi-Fi 6, for a maximum theoretical data rate of 46 Gbps. Compare that with the improvement from Wi-Fi 5 to Wi-Fi 6, which was only 2.8 times.


|                | Wi-Fi 5                    | Wi-Fi 6                    | Wi-Fi 6E                   | Wi-Fi 7                          |
|----------------|----------------------------|----------------------------|----------------------------|----------------------------------|
| Launch time    | 2014                       | 2019                       | 2021                       | 2024 (Expected)                  |
| IEEE standard  | 802.11ac                   | 802.11ax                   | 802.11ax                   | 802.11be                         |
| Max data rate  | 3.5 Gbps                   | 9.6 Gbps                   | 9.6 Gbps                   | 46 Gbps                          |
| Bands          | 5 GHz                      | 2.4 GHz, 5 GHz             | 2.4 GHz, 5 GHz, 6 GHz      | 2.4 GHz, 5 GHz, 6 GHz            |
| Channel size   | 20, 40, 80, 80+80, 160 MHz | 20, 40, 80, 80+80, 160 MHz | 20, 40, 80, 80+80, 160 MHz | Up to 320 MHz                    |
| Modulation     | 256-QAM OFDM               | 1024-QAM OFDMA             | 1024-QAM OFDMA             | 4096-QAM OFDMA (with Extensions) |

Table 1 A specification comparison between Wi-Fi 5, Wi-Fi 6, Wi-Fi 6E, and Wi-Fi 7.

Much of that speed improvement comes from the channel size increasing to 320 MHz; as Table 1 shows, channel size had stayed the same for over ten years. Another key reason Wi-Fi 7 can deliver much higher speed is its support for three frequency bands (2.4 GHz, 5 GHz, and 6 GHz) and multi-link operation. Figure 1 shows the bands, spectrum, channels, and channel widths available to Wi-Fi 7. These features not only improve connection speed but also improve network capacity by five times compared to Wi-Fi 6. In a later section, we will explore these new technical features in more detail.

Figure 1 A description of bands, spectrum, channels, and channel width available to Wi-Fi 7. Source: Keysight

Based on the specifications of Wi-Fi 7, besides the 46 Gbps speed, we expect Wi-Fi 7 to deliver less than five milliseconds of latency. This is over one hundred times better than Wi-Fi 6. With this performance, we could expect 15x better AR/VR performance.
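As a rough sanity check, the headline 4.8x gain decomposes into the three scaling factors this article discusses: doubled channel width, the step from 1024-QAM to 4096-QAM, and the doubling of spatial streams to 16 (covered in the MU-MIMO section below). The sketch below is back-of-the-envelope arithmetic, not a measurement:

```python
# Where the ~4.8x headline gain over Wi-Fi 6 comes from: the product of the
# three headline scaling factors. Illustrative arithmetic only.

bandwidth_gain = 320 / 160        # max channel width: 160 MHz -> 320 MHz
modulation_gain = 12 / 10         # bits per symbol: 1024-QAM -> 4096-QAM
stream_gain = 16 / 8              # spatial streams: 8 -> 16 (16x16 MU-MIMO)

total_gain = bandwidth_gain * modulation_gain * stream_gain
print(total_gain)                  # 4.8
print(round(9.6 * total_gain, 2))  # 46.08 -- the ~46 Gbps theoretical max
```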

Maximum channel bandwidth increase

As shown in Table 1, one of the most significant changes coming to Wi-Fi 7 is the maximum channel bandwidth: in the 6 GHz band, bandwidth doubles from 160 MHz to 320 MHz, a change that will enable many more simultaneous data transmissions. As illustrated in Figure 2, with twice the bandwidth resources, you can expect the base speed to double.

Figure 2 Wi-Fi 7’s maximum channel bandwidth in the 6 GHz band versus the 5 GHz band of Wi-Fi 6. Source: Keysight

Currently, two main challenges will slow the adoption of 320 MHz channels. First, from a regulatory standpoint, certain regions support three channels of 320 MHz contiguous spectrum, others support only one, and some support none; that is also why this bandwidth is exclusive to the 6 GHz band. Policymakers in different regions must work closely with the Wi-Fi industry to find feasible ways to allow additional bandwidth for Wi-Fi applications. Despite these challenges, several chipset/module vendors have already certified Wi-Fi 7 modules, and several device manufacturers will be releasing Wi-Fi 7 access points (APs) in 2023.

The second challenge is client support: today's client devices support at most 160 MHz. Device makers must weigh factors like interference and power consumption when designing and developing new products; higher bandwidth support usually means higher power usage and a greater chance of interference, and it takes time to balance performance against these factors. Therefore, it will be a while before the industry can take full advantage of this channel bandwidth increase.

Multi-link operation

Another important feature coming to Wi-Fi 7 is multi-link operation, or MLO. Currently, as shown on the left of Figure 3, Wi-Fi supports only single-link operation: a device transmits data using either the 2.4 GHz band or the 5 GHz band. With Wi-Fi 7 and MLO, shown on the right of Figure 3, devices can transmit over all available bands and channels simultaneously. MLO typically works in one of two schemes: devices either choose among the different bands for each transfer cycle, or aggregate more than one band. Either way, MLO avoids congestion on the links, lowering latency. This feature will improve reliability for applications like VR/AR, gaming, video conferencing, and cloud computing.

Figure 3 Single-link operation of Wi-Fi 6 versus MLO of Wi-Fi 7. Source: Keysight
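The two MLO schemes can be sketched as a toy model: per-cycle band selection picks the least congested link, while aggregation uses every link at once. The per-band throughput figures below are made-up illustrative numbers, not measured Wi-Fi rates:

```python
# Toy illustration of the two MLO schemes: per-cycle band selection versus
# full link aggregation. Throughput figures are invented for the example.

def pick_best(links):
    """Single-link choice per cycle: use the band with the most headroom."""
    return max(links.values())

def aggregate(links):
    """Link aggregation: transmit across all bands simultaneously."""
    return sum(links.values())

# Hypothetical available throughput per band (Mbps) under current congestion:
links = {"2.4 GHz": 200, "5 GHz": 1200, "6 GHz": 2000}
print(pick_best(links))   # 2000
print(aggregate(links))   # 3400
```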

As mentioned in the previous section, Wi-Fi 7 supports a wider maximum channel bandwidth of up to 320 MHz. Aggregating these wider channels increases the peak-to-average power ratio (PAPR), so MLO will introduce more power consumption, which device makers must find ways to compensate for. Besides the additional power usage, having more subchannels also makes managing interference more difficult.
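The PAPR effect can be illustrated numerically: an OFDM symbol sums many independently modulated subcarriers, and occasional in-phase alignment produces peaks well above the average power. The pure-Python sketch below estimates the PAPR of one randomly QPSK-loaded symbol; the subcarrier count is illustrative and not the actual 802.11be tone plan:

```python
# Estimate the PAPR of one OFDM symbol with random QPSK subcarrier loading.
# Pure-Python direct inverse DFT (O(N^2)) -- fine for a small demo.
import cmath
import math
import random

def papr_db(n_subcarriers, rng):
    """Return the peak-to-average power ratio, in dB, of one OFDM symbol."""
    # Random QPSK constellation point on every subcarrier.
    symbols = [complex(rng.choice((-1, 1)), rng.choice((-1, 1)))
               for _ in range(n_subcarriers)]
    n = n_subcarriers
    # Inverse DFT -> time-domain samples of the composite waveform.
    samples = [sum(symbols[k] * cmath.exp(2j * cmath.pi * k * t / n)
                   for k in range(n)) / n
               for t in range(n)]
    powers = [abs(s) ** 2 for s in samples]
    return 10 * math.log10(max(powers) * len(powers) / sum(powers))

# Typically lands around 7-12 dB for a few hundred subcarriers.
print(round(papr_db(256, random.Random(0)), 1))
```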

Channel puncturing

The next important feature is channel puncturing, also called preamble puncturing. It allows APs to maintain transmissions with more than one companion device at the same time while monitoring the channel for interference. If an AP detects interference, it can 'puncture' the channel, notching out the affected 20 MHz sub-channel and continuing the transmission on the rest of the channel. The overall bandwidth drops by the punctured amount, but a reduced channel is still far better than not using the channel at all.
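Puncturing amounts to simple bandwidth bookkeeping: each notched 20 MHz sub-channel is subtracted from the channel width, and whatever remains stays on the air. A minimal sketch (sub-channel indices are illustrative):

```python
# Channel-puncturing bookkeeping: notch out interfered 20 MHz sub-channels
# and keep transmitting on the remainder. Indices here are illustrative.

SUBCHANNEL_MHZ = 20

def usable_bandwidth(channel_mhz, punctured_subchannels):
    """Bandwidth left after notching the given 20 MHz sub-channel indices."""
    return channel_mhz - SUBCHANNEL_MHZ * len(set(punctured_subchannels))

# A 320 MHz channel with interference detected on two sub-channels still
# keeps 280 MHz -- far better than vacating the whole channel.
print(usable_bandwidth(320, [3, 9]))  # 280
```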

Channel puncturing already existed in Wi-Fi 6 as an optional feature. However, because of its technical complexity, it requires both compatible APs and clients to work properly, and no manufacturer has yet taken advantage of it. With the new Wi-Fi 7 standard, channel puncturing could become a standard feature.

On the measurement side, this feature presents more challenges from regulators. The European Telecommunications Standards Institute (ETSI) has already issued standards for preamble-puncturing testing, but only for 160 MHz bandwidths. The Federal Communications Commission (FCC), however, has yet to provide clear guidelines on measurement limits for preamble puncturing. The existing measurement limits were not designed for the Wi-Fi 7 preamble-puncturing feature, and they are too restrictive. For example, there are discussions in presentations on how to manage channel puncturing for dynamic frequency selection (DFS) testing, but no formal definition in FCC guidance documents (KDBs). Changes may also be coming to the in-band emission limits for channel puncturing.

Other important new features of Wi-Fi 7 and IoT support

To support more IoT devices on one Wi-Fi network, Wi-Fi 7 brings 16×16 multi-user multiple-input, multiple-output (MU-MIMO). This feature will easily double the network capacity of Wi-Fi 6. While it improves transmission efficiency, it also greatly increases the amount of testing required, as several tests are needed for each antenna output.

Wi-Fi 7 adopts a higher-order modulation scheme, 4096-QAM, to further enhance peak rates, as shown in Figure 4. It allows Wi-Fi 7 to carry 12 bits per symbol rather than 10, so the new modulation scheme alone improves theoretical transmission rates by 20% compared to Wi-Fi 6's 1024-QAM. Beyond the data-rate improvement, for streaming, gaming, and VR/AR applications, 4K-QAM means flawless 4K/8K image quality, higher color accuracy, and minimal lag.

Figure 4 Wi-Fi 7 adopts a higher-order modulation scheme, 4096-QAM, to further enhance peak rates; here is an example of 1024 QAM vs. 4096 QAM. Source: Keysight
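The modulation gain follows directly from the constellation sizes: a 2^b-point QAM constellation carries b bits per symbol, so 4096-QAM's 12 bits top 1024-QAM's 10 bits by exactly 20%. A quick check:

```python
# Bits per symbol for the QAM orders used across Wi-Fi generations.
import math

def bits_per_symbol(constellation_points):
    """A 2^b-point QAM constellation carries b bits per symbol."""
    return int(math.log2(constellation_points))

for name, points in [("256-QAM (Wi-Fi 5)", 256),
                     ("1024-QAM (Wi-Fi 6)", 1024),
                     ("4096-QAM (Wi-Fi 7)", 4096)]:
    print(name, "->", bits_per_symbol(points), "bits/symbol")

gain = bits_per_symbol(4096) / bits_per_symbol(1024) - 1
print(f"{gain:.0%}")  # 20%
```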

With Wi-Fi 6, each user is assigned only one resource unit (RU) for transmitting frames, which makes spectrum use less flexible. Wi-Fi 7, however, allows combinations of multiple RUs to serve a single user, which increases transmission efficiency. See Figure 5.

Figure 5 An example of single RU versus multi-RU. Source: Keysight

Understanding Wi-Fi 7

Wireless connectivity has become increasingly vital in our lives. Wi-Fi technology plays a crucial role in meeting our growing demands for higher speed, low latency, high capacity, and high efficiency for household and enterprise users. Wi-Fi 7 (802.11be) will bring improvements in all these major aspects compared to Wi-Fi 6 (802.11ax) and will open more doors to more and better IoT applications and services.

Wi-Fi 7 leverages the increased channel width, multi-link operation, and channel puncturing to improve speed and efficiency. Other features like multi-user capability enhancements, 4K-QAM, and multi-RU support will further optimize the user experience.

Wi-Fi 7 also comes with several tough challenges. The most important is finding a balance between wider feature support and power consumption, and there is always an element of interference in the subchannels. Supporting all these new features requires compatible APs and clients, which is not possible until the regulatory guidelines are in place for all regions of the world. This requires regulatory bodies to work closely with industry leaders to define those guidelines so that Wi-Fi 7 evolves from theory to reality.


Xiang Li is an experienced wireless network engineer with a master’s degree in electrical engineering. Currently, Xiang is an Industry Solution Marketing Engineer at Keysight Technologies.


Related Content


The post Exploring the superior capabilities of Wi-Fi 7 over Wi-Fi 6 appeared first on EDN.

Disassembling the Echo Studio, Amazon’s Apple HomePod foe

Tue, 09/19/2023 - 17:30

Back in late February, within my teardown of an Apple HomePod mini smart speaker, I wrote:

I recently stumbled across a technique for cost-effectively obtaining teardown candidates, which I definitely plan to continue employing in the future (in fact, I’ve now got two more victims queued up in my office which I acquired the same way, although I’m not going to spoil the surprise by telling you about them yet).

In early June, while dissecting my second victim, a first-generation full-size HomePod, I summarized the aforementioned acquisition technique:

I picked up a couple of “for parts only” devices with frayed power cords on eBay for substantial discounts from the fully functional (whether brand new or used) price. One of them ended up still being fully functional; a bit of electrical tape sheltered the frayed segments from further degradation. The other went “under the knife”. Today, I’ll showcase one of the “two more victims” I previously foreshadowed.

Victim #2 wouldn’t fully boot, the result of either a bad logic board or a firmware update gone awry. And today, we’re going to take a look at victim #3, ironically a direct competitor to the HomePod. It’s a high-end Amazon Echo Studio smart speaker; here’s a stock shot to start us off:

That one, matching the one you’ll learn about in detail today, is “Charcoal” in color. The device also comes in white…err…“Glacier”. Here’s a conceptual image of its multi-transducer insides:

The story of how I came to obtain a device which normally sells for $199.99 (sometimes found on sale for $159.99) for only $49.99 is a bit convoluted but also educational—I’ll revisit the big-picture topic later as it relates to other similar-configured devices, specifically computers. So, I hope you’ll indulge my brief detour before diving into the device’s guts. As with its predecessors, this smart speaker was listed on eBay “for parts only”. And in something of a first after more than a quarter century of my being active on eBay, after I bought it the seller reached out to me to be sure I knew what I was (and wasn’t) getting before he sent it to me.

A funny (at least to me) aside: in sitting down to write this piece just now, I finally took a close look at the seller’s eBay username (“starpawn19”) followed by a visit to his eBay storefront (“Star Pawn of New Port Richey”). He runs a pawn shop, which I’d actually already suspected. He told me that someone sold one of his employees the Echo Studio, but the employee forgot to tell the seller (before leaving) to unregister the smart speaker from his or her Amazon account, and the phone number the seller gave the pawn shop ended up being inactive.

After doing a bit more research, there’s likely more to the tale than I was told (I’m not at all suggesting that “starpawn19” was being deceptive, mind you, only not fully knowledgeable). Amazon keeps a record of each customer account that each of its Echo devices (each with a unique device ID) has ever been registered with (in fact, if you buy a new device direct from Amazon, you often have the option for it to come pre-registered). If all that had happened, as had been related to me, was that the previous registered user forgot to unregister it first, I think (although information found online is contradictory, and the Amazon support reps I spoke with in fruitlessly striving to resurrect my new toy were tight-lipped) that a factory reset (which I tried) would enable its association with a new account. If, on the other hand, a previous user ever reports it lost or stolen (or if, apparently, Amazon thinks you’ve been nasty to its delivery personnel!) it gets unregistered and all subsequent activation attempts will fail, as I discovered:

The only recourse that “Contact Customer Service” offered me was to return the unit to the seller for a refund…which of course wasn’t an option available to me, since I knew about its compromised condition upfront. So, what happened? One of two things, I’m guessing. Either:

  • Whoever sold the device to “starpawn19” had previously stolen it from someone else or,
  • Whoever sold the device to “starpawn19” hadn’t been happy with the price they got for it and subsequently decided to get revenge by reporting it lost or stolen to Amazon.

With that backgrounder over, let’s get to tearing down, shall we? I’ll begin with a few overview images (albeit no typical box shots, sorry; it didn’t come in retail packaging), as-usual accompanied by the obligatory 0.75″ (19.1 mm) diameter U.S. penny for dimension comparison purposes but absent (I realized in retrospect) the detachable power cord. The Echo Studio is 8.1” high and 6.9” in diameter (206 mm x 175 mm), weighing 7.7 lbs. (3.5 kg). Here’s a front view:

Because I don’t want it to feel left out:

Now a back view:

A closeup of the backside connections reveals the power port in the center, flanked by a micro-USB connector to the left (with no documented user function) and a multipurpose 3.5 mm audio input jack on the right, capable of accepting both TRS analog plugs and incoming optical S/PDIF digital streams.

Like Apple’s HomePod, the Echo Studio contains a mix of speaker counts and sizes, capable of reproducing various audio frequency ranges, and variously located in the device. But the implementation details are quite different in both cases. Here’s a look at the internals of the first-generation HomePod (recall that the second-generation successor has only five midrange/tweeter combo transducers, versus the seven in this initial-version design):

Compare it against the “x-ray” image of the Echo Studio at the beginning of this article. Several deviations particularly jump out at me:

  • Apple employed a single speaker configuration to tackle both midrange and high frequencies, whereas Amazon used distinct drivers for each frequency span: three 2” midranges and a 1” high-frequency tweeter.
  • Apple’s woofer points upward, out the top of the HomePod, whereas Amazon’s 5” driver is downward firing. That said, both designs leverage porting (Amazon calls them “apertures”, one in front and the other in back) to enhance bass response.

  • The varying speaker counts and locations affect both bill-of-materials costs and sound reproduction capabilities. Recall that bass frequencies are omnidirectional; you can put a subwoofer pretty much anywhere in a room, with optimum location guided by acoustic response characteristics versus proximity to your ears. Conversely, high frequencies are highly directional; your best results come from pointing each tweeter directly at your head (note, for example, its front-and-center location in the Echo Studio). Midrange frequencies have intermediary transducer location requirements.
  • The Echo Studio was also designed for surround sound reproduction. That explains, for example, the fact that one of its three midrange drivers points directly upward, to support Dolby Atmos’ “height” channel(s). The other two midrange drivers point to either side. And like its other modern Echo siblings, two Echo Studios can be paired together to more fully reproduce left and right channel “stereo” sound, as well as (paired with an Echo Sub) to further enhance the system’s low bass response.

Here’s our first look at the top of the Echo Studio, with the grille for the upward-firing midrange in the middle and an array of seven microphones spaced around the outer ring. Recall that the HomePod’s six (first-gen) then four (second-gen) mics were around the middle of the device.

Left-to-right along the lower edge of the ring are four switches: mute, volume down and up, and the multifunction “action” button. And, in the space between the speaker grill and outer ring, a string of multi-color LEDs shines through in various operating modes, with varying colors and patterns indicating current device status (bright red, for example, signifies mute activation).

Now for the Echo Studio’s rubberized “foot”:

including that unique DSN (device serial number) that I mentioned earlier:

and which I’m betting is our pathway inside:


Before continuing, I want to give credit to the folks at iFixit, who (as far as I know) never did a proper teardown of the Echo Studio but whose various repair manuals were still invaluable disassembly guides (in the spirit of giving back, you’ll find comments from yours truly posted to this one). I’ll also point you to a teardown video I found while doing preparatory research:

I don’t speak Hindi, so the narration was of no use to me, but the visuals were still helpful!

Anyhoo….onward. Amazon really gave my Torx driver set quite a workout; I’m not sure that I’ve ever encountered so many different screw head types and sizes in the same device. First, getting the plastic bottom off required removing 15 of them:

Lift off the bottom plate:

Disconnect the wiring harness and a flex PCB cable:

And the deed is done; we’re in!

Let’s first focus on the PCB-plus-power assembly attached to the inside of the bottom plate:

Remove four screws:

and the PCB comes free:

The IC to the right of the flex PCB connector is the Texas Instruments’ OPA1679 quad audio op amp. Flip the PCB over:

and you’ll find another, smaller IC below the audio input jack, presumably also from TI considering its two-line marking:

TI 12

but whose identity escapes me. Ideas, readers? (I’m also guessing, by the way, that the optical S/PDIF receiver is built into the audio input jack? And where does the DAC for the digital S/PDIF audio input, or alternatively the ADC for the analog audio input, reside? Keep reading…)

Here’s another shot of this same PCB from a different vantage point before we proceed:

And now for that wiring harness:

Now for the PCB inside the device, which we saw earlier and which (many of you have likely already figured out, given the massive transformer, passives, and all the thermal paste) handles DC power generation and distribution:

Remove one more screw and disconnect one more wiring harness:

And the PCB lifts right out, leaving a baseplate behind:

I strove mightily to chip away at and remove all that thermal paste while leaving everything around and underneath it (and embedded within it) intact, but eventually bailed on the idea. Sorry, folks; if anyone knows how to cost-effectively dissolve this stubborn stuff without marring everything else, I’m happy to give it a shot. Speaking of shots, here are some more:

About that baseplate:

The plastic ring around it, held in place by a single screw, needs to come out first:

And now for the baseplate itself, which does double-duty (on its other side) in redirecting the woofer’s output through the smart speaker’s two side “apertures”:

Bless you, iFixit registered user Jeff Roske, for suggesting in an iFixit guide comment (step 6, to be exact) that “Power supply baseplate must be rotated clockwise, ~1cm at outside edge, to release catch tabs before lifting out of unit” and, in the process, saving my sanity (I only wish it hadn’t taken me five minutes’ worth of frustration before I saw Jeff’s wise words):

Woof woof, look what’s underneath!

And I bet you know what my next step will be…

Finally, we get our first glimpse at the system’s “brains”, i.e., its main board:

Here are closeups of the insides’ four quadrants, as oriented above. Left:



And bottom:

Specifically, to get the system board out, we’re first going to have to disconnect a bunch of wiring harnesses and flex PCB cables. Here they are, sequentially clockwise around the board starting at the left side, and in both their connected and disconnected states:

Pausing the cadence briefly at this point, I encountered something I don’t think I’ve seen in a teardown before: a zip tie bundling multiple harnesses together to tidy up the insides!

And here’s another organizing entity, a harness restraint along one side:


Up top are also two antenna feeds that need to be unsnapped:

Now over to the right:

Next to go were sixteen total screws, eight of them shiny and screwed into metal underneath, the other eight black and anchored to underneath plastic:

“Houston, we have a liftoff”:

Here’s the side of the system board that you’ve previously seen in its installed (pointed downward, to be precise) state, which I’ll call “side 1”:

And here’s when-installed upward-facing system board “side 2”:

Time for some closeups and Faraday cage removals. Side 1 first; keeping to the prior clockwise cadence, I’ll start with the left-quadrant circuitry, which includes (among other things) a Texas Instruments LM73605 step-down voltage converter:

Now the top:


And finally, the Faraday cage-dominated landscape at bottom:

Again, you already know what comes next, right?

That’s MediaTek’s MT7668 wireless connectivity controller at the center. Quoting from the product page on the manufacturer website: it’s a “Highly integrated single chip combining 2×2 dual-band 802.11ac Wi-Fi with MU-MIMO and latest Bluetooth 5.0 radios together in a single package.” The only wireless protocol NOT documented is Zigbee, support for which keen-eyed readers may have already noticed was mentioned in the smart speaker “foot” markings earlier. And by the way, if you hadn’t already noticed, in addition to the earlier noted external antenna wiring feeds, there are PCB-embedded antennae to either side of the Faraday cage (also visible on the other side of the PCB).

Speaking of which…now for the system board side 2. Already-exposed IC at left (a Texas Instruments TPA3129D2 class D audio amplifier) first:

At top (and top left) left is a clutch of ICs only some of which I’ve been able to ID:

The upper-left Faraday cage hides, I believe, the Zigbee controller. Its markings are:


and through a convoluted process involving Google Image search followed by a bunch of dead-end (and dead-page) searches that ended up bearing (hopefully not-rotten) fruit, I think it’s Silicon Labs’ EFR32MG21:

Now for the remainder of the ICs in this region:

The IC at the very top, with the silver rectangular “patch” atop it (which I at first thought was a sticker but can’t seem to peel off, so…) seemingly covering a same-size gold region underneath, is one of the puzzlers. Above the silver patch are the following faintly (so I may not be spot-on with my discernment) stamped lines:


Below the silver section are also two marked lines, the first stamped and the second embossed:


And next to the chip is this odd structure I’ve not encountered before, with a loop atop it:

Any ideas, folks?

To its lower right is another puzzler, marked as follows:

TI 13

Fortunately, the remainder aren’t as obscure. Directly below the “silver patch” IC is Cirrus Logic’s CS42526 2-in, 6-out multi-channel codec with an integrated S/PDIF receiver (hearkening back to my earlier S/PDIF discussion related to the bottom-panel PCB). And next to its lower-right corner are two Texas Instruments THS4532 dual differential amplifiers.

Last but not least, there are the Faraday cage-rich right and lower quadrants:

Let’s tackle the cage-free IC at the right edge first: it’s another Texas Instruments TPA3129D2 class D audio amplifier, the mate of its mirror-image at left. Above and two cages to its left is nonvolatile storage, a Sandisk SDINBDG4-8G 8 GByte industrial NAND e.MMC flash memory:

To its right, underneath the large rectangular Faraday cage, is (curiously) another MediaTek Wi-Fi-plus-Bluetooth controller IC, the MT7658, which is online documentation-bereft in comparison to its MT7668 cousin on the other side of the PCB, mentioned earlier, but which I’ve seen before.

And what about that large square Faraday cage at the bottom? Glad you asked:

Underneath the lid, in the upper right and left corners, are two Samsung K4A4G165WE-BCRC 4 Gbit DDR4 SDRAMs. And underneath them both, befitting the thermal paste augmentation, is the system “brains”, MediaTek’s MT8516 application processor.

Remember that earlier mentioned LED ring, and those up-top buttons and mics? They suggest we’ve got one PCB to go, plus we haven’t seen any midrange or tweeter transducers yet. Next let’s get out those two shiny metal brackets:

Two screws for each attach to the side:

And one screw for each attach to the assemblage above (or below, in the Echo Studio’s current upside-down orientation, if you prefer):

And with impediments now removed, they lift right out:

Next up for removal is the baseplate underneath (more accurately: above) the now-absent metal braces and system board:

More speakers! Finally!

At this point, I hypothesized that I’d need to get the top of the Echo Studio off in order to proceed any further. Holding it in place, however, were 14 screws accessible only from underneath the top, only a subset of which are shown here:

Why? Those speakers. The midranges in particular had really strong magnets, which tended to intercept and cling to screws en route from their original locations to the outside world:

Those same magnets, I discovered complete with colorful accompanying language, would also yank Torx bits right out of my driver handle. I eventually bailed (temporarily) on my iFixit driver kit and went with a set of long-shaft single-piece Torx screwdrivers instead. Here’s the end result as it relates to the top speaker grill, alongside my various implements of destruction:

And here’s the now-exposed upward-directed midrange speaker:

Four more screws removed, less “momentously” than before, set it free:

Now for that top outer ring. With the screws originally holding it in place no longer present, I was able to pop it free using only my thumbnail:

The translucent ring around the underside for the LEDs to shine through is likely obvious (hold that thought), as are the “springs” for the four buttons. The one on the left, if you look closely, is different than the other three: there’s a cutout in it. Again, hold that thought:

Two easily removed screws are all that still hold the LED PCB in place:

Here’s an initial overview shot of the LED PCB standalone, and of its top with the ring of 24 LEDs around its middle, an immediately obvious feature.

Keen-eyed readers may have also already noticed the seven MEMS microphone ports, one of which is missing its gasket (stuck instead to the underside of the top outer ring, which you’ll now notice if you go back and look at the earlier photo of it). Let’s now do some closeups, beginning with the top:

The “mute” switch, at far right in this photo, is different than the others because it has dedicated LEDs alongside it that illuminate red when the mics are in a muted state (in addition to the entire ring going red, as mentioned earlier). And what’s that next to the “action” switch on the far left? Remember the cutout I mentioned in the accompanying switch? It’s an ambient light sensor that enables dynamic calibration of the LEDs’ brightness to the room’s current illumination level. The IC below it, judging from the logo, is sourced by Toshiba Semiconductor, part of its LCX low-voltage CMOS logic line, but I can’t definitively ID it. Here are the markings:


The chips at right:

are a Texas Instruments LP5036 36-channel I2C RGB LED driver and, below it, another TI chip unknown in function and labeled as follows:

T 0C8

That same two-chip combo is also present on the left side of the PCB:

The bottom of the top side of the PCB is comparatively unmemorable:

as is the underside of the PCB, aside from the bulk of the MEMS microphones’ housings:

We’re still not the rest of the way inside, but keep the faith; I still see a few screws up top:

Turns out the Echo Studio comprises two separate shells, the outer doing double-duty as the front and side speakers’ grills (and, along with the top pieces’ colors, the sole differentiators between “Charcoal” and “Glacier” variants, come to think of it), which now slide apart:

Here’s what the outer-shell grilles for the front tweeter and side midranges look like from the inside:

No speaker, therefore no grill, in back, of course:

They correspond to the inner-shell locations of the left midrange:

Right midrange:

And front tweeter:

Again, the backside is bare:

Speaking of which…the midranges are identical to the top-mounted one you already saw. But let’s take a closer look at that tweeter:

In closing, one final set of images. In peering through the hole left from the tweeter’s removal:

I happened to notice the two antenna wires, one white and the other black, that I’d briefly showed you earlier. I realized I hadn’t yet discerned where they ended up and decided to rectify that omission before wrapping up:

This is the inside of the inner shell, looking toward the top and back of the device from the bottom. The antennas are in the back left (black wire) and right (white wire) of the Echo Studio, next to and “behind” the midrange drivers:

My guesses are as follows:

  • One of them is for 2.4 GHz Wi-Fi (black?), the other for 5 GHz (white?).
  • The 2.4 GHz one multitasks between Wi-Fi and Zigbee duties, and
  • The Bluetooth antennae are the PCB-embedded ones noted earlier.

Agree or disagree? And related: I’m still baffled as to why the design includes two Wi-Fi-plus-Bluetooth controller SoCs, MediaTek’s MT7658 and MT7668, plus a dedicated Zigbee controller. If you have any insights on this, or thoughts on anything else I’ve covered in this massive missive, I as-always welcome them in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

googletag.cmd.push(function() { googletag.display('div-gpt-ad-native'); }); -->

The post Disassembling the Echo Studio, Amazon’s Apple HomePod foe appeared first on EDN.

What is moment of inertia?

Mon, 09/18/2023 - 17:00

In a state of bewilderment one fine day, I asked a group of three mechanical engineers at this company where I was then employed if they could please explain to me the concept of “moment of inertia”. To my utter astonishment, none of them could offer an explanation. None of them knew!

Since then, though, I think I’ve found out.

Sir Isaac Newton taught us (please forgive my paraphrasing) that for linear motion of some object, we have F = m*A which means that a mass “m” will undergo a changing linear velocity at some value of acceleration “A” under the influence of an applied force “F”.

Figure 1 Linear motion on a frictionless surface where force is equal to mass multiplied by acceleration. Source: John Dunn

There is an analogous equation for rotary motion. We have T = J*Θ where “T” is the torque applied to some object having a moment of inertia “J” which experiences a rotary acceleration called “Θ” which can be measured in units of radians per second squared.

Figure 2 Rotary Motion where the torque applied to an object is equal to its moment of inertia about the rotation axis multiplied by its rotary acceleration. Source: John Dunn

In a rotary motor, the armature will have some particular moment of inertia. There will also be a motor coefficient of torque designated kt. The torque that gets created within that motor will be kt multiplied by the armature current. In short, we will have T = kt*I where I is the armature current.

Writing further, we have kt*I = J* Θ which rearranges to Θ = kt I / J = Angular acceleration.

Angular acceleration is directly proportional to the applied armature current times the coefficient of torque and inversely proportional to the moment of inertia.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related Content

googletag.cmd.push(function() { googletag.display('div-gpt-ad-inread'); });
googletag.cmd.push(function() { googletag.display('div-gpt-ad-native'); }); -->

The post What is moment of inertia? appeared first on EDN.