EDN Network

Voice of the Engineer

Bistable switch made on comparators

Tue, 01/16/2024 - 15:30

This bistable load switch is built around two comparators. The load is switched on and off by applying voltages of two different levels to the input of the device.

Earlier in [1], a new class of bistable elements was proposed—two-threshold thyristors, which switch from one state to the other when nonzero control voltages of two levels (“High” or “Low”) are applied to the thyristor’s input.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The bistable load switch, Figure 1, switches the load when a Uon or Uoff voltage is applied to its input. The device contains two comparators, U1.1 and U1.2, as well as an output transistor, Q1 (a 2N7000, for example).

Figure 1 A bistable switch controlled by input voltage levels, with separately adjustable load on and off thresholds.

The device works as follows. A voltage of one of two levels (Uon or Uoff) is briefly applied to its input (the inverting inputs of comparators U1.1 and U1.2). The noninverting inputs of the comparators are held at two reference voltages set by potentiometers R2 and R3. When the turn-on voltage Uon (Uon < Uoff) is applied to the input, comparator U1.1 switches: its output Uout1 goes from logic high to logic low, and the LED indicates the enabled state of the device. The drain of transistor Q1 (Uout2), in contrast, swings from logic low to logic high. Through resistor R10, this high-level voltage is fed back to the inverting input of comparator U1.1, latching its state.

To return the device to its initial state (disconnecting the load), a higher-level voltage (Uoff) is applied to the input, which switches the second comparator, U1.2. When this comparator switches, the voltage at the inverting input of comparator U1.1 drops to zero and the circuit returns to its original state.
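The latching behavior described above can be sketched as a simple state machine. The threshold values here are illustrative assumptions, not the levels of the published circuit (which are set by potentiometers R2 and R3):

```python
# Behavioral sketch of the bistable switch (not a SPICE model).
# Threshold values are hypothetical examples for illustration only.

U_ON_THRESHOLD = 1.0   # V: an input pulse at or below this level (Uon) turns the load on
U_OFF_THRESHOLD = 4.0  # V: an input pulse at or above this level (Uoff) turns the load off

def step(state: bool, u_in: float) -> bool:
    """Return the new latched state after a brief input pulse u_in (volts)."""
    if u_in <= U_ON_THRESHOLD:
        return True    # U1.1 trips; Q1's drain goes high and latches via R10
    if u_in >= U_OFF_THRESHOLD:
        return False   # U1.2 trips; the latch feedback is removed, load turns off
    return state       # intermediate levels leave the latch unchanged

state = False
for pulse in (0.5, 2.5, 4.5, 2.5):  # on, hold, off, hold
    state = step(state, pulse)
```

Note that intermediate input levels do not disturb the latch—only pulses crossing one of the two thresholds change the state, which is the defining property of the two-threshold element.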

Such a device, with some simplification and modification, can be placed in a DIP6 package, Figure 2. The output signal switches from logic 0 to logic 1 when a low-level voltage Uon is briefly applied to the input; the device returns to its initial state when a high-level voltage Uoff is applied.

A typical application circuit for such a chip is shown in Figure 3. External adjustment elements R1 and R2 set the on and off switching thresholds (Uthr1 and Uthr2).

Figure 2 A bistable switch, as well as a possible integrated circuit based on it.

Figure 3 Variants of a bistable switch chip with external threshold-control circuits or internal fixed ones, including the possibility of using a DIP4 case for an unadjustable version with fixed switching thresholds.

If a resistive divider R1–R3 is used to set constant on and off levels Uthr2 and Uthr1, the bistable switch can be placed in a DIP4 package, Figure 3, with only power, input, and output pins. To obtain switching levels that do not depend on the supply voltage, a simple voltage regulator (a Zener diode) built into the microcircuit can power the resistive divider R1–R3.
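To illustrate how a fixed three-resistor divider yields the two thresholds, here is a quick calculation with assumed resistor and reference values (not those of the published design):

```python
# Hypothetical illustration of fixed thresholds from a divider R1-R3
# across a Zener-stabilized reference. All values are assumptions.

V_REF = 5.0                     # V, Zener-stabilized divider supply (assumed)
R1, R2, R3 = 10e3, 20e3, 10e3   # ohms, top to bottom of the divider (assumed)

total = R1 + R2 + R3
u_thr2 = V_REF * (R2 + R3) / total  # upper tap -> off threshold Uthr2 (3.75 V here)
u_thr1 = V_REF * R3 / total         # lower tap -> on threshold Uthr1 (1.25 V here)
```

Because both taps scale with V_REF, regulating V_REF with the built-in Zener is what makes the thresholds independent of the supply voltage.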

Michael A. Shustov is a doctor of technical sciences, candidate of chemical sciences and the author of over 800 printed works in the field of electronics, chemistry, physics, geology, medicine, and history.

 Related Content


  1. Shustov, M.A., “Two-threshold ON/OFF thyristors, switchable by the input signal level,” International Journal of Circuits and Electronics, vol. 6, pp. 60–63, December 9, 2021. https://www.iaras.org/iaras/home/computer-science-communications/caijce/two-threshold-on-off-thyristors-switchable-by-the-input-signal-level

The post Bistable switch made on comparators appeared first on EDN.

Apple’s MagSafe technology: Sticky, both literally and metaphorically

Mon, 01/15/2024 - 13:34

My brief mention in the teardown that I just submitted (and you’ll either have already read or will read soon, depending on EDN’s publication order preference), of MagSafe wireless support in the charging dock for the Lenovo Smart Clock 2, reminded me that I’ve intended for a while to deliver some dedicated coverage of this Apple-branded technology. No better time than the present to actualize this aspiration, I suppose…

I’ve actually editorially introduced MagSafe already, in the context of a wireless charging pad teardown I did in late 2022. The term dates all the way back to 2006, when it was first applied to a magnetic coupling technique for powering and recharging laptops:

After taking it through two generations, the second more svelte than the first, Apple phased it out from laptops beginning in 2015, only to bring it back as MagSafe 3 six years later. I’m approximating the approach for the new-to-me Intel-based 2020 13” Retina MacBook Pro I migrated to while on Thanksgiving holiday last week (as I write these words on December 1) by means of a third-party USB-C intermediary that works pretty well (although I admittedly wish the magnets were a bit stronger):

That said, one year earlier, Apple had already resurrected the MagSafe brand name, albeit this time focused on a different product line: smartphones. Beginning with the iPhone 12, the company augmented its Qi-baseline, proprietary-enhanced integrated wireless charging scheme (which had dated from 2017’s iPhone X) with magnets, initially promoted to optimally align the device with charging coils. And even that wasn’t the first time Apple had implemented magnet-augmented wireless charging; it’s been the (proprietary protocol-only, in this particular case) standard for the company’s smart watches ever since 2015’s initial Apple Watch Series 1:

MagSafe got “personal” for my family when I upgraded my wife from the iPhone XS Max she’d had for the past several years (at left in the following photo):

to an iPhone 14 Plus (at right in the following photo) for her birthday earlier this year:

I took advantage of the opportunity, of course, to also pick her up some accessories; a couple of Apple MagSafe chargers along with third-party stands to install them in:

A MagSafe Duo for travel, since she also owns an Apple Watch (the watch charging pad is on the right; recall that as I previously mentioned, its charging scheme is proprietary-only, therefore incompatible with other Apple and more general Qi-supportive devices):

A Belkin charging dock for the car:

Several third-party multi-device chargers (she also has an AirPods Pro earbuds set with a Qi-compatible charging case, don’cha know):

Both Apple- and Speck-branded cases (along with several screen protector sets, of course):

Two Apple MagSafe Battery Packs:

A “wallet” for some paper currency, a credit card, identification documentation and the like:

And a nifty Belkin mini-stand that also does triple-duty as webcam stand and “finger grip”:

Admittedly, at least some of these are primarily-to-completely “convenience” purchases. After all, her existing Qi chargers continue to work fine, albeit in a non-magnetic-attached fashion. Others offer more meaningful enhancements to the status quo. Take cases, for example. The magnets built into the phone aren’t strong enough to grip a charging base through a standard leather, silicone, or plastic case intermediary. Instead, you need specific MagSafe-compatible cases with their own built-in magnets, appropriately polarity-oriented to attract (versus repel) both the phone and charger on either side. To that “attract” point, however, these cases don’t need to intensely wrap around and otherwise cling to the phone as their non-magnetic predecessors did; the magnets do all the “clinging” work by themselves. Which is nice.

Same goes, even more so, for supplemental batteries. She was used to ones like this:

which were not only bulky but also a pain in the derriere to install and remove. Now she only needs to slap a diminutive battery onto the back of the phone, where it’ll magnetically cling when the internal battery charge is low.

The more general charging situation is interesting. As I already mentioned, some companies dodge it completely, selling only stands into which you slip an Apple-branded charging pad:

Others, like my 2022 teardown victim, avoid the word “MagSafe” completely (while, note, still claiming iPhone compatibility), presumably in an attempt to dodge Apple legal attention:

Some, while actually chargers, state only that they’re “MagSafe compatible” (such as the Belkin car dock I showed you earlier). I’m not clear whether the manufacturers of such products are required to obtain a license from Apple so they can officially make such a claim, but they generally don’t support the maximum power output that iPhones accept, for example.

And others are fully “Made for MagSafe”. From what I can tell, among other requirements their suppliers need to put official Apple charging modules in them in order to brand them as such:

That said, Apple still keeps some feature set niceties to itself. Apple’s own battery packs, for example, are the only ones capable of reporting their charged state (specific percentage, versus just approximation LEDs built into the batteries) via a software-enabled display on the phone itself. And Apple’s “wallets” are the only ones with nifty integrated “Find My” location support.

Admittedly, I was initially somewhat cynical of the whole wireless charging concept, primarily due to its inherent environment-unfriendly inefficiency, and I doubled down on my scorn when Apple rolled out what I initially opined as being the MagSafe “gimmick”. Perhaps obviously, I’ve subsequently had at least somewhat of a change of heart since then. Partly, this is due to the admitted reduction of repeated insertion-and-removal wear-and-tear on a device’s charging port (Lightning, USB-C, etc.) that wireless charging affords. And once you take the wireless charging plunge, the magnets are legitimately beneficial in ensuring that the charging pad and device are optimally aligned for peak efficiency.

To wit, as I write these words the Qi consortium and its members are in the process of rolling out version 2 of the specification (and products based on it), also magnet-augmented (among other enhancements) and claimed MagSafe-compliant. And in advance, I’ve already purchased Mous MagSafe-compatible cases for my two Google Pixel 7 smartphones:

along with two Belkin magnetic external batteries:

And although my DJI gimbal has a magnetic mount which isn’t natively MagSafe-compliant:

a third-party adapter sturdily bridges the divide:

One other comment before concluding: as I mentioned a few months ago, Apple’s latest iPhone 15 smartphone family has migrated from Lightning to USB-C, following in the footsteps of several iPad family predecessors. As such, and in a seemingly premature, rushed fashion (the company still sells other MagSafe- and Lightning-based phones, after all), Apple in parallel discontinued both the Lightning-based MagSafe Battery Pack and MagSafe Duo charger, with no USB-based successors (yet, at least) unveiled as I write these words. Odd.

And speaking of the MagSafe Duo, I have “for parts only” examples of both it and the MagSafe Charger sitting in my to-do teardown pile. Stand by for writeups on both products to come, hopefully soon. And until then, I welcome your thoughts on this piece in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.



The post Apple’s MagSafe technology: Sticky, both literally and metaphorically appeared first on EDN.

Beyond lithium-ion: Exploring next-gen battery technologies

Mon, 01/15/2024 - 10:27

Electronics design engineers are well aware of lithium-ion’s shortcomings. So, the upcoming battery revolution revolves around experimental materials in novel applications to reduce price, resource scarcity, and environmental impact.

Which options are the most viable as lithium mining and production ramp up, becoming one of the most noteworthy yet contradictory global markets of the climate-cognizant age? Below is a sneak peek at the three most viable alternatives to lithium-ion batteries.

  1. Solid-state batteries

Solid-state gigafactories are expanding to the West because these batteries’ unique compositions promise to remove the flammability concerns of liquid electrolytes. Those concerns plague energy storage and electric vehicle (EV) adoption, whereas solid-state alternatives promise a longer lifecycle and improved safety.

These batteries have another advantage over lithium-ion options: they minimize thermal runaway, one of the most prominent concerns when justifying lithium’s costs and labor investments. Li-ion batteries already charge to 100% in about two hours, giving electronics design engineers a welcome challenge to innovate past that benchmark.

The design benefits electronics design engineers by improving power density while remaining lightweight. Solid-state blueprints may still require some lithium, but in severely reduced quantities. Enough solid-state chemistry variants exist to forge more sustainable anodes and cathodes, such as lithium-iron phosphates and polymers.

  2. Metal-air batteries

Metal-air batteries rely on oxygen as the cathode material, using a reduction reaction for power. Using oxygen removes barriers regarding storage and accessibility problems in other battery structures.

Professionals can use zinc, aluminum, iron, and other metals to minimize exploitation of the world’s limited lithium stores and obtain a denser battery. Metal-air cells offer theoretical energy densities five to 30 times those of li-ion products, leveraging materials more easily found and obtained in nature. This could empower other technologies toward more sustainable futures, including hearing aids and uninterruptible power supplies (UPSs).

  3. Sodium-ion batteries

Sodium is a low-cost and potent material for batteries, and it doesn’t need nickel or cobalt to work. Supply chains need more abundant materials to maintain B2B relationships and meet market demands. Sodium is not as well-known in the industry, but it could solve many of the quandaries electronics design engineers are wrestling with, including:

  • Cost-effectiveness
  • Improved safety
  • High energy density
  • Coulombic efficiency
  • Heightened durability
  • Decreased environmental impact

Even though sodium provides these advantages, it must unseat lithium as the incumbent. Despite lithium’s faults, the sector has set it as the gold standard for batteries. Sodium must prove how much energy it can pack into a battery smaller than its li-ion equivalent.

Sodium-ion battery density is already improving at a faster pace than lower-tier products. The first sodium-powered vehicle will make its debut in January 2024, with a 157-mile range and a 25-kWh pack—impressive figures given it’s the first of its kind.
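Those two quoted figures imply a respectable drive efficiency; a quick back-of-the-envelope check:

```python
# Sanity-check of the figures quoted above: 157 miles of range from a 25 kWh pack.
range_miles = 157.0
capacity_kwh = 25.0

miles_per_kwh = range_miles / capacity_kwh       # ~6.3 miles per kWh
wh_per_mile = capacity_kwh * 1000 / range_miles  # ~159 Wh per mile
```

Roughly 6.3 mi/kWh (about 159 Wh/mile) would be competitive with efficient li-ion EVs, which is the point: the debut matters less for its absolute range than for what the pack-level density implies.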

What’s after lithium-ion

Other technologies like flow batteries and magnesium-ion options are also rising in the electronics design landscape. This merely scratches the surface of untapped energy potential for renewable power storage and electric vehicle applications.

Engineers must consider every detail—from microprocessors to passive components—when prototyping new circuit designs. Attention to detail will make embedded system development run smoothly, leading to compliant, futuristic batteries for a greener planet.

Ellie Gabel is a freelance writer as well as an associate editor at Revolutionized.



The post Beyond lithium-ion: Exploring next-gen battery technologies appeared first on EDN.

Renesas’ Transphorm acquisition points to GaN writing on the wall

Fri, 01/12/2024 - 13:56

Less than a year after Infineon snapped up GaN Systems to bolster its gallium nitride (GaN) technology roadmap, another GaN semiconductor specialist is becoming part of a bigger semiconductor company’s product portfolio. Renesas is acquiring Transphorm to leverage its GaN expertise in power electronics, serving a wide range of segments like automotive, consumer, and industrial.

The acquisition deal, amounting to approximately $339 million, is expected to be complete by the second half of 2024. Subsequently, Renesas aims to incorporate Transphorm’s automotive-qualified GaN technology in its X-in-1 powertrain solutions for electric vehicles (EVs) besides computing, renewable energy, industrial, and consumer applications.

Renesas CEO Hidetoshi Shibata joins Transphorm co-founder, president and CEO Primit Parikh to announce the $339 million acquisition deal.

After establishing an in-house silicon carbide (SiC) production supported by a 10-year SiC wafer supply agreement, Renesas is now turning to its wide bandgap (WBG) cousin GaN to broaden energy-efficient and high-voltage component offerings. According to an industry study quoted in Renesas’ press release about the Transphorm acquisition, demand for GaN is predicted to grow by more than 50% annually.

Transphorm, co-founded by Umesh Mishra and Primit Parikh in 2007, has roots in technology developed at the University of California at Santa Barbara. It claims to be the first supplier of GaN semiconductors that are JEDEC- and automotive-qualified. Transphorm managers are also quick to point to another unique aspect of the company’s technology: unlike most GaN suppliers opting for e-mode devices, Transphorm has adopted d-mode GaN delivered in a cascode (normally-off) configuration.

Transphorm recently made waves by claiming that it will unveil 1,200-V GaN semiconductors. Though the Goleta, California-based outfit has demonstrated 900-V GaN devices, it calls them merely a showpiece, reiterating its commitment to GaN-on-sapphire 1,200-V semiconductors initially targeted at e-bikes and e-scooters.

Another GaN supplier is gone, and a few more are left on the block. We are likely to see more of them gobbled up by bigger chipmakers in a quest to master this next-generation material for power electronics. Starting from scratch doesn’t seem a viable option for large semiconductor outfits, especially in a technology that’s now in roller-coaster development mode while in the midst of commercial realization.

So, what’s left on the GaN block? While Navitas Semiconductor, growing at an impressive pace, seems an unlikely acquisition target, there are a handful of smaller GaN outfits, including Cambridge GaN Devices, Efficient Power Conversion (EPC), QPT and VisIC Technologies. Pay heed to these GaN companies and their potential suitors in 2024.



The post Renesas’ Transphorm acquisition points to GaN writing on the wall appeared first on EDN.

ATE system tests wireless BMS

Thu, 01/11/2024 - 21:11

Rohde & Schwarz, with technology from Analog Devices, has developed an automated test system for wireless battery management systems (wBMS). The collaboration aims to help the automotive industry adopt wBMS technology and realize its many advantages over wired battery management systems.

The ATE setup performs essential calibration of the wBMS module, as well as receiver, transmitter, and DC verification tests. It covers the entire wBMS lifecycle, from the development lab to the production line. The system comprises the R&S CMW100 radio communication tester, WMT wireless automated test software, and the ExpressTSVP universal test and measurement platform.

R&S and Analog Devices also worked together to develop a record and playback solution for RF robustness testing of the wBMS. During several test drives in various complex RF environments, the R&S FSW signal and spectrum analyzer monitored the RF spectrum and sent it to the IQW wideband I/Q data recorder. For playback of the recorded spectrum profiles, the IQW was connected to the SMW200A vector signal generator.

Analog Devices’ complete wBMS solution, currently in production across multiple EV platforms, complies with the strictest cybersecurity requirements of ISO/SAE 21434 CAL 4. In addition, its RF performance and robustness maximize battery capacity and lifetime values.

Rohde & Schwarz 

Analog Devices

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post ATE system tests wireless BMS appeared first on EDN.

UWB RF switch aids automotive connectivity

Thu, 01/11/2024 - 21:11

A 50-Ω SPDT RF switch from pSemi, the automotive-grade PE423211, covers a frequency range of 300 MHz to 10.6 GHz. The part can be used in Bluetooth LE, ultra-wideband (UWB), ISM, and WLAN 802.11 a/b/g/n/ac/ax applications. Its suitability for BLE and UWB makes the switch particularly useful for secure car access, telematics, sensing, infotainment, in-cabin monitoring systems, and general-purpose switching.

Qualified to AEC-Q100 Grade 2 requirements, the PE423211 operates over a temperature range of -40°C to +105°C. The device combines low power, high isolation, and wide broadband frequency support in a compact 6-lead, 1.6×1.6-mm DFN package. It consumes less than 90 nA and provides ESD performance of 2000 V at HBM levels and 500 V at CDM levels.

The RF switch is manufactured on the company’s UltraCMOS process, a silicon-on-insulator technology. It also leverages HaRP technology enhancement, which reduces gate lag and insertion loss drift.

The PE423211 RF switch is sampling now, with production devices expected in late 2024. A datasheet for the switch was not available at the time of this announcement.

PE423211 product page



The post UWB RF switch aids automotive connectivity appeared first on EDN.

Quectel unveils low-latency Wi-Fi 7 modules

Thu, 01/11/2024 - 21:11

The first entries in Quectel’s Wi-Fi 7 module family, the FGE576Q and FGE573Q, deliver fast data rates and low latency for real-time response. Both modules offer Wi-Fi 7 and Bluetooth 5.3 connectivity for use in a diverse range of applications, including smart homes, industrial automation, healthcare, and transportation.

The FGE576Q provides a data rate of up to 3.6 Gbps and operates on dual Wi-Fi bands simultaneously: 2.4 GHz and 5 GHz or 2.4 GHz and 6 GHz. The FGE573Q operates at a maximum data rate of 2.9 Gbps. Devices feature 4K QAM and multi-link operation (MLO), which enables routers to use multiple wireless bands and channels concurrently when connected to a Wi-Fi 7 client. With Bluetooth 5.3 integration, each module supports LE audio and a maximum data rate of 2 Mbps, as well as BLE long-range capabilities.

Housed in 16×20×1.8-mm LGA packages, the FGE576Q and FGE573Q operate over a temperature range of -20°C to +70°C. Quectel also offers Wi-Fi/Bluetooth antennas in various formats for use with these modules.

FGE576Q product page

FGE573Q product page

Quectel Wireless Solutions


The post Quectel unveils low-latency Wi-Fi 7 modules appeared first on EDN.

Wi-Fi 7 SoCs garner Wi-Fi Alliance certification

Thu, 01/11/2024 - 21:11

MaxLinear’s Wi-Fi 7 SoC with integrated triband access point has been certified by the Wi-Fi Alliance and selected as a Wi-Fi Certified 7 test bed device. Certification ensures that devices interoperate seamlessly and deliver the high-performance features of the Wi-Fi 7 standard.

The test bed employs the MxL31712 SoC, with the triband access point capable of operating at 2.4 GHz, 5 GHz, and 6 GHz. Well-suited for high-density environments, the access point includes the advanced features of 4K QAM, multi-link operation (MLO), multiple resource units (MRU) and puncturing, MU-MIMO, OFDMA, advanced beamforming, and power-saving enhancements.

MaxLinear’s Wi-Fi Certified 7 SoC family, comprising the triband MxL31712 and dual-band MxL31708, is based on the upcoming IEEE 802.11be standard and delivers peak throughput of 11.5 Gbps on 6-GHz (6E) spectrum. The MxL31712 accommodates up to 12 spatial streams, while the MxL31708 handles up to 8 spatial streams.

To learn more about the Wi-Fi 7 SoCs, click here.



The post Wi-Fi 7 SoCs garner Wi-Fi Alliance certification appeared first on EDN.

6-DoF inertial sensor improves machine control

Thu, 01/11/2024 - 21:10

The SCH16T-K01 inertial sensor from Murata combines an XYZ-axis gyroscope and XYZ-axis accelerometer in a robust SOIC package. Based on the company’s capacitive 3D-MEMS process, the device achieves centimeter-level accuracy in machine dynamics and position sensing, even in harsh environments.

The SCH16T-K01 provides an angular rate measurement range of ±300°/s and an acceleration measurement range of ±8 g. A redundant digital accelerometer channel offers a dynamic range of up to ±26 g, providing resistance to saturation and vibration. Gyro bias instability is typically 0.5°/h. According to the company, the component exhibits excellent overall linearity and offset stability across the entire operating temperature range of -40°C to +110°C.

Other features of the industrial sensor include a SafeSPI V2.0 digital interface, self-diagnostics, and options for output interpolation and decimation. Housed in a 12×14×3-mm, 24-pin SOIC plastic package, the SCH16T-K01 is suitable for lead-free soldering and SMD mounting.

SCH16T-K01 product page



The post 6-DoF inertial sensor improves machine control appeared first on EDN.

The 2024 CES: It’s “AI everywhere”, if you hadn’t already guessed

Thu, 01/11/2024 - 17:23

This year’s CES officially runs from today (as I write these words), Tuesday, January 9 through Friday, January 12. So why, you might ask, am I committing my coverage to cyber-paper on Day 1, only halfway through it, in fact? That’s because CES didn’t really start just today. The true official kickoff, at least for media purposes, was Sunday evening’s CES Unveiled event, which is traditionally reminiscent of a Japanese subway car, or if you prefer, a Las Vegas Monorail:

Yesterday was Media Day, where the bulk of the press releases and other announcement paraphernalia was freed from its prior corporate captivity for public perusal:

And some companies “jumped the gun”, announcing last week or even prior to the holidays, in attempting to get ahead of the CES “noise”. So, the bulk of the news is already “in the wild”; all that’s left is for the huddled masses at the various Convention Centers and other CES-allotted facilities to peruse it as they aimlessly wander zombie-like from booth to booth in search of free tchotchkes (can you tell how sad I am to not be there in person this year? Have I mentioned the always-rancid restrooms yet? Or, speaking of which, the wastewater-suggestive COVID super-spreader potential? Or…). Plus, it enables EDN to get my writeup up on the website and in the newsletters earlier than would otherwise be the case. I’ll augment this piece with comments and/or do follow-on standalone posts if anything else notable arrives before end-of-week.

AI (nearly) everywhere

The pervasiveness of AI wasn’t a surprise to me, and likely wasn’t to you, either. Two years ago, after all, I put to prose something that I’d much earlier believed was inevitable, ever since I saw an elementary live demo of deep learning-based object recognition (accelerated by the NVIDIA GPU in his laptop) from Yann LeCun, Director of AI Research at Facebook and a professor at New York University, at the May 2014 Embedded Vision Summit:

One year later (and one year ago), I amped up my enthusiasm in discussing generative AI in its myriad implementation forms, a topic which I revisited just a few months ago. And just about a week ago, I pontificated on the exploding popularity of AI-based large language models. It takes a while for implementation ideas to turn into prototypes, not to mention for them to further transition to volume production (if they make it that far at all, that is), so this year’s CES promised to be the “fish or cut bait” moment for companies run by executives who’d previously only been able to shoehorn the “AI” catchphrase into every earnings briefing and elevator pitch.

So this week we got, among other things, AI-augmented telescopes (a pretty cool idea, actually, says this owner of a conventional Schmidt-Cassegrain scope with an 8” primary mirror). We got (I’m resisting inserting a fecal-themed adjective here, but only barely) voice-controllable bidet seats, although as I was reminded of in doing the research for this piece, the concept isn’t new, just the price point (originally ~$10,000, now “only” ~$2,000, although the concept still only makes me shudder). And speaking of fecund subjects, AI brings us “smart” cat doors that won’t allow Fluffy to enter your abode if it’s carrying a recently killed “present” in its mouth. Meow.

Snark aside, I have no doubt that AI will also sooner-or-later deliver a critical mass of tangibly beneficial products. I’ll save further discussion of the chips, IP cores, and software that fundamentally enable these breakthroughs for a later section. For now, I’ll just highlight one technology implementation that I find particularly nifty: AI-powered upscaling. Graphics chips have leveraged conventional upscaling techniques for a while now, for understandably beneficial reasons: they can harness a lower-performance polygons-to-pixels “engine” (along with employing less dedicated graphics memory) than would otherwise be needed to render a given resolution frame, then upscale the pixels before sending them to the screen. Dedicated-function upscaling devices (first) and integrated upscaling ICs in TVs (later) have done the same thing for TVs, as long-time readers may recall, again using conventional “averaging” and other approaches to create the added intermediary pixels between “real” ones.

But over the past several years, thanks to the massive, function-flexible parallelism now available in GPUs, this upscaling is increasingly now being accomplished using more intelligent deep learning-based algorithms, instead. And now, so too with TVs. This transition is, I (perhaps simplistically) believe, fundamentally being driven by necessity. TV suppliers want to sell us ever-larger displays. But regardless of how many pixels they also squeeze into each panel, the source material’s resolution isn’t increasing at the same pace…4K content is still the exception, not the norm, and especially if you sit close and/or if the display is enormous, you’re going to see the individual pixels if they’re not upscaled and otherwise robustly processed.
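For contrast with those learned approaches, the conventional “averaging” upscaling mentioned above can be sketched in a few lines. This is a deliberately minimal 1-D linear interpolation, not any TV vendor’s actual algorithm; deep learning-based upscalers replace this fixed averaging rule with a trained predictor of the missing pixels:

```python
# Minimal sketch of conventional "averaging" 2x upscaling on one grayscale row.
# Each new intermediate pixel is simply the mean of its two neighbors.

def upscale_row_2x(row):
    """Insert the average of each neighboring pair between the original pixels."""
    out = []
    for a, b in zip(row, row[1:]):
        out.extend([a, (a + b) / 2])
    out.append(row[-1])  # last original pixel has no right-hand neighbor
    return out

upscale_row_2x([0, 100, 50])  # -> [0, 50.0, 100, 75.0, 50]
```

The limitation is visible even here: averaging can only blur between known pixels, whereas a trained model can plausibly reconstruct edges and texture—which is why it holds up better on enormous panels viewed up close.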

See-through displays: pricey gimmick or effective differentiator?

Speaking of TVs…bigger (case study: TCL’s 115” monstrosity), thinner, faster-refreshing (case study: LG’s 480 Hz refresh-rate OLED…I’ll remind readers of my longstanding skepticism regarding this particular specification, recently validated by Vizio’s class action settlement) and otherwise “better” displays were as usual rife around CES. But I admittedly was surprised by another innovation, which LG’s suite reportedly most pervasively exemplified, with Samsung apparently a secondary participant: transparent displays. I’m a bit embarrassed to admit this, but so-called “See-through Displays” (to quote Wikipedia vernacular) have apparently been around for a few years now; this is the first time they’ve hit my radar screen.

Admittedly, they neatly solve (at least somewhat) a problem I identified a while back; ever-larger displays increasingly dominate the “footprint” of the room they’re installed in, to the detriment of…oh…furniture, or anything else that the room might otherwise also contain. A panel that can be made transparent (with consequent degradation of contrast ratio, dynamic range, and other image quality metrics, but you can always re-enable the solid background when those are important) at least creates the illusion of more empty room space. LG’s prototypes are OLED-based and don’t have firm prices (unless “very expensive” is enough to satisfy you) or production schedules yet. Samsung claims its MicroLED-based alternative approach is superior but isn’t bothering to even pretend that what it’s showing are anything but proof-of-concepts.

High-end TV supplier options expand and abound

Speaking of LG and Samsung…something caught my eye amidst the flurry of news coming through my various Mozilla Thunderbird-enabled RSS feeds this week. Roku announced a new high-end TV family, implementing (among other things) the aforementioned upscaling and other image enhancement capabilities. What’s the big deal, and what’s this got to do with LG and Samsung? Well, those two were traditionally the world’s largest LCD TV panel suppliers, by a long shot. But nowadays, China’s suppliers are rapidly expanding in market share, in part because LG and Samsung are instead striving to move consumers to more advanced display technologies, such as the aforementioned OLED and microLED, along with QLED (see my post-2019 CES coverage for more details on these potential successors).

LG and Samsung manufacture not only display panels but also TVs based on them, of course, and historically they’d likely be inclined to save the best panels for themselves. But now, Roku is (presumably) being supplied by Chinese panel manufacturers who don’t (yet, at least) have the brand name recognition to be able to sell their own TVs to the US and other Western markets. And Roku apparently isn’t afraid (or maybe it’s desperation?) to directly challenge TV suppliers such as LG and Samsung, which it had previously aspired to have as partners integrating support for its streaming platform. Interesting.

Premium smartphones swim upstream

Speaking of aspiring for the high end…a couple of weeks ago, I shared my skepticism regarding any near-term reignition of new smartphone sales. While I’m standing by that premise in a broad sense, there is one segment of the market that seemingly remains healthy, at least comparatively: premium brands and models. Thereby explaining, for example, Qualcomm’s latest high-end Snapdragon 8 Gen 3 SoC platform, unveiled last October. And similarly explaining the CES-launched initial round of premium smartphones based on the Snapdragon 8 Gen 3 and competitive chipsets from companies like Apple and MediaTek.

Take, for example, the OPPO Find X7 Ultra. Apple’s iPhone 15 Pro Max might have one periscope lens, but OPPO’s new premium smartphone has two! Any sarcasm you might be sensing is intentional, by the way…that said, keep in mind that I’m one of an apparently dying breed of folks who’s still fond of standalone cameras, and that I also take great pride in not acquiring the latest-and-greatest smartphones (or brand-new ones at all, for that matter).

Wi-Fi gets faster and more robust…and slower but longer distance

Speaking of wireless communications…Wi-Fi 7 (aka IEEE 802.11be), the latest version of the specification from the Wi-Fi Alliance, was officially certified this week. Predictably, as with past versions of the standard, manufacturers had jumped the gun and begun developing and sampling chipsets (and systems based on them) well ahead of this time; hopefully all the equipment already out there based on “draft” specs will be firmware-upgradeable to the final version. In brief, Wi-Fi 7 builds on Wi-Fi 6 (aka IEEE 802.11ax), which had added support for both MU-MIMO and OFDMA, and Wi-Fi 6e, which added support for the 6 GHz license-exempt band, with several key potential enhancements:

  • Wider channels: up to 80 MHz in the 5 GHz band (vs 20 MHz initially) and up to 320 MHz in the 6 GHz band (vs 160 MHz previously)
  • Multi-link operation: the transmitter-to-receiver connection can employ multiple channels in multiple bands simultaneously, for higher performance and/or reliability
  • Higher QAM levels for denser data packing: 4K-QAM, versus 1,024-QAM with Wi-Fi 6 and 256-QAM in Wi-Fi 5.

The key word in all of this, of course, is “potential”. First and foremost, the devices on both ends of the connection must both support Wi-Fi 7; otherwise the link will down-throttle to a lower version of the standard. Wide channel usage is dependent on spectrum availability, and the flip side of the coin is also relevant: its usage may adversely affect other ISM-based devices. And QAM level relevance is fundamentally defined by signal strength and contending interference sources…i.e., 4K-QAM is only relevant at close range, among other factors.

That said, Wi-Fi’s slower but longer range sibling, Wi-Fi HaLow (aka IEEE 802.11ah), which also had its coming-out party at CES this year, is to me actually the more interesting wireless communication standard. The key word here is “standard”. Long-time readers may remember my earlier discussions of my Blink outdoor security camera setup. Here’s a relevant excerpt from the premier post in the series:

A Blink system consists of one or multiple tiny cameras, each connected both directly to a common router or to an access point intermediary (and from there to the Internet) via Wi-Fi, and to a common (and equally diminutive) Sync Module control point (which itself then connects to that same router or access point intermediary via Wi-Fi) via a proprietary “LFR” long-range 900 MHz channel.

The purpose of the Sync Module may be non-intuitive to those of you who (like me) have used standalone cameras before…until you realize that each camera is claimed to be capable of running for up to two years on a single set of two AA lithium cells. Perhaps obviously, this power stinginess precludes continuous video broadcast from each camera, a “constraint” which also neatly preserves both available LAN and WAN bandwidth. Instead, the Android or iOS smartphone or tablet app first communicates with the Sync Module and uses it to initiate subsequent transmission from a network-connected camera (generic web browser access to the cameras is unfortunately not available, although you can also view the cameras’ outputs from either a standalone Echo Show or Spot, or a Kindle Fire tablet in Echo Show mode).

In summary, Wi-Fi HaLow takes that proprietary “LFR” long-range 900 MHz channel and makes it industry-standard. One of the first Wi-Fi HaLow products to debut this week was Abode Systems’ Edge Camera, developed in conjunction with silicon partner Morse Micro and software partner Xailent, which will enter production later this quarter at $199.99 and touts a 1.5 mile broadcast range and one year of operating life from its integrated 6,000 mAh rechargeable Li-ion battery. The broader implications of the technology for IoT and other apps are intriguing.

Does Matter (along with Thread, for that matter) matter?

Speaking of networking…the Matter smart home communication standard, built on the foundation of the Thread wireless protocol (which shares its IEEE 802.15.4 radio underpinnings with Zigbee), had no shortage of associated press releases and product demos in Las Vegas this week. But to date, its implementation has been underwhelming (leading to a scathing but spot-on recent diatribe from The Verge, among other pieces), both in comparison to its backers’ rosy projections and its true potential.

Not that any of this was a surprise to me, alas. Consider that the fundamental premise of Matter and Thread was to unite the now-fragmented smart home device ecosystem exemplified by, for example, the various Belkin WeMo devices currently residing in my abode. If you’re an up-and-coming startup in the space, you love industry standards, because they lower your market-entry barriers versus larger, more established competitors. Conversely, if you’re one of those larger, more established suppliers, you love barriers to entry for your competitors. Hence the lukewarm-at-best (and more frequently, nonexistent or flat-out broken) embrace of Matter and Thread by legacy smart home technology and product suppliers, for whom, as my earlier Blink example exemplifies, even conventional web browser access (vs a proprietary app) is a bridge too far.

I’ll have more to say on Matter and Thread in a dedicated-topic post to come. But suffice it to say that I’m skeptical about their long-term prospects, albeit only cautiously so. I just don’t know what it might take to break the logjam that understandably prevents competitors from working together, in spite of the reality that a rising tide often does end up lifting all boats…or if you prefer, it’s often better to get a slice of a large pie versus the entirety of a much smaller pie. I’d promise to turn metaphors off at this point, but then there’s the title of the next section…

The Apple-ephant in the room

Speaking of standards…Apple, as far as I know, has never had a show floor, hospitality suite or other formal presence at CES, although I’m sure plenty of company employees attend, scope out competitors’ wares and meet with suppliers (and of course, there are plenty of third-party iPhone case suppliers and the like showing off their latest-and-greatest). That said, Apple still regularly casts a heavy pall over the event proceedings by virtue of its recently announced, already-public upcoming and rumored planned product and service offerings. Back in 2007, for example, the first-generation iPhone was all that anyone could talk about. And this year, it was the Vision Pro headset, which Apple announced on Monday (nothing like pre-empting CES, eh?) would be open for pre-sale beginning next week, with shipments starting on February 2:

The thematic commonality with the first iPhone commercial was, I suspect, not by accident:

What’s the competitive landscape look like? Well, in addition to Qualcomm’s earlier mentioned Snapdragon 8 Gen 3 SoC for premium smartphones, the company more recently (a few days ago, to be precise) unveiled a spec-bumped “+” variant of its XR2 Gen 2 SoC for mixed-reality devices, several of which were on display at the show. There were, for example, the latest-generation XREAL augmented reality (AR) glasses, along with an upcoming (and currently unnamed) standalone head-mounted display (HMD) from Sony. The latter is particularly interesting to me…it was seemingly (and likely obviously) rushed to the stage to respond to Apple’s unveil, for one thing. Sony’s also in an interesting situation, because it first and foremost wants to preserve its lucrative game console business, for which it already offers several generations of VR headsets as peripherals (thereby explaining why I earlier italicized “standalone”). Maybe that’s why development partner Siemens is, at least for now, positioning it as intended solely for the “industrial metaverse”?

The march of the semiconductors

Speaking of ICs…in addition to the announcements I’ve already mentioned, the following vendors (and others as well; these are what caught my eye) released chips and/or software packages:

The rest of the story

I’m a few words shy of 3,000 at this point, and I’m not up for incurring Aalyia’s wrath, so I’ll only briefly mention other CES 2024 announcements and trends that particularly caught my eye:

And with that, pushing beyond 3,100 words (and pushing my luck with Aalyia in the process) I’ll sign off. Sound off with your thoughts in the comments, please!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post The 2024 CES: It’s “AI everywhere”, if you hadn’t already guessed appeared first on EDN.

More gated 555 astable multivibrators hit the ground running

Wed, 01/10/2024 - 17:23

Addressing the long-first-pulse malady in less traditional 555 astable topologies, including CMOS- and bipolar-based oscillators that generate 50:50 symmetrical square waves.

A previous Design Idea, “Gated 555 astable hits the ground running” offered a fix for the problem of the excessively long first pulse that’s generated by traditional topology 555 astable circuits on start up when gated by the RESET pin from oscillation-off to oscillation-on. See Figure 1 and Figure 2.

Figure 1 The problem—the first oscillation cycle has a too-long first pulse on start-up, when gated by the RESET pin from oscillation-off to oscillation-on.

Wow the engineering world with your unique design: Design Ideas Submission Guide


Figure 2 The fix via C2 charge injection on oscillation startup to equalize pulse length.

However, unaddressed in this design idea is the fact that less traditional 555 astable topologies also suffer from the same long-first-pulse malady. Important examples of such circuits are oscillators that generate 50:50 symmetrical square waves, such as Figure 3.

Figure 3 The long first-pulse problem also occurs in a 50:50 square wave topology popular for CMOS 555s.

Happily, the same fix from “Gated 555 astable hits the ground running” works in this genre of oscillators too, as illustrated in Figure 4.

Figure 4 C2 charge injection fix applied to CMOS 50:50 square wave oscillator.

So, the problem is solved for CMOS 555 square wave generators. But what about their bipolar kin?

Despite their age, bipolar 555s still get designed into contemporary applications. The reasons for the choice include advantages like higher supply voltage rating (18 V vs 15 V) and greater output current capability (hundreds vs tens of mA) than CMOS types. But they do need to be wired up somewhat differently—for example with an extra resistor (as described in a previous Design Idea “Add one resistor to give bipolar LM555 oscillator a 50:50 duty cycle“)—when a 50:50 square wave output is required. See Figure 5.

Figure 5 Bipolar 555 in gated 50:50 square wave configuration.

The C2 charge injection trick will still work to correct Figure 5’s first pulse, but there’s a complication. When held reset, Figure 5’s circuit doesn’t discharge the timing capacitor all the way to zero, but only to Vz where:

Vz = (R3/(R2 + R3)) V+
= 0.184 V+

Therefore, our old friend C2 = C1/2 won’t work. What’s needed is a smaller charge injection from a smaller C2 = 0.175 C1 as Figure 6 shows.

Figure 6 C2 charge injection first-pulse fix modified for bipolar 555 square wave generation.
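The 0.175 figure can be sanity-checked with a couple of lines of arithmetic. This sketch assumes the injected start-up step divides capacitively as ΔV = V+ · C2/(C1 + C2) and must land the timing node on the 555’s 1/3 V+ trigger level; that assumption reproduces both the C1/2 value for the fully discharged CMOS case and (within rounding) the 0.175 C1 value quoted here.

```python
# Quick arithmetic check of the charge-injection capacitor sizing (illustrative).
# Assumption: the gating edge couples a step of deltaV = V+ * C2/(C1 + C2) into
# the timing node, which must end up at the 555's 1/3 V+ trigger level.

def c2_for_injection(c1, v_start_frac, v_target_frac=1.0 / 3.0):
    """C2 needed to step the timing node from v_start_frac*V+ to v_target_frac*V+."""
    dv = v_target_frac - v_start_frac   # required step, as a fraction of V+
    return c1 * dv / (1.0 - dv)         # solve dv = C2/(C1 + C2) for C2

c1 = 1.0                                      # normalize C1 = 1
print(round(c2_for_injection(c1, 0.0), 3))    # CMOS case: cap starts at 0 V -> C1/2
print(round(c2_for_injection(c1, 0.184), 3))  # bipolar case: cap held at Vz -> ~0.175 C1
```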

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content


The post More gated 555 astable multivibrators hit the ground running appeared first on EDN.

CES 2024: Creating a frugal code in embedded software

Wed, 01/10/2024 - 12:29

At CES 2024, a French startup is presenting the notion of frugal code in embedded software by identifying and quantifying the optimization potential of the code. WedoLow, a spinoff from three research laboratories—IETR, INSA, and Inria in Rennes, France—will demonstrate how its automated software solution works for automotive applications ranging from advanced driver assistance systems (ADAS) to autonomous driving (AD) to in-vehicle infotainment systems.

WedoLow claims that its solution addresses complexity in embedded software by diagnosing and checking the code rapidly throughout the development process. That’s how it determines whether the code is fully optimized and whether gains can be obtained in terms of execution speed or energy consumption.

Source: WedoLow

Complexification of code in embedded software

At a time when applications are becoming larger and code bases increasingly voluminous and complex, embedded systems are no exception. That inevitably complicates the work of developers, who now face a growing risk of delays, with consequences for the efficiency and performance of their applications.

According to a 2020 survey from Sourcegraph, 51% of developers say they have more than 100 times the volume of code they had 10 years ago. Furthermore, 92% of developers say the pressure to release software faster has increased.

Take the case of the automotive industry, where cars have 200 million lines of code today and are expected to have 650 million by 2025. According to a McKinsey report titled “Outlook on the automotive software and electronics market through 2030,” the automotive software market is already worth more than 31 billion dollars and is forecast to reach around 80 billion in 2030.

The use of embedded software in the automotive sector has been constantly increasing since the introduction of anti-lock braking system (ABS) more than 40 years ago. So, gains in embedded software’s speed of execution and energy consumption will result in more responsive systems and longer battery life, which are crucial aspects for electric and autonomous mobilities.

How software works

WedoLow claims that its beLow software suite enables developers to understand the structure of a code and identify the parts that can be rewritten to generate more efficiency and performance. It’s enabled by optimization techniques that identify and quantify the potential optimization of the code at any stage of its development.

Developers can then build a line-by-line or function-by-function optimization strategy and obtain optimized code rapidly and automatically. For example, WedoLow quotes a 23% gain in execution speed on the filtering of signals emitted by sensors on a road-vehicle transmission system, and a 95% gain in execution speed on the processing of data and filtering of signals emitted by different sensors in battery management system (BMS) software.

Besides embedded software, WedoLow also aims to address the hosted software segment for server and cloud applications. Here, the French startup conducted a test with an aerospace group on the processing of satellite images, reducing the software’s energy consumption by 18%.

WedoLow is presenting its frugal code solution at CES 2024; product launch is scheduled in the second quarter of 2024.

Related Content


The post CES 2024: Creating a frugal code in embedded software appeared first on EDN.

Power Tips #124: How to improve the power factor of a PFC

Tue, 01/09/2024 - 18:30


In Power Tips #116, I talked about how to reduce the total harmonic distortion (THD) of a power factor correction (PFC). In this power tip, I will talk about another important criterion to evaluate PFC performance: the power factor, defined as the ratio of real power in watts to the apparent power, which is the product of the root mean square (RMS) current and RMS voltage in volt amperes, as shown in Equation 1:

PF = PREAL / (VRMS × IRMS)    (Equation 1)

The power factor indicates how efficiently energy is drawn from the AC source. With a poor power factor, a utility needs to generate more current than the electrical load actually needs, which causes elements such as breakers and transformers to overheat, in turn reducing their life span and increasing the cost of maintaining a public electrical infrastructure.

Ideally, the power factor should be 1; then the load appears as a resistor to the AC source. However, in the real world, electrical loads not only cause distortions in AC current waveforms, but also make the AC current either lead or lag with respect to the AC voltage, resulting in a poor power factor. For this reason, you can calculate the power factor by multiplying the distortion power factor by the displacement power factor:

PF = cos(φ) / √(1 + THD²)    (Equation 2)

where φ is the phase angle between the current and voltage and THD is the total harmonic distortion of the current.
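The distortion-times-displacement product described above can be sanity-checked numerically. This short sketch (illustrative, not from the article) takes THD as a ratio (0.05 = 5%) and the phase angle in degrees:

```python
import math

# Power factor = (distortion power factor) x (displacement power factor)
#              = cos(phi) / sqrt(1 + THD^2), with THD expressed as a ratio.

def power_factor(thd, phi_deg):
    distortion_pf = 1.0 / math.sqrt(1.0 + thd ** 2)
    displacement_pf = math.cos(math.radians(phi_deg))
    return distortion_pf * displacement_pf

# Even a clean 5%-THD current costs little by itself; adding 10 degrees of
# phase lead (e.g., from an EMI X-capacitor) costs noticeably more:
print(round(power_factor(0.05, 0.0), 4))
print(round(power_factor(0.05, 10.0), 4))
```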

As the THD requirement gets lower, the power factor requirement gets higher. Table 1 lists the power factor requirements in the recently released Modular Hardware System-Common Redundant Power Supply (M-CRPS) base specification.

Output power | 10% load | 20% load | 50% load | 100% load
Power factor | | | |

Table 1 M-CRPS power factor requirements

Equation 2 shows that to improve the power factor, the first thing to do is to reduce the THD (which I discussed in Power Tips #116). However, a low THD does not necessarily mean that the power factor is high. If the PFC AC input current and AC input voltage are not in phase, even if the current is a perfect sine wave (low THD), the phase angle φ will result in a power factor less than 1.

The phase difference between the input current and input voltage is mainly caused by the electromagnetic interference (EMI) filter used in the PFC. Figure 1 shows a typical PFC circuit diagram that consists of three major parts: an EMI filter, a diode bridge rectifier, and a boost converter.

Figure 1 Circuit diagram of a typical PFC that consists of an EMI filter, diode bridge rectifier, and a boost converter. Source: Texas Instruments

In Figure 1, C1, C2, C3, and C4 are EMI X-capacitors. Inductors in the EMI filter do not change the phase of the PFC input current; therefore, it is possible to simplify Figure 1 into Figure 2, where C is now a combination of C1, C2, C3 and C4.

Figure 2 Simplified EMI filter where C is a combination of C1, C2, C3, and C4. Source: Texas Instruments

The X-capacitor causes the AC input current to lead the AC voltage, as shown in Figure 3. The PFC inductor current is iL(t), the input voltage is vAC(t), and the X-capacitor reactive current is iC(t). The total PFC input current is iAC(t), which is also the current from which the power factor is measured. Although the PFC current control loop forces iL(t) to follow vAC(t), the reactive current iC(t) leads vAC(t) by 90 degrees, which causes iAC(t) to lead vAC(t). The result is a poor power factor.

This effect is amplified at a light load and high line, as iC(t) takes more weight in the total current. As a result, it is difficult for the power factor to meet a rigorous specification such as the M-CRPS specification.

Figure 3 X-capacitor current iC(t) causes the AC current to lead the AC voltage. Source: Texas Instruments

Fortunately, with a digital controller, you can solve this problem through one of the following methods.

Method #1

Since iC(t) makes the total current lead the input voltage, if you can force iL(t) to lag vAC(t) by some degree, as shown in Figure 4, then the total current iAC(t) will be in phase with the input voltage, improving the power factor.

Figure 4 Forcing iL(t) to lag vAC(t) so that the total current iAC(t) will be in phase with the input voltage. Source: Texas Instruments

Since the current loop forces the inductor current to follow its reference, to let iL(t) lag vAC(t), the current reference needs to lag vAC(t). For a PFC with traditional average current-mode control, the current reference is generated by Equation 3:

Iref(t) = A × B × C    (Equation 3)

where A is the voltage-loop output, B equals 1/VAC_RMS², and C is the sensed input voltage vAC(t).

To delay the current reference, an analog-to-digital converter (ADC) measures vAC(t); the measurement results are stored in a circular buffer. Then, instead of using the newest input voltage (VIN) data, Equation 3 uses previously stored VIN data to calculate the current reference for the present moment. The current reference will lag vAC(t); the current loop will then make iL(t) lag vAC(t). This can compensate the leading X-capacitor current iC(t) and improve the power factor.

The delay period needs dynamic adjustment based on the input voltage and output load. The lower the input voltage and the heavier the load, the shorter the delay needed. Otherwise iL(t) will be over-delayed, making the power factor worse than if there were no delay at all. To solve this problem, use a look-up table to precisely and dynamically adjust the delay time based on the operating condition.
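A minimal sketch of Method #1’s delay mechanism, using the A × B × vAC reference product from the text. The class and parameter names are illustrative, not from a real controller, and the fixed delay stands in for the operating-point lookup table:

```python
from collections import deque

# Sketch of Method #1: the sensed input voltage passes through a circular
# buffer before feeding the current-reference product (A x B x vAC), so the
# reference, and hence iL(t), lags vAC(t).

class DelayedReference:
    def __init__(self, delay_samples):
        # zero-filled deque acts as the circular buffer of past vAC samples
        self.buf = deque([0.0] * delay_samples, maxlen=delay_samples)

    def step(self, v_ac_sample, a_voltage_loop, v_ac_rms):
        delayed_v = self.buf[0]        # oldest stored sample (delayed vAC)
        self.buf.append(v_ac_sample)   # newest sample displaces the oldest
        b = 1.0 / (v_ac_rms ** 2)      # B term of the reference equation
        return a_voltage_loop * b * delayed_v

# In a real design, delay_samples would come from a lookup table indexed by
# line voltage and load, per the text.
ref = DelayedReference(delay_samples=8)
```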

Method #2

Since a poor power factor is caused mainly by the EMI X-capacitor current iC(t), if you calculate iC(t) for a given X-capacitor value and input voltage, then subtract iC(t) from the total ideal input current to form a new current reference for the PFC current loop, you will get a total input current that is in phase with the input voltage and can achieve a good power factor.

To explain in detail, for a PFC with a unity power factor, iAC(t) is in phase with vAC(t). Equation 4 expresses the input voltage:

vAC(t) = VAC sin(2πft)    (Equation 4)

where VAC is the VIN peak value and f is the VIN frequency. The ideal input current then needs to be totally in phase with the input voltage, expressed by Equation 5:

iAC(t) = IAC sin(2πft)    (Equation 5)

where IAC is the input current peak value.

Since the capacitor current is proportional to the rate of change of the voltage across it, Equation 6 gives:

iC(t) = C dvAC(t)/dt = 2πfC·VAC cos(2πft)    (Equation 6)

Equation 7 comes from Figure 2:

iL(t) = iAC(t) − iC(t)    (Equation 7)

Combining Equations 5, 6 and 7 results in Equation 8:

iL(t) = IAC sin(2πft) − 2πfC·VAC cos(2πft)    (Equation 8)

If you use Equation 8 as the current reference for the PFC current loop, you can fully compensate the EMI X-capacitor current, achieving a unity power factor. In Figure 5, the blue curve is the waveform of the preferred input current, iAC(t), which is in phase with vAC(t). The green curve is the capacitor current, iC(t), which leads vAC(t) by 90 degrees. The dotted black curve is iAC(t) ‒ iC(t). The red curve is the rectified iAC(t) ‒ iC(t). In theory, if the PFC current loop uses this red curve as its reference, you can fully compensate the EMI X-capacitor current and increase the power factor.

Figure 5 New current reference with iAC(t) (blue), iC(t) (green), iAC(t) ‒ iC(t) (dotted black), and rectified iAC(t) ‒ iC(t) (red). Source: Texas Instruments

To generate the current reference as shown in Equation 8, you’ll first need to calculate the EMI X-capacitor reactive current, iC(t). Using a digital controller, an ADC samples the input AC voltage, which the CPU then reads in the interrupt loop routine at a fixed rate. By counting how many ADC samples fall between two consecutive AC zero crossings (half an AC period), Equation 9 determines the frequency of the input AC voltage:

f = fisr / (2N)    (Equation 9)

where fisr is the frequency of the interrupt loop and N is the total number of ADC samples between two consecutive AC zero crossings.

To get the cosine waveform cos(2πft), a software phase-locked loop generates an internal sine wave that is synchronized with the input voltage, making it possible to obtain the cosine waveform. Use Equation 6 to calculate iC(t), then subtract it per Equation 7 to get the new current reference.
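The zero-crossing frequency measurement reduces to one line of arithmetic, assuming (as Equation 9 implies) that two consecutive zero crossings span half an AC period:

```python
# Sketch of the zero-crossing frequency estimate: N interrupt-rate samples
# between two consecutive zero crossings span half a period, so f = fisr/(2N).

def line_frequency(f_isr, n_samples_between_crossings):
    """Estimate the AC line frequency from the interrupt rate and sample count."""
    return f_isr / (2.0 * n_samples_between_crossings)

# e.g., a 50 kHz interrupt loop counting 500 samples between crossings:
print(line_frequency(50_000, 500))   # 50.0 Hz line
```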

Reshaping the current reference at the AC zero crossing area

These two methods let iL(t) lag vAC(t) in order to improve the power factor; however, they may cause extra current distortion at the AC zero crossing. See Figure 6. Because of the diode bridge rectifier used in the PFC power stage, the diodes will block any reverse current. Referencing Figure 6, during T1 and T2, vAC(t) is in the positive half cycle, but the expected iL(t) (the dotted black line) is negative. This is not possible, however, because the diodes will block the negative current, so the actual iL(t) remains zero during this period. Similarly, during T3 and T4, vAC(t) becomes negative, but the expected iL(t) is still positive. iL(t) will again be blocked by the diodes and remain at zero.

Correspondingly, the current reference needs to be zero during these two periods; otherwise the integrator in the control loop will wind up. When the two periods are over and current starts to conduct, the control loop generates a PWM duty cycle bigger than required, causing current spikes. The red curve in Figure 6 shows what the actual iL(t) would be with a diode bridge, and this red curve should be used as the current reference for the PFC current loop.

Figure 6 Final current reference curve where the red curve shows what the actual iL(t) would be with a diode bridge and should be used as the current reference for the PFC current loop. Source: Texas Instruments
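The whole Method #2 reference, including the zero-crossing reshaping, can be sketched in a few lines. This combines Equations 4 through 8 with the diode-bridge clamp described above; all component values are illustrative examples, not from the article:

```python
import math

# Method #2 sketch: subtract the computed X-capacitor current from the ideal
# in-phase current, clamp to zero wherever the bridge would block reverse
# current, then rectify for the PFC current loop.

def current_reference(t, f, v_ac_peak, i_ac_peak, c_x):
    w = 2.0 * math.pi * f
    v_ac = v_ac_peak * math.sin(w * t)            # Equation 4: input voltage
    i_ac = i_ac_peak * math.sin(w * t)            # Equation 5: ideal in-phase current
    i_c = w * c_x * v_ac_peak * math.cos(w * t)   # Equation 6: X-cap current, leads by 90 deg
    i_l = i_ac - i_c                              # Equations 7/8: iL = iAC - iC
    if (v_ac >= 0.0) != (i_l >= 0.0):             # bridge can't conduct reverse current
        i_l = 0.0
    return abs(i_l)                               # rectified current reference

# One 50 Hz line cycle: 325 V peak, 2 A peak, 1 uF of total X-capacitance
refs = [current_reference(t / 10000.0, 50.0, 325.0, 2.0, 1e-6) for t in range(200)]
```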

Optimizing power factor

A poor power factor is mainly caused by the X-capacitor used in the PFC EMI filter, but it is possible to compensate for the X-capacitor’s reactive current by delaying the inductor current. Use either of the two methods above, combined with the guidance in Power Tips #116, to meet both a high power factor and a low THD requirement.

Bosheng Sun is a systems, applications and firmware engineer at Texas Instruments.

Related Content


The post Power Tips #124: How to improve the power factor of a PFC appeared first on EDN.

Chips taking generative AI to edge reach CES floor

Tue, 01/09/2024 - 15:20

A new system-on-chip (SoC) demonstrated at CES 2024 in Las Vegas claims to run multi-modal large language models (LLMs) at a fraction of the power-per-inference of leading GPU solutions. Ambarella is targeting this SoC to bring generative AI to edge endpoint devices and on-premise hardware in video security analysis, robotics, and a multitude of industrial applications.

According to the Santa Clara, California-based chip developer, its N1 series SoCs are up to 3x more power-efficient per generated token than GPUs and standalone AI accelerators. Ambarella will initially offer optimized generative AI processing capabilities on its mid- to high-end SoCs for on-device performance under 5 W. It’ll also release a server-grade SoC under 50 W in its N1 series.

Generative AI will be a step function for computer vision processing that brings context and scene understanding to a variety of devices such as security installations and autonomous robots. Source: Ambarella

Ambarella claims that its SoC architecture is natively suited to processing video and AI simultaneously at very low power. So, unlike a standalone AI accelerator, its N1 SoCs carry out highly efficient processing of multi-modal LLMs while still performing all system functions. Examples of the on-device LLM and multi-modal processing enabled by these SoCs include smart contextual searches of security footage, robots that can be controlled with natural language commands, and different AI helpers that can perform anything from code generation to text and image generation.

Les Kohn, CTO and co-founder of Ambarella, says that generative AI networks are enabling new functions across applications that were just not possible before. “All edge devices are about to get a lot smarter with chips enabling multi-modal LLM processing in a very attractive power/price envelope.”

Alexander Harrowell, principal analyst for advanced computing at Omdia, agrees with the above notion and sees virtually every edge application getting enhanced by generative AI in the next 18 months. “When moving generative AI workloads to the edge, the game becomes all about performance per watt and integration with the rest of the edge ecosystem, not just raw throughput,” he added.

The AI chips are supported by the company’s Cooper Developer Platform, where Ambarella has pre-ported and optimized popular LLMs. That includes Llama-2 as well as the Large Language and Video Assistant (LLava) model running on N1 SoCs for multi-modal vision analysis of up to 32 camera sources. These pre-trained and fine-tuned models will be available for chip developers to download from the Cooper Model Garden.

Ambarella also claims that its N1 SoCs are highly suitable for application-specific LLMs, which are typically fine-tuned on the edge for each scenario. That’s unlike the classical server approach of using bigger and more power-hungry LLMs to cater to every use case.

With these features, Ambarella is confident that its chips can help OEMs quickly deploy generative AI into any power-sensitive application ranging from an on-premise AI box to a delivery robot. The company will demonstrate its SoC solutions for AI applications at CES in Las Vegas on 9-12 January 2024.

Related Content


The post Chips taking generative AI to edge reach CES floor appeared first on EDN.

Simple log-scale audio meter

Mon, 01/08/2024 - 17:48

While refurbishing an ageing audio mixer, I decided that the level meters needed special attention. Their rather horrible 100 µA edgewise movements had VU-type scales, the drive electronics being just a diode (germanium?) and a resistor. Something more like a PPM, with a logarithmic (or linear-in-dBs) scale and better dynamics, was needed. This is a good summary of the history and specifications of both PPMs and VU meters.

Wow the engineering world with your unique design: Design Ideas Submission Guide

It occurred to me that since both peak detection and log conversion imply the use of diodes, it might be possible to combine those functions, at least partially.

Log conversion normally uses either a transistor in a feedback loop round an op-amp or a complex ladder using resistors and semiconductors. The first approach requires a special PTC resistor for temperature compensation and is very slow with low input levels; the second usually has a plethora of trimpots and is un-compensated. This new approach, sketched in Figure 1, shows the first pass at the idea, and avoids those disadvantages.

Figure 1 This basic peak log detector shows the principles involved and helps to highlight the problems. (Assume split, non-critical supply rails.)

As with all virtual-earth circuits, the input resistor feeds current into the summing point—the op-amp’s inverting input—which is balanced by current driven through the diodes by the op-amp’s output. Because the forward voltage across a diode (VF) is proportional to the logarithm of the current flowing through it (as described here), the op-amp’s output voltage now represents the log of the input signal. Positive input half-cycles cause it to clamp low at VF, which we ignore, as this is a half-wave design; for negative ones, it swings high by 2VF.
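The log law the circuit exploits comes straight from the diode equation. As a quick sanity check (the saturation current and ideality factor below are illustrative values, not those of any particular diode), each decade of current adds a near-constant step of about 60 mV to VF at room temperature, which is exactly what yields a linear-in-dBs scale:

```python
import math

def diode_vf(i_amps, i_s=1e-12, n=1.0, t_kelvin=300.0):
    """Forward voltage from the Shockley relation: VF = n*VT*ln(I/Is + 1).
    Is and n are illustrative, not from any specific diode datasheet."""
    k_over_q = 8.617333262e-5   # Boltzmann constant / electron charge, V/K
    vt = k_over_q * t_kelvin    # thermal voltage, ~25.9 mV at 300 K
    return n * vt * math.log(i_amps / i_s + 1.0)

# Each decade of current adds a roughly constant ~59.5 mV step to VF:
vfs = [diode_vf(10**-e) for e in range(6, 2, -1)]   # 1 uA .. 1 mA
deltas = [b - a for a, b in zip(vfs, vfs[1:])]
```

Over the ~50 dB span quoted above, that per-decade step is what the meter scale rides on; temperature drift of VT and Is is the flip side, addressed later with the thermistor.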

Driving that 2VF through another diode into the capacitor charges the latter to VF, losing a diode-drop’s worth in the process. (No, the VFs don’t match exactly, except momentarily, but no matter.) The meter now shows the log of negative-input half-cycle peaks, the needle falling back as the capacitor discharges through the meter.

As it stands, it works, with a very reasonable span of around 50 dB. Now for the problems:

  1. The integration, or attack time, is slow at ~70 ms to within 2 dB of the final reading.
  2. The return or decay time is rather fast, about a second for the full scale, and is exponential rather than linear.
  3. It’s too temperature-sensitive, the indication changing by ~5 dB over a 20°C range.

While this isn’t bad for a basic, log-scaled VU-ish meter, something snappier would be good: time for the second pass.

Figure 2 This upgraded circuit has a faster response and much better temperature stability.

A1 buffers the input signal to avoid significantly loading the source. C1 and R1 roll off bass components (-3 dB at ~159 Hz) to avoid spurii from rumbling vinyl and woodling cassette tapes. Drive into A2 is now via thermistor Th1 (a common 10k part, with a β-value of 3977), which largely compensates for thermal effects. C2 blocks any offset from A1, if used. (Omit the buffer stage if you choose, but the input impedance and LF breakpoint will then vary with temperature, so then choose C2 with care.) Three diodes in the forward chain give a higher output and a greater span. A2 now feeds transistor Tr1, which subtracts its own VBE from the diodes’ signal while emitter-following that into C3, thus decreasing the attack time. R2 can now be higher, increasing the decay time. Figure 3 shows a composite plot of the actual electronic response times; the meter’s dynamics will affect what the user sees.
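The bass roll-off is the standard first-order C-R corner, fc = 1/(2πR1C1). The component values aren't given in the text, so the pair below is purely illustrative; it happens to reproduce the quoted ~159 Hz:

```python
import math

def hp_cutoff_hz(r_ohms, c_farads):
    """-3 dB corner frequency of a first-order C-R high-pass filter."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Illustrative values only -- a 10 kOhm / 100 nF pair gives ~159 Hz,
# matching the figure quoted in the text:
fc = hp_cutoff_hz(10e3, 100e-9)
```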

Figure 3 This shows the dynamic responses to a tone-burst, with an attack time of around 12 ms to within 2 dB of the final value, and the subsequent decay.  (The top trace is the ~5.2 kHz input, aliased by the ’scope.)

The attack time is now in line with the professional spec, and largely independent of the input level. While the decay time is OK in practice, it is exponential rather than linear. 

The response to varying input levels is shown in Figure 4. I chose to use a full-scale reading of +10 dBu, the normal operating level being around -10 dBu with clipping starting at ~+16 dBu. For lower maximum readings, use a higher value for Th1 or just decrease R2, though the decay time will then be faster unless you also increase C3, impacting the attack time.

Figure 4 The simulated and actual response curves are combined here, showing good conformance to a log law with adequate temperature stability.

The simulation used a negative-going ramp (coupling capacitors “shorted”) while the live curve was for sine waves, with Th1 replaced by a 1% 10k resistor and R2 adjusted to give 100 µA drive for +10 dBu (6.6 Vpk-pk) input. I used LTspice here to verify the diodes’ performance and to experiment with the temperature compensation. (Whenever I see a diode other than a simple rectifier, I am tempted to reach for a thermistor. This is a good primer on implementing them in SPICE, with links to models that are easily tweakable. “Other compensation techniques are available.”) The meter coil has its own tempco of +3930 ppm/°C, which is also simulated here though it makes little practical difference. Just as well: might be tricky to keep it isothermal with the other temperature-sensitive stuff.
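For readers wanting to model Th1 themselves, the usual two-point β approximation reproduces an NTC's curve well near room temperature. A minimal sketch using the 10 kΩ / β = 3977 figures quoted above (part tolerances and curve-fit error ignored):

```python
import math

def ntc_resistance(t_celsius, r25=10e3, beta=3977.0):
    """NTC resistance from the two-point beta model:
    R(T) = R25 * exp(beta * (1/T - 1/T25)), temperatures in kelvin."""
    t = t_celsius + 273.15
    t25 = 25.0 + 273.15
    return r25 * math.exp(beta * (1.0 / t - 1.0 / t25))

# Resistance swings strongly over a modest temperature range, which is
# what lets Th1 cancel the diodes' drift:
r5, r25, r45 = (ntc_resistance(t) for t in (5.0, 25.0, 45.0))
```

With these values the part moves from roughly 26 kΩ at 5°C to about 4.3 kΩ at 45°C, more than a 2:1 change either side of nominal.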

Simple though this circuit is, it works well and looks good in operation. (A variant has also proved useful in a fibre-optic power meter.) The original meters, rebuilt as in Figure 2, have been giving good service for a while now, so this is a plug-in breadboard rehash using a spare, similar, meter movement, with extra ’scoping and simulation. It’s possible to take this basic idea further, with still-faster attack, linear decay, adjustable span, better temperature compensation, and even full-wave detection—but that’s another story, and another DI.

Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.

 Related Content


The post Simple log-scale audio meter appeared first on EDN.

Partitioning to optimize AI inference for multi-core platforms

Mon, 01/08/2024 - 09:22

Not so long ago, artificial intelligence (AI) inference at the edge was a novelty easily supported by a single neural processing unit (NPU) IP accelerator embedded in the edge device. Expectations have accelerated rapidly since then. Now we want embedded AI inference to handle multiple cameras, complex scene segmentation, voice recognition with intelligent noise suppression, fusion between multiple sensors, and, most recently, very large and complex generative AI models.

Such applications can deliver acceptable throughput for edge products only when run on multi-core AI processors. NPU IP accelerators are already available to meet this need, extending to eight or more parallel cores and able to handle multiple inference tasks in parallel. But how should you partition expected AI inference workloads for your product to take maximum advantage of all that horsepower?

Figure 1 Multi-core AI processors can deliver acceptable throughput for edge applications like scene segmentation. Source: Ceva

Paths to exploit parallelism for AI inference

As in any parallelism problem, we start with a defined set of resources for our AI inference objective: some number of available accelerators with local L1 cache, shared L2 cache and a DDR interface, each with defined buffer sizes. The task is then to map the network graphs required by the application to that structure, optimizing total throughput and resource utilization.

One obvious strategy applies when processing large input images, which must be split into multiple tiles: partitioning by input map, where each engine is allocated a tile. Here, multiple engines search the input map in parallel, looking for the same feature. Conversely, you can partition by output map: the same tile is fed into multiple engines in parallel, using the same model but different weights to detect different features in the input image at the same time.
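Input-map partitioning can be sketched as below (a simplification: horizontal bands with a small overlap so convolution windows can see across the seams; real compilers use more elaborate tiling schemes):

```python
def split_into_tiles(height, width, n_engines, overlap=0):
    """Partition an input map into horizontal bands, one per engine.
    Overlapping rows let convolution windows straddle band edges."""
    band = height // n_engines
    tiles = []
    for k in range(n_engines):
        top = max(0, k * band - overlap)
        bottom = height if k == n_engines - 1 else min(height, (k + 1) * band + overlap)
        tiles.append((top, bottom, 0, width))  # (row0, row1, col0, col1)
    return tiles

# Four engines, each taking roughly a quarter of a 1080p frame:
tiles = split_into_tiles(1080, 1920, n_engines=4, overlap=2)
```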

Parallelism within a neural net is commonly seen in subgraphs, as in the example below (Figure 2). Resource allocation will typically optimize breadth-wise, then depth-wise, optimizing at each step for the current allocation. Obviously that approach won’t necessarily find a global optimum on one pass, so the algorithm must allow for backtracking to explore improvements. In this example, three engines can deliver >230% of the performance that would be possible if only one engine were available.

Figure 2 Subgraphs highlight parallelism within a neural net. Source: Ceva

While some AI inference models or subgraphs may exhibit significant parallelism as in the graph above, others may display long threads of operations, which may not seem very parallelizable. However, they can still be pipelined, which can be beneficial when considering streaming operations through the network.

One example is layer-by-layer processing in a deep neural network (DNN). Simply organizing layer operations per image to minimize context switches per engine can boost throughput, while allowing the following pipeline operations to switch in later but still sooner than in purely sequential processing. Another good example is provided by transformer-based generative AI networks where alternation between attention and normalization steps allows for sequential recognition tasks to be pipelined.
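The pipelining idea above can be sketched with the classic schedule in which stage s of frame f runs in time slot f + s, assuming one engine per stage (a simplification; real schedulers also weigh buffer sizes and context-switch cost):

```python
def pipeline_schedule(n_frames, stages):
    """Build a pipeline schedule: stage s of frame f occupies slot f + s,
    so each engine (one per stage, an assumption) stays busy once filled."""
    schedule = {}  # time slot -> list of (engine, frame, stage name)
    for f in range(n_frames):
        for s, name in enumerate(stages):
            schedule.setdefault(f + s, []).append((s, f, name))
    return schedule

# Four streamed frames through a three-stage network:
sched = pipeline_schedule(4, ["backbone", "attention", "head"])
# Total slots = n_frames + n_stages - 1, versus n_frames * n_stages
# if the frames were processed purely sequentially.
```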

Batch partitioning is another method, providing support for the same AI inference model running on multiple engines, each fed by a separate sensor. This might support multiple image sensors for a surveillance device. And finally, you can partition by having different engines run different models. This strategy is especially useful in semantic segmentation, say for autonomous driving, where some engines might detect lane markings, others might handle free (drivable) space segmentation, and still others might detect objects (pedestrians and other cars).

Architecture planning

There are plenty of options to optimize throughput and utilization, but how do you decide how best to tune for your AI inference application needs? This architecture planning step must necessarily come before model compilation and optimization. Here you want to explore tradeoffs between partitioning strategies.

For example, a subgraph with parallelism followed by a thread of operations might sometimes be best served simply by pipelining rather than a combination of parallelism and pipelining. Best options in each case will depend on the graph, buffer sizes, and latencies in context switching. Here, support for experimentation is critical to determining optimal implementations.

Rami Drucker is machine learning software architect at Ceva.

Related Content


The post Partitioning to optimize AI inference for multi-core platforms appeared first on EDN.

The dangers of light glare from high-brightness LEDs

Fri, 01/05/2024 - 17:08

I learned to drive in 1966 when I lived in Nashua, New Hampshire. At that time, it was a major traffic violation to drive a car with “dazzling lights”. That piece of common sense seems to no longer apply in this modern era.

This front-page article in Figure 1 was recently published in our local newspaper, Newsday.

Figure 1 Front page of a recent issue of Newsday highlighting the hazards of bright LED headlights.

The article’s author describes the extremely dangerous issue of visual interference being experienced by drivers here on Long Island and, in my opinion, elsewhere as well.

Just as an example, this second image in Figure 2 was captured in a local diner. I was having dinner, and I could see my chicken cutlet, but the visual impact of high-brightness LEDs from a nearby business is self-evident.

Figure 2 The glare from high brightness headlights in a local Long Island diner.

I wrote an e-mail to the Newsday article’s author as shown below. I allowed myself to vent a little, but the issue is of grave concern to me. The email can be seen in Figure 3.

Figure 3 A letter from myself (John Dunn) to the editor of the “Glaring Issue on LI Road” article published in Newsday.

There is another LED abuse being committed by some homeowners as illustrated below in Figure 4.

Figure 4 Abusive lighting from nearby home blinding neighbors, pedestrians, and passing traffic.

This form of abuse has been enabled by the ready availability, at Home Depot, Ace Hardware, and so forth, of LED light sources with a 5000 K color temperature. Light emitted from such fixtures can be highly penetrating and intrusive. By comparison, LED illumination at a color temperature of 2700 K to 2800 K approximates the color temperature delivered by an incandescent lamp. Such lighting doesn’t tend to cause visual stress.

The lights shown in Figure 4 are turned on roughly an hour after sunset and are allowed to remain lit until 10 or 11 PM. If they were any part of a security system, they would be lit all through the night to facilitate camera imaging, but they’re not. For a variety of unrelated reasons, I submit that these lights are being used as a tool for neighborly harassment.

To me, this is a form of trespass, but there is no way that I am aware of to restrain such behavior. With merely the flick of a switch, the glaring light is imposed on its intended victim. To my knowledge, there are no laws governing such misanthropy, and even if there were, there would be no way to achieve effective enforcement.

The only practical remediation I can imagine would be the discontinuation of 5000 K LED products. Quartz halogen lamps and high-wattage incandescent lamps have recently been discontinued for good reasons, so I see no reason why that logic should not be extended further.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related Content


The post The dangers of light glare from high-brightness LEDs appeared first on EDN.

The need for post-quantum cryptography in the quantum decade

Fri, 01/05/2024 - 05:53

Cyber resilience has long been a key focus for industry leaders, but the stakes have been raised with the rapid acceleration of quantum computing. Quantum computing is a cutting-edge innovation that combines the power of computer science, physics, and mathematics to rapidly perform complex calculations outside the realm of classical computing. Expected to be commercialized by 2030, it offers incredible potential to further digitalize key industries and redefine the role technology plays in geopolitics. The possibilities of the post-quantum era cannot be overstated.

While quantum computing can positively serve humanity in a myriad of ways, it also brings concerning cybersecurity threats. In turn, the U.S. government and security leaders have called for accelerated post-quantum cryptography (PQC) migration. President Biden signed the Quantum Computing Cybersecurity Preparedness Act after visiting IBM’s quantum data center in October 2022. In addition, NIST, CISA, and NSA recently advised organizations to develop PQC readiness roadmaps.

The message is clear: quantum-powered cyberattacks are of growing concern, and maintaining resilience in the face of this new threat is different than anything we’ve faced before.


Breaking down the threat

Quantum computing’s biggest double-edged sword is its ability to quickly and easily solve complex algorithms intended to safeguard systems and data. Quantum computers are exceptionally fast, utilizing specialized hardware components that leverage quantum physics to outpace current supercomputing technology.

For example, IBM and UC Berkeley recently collaborated on a quantum computer that performs calculations more quickly and accurately than supercomputers at Lawrence Berkeley National Lab and Purdue University. While this newfound speed might seem like a good thing, it’s also exceedingly dangerous.

Additionally, quantum computers have an innate ability to compromise legacy public key infrastructure (PKI) cryptographic algorithms, the type of algorithms utilized by most of today’s classical computing systems. By leveraging Shor’s Algorithm, quantum computers are able to factor and then decipher these PKI-based algorithms and bypass security controls.
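To see why factoring matters: an RSA-style public modulus n = p·q is secure only while recovering p and q is infeasible. The toy sketch below uses classical trial division, whose cost grows exponentially with the key's bit length; it is emphatically not Shor's algorithm, merely an illustration of the problem that Shor's algorithm solves in polynomial time on a quantum machine:

```python
def trial_factor(n):
    """Classical brute-force factoring by trial division. Runtime grows
    exponentially with the bit length of n -- the barrier Shor removes."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1  # n is prime

# A toy 'RSA' modulus (the textbook 53 * 61 example). Real keys use
# 2048-bit moduli that no classical machine can factor this way, but a
# sufficiently large quantum computer could:
p, q = trial_factor(3233)
```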

Between their unmatched speed and ability to compromise most of the security measures utilized today, quantum computers are a huge threat to modern enterprises and, as such, new quantum resistant PKI encryption and cyber resiliency solutions are needed to mitigate risk.

Post-quantum cryptography

Due to the imminent threat of quantum computing, we’re seeing more and more organizations adopt post-quantum cryptography (PQC). At its core, PQC migration is about shifting away from legacy PKI-based cryptography to post-quantum cryptography that will be resilient to quantum computer attacks.

It’s worth noting here that bad actors are adopting a ‘steal now, decrypt later’ stance that puts significant amounts of confidential data stored in the cloud today at risk of future compromise as more capable quantum computers come online.

The shift to PQC is necessary and timely, especially since the existing security standards many organizations use today do not implement PQC infrastructure that protects against quantum computing attacks. For example, widely used security standards like Trusted Platform Modules (TPMs), IEC 62443, and ISO/SAE 21434 do not require PQC algorithms. Systems and devices built today to these specifications will not have what is needed in the future to be quantum safe.

While the transition to PQC won’t be immediate, we’re making exceptional progress. The U.S. National Institute of Standards and Technology (NIST) is running a multi-round competition to find the best PQC algorithms to replace legacy PKI algorithms. The trials started in 2016 and, in July 2022, NIST announced four candidates for standardization, plus additional candidates for a fourth round of analysis. These four candidates—as well as the fourth-round selection—will become the new NIST-approved encryption standards.

Implementing PQC at scale

With quantum computers likely arriving sooner than anticipated, organizations must start constructing their own PQC migration roadmaps to build resilience against post-quantum attacks. NIST’s first standardized PQC algorithms are expected to be ready in 2024; however, organizations must begin making changes to their production and manufacturing efforts now to streamline migration once the standards are available. By adopting field-programmable gate arrays (FPGAs) now, organizations can position themselves to facilitate PQC migration for a post-quantum future.

FPGAs contain “crypto agile” capabilities that deliver enhanced protection. With flexible programmability and parallel processing functions, they can enable developers to easily update existing systems and hardware with new PQC algorithms for adherence to evolving standards. Further, FPGAs accelerate complex mathematical functions to enhance system performance and protection.

While quantum computing’s potential to revolutionize our world is massive, it’s overshadowed by the technology’s dangerous ability to dismantle cybersecurity and encryption. As we enter this new post-quantum world, cyber resilience is taking on a new meaning, one that demands our unwavering commitment to securing our systems, data, and infrastructure in the face of quantum-powered challenges. Now, maintaining resilience means implementing post-quantum cryptography facilitated by FPGAs to withstand attacks from quantum computers.

The need for PQC cannot be emphasized enough and it’s imperative that governments, industries, and organizations actively collaborate to implement solutions, such as those available today with FPGAs that safeguard our digital future from quantum-powered threats.

Eric Sivertson is VP of security business at Lattice Semiconductor.

Related Content


The post The need for post-quantum cryptography in the quantum decade appeared first on EDN.

Embeddable IP enables Wi-Fi 7 connectivity

Thu, 01/04/2024 - 20:33

Ceva is bolstering its RivieraWaves portfolio of wireless connectivity IP with Wi-Fi 7 IP for high-end consumer and industrial IoT applications. Incorporating both PHY and MAC functions, the Wi-Fi 7 IP is available now for access points, with an optimized station implementation slated for later this year.

The embeddable IP leverages the key features of the IEEE 802.11be standard, including 4K quadrature amplitude modulation (QAM), multi-link operation (MLO), and multiple resource units (MRUs). 4K QAM provides a substantial increase in throughput compared to the 1K QAM of Wi-Fi 6. MLO introduces dynamic channel aggregation, seamlessly combining heterogeneous channels from the same or different bands to navigate interference and boost throughput. MRU stitches together punctured or disjointed resource units within the same band to increase throughput and reduce latency.
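The headline gain from 4K QAM is easy to quantify: an M-point constellation carries log2(M) bits per symbol, so moving from Wi-Fi 6's 1K QAM to 4K QAM raises raw per-symbol capacity by 20% (delivered throughput also depends on coding rate, SNR, and channel width, which this sketch ignores):

```python
import math

def bits_per_symbol(qam_order):
    """An M-point QAM constellation encodes log2(M) bits per symbol."""
    return math.log2(qam_order)

# 4K QAM (Wi-Fi 7) vs 1K QAM (Wi-Fi 6): 12 vs 10 bits/symbol
gain = bits_per_symbol(4096) / bits_per_symbol(1024) - 1.0
```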

Other features of the RivieraWaves Wi-Fi 7 IP include:

  • 2×2 MU-OFDM(A) transmitter and receiver supporting 802.11be, up to MCS13-2SS
  • 2×2 MU-MIMO for client-dense environments
  • Beamforming to maximize link budget in multi-antenna systems
  • Full MAC software stack, supporting the latest security standards including WPA3 and China’s WAPI
  • Compatible with the EasyMesh standard for interconnectivity between different APs
  • Advanced Packet Traffic Arbiter (PTA) functionality for Wi-Fi & Bluetooth connectivity coexistence

RivieraWaves Wi-Fi 7 IP is backward-compatible with previous Wi-Fi generations. It employs a flexible RF interface that enables integration with radio circuitry from multiple Ceva partners or a licensee’s own radio technology.

For more information on the RivieraWaves Wi-Fi IP portfolio, click here.


Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Embeddable IP enables Wi-Fi 7 connectivity appeared first on EDN.

DDR5 clock driver handles 7200 MT/s

Thu, 01/04/2024 - 20:33

A DDR5 registering clock driver (RCD) from Rambus, the RCD4-GA0A, boosts data rates to 7200 MT/s, a significant increase over DDR5 devices operating at 4800 MT/s. Intended for registered DIMMs, the driver chip can accelerate performance in generative AI and other advanced data-center workloads.

The RCD4-GA0A supports a clock rate of up to 3.6 GHz, corresponding to its data rate of up to 7200 MT/s. It also accommodates double data rate (DDR) and single data rate (SDR) buses, providing clocks and command/address (CA) signals to the DRAM devices in registered DIMMs. The driver supports two independent subchannels per registered DIMM and two physical ranks per subchannel. For high-capacity registered DIMMs, the driver supports up to 16 logical ranks per physical rank.
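The clock and transfer rates quoted above are consistent with double-data-rate signaling, where one transfer occurs on each clock edge:

```python
def ddr_transfer_rate_mts(clock_hz):
    """Double data rate: one transfer per clock edge,
    so MT/s = 2 * clock frequency / 1e6."""
    return 2 * clock_hz / 1e6

# A 3.6 GHz clock yields the RCD4-GA0A's 7200 MT/s rating:
rate = ddr_transfer_rate_mts(3.6e9)
```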

Rambus began sampling the RCD4-GA0A registering clock driver to major DDR5 memory module manufacturers during the fourth quarter of 2023. The company also offers a serial presence detect (SPD) hub and temperature sensor for use with its DDR5 RCDs.

For more information about the Rambus lineup of DDR5 chips, click here.




The post DDR5 clock driver handles 7200 MT/s appeared first on EDN.