EDN Network

Subscribe to the EDN Network feed
Voice of the Engineer
URL: https://www.edn.com/
Updated: 14 minutes 22 seconds ago

ATE system tests wireless BMS

Thu, 01/11/2024 - 21:11

Rohde & Schwarz, with technology from Analog Devices, has developed an automated test system for wireless battery management systems (wBMS). The collaboration aims to help the automotive industry adopt wBMS technology and realize its many advantages over wired battery management systems.

The ATE setup performs essential calibration of the wBMS module, as well as receiver, transmitter, and DC verification tests. It covers the entire wBMS lifecycle, from the development lab to the production line. The system comprises the R&S CMW100 radio communication tester, WMT wireless automated test software, and the ExpressTSVP universal test and measurement platform.

R&S and Analog Devices also worked together to develop a record and playback solution for RF robustness testing of the wBMS. During several test drives in various complex RF environments, the R&S FSW signal and spectrum analyzer monitored the RF spectrum and sent it to the IQW wideband I/Q data recorder. For playback of the recorded spectrum profiles, the IQW was connected to the SMW200A vector signal generator.

Analog Devices’ complete wBMS solution, currently in production across multiple EV platforms, complies with the strictest cybersecurity requirements of ISO/SAE 21434 CAL 4. In addition, its RF performance and robustness maximize battery capacity and lifetime values.

Rohde & Schwarz 

Analog Devices



The post ATE system tests wireless BMS appeared first on EDN.

UWB RF switch aids automotive connectivity

Thu, 01/11/2024 - 21:11

A 50-Ω SPDT RF switch from pSemi, the automotive-grade PE423211, covers a frequency range of 300 MHz to 10.6 GHz. The part can be used in Bluetooth LE, ultra-wideband (UWB), ISM, and WLAN 802.11 a/b/g/n/ac/ax applications. Its suitability for BLE and UWB makes the switch particularly useful for secure car access, telematics, sensing, infotainment, in-cabin monitoring systems, and general-purpose switching.

Qualified to AEC-Q100 Grade 2 requirements, the PE423211 operates over a temperature range of -40°C to +105°C. The device combines low power, high isolation, and wide broadband frequency support in a compact 6-lead, 1.6×1.6-mm DFN package. It consumes less than 90 nA and provides ESD performance of 2000 V at HBM levels and 500 V at CDM levels.

The RF switch is manufactured on the company’s UltraCMOS process, a silicon-on-insulator technology. It also leverages HaRP technology enhancement, which reduces gate lag and insertion loss drift.

The PE423211 RF switch is sampling now, with production devices expected in late 2024. A datasheet for the switch was not available at the time of this announcement.

PE423211 product page

pSemi



The post UWB RF switch aids automotive connectivity appeared first on EDN.

Quectel unveils low-latency Wi-Fi 7 modules

Thu, 01/11/2024 - 21:11

The first entries in Quectel’s Wi-Fi 7 module family, the FGE576Q and FGE573Q, deliver fast data rates and low latency for real-time response. Both modules offer Wi-Fi 7 and Bluetooth 5.3 connectivity for use in a diverse range of applications, including smart homes, industrial automation, healthcare, and transportation.

The FGE576Q provides a data rate of up to 3.6 Gbps and operates on dual Wi-Fi bands simultaneously: 2.4 GHz and 5 GHz or 2.4 GHz and 6 GHz. The FGE573Q operates at a maximum data rate of 2.9 Gbps. Both devices feature 4K QAM and multi-link operation (MLO), which enables routers to use multiple wireless bands and channels concurrently when connected to a Wi-Fi 7 client. With Bluetooth 5.3 integration, each module supports LE Audio and a maximum data rate of 2 Mbps, as well as BLE long-range capabilities.

Housed in 16×20×1.8-mm LGA packages, the FGE576Q and FGE573Q operate over a temperature range of -20°C to +70°C. Quectel also offers Wi-Fi/Bluetooth antennas in various formats for use with these modules.

FGE576Q product page

FGE573Q product page

Quectel Wireless Solutions



The post Quectel unveils low-latency Wi-Fi 7 modules appeared first on EDN.

Wi-Fi 7 SoCs garner Wi-Fi Alliance certification

Thu, 01/11/2024 - 21:11

MaxLinear’s Wi-Fi 7 SoC with integrated triband access point has been certified by the Wi-Fi Alliance and selected as a Wi-Fi Certified 7 test bed device. Certification ensures that devices interoperate seamlessly and deliver the high-performance features of the Wi-Fi 7 standard.

The test bed employs the MxL31712 SoC, with the triband access point capable of operating at 2.4 GHz, 5 GHz, and 6 GHz. Well-suited for high-density environments, the access point includes the advanced features of 4K QAM, multi-link operation (MLO), multiple resource units (MRU) and puncturing, MU-MIMO, OFDMA, advanced beamforming, and power-saving enhancements.

MaxLinear’s Wi-Fi Certified 7 SoC family, comprising the triband MxL31712 and dual-band MxL31708, is based on the upcoming IEEE 802.11be standard and delivers peak throughput of 11.5 Gbps on 6-GHz (6E) spectrum. The MxL31712 accommodates up to 12 spatial streams, while the MxL31708 handles up to 8 spatial streams.

To learn more about the Wi-Fi 7 SoCs, click here.

MaxLinear



The post Wi-Fi 7 SoCs garner Wi-Fi Alliance certification appeared first on EDN.

6-DoF inertial sensor improves machine control

Thu, 01/11/2024 - 21:10

The SCH16T-K01 inertial sensor from Murata combines an XYZ-axis gyroscope and XYZ-axis accelerometer in a robust SOIC package. Based on the company’s capacitive 3D-MEMS process, the device achieves centimeter-level accuracy in machine dynamics and position sensing, even in harsh environments.

The SCH16T-K01 provides an angular rate measurement range of ±300°/s and an acceleration measurement range of ±8 g. A redundant digital accelerometer channel offers a dynamic range of up to ±26 g, which offers resistance against saturation and vibration. Gyro bias instability is typically 0.5°/h. According to the company, the component overall exhibits excellent linearity and offset stability over the entire operating temperature range of -40°C to +110°C.

Other features of the industrial sensor include a SafeSPI V2.0 digital interface, self-diagnostics, and options for output interpolation and decimation. Housed in a 12×14×3-mm, 24-pin SOIC plastic package, the SCH16T-K01 is suitable for lead-free soldering and SMD mounting.

SCH16T-K01 product page

Murata



The post 6-DoF inertial sensor improves machine control appeared first on EDN.

The 2024 CES: It’s “AI everywhere”, if you hadn’t already guessed

Thu, 01/11/2024 - 17:23

This year’s CES officially runs from today (as I write these words), Tuesday, January 9 through Friday, January 12. So why, you might ask, am I committing my coverage to cyber-paper on Day 1, only halfway through it, in fact? That’s because CES didn’t really start just today. The true official kickoff, at least for media purposes, was Sunday evening’s CES Unveiled event, which is traditionally reminiscent of a Japanese subway car, or if you prefer, a Las Vegas Monorail:

Yesterday was Media Day, where the bulk of the press releases and other announcement paraphernalia was freed from its prior corporate captivity for public perusal:

And some companies “jumped the gun”, announcing last week or even prior to the holidays, in attempting to get ahead of the CES “noise”. So, the bulk of the news is already “in the wild”; all that’s left is for the huddled masses at the various Convention Centers and other CES-allotted facilities to peruse it as they aimlessly wander zombie-like from booth to booth in search of free tchotchkes (can you tell how sad I am to not be there in person this year? Have I mentioned the always-rancid restrooms yet? Or, speaking of which, the wastewater-suggestive COVID super-spreader potential? Or…). Plus, it enables EDN to get my writeup up on the website and in the newsletters earlier than would otherwise be the case. I’ll augment this piece with comments and/or do follow-on standalone posts if anything else notable arrives before end-of-week.

AI (nearly) everywhere

The pervasiveness of AI wasn’t a surprise to me, and likely wasn’t to you, either. Two years ago, after all, I put to prose something that I’d much earlier believed was inevitable, ever since I saw an elementary live demo of deep learning-based object recognition (accelerated by the NVIDIA GPU in his laptop) from Yann LeCun, Director of AI Research at Facebook and a professor at New York University, at the May 2014 Embedded Vision Summit:

One year later (and one year ago), I amped up my enthusiasm in discussing generative AI in its myriad implementation forms, a topic which I revisited just a few months ago. And just about a week ago, I pontificated on the exploding popularity of AI-based large language models. It takes a while for implementation ideas to turn into prototypes, not to mention for them to further transition to volume production (if they make it that far at all, that is), so this year’s CES promised to be the “fish or cut bait” moment for companies run by executives who’d previously only been able to shoehorn the “AI” catchphrase into every earnings briefing and elevator pitch.

So this week we got, among other things, AI-augmented telescopes (a pretty cool idea, actually, says this owner of a conventional Schmidt-Cassegrain scope with an 8” primary mirror). We got (I’m resisting inserting a fecal-themed adjective here, but only barely) voice-controllable bidet seats, although as I was reminded of in doing the research for this piece, the concept isn’t new, just the price point (originally ~$10,000, now “only” ~$2,000, although the concept still only makes me shudder). And speaking of fecund subjects, AI brings us “smart” cat doors that won’t allow Fluffy to enter your abode if it’s carrying a recently killed “present” in its mouth. Meow.

Snark aside, I have no doubt that AI will also sooner-or-later deliver a critical mass of tangibly beneficial products. I’ll save further discussion of the chips, IP cores, and software that fundamentally enable these breakthroughs for a later section. For now, I’ll just highlight one technology implementation that I find particularly nifty: AI-powered upscaling. Graphics chips have leveraged conventional upscaling techniques for a while now, for understandably beneficial reasons: they can harness a lower-performance polygons-to-pixels “engine” (along with employing less dedicated graphics memory) than would otherwise be needed to render a given resolution frame, then upscale the pixels before sending them to the screen. Dedicated-function upscaling devices (first) and integrated upscaling ICs in TVs (later) have done the same thing for TVs, as long-time readers may recall, again using conventional “averaging” and other approaches to create the added intermediary pixels between “real” ones.

But over the past several years, thanks to the massive, function-flexible parallelism now available in GPUs, this upscaling is increasingly now being accomplished using more intelligent deep learning-based algorithms, instead. And now, so too with TVs. This transition is, I (perhaps simplistically) believe, fundamentally being driven by necessity. TV suppliers want to sell us ever-larger displays. But regardless of how many pixels they also squeeze into each panel, the source material’s resolution isn’t increasing at the same pace…4K content is still the exception, not the norm, and especially if you sit close and/or if the display is enormous, you’re going to see the individual pixels if they’re not upscaled and otherwise robustly processed.

See-through displays: pricey gimmick or effective differentiator?

Speaking of TVs…bigger (case study: TCL’s 115” monstrosity), thinner, faster-refreshing (case study: LG’s 480 Hz refresh-rate OLED…I’ll remind readers of my longstanding skepticism regarding this particular specification, recently validated by Vizio’s class action settlement) and otherwise “better” displays were as usual rife around CES. But I admittedly was surprised by another innovation, which LG’s suite reportedly most pervasively exemplified, with Samsung apparently a secondary participant: transparent displays. I’m a bit embarrassed to admit this, but so-called “See-through Displays” (to quote Wikipedia vernacular) have apparently been around for a few years now; this is the first time they’ve hit my radar screen.

Admittedly, they neatly solve (at least somewhat) a problem I identified a while back; ever-larger displays increasingly dominate the “footprint” of the room they’re installed in, to the detriment of…oh…furniture, or anything else that the room might otherwise also contain. A panel that can be made transparent (with consequent degradation of contrast ratio, dynamic range, and other image quality metrics, but you can always re-enable the solid background when those are important) at least creates the illusion of more empty room space. LG’s prototypes are OLED-based and don’t have firm prices (unless “very expensive” is enough to satisfy you) or production schedules yet. Samsung claims its MicroLED-based alternative approach is superior but isn’t bothering to even pretend that what it’s showing are anything but proof-of-concepts.

High-end TV supplier options expand and abound

Speaking of LG and Samsung…something caught my eye amidst the flurry of news coming through my various Mozilla Thunderbird-enabled RSS feeds this week. Roku announced a new high-end TV family, implementing (among other things) the aforementioned upscaling and other image enhancement capabilities. What’s the big deal, and what’s this got to do with LG and Samsung? Well, those two were traditionally the world’s largest LCD TV panel suppliers, by a long shot. But nowadays, China’s suppliers are rapidly expanding in market share, in part because LG and Samsung are instead striving to move consumers to more advanced display technologies, such as the aforementioned OLED and microLED, along with QLED (see my post-2019 CES coverage for more details on these potential successors).

LG and Samsung manufacture not only display panels but also TVs based on them, of course, and historically they’d likely be inclined to save the best panels for themselves. But now, Roku is (presumably) being supplied by Chinese panel manufacturers who don’t (yet, at least) have the brand-name recognition to be able to sell their own TVs to the US and other Western markets. And Roku apparently isn’t afraid (or maybe it’s desperation?) to directly challenge TV suppliers such as LG and Samsung, whom it’d previously aspired to have as partners integrating support for its streaming platform. Interesting.

Premium smartphones swim upstream

Speaking of aspiring for the high end…a couple of weeks ago, I shared my skepticism regarding any near-term reignition of new smartphone sales. While I’m standing by that premise in a broad sense, there is one segment of the market that seemingly remains healthy, at least comparatively: premium brands and models. Thereby explaining, for example, Qualcomm’s latest high-end Snapdragon 8 Gen 3 SoC platform, unveiled last October. And similarly explaining the CES-launched initial round of premium smartphones based on the Snapdragon 8 Gen 3 and competitive chipsets from companies like Apple and MediaTek.

Take, for example, the OPPO Find X7 Ultra. Apple’s iPhone 15 Pro Max might have one periscope lens, but OPPO’s new premium smartphone has two! Any sarcasm you might be sensing is intentional, by the way…that said, keep in mind that I’m one of an apparently dying breed of folks who’s still fond of standalone cameras, and that I also take great pride in not acquiring the latest-and-greatest smartphones (or brand-new ones at all, for that matter).

Wi-Fi gets faster and more robust…and slower but longer distance

Speaking of wireless communications…Wi-Fi 7 (aka IEEE 802.11be), the latest version of the specification from the Wi-Fi Alliance, was officially certified this week. Predictably, as with past versions of the standard, manufacturers had jumped the gun and begun developing and sampling chipsets (and systems based on them) well ahead of this milestone; hopefully all the equipment already out there based on “draft” specs will be firmware-upgradeable to the final version. In brief, Wi-Fi 7 builds on Wi-Fi 6 (aka IEEE 802.11ax), which had added support for both MU-MIMO and OFDMA, and Wi-Fi 6E, which added support for the 6 GHz license-exempt band, with several key potential enhancements:

  • Wider channels: up to 80 MHz in the 5 GHz band (vs 20 MHz initially) and up to 320 MHz in the 6 GHz band (vs 160 MHz previously)
  • Multi-link operation: the transmitter-to-receiver connection can employ multiple channels in multiple bands simultaneously, for higher performance and/or reliability
  • Higher QAM levels for denser data packing: 4K-QAM, versus 1,024-QAM with Wi-Fi 6 and 256-QAM in Wi-Fi 5.

The key word in all of this, of course, is “potential”. The devices on both ends of the connection must both support Wi-Fi 7, first and foremost, otherwise it’ll down-throttle to a lower version of the standard. Wide channel usage is dependent on spectrum availability, and the flip side of the coin is also relevant: its usage may also adversely affect other ISM-based devices. And QAM level relevance is fundamentally defined by signal strength and contending interference sources…i.e., 4K-QAM is only relevant at close range, among other factors.
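
For a sense of where the headline Wi-Fi 7 numbers come from, here’s a back-of-envelope calculation (the subcarrier count and symbol duration below are the nominal 802.11be OFDM figures, quoted from memory rather than the spec, so treat them as assumptions): one spatial stream on a 320 MHz channel carries roughly 3,920 data subcarriers × 12 bits (4K-QAM) × 5/6 coding ÷ 13.6 µs per symbol ≈ 2.9 Gbps. Sixteen streams would give the theoretical ~46 Gbps peak you’ll see in marketing material; real client devices with two to four streams, narrower channels, and less aggressive modulation land far below that, which is exactly the “potential” caveat above.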

That said, Wi-Fi’s slower but longer range sibling, Wi-Fi HaLow (aka IEEE 802.11ah), which also had its coming-out party at CES this year, is to me actually the more interesting wireless communication standard. The key word here is “standard”. Long-time readers may remember my earlier discussions of my Blink outdoor security camera setup. Here’s a relevant excerpt from the premier post in the series:

A Blink system consists of one or multiple tiny cameras, each connected both directly to a common router or to an access point intermediary (and from there to the Internet) via Wi-Fi, and to a common (and equally diminutive) Sync Module control point (which itself then connects to that same router or access point intermediary via Wi-Fi) via a proprietary “LFR” long-range 900 MHz channel.

The purpose of the Sync Module may be non-intuitive to those of you who (like me) have used standalone cameras before…until you realize that each camera is claimed to be capable of running for up to two years on a single set of two AA lithium cells. Perhaps obviously, this power stinginess precludes continuous video broadcast from each camera, a “constraint” which also neatly preserves both available LAN and WAN bandwidth. Instead, the Android or iOS smartphone or tablet app first communicates with the Sync Module and uses it to initiate subsequent transmission from a network-connected camera (generic web browser access to the cameras is unfortunately not available, although you can also view the cameras’ outputs from either a standalone Echo Show or Spot, or a Kindle Fire tablet in Echo Show mode).

In summary, Wi-Fi HaLow takes that proprietary “LFR” long-range 900 MHz channel and makes it industry-standard. One of the first Wi-Fi HaLow products to debut this week was Abode Systems’ Edge Camera, developed in conjunction with silicon partner Morse Micro and software partner Xailent, which will enter production later this quarter at $199.99 and touts a 1.5-mile broadcast range and one year of operating life from its integrated 6,000 mAh rechargeable Li-ion battery. The broader implications of the technology for IoT and other apps are intriguing.

Does Matter (along with Thread, for that matter) matter?

Speaking of networking…the Matter smart home communication standard, built on the foundation of the Thread wireless protocol (which shares its IEEE 802.15.4 radio underpinnings with Zigbee), had no shortage of associated press releases and product demos in Las Vegas this week. But to date, its implementation has been underwhelming (leading to a scathing but spot-on recent diatribe from The Verge, among other pieces), both in comparison to its backers’ rosy projections and its true potential.

Not that any of this was a surprise to me, alas. Consider that the fundamental premise of Matter and Thread was to unite the now-fragmented smart home device ecosystem exemplified by, for example, the various Belkin WeMo devices currently residing in my abode. If you’re an up-and-coming startup in the space, you love industry standards, because they lower your market-entry barriers versus larger, more established competitors. Conversely, if you’re one of those larger, more established suppliers, you love barriers to entry for your competitors. Hence the lukewarm-at-best (and more frequently, nonexistent or flat-out broken) embrace of Matter and Thread by legacy smart home technology and product suppliers (for which, to be precise, and as my earlier Blink example exemplifies, even conventional web browser access, vs a proprietary app, is a bridge too far).

I’ll have more to say on Matter and Thread in a dedicated-topic post to come. But suffice it to say that I’m skeptical about their long-term prospects, albeit only cautiously so. I just don’t know what it might take to break the logjam that understandably prevents competitors from working together, in spite of the reality that a rising tide often does end up lifting all boats…or if you prefer, it’s often better to get a slice of a large pie versus the entirety of a much smaller pie. I’d promise to turn metaphors off at this point, but then there’s the title of the next section…

The Apple-ephant in the room

Speaking of standards…Apple, as far as I know, has never had a show floor, hospitality suite or other formal presence at CES, although I’m sure plenty of company employees attend, scope out competitors’ wares and meet with suppliers (and of course, there are plenty of third-party iPhone case suppliers and the like showing off their latest-and-greatest). That said, Apple still regularly casts a heavy pall over the event proceedings by virtue of its recently announced, already-public upcoming, and rumored planned product and service offerings. Back in 2007, for example, the first-generation iPhone was all that anyone could talk about. And this year, it was the Vision Pro headset, which Apple announced on Monday (nothing like pre-empting CES, eh?) would be open for pre-sale beginning next week, with shipments starting on February 2:

The thematic commonality with the first iPhone commercial was, I suspect, not by accident:

What’s the competitive landscape look like? Well, in addition to Qualcomm’s earlier mentioned Snapdragon 8 Gen 3 SoC for premium smartphones, the company more recently (a few days ago, to be precise) unveiled a spec-bumped “+” variant of its XR2 Gen 2 SoC for mixed-reality devices, several of which were on display at the show. There was, for example, the latest-generation XREAL augmented reality (AR) glasses, along with an upcoming (and currently unnamed) standalone head-mounted display (HMD) from Sony. The latter is particularly interesting to me…it was seemingly (and likely obviously) rushed to the stage to respond to Apple’s unveil, for one thing. Sony’s also in an interesting situation, because it first and foremost wants to preserve its lucrative game console business, for which it already offers several generations of VR headsets as peripherals (thereby explaining why I earlier italicized “standalone”). Maybe that’s why development partner Siemens is, at least for now, positioning it as intended solely for the “industrial metaverse”?

The march of the semiconductors

Speaking of ICs…in addition to the announcements I’ve already mentioned, the following vendors (and others as well; these are what caught my eye) released chips and/or software packages:

The rest of the story

I’m a few words shy of 3,000 at this point, and I’m not up for incurring Aalyia’s wrath, so I’ll only briefly mention other CES 2024 announcements and trends that particularly caught my eye:

And with that, pushing beyond 3,100 words (and pushing my luck with Aalyia in the process) I’ll sign off. Sound off with your thoughts in the comments, please!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post The 2024 CES: It’s “AI everywhere”, if you hadn’t already guessed appeared first on EDN.

More gated 555 astable multivibrators hit the ground running

Wed, 01/10/2024 - 17:23

Addressing the long-first-pulse malady in less traditional 555 astable topologies, including CMOS- and bipolar-based oscillators that generate 50:50 symmetrical square waves.

A previous Design Idea, “Gated 555 astable hits the ground running” offered a fix for the problem of the excessively long first pulse that’s generated by traditional topology 555 astable circuits on start up when gated by the RESET pin from oscillation-off to oscillation-on. See Figure 1 and Figure 2.

Figure 1 The problem—the first oscillation cycle has a too-long first pulse on start-up, when gated by the RESET pin from oscillation-off to oscillation-on.

Wow the engineering world with your unique design: Design Ideas Submission Guide

 

Figure 2 The fix via C2 charge injection on oscillation startup to equalize pulse length.

However, unaddressed in that design idea is the fact that less traditional 555 astable topologies also suffer from the same long-first-pulse malady. Important examples of such circuits are oscillators that generate 50:50 symmetrical square waves, such as Figure 3.

Figure 3 The long first-pulse problem also occurs in a 50:50 square wave topology popular for CMOS 555s.

Happily, the same fix from “Gated 555 astable hits the ground running” works in this genre of oscillators too, as illustrated in Figure 4.

Figure 4 C2 charge injection fix applied to CMOS 50:50 square wave oscillator.

So, the problem is solved for CMOS 555 square wave generators. But what about their bipolar kin?

Despite their age, bipolar 555s still get designed into contemporary applications. The reasons for the choice include advantages like higher supply voltage rating (18 V vs 15 V) and greater output current capability (hundreds vs tens of mA) than CMOS types. But they do need to be wired up somewhat differently—for example with an extra resistor (as described in a previous Design Idea “Add one resistor to give bipolar LM555 oscillator a 50:50 duty cycle“)—when a 50:50 square wave output is required. See Figure 5.

Figure 5 Bipolar 555 in gated 50:50 square wave configuration.

The C2 charge injection trick will still work to correct Figure 5’s first pulse, but there’s a complication. When held reset, Figure 5’s circuit doesn’t discharge the timing capacitor all the way to zero, but only to Vz where:

Vz = R3 / (R2 + R3) × V+ = 0.184 V+

Therefore, our old friend C2 = C1/2 won’t work. What’s needed is a smaller charge injection from a smaller C2 = 0.175 C1 as Figure 6 shows.
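
For readers wondering where 0.175 comes from, one charge-sharing model reproduces both published ratios (this is a reconstruction, not taken from the article’s figures): if releasing RESET lets C2 inject a voltage step of C2/(C1 + C2) × V+ onto the timing capacitor, then starting from 0 V the original fix needs a step of V+/3, so C2/(C1 + C2) = 1/3 and C2 = C1/2. Starting from Vz = 0.184 V+ instead, only (1/3 − 0.184) V+ ≈ 0.149 V+ is needed, so C2/(C1 + C2) ≈ 0.149 and C2 ≈ 0.175 C1.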

Figure 6 C2 charge injection first-pulse fix modified for bipolar 555 square wave generation.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content


The post More gated 555 astable multivibrators hit the ground running appeared first on EDN.

CES 2024: Creating a frugal code in embedded software

Wed, 01/10/2024 - 12:29

At CES 2024, a French startup is presenting the notion of frugal code in embedded software by identifying and quantifying the optimization potential of the code. WedoLow, a spinoff from three research laboratories—IETR, INSA, and Inria in Rennes, France—will demonstrate how its automated software solution works for automotive applications ranging from advanced driver assistance systems (ADAS) to autonomous driving (AD) to in-vehicle infotainment systems.

WedoLow claims that its solution addresses complexity in embedded software by diagnosing and checking the code rapidly throughout the development process. That’s how it determines whether the code is fully optimized and whether gains can be obtained in terms of execution speed or energy consumption.

Source: WedoLow

Complexification of code in embedded software

At a time when applications are becoming larger and code bases increasingly voluminous and complex, embedded systems are obviously no exception. That inevitably complicates the work of developers, who now face a growing risk of delays, with consequences for the efficiency and performance of their applications.

According to a 2020 survey from Sourcegraph, 51% of developers say they have more than 100 times the volume of code they had 10 years ago. Furthermore, 92% of developers say the pressure to release software faster has increased.

Take the case of the automotive industry, where cars have 200 million lines of code today and are expected to have 650 million by 2025. According to a McKinsey report titled “Outlook on the automotive software and electronics market through 2030,” the automotive software market is already worth more than $31 billion and is forecast to reach around $80 billion by 2030.

The use of embedded software in the automotive sector has been constantly increasing since the introduction of anti-lock braking system (ABS) more than 40 years ago. So, gains in embedded software’s speed of execution and energy consumption will result in more responsive systems and longer battery life, which are crucial aspects for electric and autonomous mobilities.

How software works

WedoLow claims that its beLow software suite enables developers to understand the structure of a code and identify the parts that can be rewritten to generate more efficiency and performance. It’s enabled by optimization techniques that identify and quantify the potential optimization of the code at any stage of its development.

Developers build a line-by-line or function-by-function optimization strategy and obtain optimized code rapidly and automatically. For example, WedoLow quotes a 23% gain in execution speed on the filtering of signals emitted by sensors on a road vehicle transmission system. It also helped achieve a 95% gain in execution speed on the processing of data and filtering of signals emitted by different sensors in battery management system (BMS) software.

Besides embedded software, WedoLow also aims to address the hosted software segment for server and cloud applications. Here, the French startup conducted a test with an aerospace group on the processing of satellite images, reducing the software’s energy consumption by 18%.

WedoLow is presenting its frugal code solution at CES 2024; product launch is scheduled in the second quarter of 2024.

Related Content


The post CES 2024: Creating a frugal code in embedded software appeared first on EDN.

Power Tips #124: How to improve the power factor of a PFC

Tue, 01/09/2024 - 18:30

Introduction

In Power Tips #116, I talked about how to reduce the total harmonic distortion (THD) of a power factor correction (PFC) circuit. In this power tip, I will talk about another important criterion used to evaluate PFC performance: the power factor, defined as the ratio of real power in watts to the apparent power, which is the product of the root mean square (RMS) current and RMS voltage in volt-amperes, as shown in Equation 1:

Power factor = P / (VRMS × IRMS)     (Equation 1)

The power factor indicates how efficiently energy is drawn from the AC source. With a poor power factor, a utility needs to generate more current than the electrical load actually needs, which causes elements such as breakers and transformers to overheat, in turn reducing their life span and increasing the cost of maintaining a public electrical infrastructure.

Ideally, the power factor should be 1; then the load appears as a resistor to the AC source. However, in the real world, electrical loads not only cause distortions in AC current waveforms, but also make the AC current either lead or lag the AC voltage, resulting in a poor power factor. For this reason, you can calculate the power factor by multiplying the distortion power factor by the displacement power factor, as shown in Equation 2:

Power factor = cos(φ) / √(1 + THD²)     (Equation 2)

where φ is the phase angle between the current and voltage and THD is the total harmonic distortion of the current.

As the THD requirement gets lower, the power factor requirement gets higher. Table 1 lists the power factor requirements in the recently released Modular Hardware System-Common Redundant Power Supply (M-CRPS) base specification.

Output power:   10% load    20% load    50% load    100% load
Power factor:   >0.92       >0.96       >0.98       >0.99

Table 1 M-CRPS power factor requirements

Equation 2 shows that to improve the power factor, the first thing to do is to reduce the THD (which I discussed in Power Tips #116). However, a low THD does not necessarily mean that the power factor is high. If the PFC AC input current and AC input voltage are not in phase, even if the current is a perfect sine wave (low THD), the phase angle φ will result in a power factor less than 1.
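
As a quick numeric illustration of Equation 2: with a respectable 5% current THD, the distortion term is 1/√(1 + 0.05²) ≈ 0.9988, so meeting the M-CRPS 100%-load limit of >0.99 still requires cos φ > 0.991, that is, a phase shift between the input current and voltage of less than about 7.6 degrees.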

The phase difference between the input current and input voltage is mainly caused by the electromagnetic interference (EMI) filter used in the PFC. Figure 1 shows a typical PFC circuit diagram that consists of three major parts: an EMI filter, a diode bridge rectifier, and a boost converter.

Figure 1 Circuit diagram of a typical PFC that consists of an EMI filter, diode bridge rectifier, and a boost converter. Source: Texas Instruments

In Figure 1, C1, C2, C3, and C4 are EMI X-capacitors. Inductors in the EMI filter do not change the phase of the PFC input current; therefore, it is possible to simplify Figure 1 into Figure 2, where C is now a combination of C1, C2, C3 and C4.

Figure 2 Simplified EMI filter where C is a combination of C1, C2, C3, and C4. Source: Texas Instruments

The X-capacitor causes the AC input current to lead the AC voltage, as shown in Figure 3. The PFC inductor current is iL(t), the input voltage is vAC(t), and the X-capacitor reactive current is iC(t). The total PFC input current is iAC(t), which is also the current at which the power factor is measured. Although the PFC current control loop forces iL(t) to follow vAC(t), the reactive current iC(t) leads vAC(t) by 90 degrees, which causes iAC(t) to lead vAC(t). The result is a poor power factor.

This effect is amplified at a light load and high line, as iC(t) takes a larger share of the total current. As a result, it is difficult for the power factor to meet a rigorous specification such as the M-CRPS specification.

Figure 3 X-capacitor current iC(t) causes the AC current to lead the AC voltage. Source: Texas Instruments
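
To put rough numbers on it (the values here are illustrative, not taken from any particular reference design): 1 µF of total X-capacitance at 264 VAC, 60 Hz draws about 2π × 60 Hz × 1 µF × 264 V ≈ 0.1 A RMS of purely reactive current. If the converter draws only 0.2 A of real input current at light load and high line, that fixed 0.1 A shifts the total input current by roughly arctan(0.1/0.2) ≈ 27 degrees and drags the displacement power factor down toward 0.9.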

Fortunately, with a digital controller, you can solve this problem through one of the following methods.

Method #1

Since iC(t) makes the total current lead the input voltage, if you can force iL(t) to lag vAC(t) by some degree, as shown in Figure 4, then the total current iAC(t) will be in phase with the input voltage, improving the power factor.

Figure 4 Forcing iL(t) to lag vAC(t) so that the total current iAC(t) will be in phase with the input voltage. Source: Texas Instruments

Since the current loop forces the inductor current to follow its reference, to let iL(t) lag vAC(t), the current reference needs to lag vAC(t). For a PFC with traditional average current-mode control, the current reference is generated by Equation 3:

Current reference = A × B × C     (Equation 3)

where A is the voltage-loop output, B equals 1/VAC_RMS², and C is the sensed input voltage vAC(t).

To delay the current reference, an analog-to-digital converter (ADC) measures vAC(t), and the measurement results are stored in a circular buffer. Then, instead of using the newest input voltage (VIN) data, Equation 3 uses previously stored VIN data to calculate the current reference for the present moment. The current reference will lag vAC(t); the current loop will then make iL(t) lag vAC(t). This compensates for the leading X-capacitor current iC(t) and improves the power factor.

The delay period needs dynamic adjustment based on the input voltage and output load. The lower the input voltage and the heavier the load, the shorter the delay needed. Otherwise, iL(t) will be over-delayed, making the power factor worse than if there were no delay at all. To solve this problem, use a look-up table to precisely and dynamically adjust the delay time based on the operating condition.
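
A minimal C sketch of Method #1 appears below. It assumes a fixed-rate current-loop ISR; the buffer length, delay values, and line/load binning are illustrative placeholders, not values from any TI reference design.

/* Method #1 sketch: delay the current reference by reading the input-voltage
 * sample from a circular buffer instead of using the newest one. */
#include <stdint.h>

#define VIN_BUF_LEN 64               /* must exceed the longest delay in ISR ticks */

static float vin_buf[VIN_BUF_LEN];   /* circular buffer of sensed vAC(t) samples */
static uint32_t wr_idx = 0;

/* Hypothetical look-up table: delay (in ISR ticks) vs. line and load bins.
 * Lower line and heavier load need a shorter delay, per the text above. */
static const uint8_t delay_table[3][3] = {
    /* load:   light  mid  heavy */
    {            12,   8,    4 },    /* low line     */
    {            18,  12,    6 },    /* nominal line */
    {            24,  16,    8 },    /* high line    */
};

/* Called from the current-loop ISR with the latest VIN sample; line_bin and
 * load_bin are 0..2. Returns the delayed VIN value used in Equation 3. */
float pfc_delayed_vin(float vin_now, unsigned line_bin, unsigned load_bin)
{
    vin_buf[wr_idx] = vin_now;

    uint8_t delay = delay_table[line_bin][load_bin];
    uint32_t rd_idx = (wr_idx + VIN_BUF_LEN - delay) % VIN_BUF_LEN;

    wr_idx = (wr_idx + 1) % VIN_BUF_LEN;
    return vin_buf[rd_idx];          /* i_ref = A * B * vin_delayed */
}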

Method #2

Since a poor power factor is caused mainly by the EMI X-capacitor current iC(t), if you calculate iC(t) for a given X-capacitor value and input voltage, then subtract iC(t) from the ideal total input current to form a new current reference for the PFC current loop, you will get a total input current that is in phase with the input voltage and can achieve a good power factor.

To explain in detail, for a PFC with a unity power factor, iAC(t) is in phase with vAC(t). Equation 4 expresses the input voltage:

vAC(t) = VAC × sin(2πft)     (Equation 4)

where VAC is the VIN peak value and f is the VIN frequency. The ideal input current then needs to be totally in phase with the input voltage, expressed by Equation 5:

iAC(t) = IAC × sin(2πft)     (Equation 5)

where IAC is the input current peak value.

Since the capacitor current is iC(t) = C × dvAC(t)/dt, see Equation 6:

iC(t) = 2πfC × VAC × cos(2πft)     (Equation 6)

Equation 7 comes from Figure 2:

iL(t) = iAC(t) − iC(t)     (Equation 7)

Combining Equations 5, 6 and 7 results in Equation 8:

iL(t) = IAC × sin(2πft) − 2πfC × VAC × cos(2πft)     (Equation 8)

If you use Equation 8 as the current reference for the PFC current loop, you can fully compensate the EMI X-capacitor current iC(t), achieving a unity power factor. In Figure 5, the blue curve is the waveform of the preferred input current, iAC(t), which is in phase with vAC(t). The green curve is the capacitor current, iC(t), which leads vAC(t) by 90 degrees. The dotted black curve is iAC(t) − iC(t). The red curve is the rectified iAC(t) − iC(t). In theory, if the PFC current loop uses this red curve as its reference, you can fully compensate the EMI X-capacitor current and increase the power factor.

Figure 5 New current reference with iAC(t) (blue), iC(t) (green), iAC(t) − iC(t) (dotted black), and rectified iAC(t) − iC(t) (red). Source: Texas Instruments

To generate the current reference as shown in Equation 8, you’ll first need to calculate the EMI X-capacitor reactive current, iC(t). Using a digital controller, an ADC samples the input AC voltage, which the CPU then reads in the interrupt loop routine at a fixed rate. By counting how many ADC samples fall between two consecutive AC zero crossings, Equation 9 determines the frequency of the input AC voltage:

f = fisr / (2 × N)     (Equation 9)

where fisr is the frequency of the interrupt loop and N is the total number of ADC samples between two consecutive AC zero crossings.

To get the cosine waveform cos(2πft), a software phase-locked loop generates an internal sine wave that is synchronized with the input voltage, making it possible to obtain the cosine waveform. Use Equation 6 to calculate iC(t), then subtract it from the ideal input current of Equation 5, per Equation 7, to get the new current reference.
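
Below is a minimal C sketch of Method #2. The ISR rate, the X-capacitance value, and the assumption that the software PLL supplies the line angle θ = 2πft are illustrative placeholders, not part of any TI library.

/* Method #2 sketch: build the reference as the ideal current minus the
 * estimated X-capacitor current (Equations 5, 6, and 8). */
#include <math.h>
#include <stdint.h>

#define F_ISR   50000.0f             /* interrupt-loop rate in Hz (assumed) */
#define C_X     1.0e-6f              /* total EMI X-capacitance in farads (assumed) */
#define TWO_PI  6.28318530718f

/* Equation 9: two consecutive zero crossings span half a line cycle,
 * so f = fisr / (2 * N), with N the ISR-tick count between crossings. */
float line_frequency(uint32_t n_ticks_between_crossings)
{
    return F_ISR / (2.0f * (float)n_ticks_between_crossings);
}

/* theta: line angle 2*pi*f*t from the software PLL; f_line: measured line
 * frequency; vac_peak: measured VIN peak; iac_peak: ideal input-current peak
 * from the voltage loop. Returns the signed reference of Equation 8. */
float pfc_current_ref(float theta, float f_line, float vac_peak, float iac_peak)
{
    float i_ideal = iac_peak * sinf(theta);                         /* Equation 5 */
    float i_cap   = TWO_PI * f_line * C_X * vac_peak * cosf(theta); /* Equation 6 */
    return i_ideal - i_cap;                                         /* Equation 8 */
}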

Reshaping the current reference at the AC zero crossing area

These two methods let iL(t) lag vAC(t) in order to improve the power factor; however, they may cause extra current distortion at the AC zero crossing. See Figure 6. Because of the diode bridge rectifier used in the PFC power stage, the diodes will block any reverse current. Referencing Figure 6, during T1 and T2, vAC(t) is in the positive half cycle, but the expected iL(t) (the dotted black line) is negative. This is not possible, however, because the diodes will block the negative current, so the actual iL(t) remains zero during this period. Similarly, during T3 and T4, vAC(t) becomes negative, but the expected iL(t) is still positive. iL(t) will again be blocked by the diodes and remains at zero.

Correspondingly, the current reference needs to be zero during these two periods; otherwise the integrator in the control loop will build up. When the two periods are over and current starts to conduct, the control loop then generates a PWM duty cycle bigger than required, causing current spikes. The red curve in Figure 6 shows what the actual iL(t) would be with a diode bridge, and this red curve should be used as the current reference for the PFC current loop.

Figure 6 Final current reference curve where the red curve shows what the actual iL(t) would be with a diode bridge and should be used as the current reference for the PFC current loop. Source: Texas Instruments
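
In firmware, the reshaping amounts to a small clamp on the signed reference from the sketch above; one possible form (again illustrative, not TI’s code) is:

/* Pass the reference only when the expected current has the same polarity as
 * the line voltage; the bridge diodes block it otherwise (Figure 6). */
float pfc_rectified_ref(float vac_signed, float i_ref_signed)
{
    float r = (vac_signed >= 0.0f) ? i_ref_signed : -i_ref_signed;
    return (r > 0.0f) ? r : 0.0f;    /* hold at zero during T1-T2 and T3-T4 */
}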

Optimizing power factor

A poor power factor is mainly caused by the X-capacitor used in the PFC EMI filter, but it is possible to compensate for the effect of the X-capacitor’s reactive current by delaying the inductor current. Using either of the two methods described here to delay the inductor current, combined with the guidance in Power Tips #116, you can meet both a high power factor and a low THD requirement.

Bosheng Sun is a systems, applications and firmware engineer at Texas Instruments.

Related Content


The post Power Tips #124: How to improve the power factor of a PFC appeared first on EDN.

Chips taking generative AI to edge reach CES floor

Tue, 01/09/2024 - 15:20

A new system-on-chip (SoC) demonstrated at CES 2024 in Las Vegas claims to run multi-modal large language models (LLMs) at a fraction of the power-per-inference of leading GPU solutions. Ambarella is targeting this SoC to bring generative AI to edge endpoint devices and on-premise hardware in video security analysis, robotics, and a multitude of industrial applications.

According to the Santa Clara, California-based chip developer, its N1 series SoCs are up to 3x more power-efficient per generated token than GPUs and standalone AI accelerators. Ambarella will initially offer optimized generative AI processing capabilities on its mid to high-end SoCs for on-device performance under 5W. It’ll also release a server-grade SoC under 50 W in its N1 series.

Generative AI will be a step function for computer vision processing that brings context and scene understanding to a variety of devices such as security installations and autonomous robots. Source: Ambarella

Ambarella claims that its SoC architecture is natively suited to process video and AI simultaneously at very low power. So, unlike a standalone AI accelerator, they carry out highly efficient processing of multi-modal LLMs while still performing all system functions. Examples of the on-device LLM and multi-modal processing enabled by these SoCs include smart contextual searches of security footage, robots that can be controlled with natural language commands, and different AI helpers that can perform anything from code generation to text and image generation.

Les Kohn, CTO and co-founder of Ambarella, says that generative AI networks are enabling new functions across applications that were just not possible before. “All edge devices are about to get a lot smarter with chips enabling multi-modal LLM processing in a very attractive power/price envelope.”

Alexander Harrowell, principal analyst for advanced computing at Omdia, agrees with the above notion and sees virtually every edge application getting enhanced by generative AI in the next 18 months. “When moving generative AI workloads to the edge, the game becomes all about performance per watt and integration with the rest of the edge ecosystem, not just raw throughput,” he added.

The AI chips are supported by the company’s Cooper Developer Platform, where Ambarella has pre-ported and optimized popular LLMs. That includes Llama-2 as well as the Large Language and Vision Assistant (LLaVA) model running on N1 SoCs for multi-modal vision analysis of up to 32 camera sources. These pre-trained and fine-tuned models will be available for chip developers to download from the Cooper Model Garden.

Ambarella also claims that its N1 SoCs are highly suitable for application-specific LLMs, which are typically fine-tuned on the edge for each scenario. That’s unlike the classical server approach of using bigger and more power-hungry LLMs to cater to every use case.

With these features, Ambarella is confident that its chips can help OEMs quickly deploy generative AI into any power-sensitive application ranging from an on-premise AI box to a delivery robot. The company will demonstrate its SoC solutions for AI applications at CES in Las Vegas on 9-12 January 2024.

Related Content


The post Chips taking generative AI to edge reach CES floor appeared first on EDN.

Simple log-scale audio meter

Mon, 01/08/2024 - 17:48

While refurbishing an ageing audio mixer, I decided that the level meters needed special attention. Their rather horrible 100 µA edgewise movements had VU-type scales, the drive electronics being just a diode (germanium?) and a resistor. Something more like a PPM, with a logarithmic (or linear-in-dBs) scale and better dynamics, was needed. This is a good summary of the history and specifications of both PPMs and VU meters.

Wow the engineering world with your unique design: Design Ideas Submission Guide

It occurred to me that since both peak detection and log conversion imply the use of diodes, it might be possible to combine those functions, at least partially.

Log conversion normally uses either a transistor in a feedback loop round an op-amp or a complex ladder using resistors and semiconductors. The first approach requires a special PTC resistor for temperature compensation and is very slow with low input levels; the second usually has a plethora of trimpots and is un-compensated. This new approach, sketched in Figure 1, shows the first pass at the idea, and avoids those disadvantages.

Figure 1 This basic peak log detector shows the principles involved and helps to highlight the problems. (Assume split, non-critical supply rails.)

As with all virtual-earth circuits, the input resistor feeds current into the summing point—the op-amp’s inverting input—which is balanced by current driven through the diodes by the op-amp’s output. Because the forward voltage across a diode (VF) is proportional to the logarithm of the current flowing through it (as described here), the op-amp’s output voltage now represents the log of the input signal. Positive input half-cycles cause it to clamp low at VF, which we ignore, as this is a half-wave design; for negative ones, it swings high by 2VF.
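
For reference, the relation being exploited here is the diode equation: for currents well above the saturation current IS, VF ≈ n × VT × ln(I/IS), with VT ≈ 26 mV at room temperature and n the ideality factor. Each decade of current therefore adds roughly 60 mV (n = 1) to 120 mV (n = 2) to VF, which is what turns signal level into a linear-in-dB deflection, and it is also why VF’s temperature dependence (via VT and IS) needs the compensation described later.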

Driving that 2VF through another diode into the capacitor charges the latter to VF, losing a diode-drop’s worth in the process. (No, the VFs don’t match exactly, except momentarily, but no matter.) The meter now shows the log of negative-input half-cycle peaks, the needle falling back as the capacitor discharges through the meter.

As it stands, it works, with a very reasonable span of around 50 dB. Now for the problems:

  1. The integration, or attack time, is slow at ~70 ms to within 2 dB of the final reading.
  2. The return or decay time is rather fast, about a second for the full scale, and is exponential rather than linear.
  3. It’s too temperature-sensitive, the indication changing by ~5 dB over a 20°C range.

While this isn’t bad for a basic, log-scaled VU-ish meter, something snappier would be good: time for the second pass.

Figure 2 This upgraded circuit has a faster response and much better temperature stability.

A1 buffers the input signal to avoid significantly loading the source. C1 and R1 roll off bass components (-3 dB at ~159 Hz) to avoid spurii from rumbling vinyl and woodling cassette tapes. Drive into A2 is now via thermistor Th1 (a common 10k part, with a β-value of 3977), which largely compensates for thermal effects. C2 blocks any offset from A1, if used. (Omit the buffer stage if you choose, but the input impedance and LF breakpoint will then vary with temperature, so then choose C2 with care.) Three diodes in the forward chain give a higher output and a greater span. A2 now feeds transistor Tr1, which subtracts its own VBE from the diodes’ signal while emitter-following that into C3, thus decreasing the attack time. R2 can now be higher, increasing the decay time. Figure 3 shows a composite plot of the actual electronic response times; the meter’s dynamics will affect what the user sees.

Figure 3 This shows the dynamic responses to a tone-burst, with an attack time of around 12 ms to within 2 dB of the final value, and the subsequent decay.  (The top trace is the ~5.2 kHz input, aliased by the ’scope.)

The attack time is now in line with the professional spec, and largely independent of the input level. While the decay time is OK in practice, it is exponential rather than linear. 

The response to varying input levels is shown in Figure 4. I chose to use a full-scale reading of +10 dBu, the normal operating level being around -10 dBu with clipping starting at ~+16 dBu. For lower maximum readings, use a higher value for Th1 or just decrease R2, though the decay time will then be faster unless you also increase C3, impacting the attack time.

Figure 4 The simulated and actual response curves are combined here, showing good conformance to a log law with adequate temperature stability.

The simulation used a negative-going ramp (coupling capacitors “shorted”) while the live curve was for sine waves, with Th1 replaced by a 1% 10k resistor and R2 adjusted to give 100 µA drive for +10 dBu (6.6 Vpk-pk) input. I used LTspice here to verify the diodes’ performance and to experiment with the temperature compensation. (Whenever I see a diode other than a simple rectifier, I am tempted to reach for a thermistor. This is a good primer on implementing them in SPICE, with links to models that are easily tweakable. “Other compensation techniques are available.”) The meter coil has its own tempco of +3930 ppm/°C, which is also simulated here though it makes little practical difference. Just as well: might be tricky to keep it isothermal with the other temperature-sensitive stuff.

Simple though this circuit is, it works well and looks good in operation. (A variant has also proved useful in a fibre-optic power meter.) The original meters, rebuilt as in Figure 2, have been giving good service for a while now, so this is a plug-in breadboard rehash using a spare, similar, meter movement, with extra ’scoping and simulation. It’s possible to take this basic idea further, with still-faster attack, linear decay, adjustable span, better temperature compensation, and even full-wave detection—but that’s another story, and another DI.

Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.

 Related Content


The post Simple log-scale audio meter appeared first on EDN.

Partitioning to optimize AI inference for multi-core platforms

Mon, 01/08/2024 - 09:22

Not so long ago, artificial intelligence (AI) inference at the edge was a novelty easily supported by a single neural processing unit (NPU) IP accelerator embedded in the edge device. Expectations have accelerated rapidly since then. Now we want embedded AI inference to handle multiple cameras, complex scene segmentation, voice recognition with intelligent noise suppression, fusion between multiple sensors, and now very large and complex generative AI models.

Such applications can deliver acceptable throughput for edge products only when run on multi-core AI processors. NPU IP accelerators are already available to meet this need, extending to eight or more parallel cores and able to handle multiple inference tasks in parallel. But how should you partition expected AI inference workloads for your product to take maximum advantage of all that horsepower?

Figure 1 Multi-core AI processors can deliver acceptable throughput for edge applications like scene segmentation. Source: Ceva

Paths to exploit parallelism for AI inference

As in any parallelism problem, we start with a defined set of resources for our AI inference objective: some number of available accelerators with local L1 cache, shared L2 cache and a DDR interface, each with defined buffer sizes. The task is then to map the network graphs required by the application to that structure, optimizing total throughput and resource utilization.

One obvious strategy applies to processing large input images, which must be split into multiple tiles: partitioning by input map, where each engine is allocated a tile. Here, multiple engines search the input map in parallel, looking for the same feature. Conversely, you can partition by output map: the same tile is fed into multiple engines in parallel, and you use the same model but different weights to detect different features in the input image at the same time.
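
As a concrete (if simplified) illustration of input-map partitioning, the C sketch below splits a frame into horizontal tiles, one per engine, with a few rows of overlap so that features straddling a tile boundary are not missed. The engine count and overlap are arbitrary example values, not Ceva parameters.

/* Input-map partitioning sketch: assign each engine a horizontal tile. */
#include <stdio.h>

typedef struct { int row_start; int row_end; } tile_t;   /* rows [start, end) */

void partition_rows(int height, int n_engines, int overlap, tile_t *tiles)
{
    int base = height / n_engines;
    for (int e = 0; e < n_engines; e++) {
        int start = e * base - (e > 0 ? overlap : 0);
        int end   = (e == n_engines - 1) ? height : (e + 1) * base + overlap;
        tiles[e].row_start = start;
        tiles[e].row_end   = end;
    }
}

int main(void)
{
    tile_t tiles[4];
    partition_rows(1080, 4, 8, tiles);    /* a 1080-row frame on 4 engines */
    for (int e = 0; e < 4; e++)
        printf("engine %d: rows %d..%d\n", e, tiles[e].row_start, tiles[e].row_end);
    return 0;
}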

Parallelism within a neural net is commonly seen in subgraphs, as in the example below (Figure 2). Resource allocation will typically optimize breadth-wise, then depth-wise, each time optimizing to the current step. Obviously, that approach won’t necessarily find a global optimum on one pass, so the algorithm must allow for backtracking to explore improvements. In this example, three engines can deliver >230% of the performance that would be possible if only one engine were available.

Figure 2 Subgraphs highlight parallelism within a neural net. Source: Ceva

While some AI inference models or subgraphs may exhibit significant parallelism as in the graph above, others may display long threads of operations, which may not seem very parallelizable. However, they can still be pipelined, which can be beneficial when considering streaming operations through the network.

One example is layer-by-layer processing in a deep neural network (DNN). Simply organizing layer operations per image to minimize context switches per engine can boost throughput, while allowing the following pipeline operations to switch in later but still sooner than in purely sequential processing. Another good example is provided by transformer-based generative AI networks where alternation between attention and normalization steps allows for sequential recognition tasks to be pipelined.

Batch partitioning is another method, providing support for the same AI inference model running on multiple engines, each fed by a separate sensor. This might support multiple image sensors for a surveillance device. And finally, you can partition by having different engines run different models. This strategy is especially useful in semantic segmentation, say for autonomous driving, where some engines might detect lane markings, others might handle free (drivable) space segmentation, and still others might detect objects (pedestrians and other cars).

Architecture planning

There are plenty of options to optimize throughput and utilization, but how do you decide how best to tune for your AI inference application needs? This architecture planning step must necessarily come before model compilation and optimization. Here you want to explore tradeoffs between partitioning strategies.

For example, a subgraph with parallelism followed by a thread of operations might sometimes be best served simply by pipelining rather than a combination of parallelism and pipelining. Best options in each case will depend on the graph, buffer sizes, and latencies in context switching. Here, support for experimentation is critical to determining optimal implementations.

Rami Drucker is machine learning software architect at Ceva.

Related Content


The post Partitioning to optimize AI inference for multi-core platforms appeared first on EDN.
