EDN Network

Voice of the Engineer

Partners advance automotive radar development

Fri, 12/29/2023 - 00:32

R&S has verified NXP’s radar sensor reference design, which is based on the industry’s first 28-nm RFCMOS automotive radar SoC, the SAF85xx. The radar test system pairs the R&S AREG800A automotive radar echo generator with the R&S QAT100 antenna mmWave frontend. Offering short-distance object simulation and solid RF performance, the platform enables realistic tests of next-generation automotive radar applications, such as ADAS and autonomous driving.

NXP’s reference design, enabled by the SAF85xx 77-GHz automotive radar SoC, supports the development of short, medium, and long-range radar applications. The reference design helps engineers address challenging New Car Assessment Program (NCAP) safety requirements, as well as SAE L2+ and L3 levels of driving automation.

The R&S test system offers complete characterization of radar sensors and radar echo generation, with object distances down to the airgap value of the radar under test. Fully scalable, the system is suitable for the entire automotive radar lifecycle, including development lab, hardware-in-the-loop, vehicle-in-the-loop, validation, and production application requirements.

NXP will present its latest developments for radar, including the automotive radar sensor reference design, at next month’s CES 2024 trade show.

Rohde & Schwarz 

NXP Semiconductors 

USB4 device controller gains USB-IF certification

Fri, 12/29/2023 - 00:32

The VL832 USB4 endpoint device controller from Via Labs has achieved USB4 certification from the USB Implementers Forum (USB-IF). Offering data transfer rates of 20 Gbps and 40 Gbps, the VL832 chip integrates a USB 3.2 hub, USB 2.0 hub, and DisplayPort 1.4 output interface to afford connectivity for such devices as multifunction adapters and docking stations.

In USB4 40-Gbps mode, the VL832 supports full DisplayPort HBR3 bandwidth of 32.4 Gbps. The USB 20-Gbps hub allows multiple USB 10-Gbps devices to operate at full speed on supported host platforms. The increased display bandwidth of the VL832 aligns with display trends transitioning to QHD (2560×1440 pixels) and higher resolutions. It also supports higher refresh rates, including uncompressed 4K at 120 Hz or even 240 Hz with display stream compression. Further, the device can run up to four 4K displays at a consistent refresh rate of 60 Hz.
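
For those curious where the 32.4-Gbps figure comes from, it's simply the HBR3 link rate of 8.1 Gbps per lane across DisplayPort's four main-link lanes; a quick back-of-envelope check (this is the raw link rate, before DisplayPort 1.4's 8b/10b coding overhead is subtracted):

```python
# DisplayPort HBR3 raw link bandwidth: four main-link lanes at 8.1 Gbps each.
# (Raw rate; DP 1.4 still uses 8b/10b coding, so usable payload bandwidth is
# roughly 80% of this figure.)
lanes = 4
hbr3_rate_gbps = 8.1
print(lanes * hbr3_rate_gbps)  # 32.4 Gbps, matching the figure quoted above
```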

Housed in a 10×10×1.03-mm flip-chip CSP, the VL832 provides five downstream USB ports comprising one USB 2.0 port and four USB 10-Gbps ports. When paired with the VL108 USB PD controller with extended power range (EPR) support, compatible host systems can achieve charging rates of 140 W or more.

The VL832 USB4 endpoint device controller is listed on the USB-IF integrator’s list under TID 10033. It is available now in quantity. Contact sales@via-labs.com.

VL832 product page

Via Labs 

AC/DC adapters get their GaN shrink

Thu, 12/28/2023 - 17:48

The humble yet ever-present AC/DC wall-power adapter doesn’t get a lot of consideration from consumers. It’s one of those things that’s needed but is viewed as a necessary nuisance. I suspect that part of that is due to the fact that the adapter is a sealed box with an AC line cord and DC output cord, with no operational setup, no software, no upgrades, no security issues, nothing. It provides one vital and unambiguous function and stays in the background while doing so.

There’s certainly more to their story. These adapters—sometimes derisively referred to as “wall warts”—are something we absolutely need with us when using battery-powered devices at the office, home, or on the go. Being stuck without the right one can be frustrating or panic-inducing, depending on the situation. Forget about strangers asking if you can spare a dollar or two for coffee: instead, I’ve had them ask to borrow mine when on a longer-distance Amtrak ride.

Although adapters are usually viewed as a single group, they really serve two distinct roles. First, there’s the recharging use where the connected item, such as a smartphone, will be used without the adapter most of the time. The adapter plays a critical but transitory role.

Second, there’s the operational role where the adapter is used full-time as the power source for the end-device. Based on a limited, non-scientific sample, I’m seeing more of this trend in consumer products, especially at the lower end of the product spectrum such as an entry-level turntable (record player) and rock tumbler (Figure 1).

Figure 1 (left) This Victrola Brighton record player, with an external 5 V/1 A AC-DC adapter, certainly won’t damage any eardrums; (right) the motor of this rock tumbler is powered by a 12 V/500 mA adapter. Sources: Amazon; Target

Using the adapter for DC power makes a lot of sense, as it totally gets the risky high voltage out of the box, so to speak. Doing so greatly simplifies obtaining the various safety-related regulatory approvals, since having only the low voltage in the box means that the complicated issue of line-voltage electrical safety is avoided as long as an approved adapter is used (which is easy enough). Plus, it gives the OEM more flexibility in sourcing the power-supply functions, since alternate adapters are available from multiple sources.

How many AC/DC adapters are sold each year? Quick answer: I can’t tell. The many market report summaries at which I looked only call out OEM dollar volume, not units. Their numbers indicate a 2022 market of around $10 billion growing to $24 billion by 2032 with a compound annual growth rate (CAGR) of about 8%. I think I can say with confidence that the number of adapters shipped each year is many millions.

(Note: I rounded their numbers; these research firms have a fondness for providing market numbers and long-term forecasts to two and three significant figures. I assume that precision is used to imply accuracy, when instead a realistic level-of-confidence band—especially for forecasts—would be ±10% or ±20%. It’s yet another example of innumeracy.)
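
As a quick sanity check on those rounded figures, compounding a $10 billion 2022 base at roughly 8% per year for ten years lands in the low $20-billion range, broadly consistent with the quoted 2032 number:

```python
# Compound annual growth: ~$10B 2022 base at ~8% CAGR over ten years.
base_2022_usd = 10e9
cagr = 0.08
years = 10
forecast_2032 = base_2022_usd * (1 + cagr) ** years
print(f"${forecast_2032 / 1e9:.1f}B")  # ~$21.6B, in the ballpark of the quoted ~$24B
```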

Pressures on adapters

Despite their humble place in the technology rankings, the pervasiveness of these adapters makes them worthy of serious attention, beginning with regulatory considerations. For example, they have been subject to ever-tightening regulations related to their efficiency. I won’t try to summarize the complicated standards, which are a function of many factors including maximum output voltage, maximum output power, operating power level, and even defined quiescent (standby-mode) power, Figure 2. They are also subject to the usual EMI/RFI regulations, isolation requirements (Class I or Class II—standard or reinforced), and more; adapters for medical uses have additional mandates.

Figure 2 This chart shows both the timing and technical complexity of the increasingly stringent efficiency standards for AC/DC adapters. Source: CUI

In 2023, another mandate for these lower-power adapters received final approval. In the EU, they must use the USB Type-C connector, and so the proprietary Apple Lightning connector (as well as others) will no longer be allowed, Figure 3. The USB-C port will become mandatory for a whole range of electronic devices such as mobile phones, tablets, and headphones. In practical terms, this EU-centric standard will likely result in worldwide uniformity.

Figure 3 The USB Type-C charger (left) will be the EU (and beyond) standard for low-power connectors and will supersede Apple’s proprietary Lightning cable. Source: BBC

The objective of this one-size-fits-all charging port is clear enough: to reduce waste, trim needless clutter, and save consumers money. For many years, the primary resistance to this commonality rule came from Apple, but the company was eventually worn down and had little choice but to relent.

What about size?

While regulators may be concerned mostly about efficiency, users also want an adapter which is smaller and lighter. That’s especially the case if they need to carry it, but it is also an issue for relatively fixed settings such as the crowded confines of an ambulance or hospital crash cart with all of its electronics.

There’s major news in the size area as well, with the increased availability of switching devices based on gallium nitride (GaN) materials and processes, thus allowing for more efficient, smaller adapters with reductions in size of up to 50%. Texas Instruments (TI) has just introduced a family of low-power GaN devices specifically targeting these adapter applications (although they are obviously not limited to that role only). The devices are designed to help improve power density, maximize system efficiency, and shrink the size of AC/DC consumer power electronics and industrial systems.

TI maintains that the new devices will help engineers develop smaller, lighter AC/DC solutions, achieve up to 95% system efficiency, and reduce the size of a typical 67-W silicon-based power adapter by up to 50%, Figure 4.

Figure 4 AC/DC adapters based on GaN switching devices such as those in the newly introduced TI family should reduce the size of the adapters by half. Source: Texas Instruments

The portfolio is optimized for the most common topologies in AC/DC power conversion such as quasi-resonant flyback, asymmetrical half-bridge flyback, inductor-inductor-capacitor (LLC) converter, totem-pole power factor correction, and active clamp flyback.

The three GaN devices from TI—LMG3622, LMG3624 and LMG3626—are housed in 8-mm × 5.3-mm 38-pin, quad flat no-lead packages. To simplify design-in, each incorporates the requisite driver (always appreciated) as well as high-accuracy integrated current sensing, which eliminates the need for an external shunt resistor, Figure 5. (For those who don’t need the internal current sensing, pin-to-pin compatible devices without integrated current sensing are also available.)

Figure 5 Like other members of the GaN family, the LMG3622 includes both the switch driver and internal current sensing. Source: Texas Instruments

Much-smaller adapters are already on the market; I can’t say if they use the TI GaN devices. For example, XP Power recently introduced the AQM200 series of 200-W external power supplies, with 12-V to 48-VDC output. These are fully approved to the many international medical safety standards, in sealed IP22 cases with two means of patient protection (2×MOPP) ratings. Using GaN devices, the units measure approximately 166 × 54 × 33 mm (roughly 6.5 × 2.2 × 1.3 inches), which is about half the size of their non-GaN units, Figure 6.

Figure 6 The AQM200 family of 200-W external power supplies (with 12 V to 48 VDC outputs) is not only half the size of similar adapters, but also meets the additional medical-supply mandates including a 2xMOPP rating. Source: XP Power

External AC/DC adapters may not be glamorous, but they are an important if underappreciated consumer product used in countless charging and operational situations. By reducing their size and weight, as well as improving their efficiency, they will likely find their way into ever more expected and unexpected applications. It’s these invisible improvements that can make a big difference, with consequences and implications that aren’t always anticipated.

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.

 Related Content

References

Market size

Standards & efficiency

GaN devices and adapters

2024: A technology forecast for the year ahead

Wed, 12/27/2023 - 17:19

This time last year, as I mentioned at the time, we decided to try an experiment. In past years, EDN had published my retrospectives on the prior year first, followed by my forecasts for the year to come roughly one month later. While, as I noted, the cadence might make conceptual sense—reviewing and learning from the past before portending what’s to come tends to work out best, after all, in the spirit of “Those who cannot remember the past are condemned to repeat it”—from a practical standpoint it was non-ideal, due to the non-zero delay between when I submitted a writeup and when, roughly a month later in most circumstances, it got published. In 2021, for example, my past-year retrospective appeared on the EDN website on December 9, but due to publishing lead times, I’d submitted it two weeks prior and more than a month before the year’s end, on November 27, 2021. A lot can happen in a month-plus!

Last year, in contrast, I submitted my year-ahead forecast in late November, with the retrospective following it. While this still means that “a lot can happen in a month-plus”, especially in these seemingly increasingly crazy times, any latency impact would instead affect my future insights, which are already inherently imperfect (I know how shocked you all are to hear that my crystal ball is indeed perpetually murky). My look back at the past year, on the other hand, was more comprehensive than it had been before, thanks to the ordering flip-flop. And so, we’re continuing with the updated cadence again this year. Without further ado…

(Increasingly) unpredictable geopolitical tensions

As with my yearly Holiday Shopping Guides, I try to not repeat themes in either these or my retrospectives from one year to another. But it would be negligent of me to not do so in this case, as not only have last year’s noted conflicts continued, but they’ve also been joined by at least one notable other.

Europe first. As I write these words in early November, we’re a week shy of 600 days and a month since Russia invaded Ukraine on February 24, 2022. By the time you read these words, we’ll likely be drawing ever closer to the two-year mark, and with no definitive end in sight. What was initially forecast by many to end up as a rapid Ukrainian collapse has turned out to be anything but that, thanks to the resilience, determination, and inventiveness of the invaded country’s population, along with Western allies’ financial aid, armament, and other assistance.

As with other topics I’ll explore in this piece, I’m not going to (publicly, at least) take sides in this conflict. What I will say is that the battle lines have largely stagnated of late, with technology in a variety of forms likely what’ll be necessary to jump-start Ukraine’s counterattack progress again, as it faces a much larger foe. Ironically, no less an expert on such matters than Valery Zaluzhny, the commander-in-chief of Ukraine’s armed forces, opined at length on this particular topic in a guest piece in The Economist just a week ago. I highly recommend his words and their associated experience and insights as well worth your perusal time and attention.

Next: Asia. The tensions between China and Taiwan (and their effects not only on both sides but the world at large), which I wrote about at length a year ago, have largely remained on “slow burn”. That said, there have been plenty of added confrontations not only between them but also between China and other players in the region, such as the Philippines and United States. Again, I’ll publicly keep my opinions to myself, with the exception of noting that the potential impact to the semiconductor and broader technology industry (not to mention the even broader world economy) of a China invasion of Taiwan would be catastrophic and long-lasting.

Last but definitely not least: the Middle East. On October 7, Hamas militants, numbering as many as 1,000 according to some estimates, launched a surprise attack on southern Israel across the Gaza Strip border, killing more than 1,400 people, taking more than 200 hostages and reigniting longstanding, long-simmering conflict in the region. As I write these words, the Israeli military response is underway, after a multi-week delay intended to (among other things) allow time for non-combatant Palestinians to flee the northern section of the Gaza Strip where Hamas is centralized, albeit still resulting in a significant loss of civilian life along with even more sizeable population injury and displacement. Clashes between Israel and Lebanon-based Hezbollah at Israel’s northern border are also ramping up, threatening to create a two-front war.

Once again, I offer no public opinion on any of this. That said, there’s widespread debate and concern (as with Ukraine and Taiwan) as to whether this conflict will spread beyond its existing existential threat to the state of Israel, to the Palestinian peoples in the Gaza Strip and West Bank, and to Israel’s neighboring countries. The bigger-picture threat is a further expansion to a regional or even worldwide conflict. Consider, for example, that Hamas and Hezbollah are two of the many proxy militias of Iran, which is allied with Russia, China, N. Korea, and others. Note, too, Israel’s alliances with the United States and other Western countries. Even in its current form, this conflict is having a deleterious economic effect on Israel, a technology powerhouse particularly when assessed on a per capita basis. And I doubt I need to remind any of you of the sizeable percentage of the world’s oil production that comes from the Middle East, and of the worldwide economic impact were that petrochemical flow to abruptly decrease or even cease.

Note that in reference to the topic of this section (as well as the next one, for that matter), I’m not going to attempt to hazard a guess as to how the situations in Europe, Asia, and the Middle East (and anywhere else where conflict might flare up between now and the end of 2023, for that matter) will play out in the year to come. My intent is solely to note that regardless of the outcomes in all of these cases, they’ll notably influence the technology sector in myriad ways.

The 2024 United States election

Americans are accused (at least sometimes rightly, IMHO) of inappropriately acting as if their country and its citizens are the “center of the world”. That said, the United States’ policies, economy, events, and trends inarguably do notably affect those of its allies, foes and other countries and entities, as well as the world at large, which is why I’m including this particular entry in my list. It’s particularly timely considering that the third Republican Party presidential primary debate, which did not include the current polling-claimed frontrunner (by several dozen percentage points, no less) for the nomination, was just a couple of nights back as I write this.

That Republican frontrunner is, of course, the former President, Donald Trump, who’s currently 77 years old and will be 78 next November 5th, election day in the United States. Trump is currently facing 91 felony counts across four separate criminal cases, two state (New York and Georgia) and two federal, all currently scheduled to go to trial before election day. Trump’s business enterprise is also currently on trial for civil charges in New York, and a defamation lawsuit from earlier this year has just been re-opened to consider additional penalties resulting from his post-initial-judgment statements and actions.

The current Democratic frontrunner, and the current president, Joe Biden, is a week and a few days away from turning 81 as I write these words. If he succeeds in being re-inaugurated on January 20, 2025, he’ll be 82 at the time. Biden’s current low polling performance comes, so say the pundits, in part from voters’ concerns about his age and its effect on his mental acuity (concerns that also plague his leading opponent) and physical robustness. Policy concerns are also a factor, specifically regarding sizeable government subsidies and lingering inflation, no matter that data suggests that the U.S. economy has exited the pandemic in comparatively solid shape. And in addition to whoever the Democratic and Republican nominees end up being, increasing numbers of third party (both potential and already active) and unaffiliated candidates further muddy the already cloudy waters of who’ll be the victor a year from now.

To that latter “policy” point, this is more than just a big-picture tug-of-war between “small” and “big” government advocates (not that either major political party even fits neatly into either of those buckets anymore). Trump, for example, aspires to fundamentally transform the U.S. government if he and his allies return to power in the executive branch, moves which would undoubtedly also have myriad impacts big and small on technology and broader economies around the world. Who ends up in power a year from now, not only in the presidency but also controlling both branches of Congress, and not only at the federal but also states’ levels, will heavily influence other issues already discussed here, such as support (or not) for Ukraine, Taiwan, and Israel, and sanctions and other policies against Russia and China.

Note once again that I have not (and will not) reveal personal opinions on any of this, nor will I be forecasting what I think the election outcomes will be. That said, you’re welcome to sound off (civilly and respectfully, please!) with your thoughts in the comments!

Windows (and Linux) on Arm

After two big-picture topics, these last three will be more focused (and hopefully also less controversial). As long-time readers may recall, I’ve been on-and-off using Arm-based Windows computers for nearly a decade now, beginning with the first-generation Surface with Windows RT. More recently, I acquired a first-generation Surface Pro X in mid-2021 (which I’m typing on as I “speak”, in fact). Although initial impressions were underwhelming due to a dearth of native app support (along with other programs that flat-out refused to run even x86-emulated), the release a few months ago of a full-featured Dropbox client has significantly transformed my opinion of the platform for the better, both generally because Dropbox is my “cloud” repository of choice and specifically because that’s where my 1Password database is housed.

Granted, this system isn’t a “screamer” from a performance standpoint, but that’s not why I bought it. All I needed was something to handle various “office” application functions, including email, while being thin and light and delivering long battery life. The Surface Pro X’s integrated cellular connectivity also often comes in handy. And any performance detriment with this particular system implementation isn’t endemic to the entire product category, as Apple’s three generations’ worth of Arm-based SoC families and systems based on them clearly exemplifies.

To wit, Qualcomm has to date released three generations’ worth of its own Arm-based and Microsoft-destined SQ-series SoCs (my Surface Pro X contains the first-generation SQ1), along with additional mobile computing-tailored Snapdragon application processors for other OEM customers. The company is also already showing off its next-generation Snapdragon X Elite, destined for production availability in system form mid-next year. Claimed competition is also looming on the horizon in the form of Arm-based chips from first-time entrant AMD and re-entrant NVIDIA (the Surface with Windows RT I mentioned earlier was also NVIDIA-based), a rumor that, when it surfaced, resulted in a dip in Intel’s stock price. But speaking of which, Intel isn’t standing still either, as its recently announced Meteor Lake mobile processors exemplify.

So, what do you think, readers? Will Windows- and Linux-on-Arm, in linking arms with their MacOS-on-Arm siblings, end up being a serious threat to the longstanding x86 hegemony? Why or why not? And if yes: in what market segments (mobile, desktop, server, multiple), and when? Again, let me know your thoughts in the comments.

Declining smartphone demand

Stating the likely already obvious, smartphones have for many years been a notable recipient of the semiconductor output of captive fabs and foundries all over the world—application processors, cellular modems, DRAM, flash memory, image sensors, displays, and the like. So when I read that smartphone sales are down by a significant degree of late versus historical norms (with the notable exception of my particular product family platform, interestingly), and particularly when the trend is more than a one-quarter negative “blip”, it grabs my attention.

What’s going on here? (At least) three things that I see. For one, the historically frustrating software-induced “obsolescence by design” trend is, at least for the moment, thankfully running in reverse. My Pixel phones are case study examples of this recent OEM “enlightenment”. Until the Pixel 6, software updates were only guaranteed for three years post-introduction. With the Pixel 6 and 7 families, this extended to five years for bug fixes and the like, albeit still only three years for primary operating system upgrades. And with the newly launched Pixel 8 successors, it’s seven years for all software, intra- and inter-operating system versions alike.

Hardware is also getting increasingly durable, thanks to added and beefed-up features such as water and dust exposure tolerance and crack-resistant displays. The result? The refurbished phone market is booming, as consumers upgrade from their existing phones to newer, but still previously used by others, successors. Of course, the decreasing effectiveness of “free new phone” promotions from cellular carriers might also have something to do with it, as consumers increasingly wise up to the reality that the phones aren’t actually “free” at all, but instead are paid for by installment plans baked into the monthly service bills…

That all said, none of this would be particularly impactful on new phone demand if new phones came equipped with new features deemed compelling by consumers. Alas, that particular temptation seems to be decreasingly effective, too. Once your existing phone has two rear-camera lenses, or even just one really good one, do you really need three? Is a gram or so of weight saved by moving from aluminum to titanium really still enough motivation to crack open the wallet for another $1,000-or-so purchase? And just how “retina” does a display really need to be before you can’t tell the difference between it and what’s currently in your pocket?

Internal and external interface evolutions

Last but not least, I’d like to say a few words about buses. No, not the transportation kind of bus, unless of course we’re talking about transporting data bits across interfaces, in which case…yep, those. Internal first. PCI Express (PCIe) is increasingly dominant not only in computers but also in embedded systems that leverage the same fundamental silicon building blocks. Plus, PCIe forms the technology foundation of spinoff interfaces such as the CFExpress card.

Today’s mainstream deployed PCIe variant is Gen4 (aka, 4.0), whose public unveil was more than a decade ago, believe it or not, in November 2011 (the spec was finalized in mid-2017). Its successor, Gen5 (5.0), whose spec was finalized in 2019 and which is now beginning to show up in leading-edge PCs, doubles the PCIe Gen4 bandwidth, from 31.5 GBytes/s in each direction for a 16-lane configuration to 63 GBytes/s in each direction.
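
Those per-direction figures fall straight out of the lane math: 16 lanes at the generation's transfer rate, minus the 128b/130b encoding overhead PCIe has used since Gen3. A quick check:

```python
# PCIe x16 bandwidth per direction, assuming 128b/130b line coding (Gen3 and later).
def pcie_x16_gbytes_per_s(transfer_rate_gt_s: float) -> float:
    lanes = 16
    coding_efficiency = 128 / 130
    return transfer_rate_gt_s * lanes * coding_efficiency / 8  # GT/s -> GB/s

print(pcie_x16_gbytes_per_s(16.0))  # Gen4: ~31.5 GB/s each direction
print(pcie_x16_gbytes_per_s(32.0))  # Gen5: ~63.0 GB/s each direction
```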

Who needs all that speed? Good question. Graphics cards, arguably, although all but the highest end PCIe Gen4 ones available today benchmark comparably even when plugged into a backwards-compatible PCIe Gen3 slot, raising the question of what additional benefit a PCIe Gen5 successor might deliver. And SSDs, again arguably, although less so, particularly given that the M.2, U.2 and other to-system interfaces are narrower in terms of the number of parallel lanes than what’s typically found with full-size add-in cards. Still, benchmarking of initial PCIe Gen5 SSDs versus high-end Gen4 predecessors reveals only modest improvements, and then only when sequentially reading and (especially) writing short sequences of data to and from the DRAM cache onboard the flash memory module. And the incremental power draw demanded by the newer SSDs is definitely not modest. That said, PCIe Gen5 capabilities both at the system and peripheral level are forecasted to ramp into fuller production volume beginning in 2024.

And what about external interfaces? I’m talking here, in today’s terms, specifically about Thunderbolt 3, Thunderbolt 4, USB 3.x, and USB4. Explaining the differences between them (including scenarios when TB3 might actually perform better than its TB4 successor) is beyond the wordcount-and-other scope of this summary, although I intend to dive into detail on the topic in a dedicated writeup next year. Also to be included in it is what’s motivating the mention here: Thunderbolt 5, which Intel officially unveiled back in September.

Unlike with the Thunderbolt 3-to-4 generational transition, which maintained the same 40 Gbps bidirectional bandwidth albeit adding USB 4 compatibility and other implementation tweaks, Thunderbolt 5 marks a return to the bandwidth doubling of prior generational transitions, now 80 Gbps in each direction. Plus, a dynamically configurable feature called Bandwidth Boost allows for three of the four Thunderbolt lanes to optionally transport traffic in one direction (120 Gbps, with 40 Gbps in the other direction), supporting ultra-high resolution, ultra-high frame rate displays, for example. Still, I wonder when (if at all) mainstream applications and the hardware they run on will beg for this much speed (and low accompanying latency), considering the potential power, cost, and other tradeoffs necessary to deliver it. We’ll supposedly find out soon enough; per Intel’s release, “Computers and accessories based on Intel’s Thunderbolt 5 controller, code-named Barlow Ridge, are expected to be available starting in 2024.”
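
The Bandwidth Boost arithmetic is easy to see if you treat the link as four 40-Gbps lanes, normally split two per direction and reallocated three-and-one in Boost mode (this sketch simply restates the allocation described above):

```python
# Thunderbolt 5 lane allocation implied by the description above:
# four 40-Gbps lanes, split 2+2 by default, or 3+1 with Bandwidth Boost.
lane_gbps = 40
print(f"Symmetric: {2 * lane_gbps}/{2 * lane_gbps} Gbps")  # 80 Gbps each direction
print(f"Boost:     {3 * lane_gbps}/{1 * lane_gbps} Gbps")  # 120 Gbps out, 40 Gbps back
```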

Merry Christmas (and broader happy holidays) to all, and to all a good night

I’ll close with a thank-you to all of you for your encouragement, candid feedback and other manifestations of support this year, which have enabled me to once again derive an honest income from one of the most enjoyable hobbies I could imagine: playing with and writing about various tech “toys” and the foundation technologies on which they’re based. I hope that the end of 2023 finds you and yours in good health and happiness, and I wish you even more abundance in all its myriad forms in the year to come. Let there be Peace on Earth.

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content

Adding one resistor improves anemometer analog linearity to better than +/-0.5%

Tue, 12/26/2023 - 18:41

A while back I published a simple design idea for a thermal airspeed sensor based on a self-heated Darlington transistor pair. See Figure 1.

Figure 1 Older design idea with self-heated Darlington thermal airflow sensor.

In the circuit, Q1 plays the role of self-heated sensor. Its Vbe tempco converts temperature into voltage, which is then offset and scaled by A2 to a 5 V span. Meanwhile, 200 mV reference A1 regulates Q1’s heating current to 0.2 V/R3 = 67 mA, for a constant power dissipation of 67 mA * 4.8 V = 320 mW. The resulting ambient vs junction temperature differential provides an airspeed readout as it’s cooled from a delta T above ambient of 64°C at 0 fpm, down to 22°C at 2000 fpm.
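
A quick check of that operating point (the 3-Ω value for R3 below is inferred from the stated 0.2 V/R3 = 67 mA relationship, and the 4.8 V figure is the quoted drop across Q1):

```python
# Self-heated sensor operating point, as described above.
v_ref = 0.2      # A1 reference voltage, V
r3 = 3.0         # ohms -- inferred from 0.2 V / R3 = 67 mA
v_q1 = 4.8       # voltage across Q1, V (from the article)

i_heat = v_ref / r3      # ~0.067 A regulated heating current
p_diss = i_heat * v_q1   # ~0.32 W constant dissipation in Q1
print(f"{i_heat * 1e3:.0f} mA, {p_diss * 1e3:.0f} mW")
```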

Wow the engineering world with your unique design: Design Ideas Submission Guide

The resulting sensor is simple, sensitive, and solid-state, but suffers from a radically nonlinear airspeed response, as shown in Figure 2.

Figure 2 Vout versus airspeed response of thermal sensor is very nonlinear.

An astute and helpful suggestion from reader Konstantin Kim resulted in the anti-log linearization VFC shown in Figure 3.

Figure 3 Anti-log linearizing VFC.

Figure 3 makes a useful improvement in linearity, shown as Figure 4’s blue curve, but its lingering ~12%-of-FS error at mid-span is still decidedly far from perfect.

Figure 4 Airspeed response linearity of Figure 3’s anti-log VFC is better but still un-terrific.

Veteran DI contributor Jordan Dimitrov noticed this shortcoming and provided an elegant computational numerical solution that virtually obliterates the problem and makes the net response all but perfectly linear, in his Proper function linearizes a hot transistor anemometer with less than 0.2 % error design idea.

Nicely done, Mr. Dimitrov!

However, a consequence of performing linearization in the digital domain after analog-to-digital conversion, instead of doing it in analog before conversion, is a significant increase in necessary ADC resolution, i.e., from 11 bits to 15.

Here’s why.

Acquisition of a linear 0 to 2000 fpm airspeed signal resolved to 1 fpm would require an ADC resolution of 1 part in 2000, or 11 bits. But inspection of Figure 2’s curve reveals that, while the full-scale span of the airspeed signal is 5 V, the signal change associated with an airspeed increment of 1999 fpm to 2000 fpm is only 0.2 mV. Thus, to keep the former on scale while resolving the latter would require a minimum ADC resolution of 1 part in 5/0.0002 = 1 in 25,000, or 14.6 bits.
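
In code form, the two resolution requirements work out as follows:

```python
import math

# ADC resolution needed for the linearized vs. raw (nonlinear) airspeed signal.
full_scale_v = 5.0
smallest_step_v = 0.0002                 # ~0.2 mV change from 1999 to 2000 fpm
linear_counts = 2000                     # 1-fpm steps over 0..2000 fpm
nonlinear_counts = full_scale_v / smallest_step_v  # 25,000

print(math.ceil(math.log2(linear_counts)))  # 11 bits after analog linearization
print(math.log2(nonlinear_counts))          # ~14.6 bits -> a 15-bit ADC otherwise
```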

15-bit (and higher resolution) ADCs are neither rare nor especially expensive, but they’re not usually integrated peripherals inside microcontrollers as mentioned in Mr. Dimitrov’s article. So, it seems plausible that a significant cost might be associated with provision of an ADC with resolution adequate for his design.

This prompted me to wonder whether a better performing analog linearization scheme might be feasible. If so and if not too complicated or costly to implement, it could provide an alternative to the digital solution with similar performance but without the need for a high resolution ADC. Turned out, it was. Figure 5 shows how.

Figure 5 Adding one resistor (R6) and adjusting another (R1) ironed out the bump in Figure 3’s analog linearizing.

Key to the linearity improvement is added resistor R6. It works by reducing the amplitude of the sawtooth timing waveform at the 555’s pin 2, making it trigger early by an amount proportional to anti-log Q2’s collector current. This shortens the VFC period and raises the VFC frequency by a nonlinearity-correcting factor, with the results shown in Figure 6.
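
To see qualitatively why triggering early raises the frequency, consider a deliberately simplified model: a sawtooth ramping down toward the 555's pin 2 trigger level, with R6 in effect raising that level in proportion to Q2's collector current. The numbers below are arbitrary placeholders, not the Figure 5 values; the point is only the direction of the effect:

```python
# Illustrative-only model (placeholder numbers, not the actual circuit values).
V_START = 3.33   # top of the sawtooth, V (placeholder)
V_TRIG0 = 1.67   # nominal pin-2 trigger level, V (placeholder)
SLEW = 50e3      # sawtooth ramp rate, V/s (placeholder)
K = 2.0e3        # V-per-amp scaling attributed to R6 (placeholder)

def vfc_frequency(ic_amps: float) -> float:
    v_trig = V_TRIG0 + K * ic_amps      # earlier trigger point
    period = (V_START - v_trig) / SLEW  # time for the ramp to reach it
    return 1.0 / period

# Larger Ic -> earlier trigger -> shorter period -> higher output frequency.
print(round(vfc_frequency(0.0)), round(vfc_frequency(200e-6)))
```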

The resulting airspeed function deviates from perfect linearity by only -0.4% to +0.2% = -8 to +4 fpm as shown in Figure 6 and Figure 7 (expanded scale).

Figure 6 Improved analog linearity resulting from VFC modifications shown by overlaid blue and black lines.

 Figure 7 Magnified residual linearity error shown in Figure 6.

Admittedly, this is certainly not as good as Mr. Dimitrov’s impressive post-conversion numerical result but is perhaps still acceptable for a simple analog solution. At any rate, as a practical matter, it’s so much better than any reasonable expectation for sensor accuracy that the difference would seem of mostly academic interest only.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content

Synopsys plus Ansys: The making of an EDA giant?

Tue, 12/26/2023 - 15:26

The big three of the EDA industry—Cadence, Siemens EDA, and Synopsys—largely owe their ‘big boys’ status to a plethora of small acquisitions. However, the news about Synopsys acquiring Ansys seems to be a different affair. While Synopsys has a market valuation of $85 billion, Ansys has a market cap of nearly $26 billion.

The news about Synopsys’ potential deal to buy Ansys for over $400 per share first appeared on Bloomberg late Thursday, 21 December. However, according to a Wall Street Journal report published on Friday, there could also be other suitors. It could be another EDA industry heavyweight like Cadence or a redux of Siemens acquiring Mentor Graphics to expand its design arsenal.

Figure 1 Ansys develops, markets, and supports software solutions for design analysis and optimization.

Ansys, based in Canonsburg, Pennsylvania, provides simulation software solutions for product design and testing. Founded by John Swanson in 1970 as Swanson Analysis Systems Inc. (SASI), it was acquired by venture capital firm T.A. Associates in 1994. The company then changed its name to Ansys and went public in 1996.

Over the years, Ansys has made its name as a supplier of multiphysics engineering simulation technologies that enable engineers to simulate the interactions between structures, heat transfer, fluids, electronics, and optical elements in a unified engineering environment.

It’s also worth noting that the news about this potential deal comes when Synopsys co-founder Aart de Geus is about to move to the executive chairman role while handing over the CEO job to his protégé Sassine Ghazi. The transition will come into effect on January 1, 2024.

Moreover, as has often been happening recently, Synopsys and Ansys first warmed up their ties by inking a strategic partnership. The two EDA outfits have recently partnered to offer signoff solutions for system-on-chips (SoCs) as well as 2.5D and 3D ICs. As part of this strategic alliance, Ansys has integrated its RedHawk-SC family of power integrity, thermal, and reliability signoff products with Synopsys’ Fusion Compiler platform, 3DIC Compiler platform, and PrimeTime signoff platform.

Figure 2 Ansys signoff solutions are certified for all FinFET nodes down to 3 nm.

Ansys’ acquisition by Synopsys is still a work in progress, and more details are expected to emerge in the coming days. But if this deal goes through, it will probably be an important event in the electronics design industry in 2024. It could also significantly impact the EDA industry in particular and the IC design landscape in general.

Related Content

End-of-year tech-hiccups redux

Mon, 12/25/2023 - 17:34

Last time, in introducing this blog post series, I mentioned that as the year was drawing to a close, I seemed to be dealing with an ever-increasing number of tech hiccups, big and small alike. I focused specifically on two classes of issues:

  • Those involving (among other things) new-to-me technologies and products associated with them, such as high-end video capture equipment, and
  • Those involving computers, both ones for which I’d been compelled to make significant operating system and applications updates due to existing-software support cessation and those that I’d needed to outright replace, driven by obsolescence by design.

In that previous writeup, I dove into three case studies associated with the first bullet point. This time, the second bullet point gets the three-example treatment. Without further ado…

An SSD with one quarter of its claimed capacity

As I mentioned last month, my in-progress transitions to Thunderbolt 3- and 4-based computers have also motivated me to explore flash memory-based external storage—specifically DAS, or direct-attached storage—products. I specifically referenced, for example, one of the two OWC Mercury Pro U.2 Dual enclosures I’d purchased, which I then populated with a Shuttle U.2 carrier containing two 2TB Western Digital BLACK SN750 SSDs. The latter are the focus here.

Reiterating what I previously mentioned, while I rarely buy used SSDs, the prices I got for these were irresistible. Amazon had run a 15%-off sale on some of the inventory in the Warehouse area of its website during the company’s mid-October Prime Big Deal Days promotion period. Among that inventory were two 2 TB BLACK SN750s, priced and described as follows:

  • $93.25 ($79.26 after discount)
    Used – Acceptable
    Cosmetic imperfection(s) bigger than 1″ on bottom or back of item. Item will come repackaged.
  • $108.60 ($92.31 after discount)
    Used – Good

The BLACK SN750 is “only” a PCIe 3.0 SSD, versus a newer and speedier PCIe 4.0- or PCIe 5.0-based alternative. But given this particular usage pattern (externally tethered to a computer over an intermediary TB3 interface), I suspected it’d swamp the external bus regardless. And considering that this same SSD is, as I type these words, selling brand new for nearly $300 at retailers such as Amazon and Newegg (although curiously, I just noticed that it’s only $109 at WD’s own online store right now), I think you’ll understand why I was willing to roll the dice on these two. My openness was fundamentally driven by the fact that I was planning on running the two-drive array in redundant mirrored RAID 1 mode anyway, and especially by the fact that, from past experience, I was confident I’d have no trouble returning one or both for a full refund if any within-30-days problems arose.

What problems? Well, past writeups of mine had highlighted SD cards whose actual capacities didn’t match their claims, along with those whose interfaces weren’t as speedy as promised.

But I’d (naively, it turned out) presumed that it’d be harder to fake out the performance or other attributes of an SSD, specifically one in a “naked” M.2 form factor (vs an enclosure-based 2.5” alternative). Instead, I’d assumed the worst that might arise would be a drive already heavily used and full of “bad blocks”, diminishing its remaining usable capacity along with shrinking the timeframe until inevitable full failure. How’s that saying go…“ignorance is bliss”?

Here’s what the “Used – Good” drive looks like:

It passed all of my testing (including a full reformat to exFAT) with flying colors.

Now here’s the “Used – Acceptable” one.

Cosmetically, and contrary to Amazon’s description of it, it looked fine to me, although it had definitely arrived repackaged…it showed up solely inside a taped-up antistatic bag. The first hint of trouble came when it seemingly reformatted much faster than its sibling. The second hint of trouble came when I looked at its reformatted capacity:

500 GB? What? But the official-looking backside label says it’s a 2 TB drive? All became clear when I fired up my copy of CrystalDiskInfo for a more thorough examination:

It was a 500 GB drive. And it wasn’t a WD BLACK SN750. What you’re actually looking at is an entry-level WD Blue SN550 SSD, a heavily used one at that, judging from the accumulated read and write cycles and power-on hours along with CrystalDiskInfo’s overall health rating for it.

But I was right about one thing; the backside label claiming a 2 TB capacity was official. How is it possible to reconcile these seemingly contradictory data points? Recall first that these particular BLACK SN750 variants came with heat sinks surrounding the M.2 module PCBs. Next, watch this video and the following video (I don’t speak the native language in either case, but I don’t think you need to understand the narration to get the point):

Now I’ll show you the side views of this particular SSD:

While not obvious unless you already know what you’re looking for, there’s a bit of wear on the screw heads suggesting that the screws had likely been removed and later reinstalled. Ironically, had the previous owner spent a few seconds on them with a black Sharpie, he or she could have completely covered the crime’s tracks. Because, as far as I’m concerned, a crime against Amazon is exactly what was committed, unfortunately the latest in a series of initial-customers’ scams against the company whose outcomes have also impacted me.

What presumably happened here is that the previous owner bought a brand new 2 TB Western Digital BLACK SN750 SSD from Amazon, swapped out the M.2 module PCB in it for his or her existing and heavily used, inferior-performance 500 GB WD Blue SN550 SSD, and sent it back for a full refund. Shame on them. I only hope that Amazon follows my submitted diagnostics-results advice and “eats” the loss instead of trying to resell the SSD again.

The overenthusiastic character viewer

In case it wasn’t already obvious from the abundance of vs-x86 claims that Apple made at its late-October surprise launch event, the company is highly motivated to move its remaining user base of legacy Intel-based Macs (folks like me) to successor Apple Silicon-based computers. Some of this encouragement comes, as we saw on October 30, from oodles of performance- and power consumption-themed comparisons. Some of it comes from intentional, documented operating system feature set omissions with Intel-based hardware in comparison to that same software running on Apple Silicon platforms, beginning with MacOS 12 “Monterey”. And some of it, largely undocumented (at least from Apple itself) and (presumably, although maybe I’m just being charitable) unintentional, comes from bugs that seemingly crop up solely in x86-based Macs. This section, together with the final one of this post, showcases two maddening examples.

As mentioned before, I strive whenever possible to run the oldest version of MacOS still actively supported by Apple, consciously trading off latest-and-greatest features for peak probability of software stability. To wit, back in September I out-of-necessity migrated my actively-used computer stable (one last time, alas) from MacOS 11 “Big Sur” to “Monterey”, commensurate with MacOS 14 “Sonoma’s” gold release and the consequent cessation of “Big Sur” support (Apple traditionally supports both the current and prior two major O/S versions concurrently).

Immediately afterwards, I started noticing that whenever I’d press the Fn (function) key in the lower left corner of the keyboard, to raise or lower the system volume, for example:

the Character Viewer utility built into MacOS would also pop up:

(quick aside: you can learn a lot about a person by seeing what emoji they commonly use, eh?)

This behavior was new to MacOS Monterey, and it frankly drove me batty. What I ended up discovering, after no shortage of research and intermediary dead-ends, was that MacOS Monterey added a keyboard setting that, by default, brings up Character Viewer each time you press the “Globe” (🌐) key, multiplexed with Fn on the keyboards that come with newer Apple computers based on Apple Silicon SoCs. Here’s an example of what I’m talking about, from the M1-based 24” iMac’s discrete keyboard; the same goes for laptops with integrated keyboards:

And here’s a default MacOS Monterey setting screenshot taken straight from my computer:

The likely obvious problem with this, if you go back and look at the photo of my keyboard, is that it has no Fn-multiplexed “Globe” key. Apple’s software either wasn’t smart enough to differentiate between legacy and newer keyboards or was intentionally focused solely on Apple Silicon Macs, ignoring the legacy installed base of x86 computers and keyboards in the process.

Once I overrode the defaults:

Sanity was (arguably…I know…) restored.

Tangled up in Blue(tooth)

In unfortunate contrast to the prior Monterey-migration-related issue, this one hasn’t yet been successfully resolved and my necessary workarounds for it are fundamentally flawed. Shortly after logging into my first Zoom meeting post-Monterey upgrade, as-usual using my pair of AirPods Pro earbuds for both audio output and input (i.e., microphone) functions, I noticed that the sounds streaming into my ears first became choppy then dropped completely. Others in the meeting reported to me over Zoom Chat that the same thing had happened to the audio that I was outputting, and they were listening to. Out of necessity, I switched to the laptop’s inherently inferior built-in speakers and microphone array for the remainder of that meeting, then immediately jumped on Google to see whether this was a just-me or more widespread issue.

What I learned was deeply disturbing. Apple had apparently revamped the Bluetooth audio software subsystem beginning with Monterey, a decision which resulted in (at least) two significant issues, the second one of which remains seemingly unresolved to this day even through two successive major operating system revisions…and which primarily, if not completely, afflicts x86-based systems.

The first bug existed in Monterey betas and, despite beta tester feedback, remained present in the “gold” release. And it lingered through incremental updates for more than six months and for all systems until finally fixed. That this was the case despite its combination of seemingly obvious presence and functional importance is mind-blowing; thankfully I hadn’t come across it myself, as I’d jumped straight to the latest Monterey release when upgrading. In summary: whenever the user of a Bluetooth headset or earbuds set (Apple-branded or otherwise) muted the mic input, the output audio would also mute; the two settings were inextricably linked. The workaround employed by some app developers ignored the user-desired Bluetooth audio input device setting and instead “hard-wired” the integrated microphone array. Hold that thought.

In the process of (unsuccessfully, to date) debugging my Bluetooth audio issue, I learned something interesting…one of those obvious-in-retrospect things, to be exact. At one point, I’d been streaming a YouTube-sourced music concert in the background while messing around with both system-wide and Zoom-specific audio settings. Any time I selected the earbuds’ mic as my audio source, the tunes I was hearing over the earbuds’ speakers switched from stereo to (after a short delay) mono, “tinny” mono at that.

It turns out that the Bluetooth specifications don’t support simultaneously using one profile for transmitting audio to a Bluetooth peripheral and another profile for audio transmitted from that same peripheral. The profiles need to be the same in simultaneous-use scenarios, as well as being the mic-friendly lowest common denominator, specifically HSP (the headset profile) or HFP (the hands-free profile). This means that whenever you select a Bluetooth device’s mic as the application’s audio input source, the playback link switches away from A2DP (and whatever high-quality codec it had been using) to the voice-tailored CVSD or an equivalent low-quality codec, too. And to clarify, this profile-switching behavior is generic, not O/S-specific.
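
In a nutshell, the rule looks like this (a deliberately simplified model of the behavior described above, not any operating system's actual Bluetooth stack logic):

```python
# Simplified model of the Bluetooth audio profile rule described above:
# the moment the headset's own microphone is in use, playback can no longer
# stay on A2DP and drops back to the voice-grade HFP/HSP path.
def active_audio_profile(headset_mic_in_use: bool) -> str:
    if headset_mic_in_use:
        return "HFP/HSP (mono, voice-grade codec such as CVSD)"
    return "A2DP (stereo, high-quality codec)"

print(active_audio_profile(False))  # music playback only
print(active_audio_profile(True))   # video call using the earbuds' mic
```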

I tried everything I could think of to minimize, if not completely eliminate, the misbehaving audio I was struggling with. I boosted the CPU priority of the Bluetooth and Zoom processes (again to clarify, this issue affects every application that potentially uses a bidirectional-stream Bluetooth audio device, but we mostly use Zoom at my “day job”); no tangible improvement. Someone in one of the online discussion threads I perused in my research had suggested that disabling the Airplay Receiver service newly added to Monterey fixed the issue for them. This setting was originally “exposed” in Monterey to Intel-based Macs but was later removed for unknown reasons; I found a way to control it via the command line, but again, no improvement. And someone else had said that Apple tech support ultimately told them that they needed to disable 2.4 GHz Wi-Fi on their LAN and run it 5 GHz-only. Sorry, not for me.

Ultimately, on a hunch I tried the workaround that the folks at Octopus Think had implemented as a temporary “patch” for the now-fixed other (linked mute settings) Monterey Bluetooth audio issue. Specifically, I tried instead using my laptop’s built-in microphone for audio input in Zoom, while still using my AirPods Pro headset for audio output. Huzzah; no more glitches! The issue seems to be specific to the HSP and/or HFP profiles (Apple has unfortunately deprecated support for its Bluetooth Explorer utility, so I can’t tell which profile, or for that matter which codec, is in use at any point in time, aside from the already-noted generalities), or, said another way, simultaneous use of the same Bluetooth device for both audio input and output purposes.

Generally speaking, any time I use a microphone found on a device different from my Bluetooth headset but still somehow connected (wired or wireless) to my computer, everything’s fine. I only run into problems when I use both a given Bluetooth headset’s or earbuds set’s mic and its speakers. Thankfully the mic array built into my MacBook Pro is reasonably directional in its pickup pattern, so background noise is inaudible on the other end of the connection as long as it’s not too egregious. That said, nothing beats a microphone only a couple of inches away from your mouth, so I remain springs-eternal hopeful that this bug too will eventually get squashed.

Over to you

I trust that I’m not the only one who’s encountered baffling system hardware and/or software glitches. And I also trust that I’m not the only one who’s (sadistically, admittedly) enjoyed, even if only a little bit, successfully debugging them to root cause, despite the migraines and teeth-gnashing they also cause. Please share your own tech-hiccup tales in the comments; your fellow readers and I will enjoy reading them! And have a happy holiday season, everyone, along with wishes for an even better 2024.

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content

AI assistant for hardware design gets a vision upgrade

Fri, 12/22/2023 - 15:35

Copilot, an AI-based hardware design assistant that understands a project’s context automatically, now has a vision version, making it the first multi-modal AI tool for hardware design. According to Flux, a startup based in San Francisco, the upgrade will facilitate powerful new use cases and open up a world of new possibilities in the hardware design workflow.

Flux Copilot, conversational AI that lives in a project, streamlines tedious tasks, conducts design reviews, and makes hardware teams more productive. With Copilot Vision, engineers can input a block diagram to the AI tool and watch it recommend suitable parts by intelligently parsing the diagram into functional sections.

Figure 1 Users can input a block diagram and get recommendations on suitable components. Source: Flux

How Copilot Vision works

In hardware design, a lot of thinking is visual and requires context, as design engineers commonly rely on visual resources such as block diagrams, charts, and drawings. However, these resources can be hard to interpret and separate from the actual design.

Take the case of an engineer working on a project based on an existing block diagram. That calls for turning that block diagram into a working circuit, selecting the right components, and connecting them correctly. As a result, engineers must read through datasheets and application notes, which can often get confusing, time-consuming, and hard to keep track of.

Here, Copilot’s latest upgrade incorporates images as a more natural way to communicate ideas and integrate those resources into a design. Engineers can provide Copilot with an image as a file upload, and Copilot will instantly understand what they are trying to build. Next, engineers can ask questions, learn, and get design reviews in entirely new and more effective ways.

Figure 2 Copilot’s vision capability enables engineers to compare a schematic diagram against a block diagram and catch discrepancies. Source: Flux

Besides part recommendations, Copilot can help ensure design quality by comparing a schematic diagram against a block diagram. Its analysis catches discrepancies, such as missing elements in a design, and offers suggestions for improvement.

Moreover, when engineers are unsure how to interpret a chart on a datasheet, they can provide Copilot with an image of the chart and ask in-depth questions. Copilot Vision will interpret the input and explain any aspect of the chart and how it relates to hardware design needs.

Figure 3 Copilot is capable of chart interpretation and dimensional analysis. Source: Flux

Design engineers are often overwhelmed by all the information and dimensions when looking at part drawings. Here, engineers can provide an image to Copilot and ask any question about part dimensions. The AI assistant will interpret the drawing and correlate the dimensions to real component footprints, even making educated guesses about component types based on project context.

The biggest Copilot upgrade

Copilot, built around the concept of agile hardware development, aims to facilitate every step of the hardware development process, from initial brainstorming all the way through production. With the launch of the vision upgrade, design engineers can not only chat with Copilot; Copilot can now also see what they are working on.

Flux calls it the biggest upgrade to Copilot yet. The company welcomes user feedback on what works well, what could be better, and what other modes of interaction users would want to build into Copilot. The feedback can be shared through Flux’s Slack community.

Related Content


The post AI assistant for hardware design gets a vision upgrade appeared first on EDN.

MCUs optimize BLDC motor applications

Thu, 12/21/2023 - 19:32

Microchip’s AVR EB family of MCUs provides complex waveform control in systems that employ brushless DC (BLDC) motors. Aimed at a wide range of cost-sensitive applications, the microcontrollers can adjust speed, timing, and waveform shape, creating sinusoidal and trapezoidal waveforms. According to the manufacturer, the devices improve the smoothness of motor operations, reduce noise, and increase efficiency.

On-chip peripherals enable multiple functions with minimal programming. The MCUs respond quickly to changes in operating conditions, and adjustments can be made on the fly with near-zero latency. They can also lower overall BOM cost, since several tasks, such as reading environmental sensors and serial communication, can be performed independently of the CPU.

To enable smooth BLDC motor control, the MCUs employ a 16-bit timer/counter with four compare channels for pulse width modulation and waveform extension. A 24-bit timer/counter provides frequency generation and timing, while a programming and debug interface disable (PDID) function enhances code protection.

In addition to motor control, the AVR EB series can be used for predictive maintenance, home automation, industrial process control, and automotive tasks.

AVR EB MCUs are offered in various package types (some as small as 3×3 mm), with 14, 20, 28, and 32 pins. The AVR16EB32 Curiosity Nano board allows evaluation and prototyping. 

AVR EB series product page

Microchip Technology 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post MCUs optimize BLDC motor applications appeared first on EDN.

Resettable fuses save space in 1206 packages

Thu, 12/21/2023 - 19:32

Bourns has added 20 new devices to its MF-NSMF series of Multifuse polymer positive temperature coefficient (PPTC) resettable fuses in 1206 size packages. By combining higher power ratings with a smaller footprint, these surface-mount devices give designers more options for overcurrent and overtemperature protection while conserving board space. For example, fuse models in the 1206 size package achieve a 67% reduction in space compared to the company’s fuses in 1812 size packages.

MF-NSMF resettable fuses leverage Bourns’ freeXpansion technology, which enables the fuse to handle higher currents and voltages, improves its resistance stability, and allows for a smaller footprint. The components provide a broad range of hold current options, extending from 0.05 A to 2.0 A, and voltage ratings, ranging from 6 VDC to 60 VDC.

Devices in the MF-NSMF series deliver reliable overcurrent and overtemperature protection for low DC voltage ports commonly found in USB, HDMI, HDTV, PCs, laptops, data centers, and portable consumer electronics.

The MF-NSMF PPTC resettable fuses are available now.

MF-NSMF series product page

Bourns

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Resettable fuses save space in 1206 packages appeared first on EDN.

Hall latches meet AEC-Q100 requirements

Thu, 12/21/2023 - 19:32

Automotive-compliant Hall-effect latches in Diodes’ AH371xQ series have an operating voltage range of 3 V to 27 V, with 40-V load dump protection. Resistant to physical stress, the parts are qualified to the AEC-Q100 Grade 0 standard and operate over a temperature range of -40°C to +150°C.

AH371xQ latches are used for brushless DC motor control, valve operation, linear and incremental rotary encoders, and position sensing functions. They support numerous in-vehicle comfort and engine management applications, including window power-lift, sunroof movement, cooling fans, water/oil pumps, and speed measurement.

The open-drain Hall-effect latches employ a chopper-stabilized design that mitigates the effects of thermal variation and provides enhanced stray field immunity. Their power-on time is typically 13 µs. Six different magnetic operate and release points span ±25 gauss to ±140 gauss, with tight operating windows and low temperature coefficients.

Latches in the AH371xQ family are offered in SOT23 (type S), SC59, and SIP-3 (bulk pack) packages. The AH3712Q latch also comes in a U-DFN2020-6 (SWP) package. Devices cost $0.32 each in lots of 1000 units.

AH371xQ series product page 

Diodes

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Hall latches meet AEC-Q100 requirements appeared first on EDN.

Image sensors facilitate machine vision tasks

Thu, 12/21/2023 - 19:31

Joining Samsung’s ISOCELL Vizion sensor lineup are the Vizion 63D time-of-flight (ToF) sensor and the Vizion 931 global shutter sensor. Tailored for robotics and extended reality (XR) applications, the sensors leverage the company’s ISOCELL pixel technology for improved image quality.

The Vizion 63D is an indirect ToF (iToF) sensor that measures the phase shift between emitted and reflected light to sense its surroundings in three dimensions. Samsung says it is the first iToF sensor with a built-in image signal processor. It can precisely capture 3D depth information without the help of another chip, enabling up to a 40% reduction in system power consumption compared to its predecessor.

The Vizion 63D also boasts high quantum efficiency, reaching 38% at 940 nm. Based on a 3.5-µm pixel size, the sensor delivers a resolution of 640×480 within a 1/6.4-in. optical format.

With its global shutter, the Vizion 931 image sensor captures sharp, undistorted images of moving objects. It features a resolution of 640×640 pixels and quantum efficiency of 60% at 850 nm. Additionally, the Vizion 931 supports multidrop operation, which seamlessly connects up to four cameras to the application processor using a single wire.

ISOCELL Vizion 63D and ISOCELL Vizion 931 sensors are currently sampling to OEMs worldwide.

Vizion 63D product page 

Vizion 931 product page

Samsung

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Image sensors facilitate machine vision tasks appeared first on EDN.

Partners demo PCIe 6.0 64-GT/s interoperability

Thu, 12/21/2023 - 19:31

Alphawave Semi and Keysight joined forces to accelerate PCIe 6.0 compliance testing and bolster interconnectivity across AI compute infrastructure. The companies demonstrated interoperability between Alphawave’s PCIe 6.0 PHY and controller subsystem and Keysight’s PCIe 6.0 protocol exerciser, negotiating a link to the maximum PCIe 6.0 data rate of 64 GT/s. They also effectively established a CXL 2.0 link to address future cache coherency in the datacenter.

The PCIe 6.0 specification introduces FLIT mode, where packets are organized in Flow Control Units of fixed sizes. Along with FLIT mode, PCIe 6.0 employs PAM4 signaling and forward error correction to achieve low latency, low complexity, and low bandwidth overhead.

Alphawave’s PCIe subsystem is a power-efficient, low-latency interface IP built off of its PAM4 SerDes IP. Using the silicon implementation of the PCIe 6.0 FLIT protocol, Keysight was able to successfully verify FLIT-mode transactions (requests and completions) of payload FLITs at 64 GT/s.

For more information about Alphawave’s PCIe and CXL IP portfolio, click here. Additional information about Keysight’s PCIe 6.0 protocol exerciser and analyzer can be found here.

Alphawave Semi

Keysight Technologies

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Partners demo PCIe 6.0 64-GT/s interoperability appeared first on EDN.

A simple, accurate, and efficient charge pump voltage inverter for $1 (in singles)

Thu, 12/21/2023 - 18:41

Keeping op-amp outputs “live” at and below zero volts, generating symmetrical output signals, and processing bipolar analog inputs are all examples of design situations where a few milliamps of negative voltage rail can be a necessity. Figure 1 shows a simple inverter design based on the venerable xx4053 family of triple CMOS SPDT switches that efficiently and accurately inverts a positive voltage rail and does it for a buck. 

Figure 1 The generic and versatile xx4053 provides the basis for a cheap, efficient, and accurate voltage inverter.

Wow the engineering world with your unique design: Design Ideas Submission Guide

 Here’s how it works.

U1a and U1b act in combination with C2 to form an inverting capacitor charge pump that transfers charge to filter capacitor C3. Charge transfer occurs in a cycle that begins with C2 being charged to V+ via U1a, then completes by partially discharging C2 into C3 via U1b. Pump frequency is roughly 100 kHz under control of the U1c Schmitt-trigger-style oscillator, so a charge transfer occurs every 10 µs. Note the positive feedback around U1c via R3 and the negative feedback via R1, R2, and C1. 

The resulting (approximate) oscillator waveforms (Vc1 and U1c Vpin9) are illustrated in Figure 2.

Figure 2 The 100-kHz timing signals generated by the U1c Schmitt-trigger oscillator.

The guaranteed break-before-make switching of the xx4053 family maximizes efficiency while minimizing noise. The inherent increase of switch ON resistance with decreasing Vout reduces shorted-output fault current to ~20 mA for V+ = 5 V. Startup at power-on requires approx 5 milliseconds.

Figure 3 Vout and power conversion efficiency versus output current for +Vin = 5 V.

No-load power consumption is less than 500 µW and is divided more or less equally between U1 and the oscillator RC network. When Vout is lightly loaded, it will precisely approach -1.0 x V+. Under load, it will decline at ~160 mV/mA.
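
For a quick sanity check of loading effects, the figures quoted above (no-load output of roughly -V+, ~160 mV/mA of droop, and ~500 µW of no-load consumption) can be plugged into a short script. This is only a back-of-the-envelope sketch based on those published numbers, not a simulation of the circuit: the ~160 Ω effective output resistance and the assumption that input current roughly tracks output current are simplifications introduced here for illustration.

# Rough estimate of loaded output voltage and efficiency, using only the
# figures quoted in the text (assumptions, not a circuit model):
# ~160 mV/mA droop and ~500 uW no-load power draw at V+ = 5 V.
V_IN = 5.0              # positive supply, volts
R_OUT = 0.160 / 1e-3    # ~160 mV per mA -> ~160 ohm effective output resistance
P_IDLE = 500e-6         # approximate no-load power consumption, watts

def estimate(load_ma):
    i_load = load_ma * 1e-3
    v_out = -(V_IN - R_OUT * i_load)      # output droops toward zero with load
    p_out = abs(v_out) * i_load
    p_in = V_IN * i_load + P_IDLE         # assumes input current ~ output current
    return v_out, p_out / p_in

for ma in (0.5, 1, 2, 5, 10):
    v, eff = estimate(ma)
    print(f"{ma:5.1f} mA  Vout ~ {v:6.2f} V  efficiency ~ {eff*100:4.0f} %")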

If operation at higher V+ inputs (up to 10 V) is required, the metal-gate CD4053B can be employed.  Of course, capacitor voltage ratings would need to be correspondingly higher.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content


The post A simple, accurate, and efficient charge pump voltage inverter for $1 (in singles) appeared first on EDN.

Distortion Factor as a function of total harmonic distortion

Wed, 12/20/2023 - 18:46

Coming to grips with the semantics and vocabulary that apply to “distortion” takes a little discipline, but it isn’t all that intimidating. All you need is a little patience to go over the relevant algebra and not scribble.

Two phrases are commonly encountered in discussions of harmonic distortion. One is total harmonic distortion (THD), and the other is distortion factor (DF). The definition of the first is universally accepted, but the second is often used rather loosely, and there is no universally accepted definition for it. We now look at distortion and its relationship to power factor (Figure 1).

Some authors have used DF and THD as if they were synonyms, but they are not. The distinction drawn between the two terms here follows what was once shown at the following URL:

https://www.p3-inc.com/blog/entry/understanding-total-harmonic-distortion-thd-in-power-systems

(Sadly, this URL was no longer functioning when I last looked for it.)

Figure 1 Power factor, displacement factor and DF equations.

In examining power factor, if we have zero distortion so that voltage and current are both pure sinusoids, the cosine of the angular difference between the voltage and current waveforms becomes the power factor all by itself. However, when we do have waveform distortion and harmonics are involved, we must take DF into account as well.

We will now look at how the above equation changes as DF arises.

First, we reiterate the definition of THD and then we do one more step as shown in Figure 2.

Figure 2 Defining THD as per IEC 61000-2-2 and its square to easily define DF in Figure 3.

That step of showing THD² will be important in just another moment as we address DF (Figure 3).

Figure 3 Deriving DF by dividing the fundamental by the all-inclusive power and taking the square root of it, leading to the definition of THD.

Power delivered to a load is proportional to the fundamental-frequency term I1², plus the second-harmonic term I2², plus the third-harmonic term I3², and so on for as many harmonics as there are in the driving waveform.

As seen in the first line of Figure 3, we take the ratio of the power delivered by the fundamental to the all-inclusive power and take the square root of that ratio; that leads to the DF. A little algebraic manipulation then takes us to the equation for DF as a function of THD.
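
Since the equations themselves appear only in the figures, here are the relationships described above written out for reference. This is the standard form, using the article's current-harmonic framing, where I1 is the RMS fundamental current and In is the RMS nth-harmonic current:

\[
\mathrm{THD}=\frac{\sqrt{\sum_{n=2}^{\infty} I_n^{2}}}{I_1},
\qquad
\mathrm{DF}=\sqrt{\frac{I_1^{2}}{\sum_{n=1}^{\infty} I_n^{2}}}
           =\frac{1}{\sqrt{1+\mathrm{THD}^{2}}},
\qquad
\mathrm{PF}=\mathrm{DF}\cdot\cos\theta_1
\]

where θ1 is the phase angle between the fundamental voltage and the fundamental current, i.e., the displacement factor term from Figure 1.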

Easy, wasn’t it? If not, that’s okay; this makes my head spin too.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related Content


The post Distortion Factor as a function of total harmonic distortion appeared first on EDN.

Figuring out a dimmable filament LED light bulb

Tue, 12/19/2023 - 20:23

As also noted previously, I’ve done a lot of LED light bulb teardowns over the years, not even counting LED illumination sources that aren’t bulb-shaped, like touch-activated and motion-sensing panels.

They’re consistently popular with the readers, which is admittedly part of my (and EDN’s) motivation to continue doing them. But I personally also find them fascinating; inevitably I come across at least a thing-or-few that surprise me and/or I learn something from each time.

Today’s teardown “victim” is a two-fer: it’s (traditionally) dimmable, and it’s got a historically atypical but increasingly common multi-LED structure inside it. I alluded to the latter attribute within that earlier three-way bulb dissection project:

…see this comparative image of a conventional (albeit dimmable) LED light bulb also in my teardown pile:

(By the way, notice the hint of what looks like a filament structure inside this dimmable LED bulb. You’re going to have to wait for a future teardown to find out more about that!)

That time is now. But I’m getting ahead of myself; let’s first revisit that earlier “(traditionally) dimmable” comment. Why exactly is it that dimmer switch-compatible LED light bulbs have historically been (and to some degree remain):

  • User complaint-rife
  • Rare, and
  • Notably pricier

than their non-dimmable counterparts?

Here’s a good summary of the situation:

Most dimmers installed today are designed to be used with high-power circuits to drive traditional filament lamps, which were all quite uniform and dimmable by just a voltage change. LED lamps, on the other hand, are low-power and more complex. An LED bulb is a solid-state product that has built-in circuitry (called a driver) that takes high-voltage AC input and converts it to low-voltage DC to drive the LEDs. Furthermore, driver specifications are not uniform across the LED industry.

There are many different types of dimmers installed in homes and offices, of various specifications (e.g., resistive, leading-edge, trailing-edge, and electronic). So when using new LED lamps with existing dimmers, there is a matching of old technology with new, which can be challenging.

The driver in dimmable LED lamps may work with many types of dimmer, but not all; for instance, LED lamps tend to work better with trailing-edge dimmers than with leading-edge dimmers. An existing dimmer may also have a minimum load that is too high for an LED lamp: a 60W filament lamp may use a dimmer that has a minimum load of 25W, while the replacement LED has a power rating of 6.5W, below the level required by the dimmer. Dedicated LED dimmers have a very low minimum power rating.

The dimming experience can be different with LED. Overall, LED dimming performance is governed by the capability of the LED driver/chip and the compatibility of the dimming circuit. Since there are a huge number of possible combinations of lamps and dimmers, it is very difficult to produce an LED lamp that works in all dimming environments.

LEDs currently have a lower dimming range than a filament lamp: they dim down to about 10% of total light output, whereas filaments may go down to 1-2%. Low-voltage transformers, as used with MR16 12V spotlights, also add to the complexity.

Some of the issues that may occur when a dimmer is incompatible with an LED lamp are:

  • Flickering – Lamps will flicker (can also occur if a non-dimmable lamp is used)
  • Drop-out – No light output at the end of the scale
  • Dead travel – When the dimmer is adjusted there is no matching change in light output (light may not dim to acceptable level)
  • Not smooth – Light output may not go from dim to bright linearly
  • Multiple lamps – issues may become apparent when multiple lamps are added
  • Damage or failure – LED driver, circuit or LED is damaged or fails.
  • Load below minimum – The power load of the LED lamp is below the minimum required by the dimmer
  • Mixed models – Different models of LED will likely have different drivers; since drivers behave differently, this could result in dimming issues.

I’ve personally experienced several of these issues with the arrays of dimmable BR40 LED light bulbs in my residence’s hallways and rooms, which replaced incandescent predecessors. On that note, however, also notice the words “traditional filament lamps” in the previous website article excerpt. Hold that thought.

One more clarification, regarding “(traditionally)”, before proceeding. If you look back at my prior LED light bulb teardowns, you’ll find several “smart” bulbs documented as being dimmable (as well as capable of changing their color temperature and broader color output, generating various strobe patterns, varying their behavior at various times of day, and the like). This isn’t one of them. Those earlier bulbs, as their “smart” name implies, integrate networked intelligence that handles not only AC-to-DC conversion but also dimming and other functions; they are powered by a consistent AC voltage input and are controlled by a smartphone app, an Amazon Echo, or the like. The “dumb” bulb we’re showcasing today is conversely intended to act just like its incandescent precursor, dependent on the varying voltage coming to it from premises power, in combination with an in-between dimmer switch, to determine the brightness (lumens) it outputs.

Let’s dive in, as-usual starting with some outer box shots. Today’s victim comes from an A21 form factor four-bulb package, with “soft white” (2700K, to be exact) color temperature and 100W incandescent-equivalent (15W actual) brightness (1600 lumens, to be precise). I bought ‘em from Walmart (“Great Value” is the store brand) back in early February on sale for $1.97, believe it or not (that said, they’re $15.97-for-four as I type these words in early November).

Front view first; particularly note the “Frosted Glass” mention. I didn’t. Again, hold that thought:

Left side:

Back (it came to me pre-dented, but the contents were thankfully still intact):

Now for the right side. In that earlier box-front shot you might have also noticed the three asterisks next to “Dimmable”. Per the earlier discussion in this writeup, here’s the right-side verbiage they reference:

May not be compatible with all dimmers. Dimming compatibility available at www.walmart.com.

Top:

and bottom:

Here are some “stock” images of a standalone bulb:

and its industry-standard E26 base:

Speaking of stock images, look how happy these two are with their new light bulb! (I digress):

And here’s our victim in real life, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

Top:

Various views of the base from the side, to show the various markings stamped in it:

And another of the bottom end, reminiscent of the earlier one (which I admittedly didn’t look at until after I took this one, therefore the similarity):

Now to get inside…and now for my confession regarding the earlier “frosted glass” comment. I hadn’t, as previously mentioned, noticed that particular portion of the packaging’s front-panel notation in advance. And I’m pretty sure that every other LED light bulb I’ve taken apart to date has had a plastic globe. So, although I still distinctly remember having thought something along the lines of “gee, this sure feels like a legacy glass-globe incandescent bulb” when taking it out of the box, I banished the thought and proceeded forward, operating on my must-be-plastic presumption. Insufficient questioning of assumptions strikes again…

First, I tried clamping down on, and then twisting, the bulb base with a pair of pliers while holding onto the globe with my bare hands. Are you cringing yet? I sure am, thinking about it in retrospect. Thankfully, that attempt was unsuccessful (or, depending on your perspective, successful). As was my next “brilliant” idea, to take a hacksaw to the junction between globe and base. My guardian angel (unlike my brain) was obviously working overtime that day.

Third attempt: expose the globe to my heat gun on its highest temperature setting. The globe never began softening and melting as plastic ones previously had, which I thought was strange at the time. Instead, it eventually exploded with a loud “pop”, partially shattering into shards all over my kitchen. Thank goodness I was wearing eye-protecting glasses that day:

Putting the remainder in a thick plastic bag and tapping on it with a ball peen hammer completed the glass (yes, Brian, glass, not plastic) globe-removal task:

What do we have here? I’ve provided multiple side-view overviews to give you a fuller picture:

In contrast, here’s a multi-LED array picture taken from my very first LED light bulb teardown of a 60W (dimmable as well, as it turns out; I didn’t realize its distinctiveness at the time) device back in September 2016:

What we have here today is a set of six LED filaments, together comprising the illumination nexus of this light bulb. From the as-usual excellent Wikipedia summary:

A LED filament light bulb is a LED lamp which is designed to resemble a traditional incandescent light bulb with visible filaments for aesthetic and light distribution purposes, but with the high efficiency of light-emitting diodes (LEDs). It produces its light using LED filaments, which are series-connected strings of diodes that resemble in appearance the filaments of incandescent light bulbs. They are direct replacements for conventional clear (or frosted) incandescent bulbs, as they are made with the same envelope shapes, the same bases that fit the same sockets, and work at the same supply voltage. They may be used for their appearance, similar when lit to a clear incandescent bulb, or for their wide angle of light distribution, typically 300°. They are also more efficient than many other LED lamps.

Here’s more, complete with Wikipedia-sourced pictures:

The LED filament consists of multiple series-connected LEDs on a transparent substrate, referred to as chip-on-glass (COG). These transparent substrates are made of glass or sapphire materials. This transparency allows the emitted light to disperse evenly and uniformly without any interference. An even coating of yellow phosphor in a silicone resin binder material converts the blue light generated by the LEDs into light approximating white light of the desired colour temperature—typically 2700 K to match the warm white of an incandescent bulb.

Structure of a typical filament.

Closeup of a filament at 5% power, showing the individual LED light spots.

And now, some photos of my own, taken of the assemblage from various perspectives:

The various LED filaments’ conductors split off at the base and travel vertically in parallel, rejoining and “completing the circuit” via the transparent “tube” at the center.

So, here’s what baffles me, although I have some theories. Why on earth would Walmart and its bulb supplier go with a fairly exotic LED filament approach for bulbs that sold for only $0.50 each? I’d get it if the globes were clear, so that a customer (like those two happy folks you saw earlier) could fully enjoy the incandescent-reminiscent cosmetics:

But why with a bulb whose illumination source is obscured by a frosted globe?

Part of the reason, I suspect, is the aforementioned near-360° coverage of the approach versus a 180°-or-less spread of a traditional bulb with an array of LEDs spread out in only two horizontal dimensions (although again, the diffusion aspects of a near-360° frosted globe above the array would seemingly minimize any inherent LED filament advantage).

The other, bigger, reason, I suspect, is two-fold and related. Although the LED filament structure itself may be more expensive to manufacture (albeit less so over time, thanks to high-volume manufacturing efficiencies), the complexity, and therefore cost, not to mention size, of the circuitry driving those LED filaments can be lower, as a total-cost counterbalance. I suspect, too, that this circuitry simplification also makes LED filament-based bulbs inherently more dimmer-friendly.

Here’s a pictorial representation of what I’m talking about:

And here’s more from Wikipedia, further bolstering my hypothesis:

A benefit of the filament design is potentially higher efficiency due to the use of more LED emitters with lower driving currents…The power supply in a clear bulb must be very small to fit into the base of the lamp. The large number of LEDs (typically 28 per filament) simplifies the power supply compared to other LED lamps, as the voltage per blue LED is between 2.48 and 3.7 volts DC. Some types may additionally use red LEDs (1.63 V to 2.03 V). Two filaments with a mix of red and blue is thus close to 110 V, and four are close to 220–240 V, compared to the mains AC voltage reduction to between 3 V and 12 V needed for other LED lamps.

Then there’s this, explaining (among other things) the eventual explosion after my bulb’s glass (did I mention that?) globe’s lengthy exposure to high heat:

The lifespan of LED emitters is reduced by high operating temperatures. LED filament bulbs have many smaller, lower-power LED chips than other types, avoiding the need for a heatsink, but they must still pay attention to thermal management; multiple heat-dissipation paths are needed for reliable operation. The lamp may contain a high-thermal-conductivity gas (helium) blend to better conduct heat from the LED filament to the glass bulb. The LED filaments can be arranged to optimize heat dissipation. The life expectancy of the LED chips correlates to the junction temperature (Tj); light output falls faster with time at higher junction temperatures. Achieving a 30,000 hour life expectancy while maintaining 90% luminous flux requires the junction temperature to be maintained below 85 °C. Also worth noting is that LED filaments can burn out quickly if the controlled gas fill is ever lost for any reason.

So, there you have it. The base of this bulb, like that of the charging base for the rechargeable electric toothbrush in a recent teardown, is “potted”, as you can probably tell from some of the photos I took, so I’m not going to bother trying to pry it open (the lingering shards of sharp-edge glass are admittedly also a deterrent). But as the earlier conceptual diagram suggests, the circuitry inside it is likely pretty elementary. I hope you’ve found this teardown exercise “illuminating” (hardy har har) and I welcome your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Figuring out a dimmable filament LED light bulb appeared first on EDN.

Proper function linearizes a hot transistor anemometer with less than 0.2 % error

Mon, 12/18/2023 - 16:57

A recent Design Idea presents a circuit to measure an airflow rate up to 2000 fpm using two transistors in a Darlington configuration. One transistor works as a self-heated thermal sensor and the other one compensates for ambient temperature variations. The circuit is smart and simple; however, the output voltage depends on the input flow rate in a very nonlinear fashion. The paper presents two hardware options to linearize the sensor response, which reduce nonlinearity to about 10-12% of the maximum flow rate. 

Wow the engineering world with your unique design: Design Ideas Submission Guide

Modern microcontrollers offer significant calculation power, sometimes at a very low price, so it is worth trying to find a calculation solution to the nonlinearity problem.

Before we start, let’s recall the principle of linearization: the circuit or the calculation formula that will process the output signal of the sensor circuit must generate the inverse function of the sensor response. For example, if the sensor response is a log function, the response of the linearizing section must be exponential.

The work started with getting 46 discrete points of the sensor response (see Figure 4 in the reference paper). The discretization step is small at the beginning, where the curve rises fast, and gets bigger as the curve flattens. Attempts to fit the flow rate-versus-voltage response with a piecewise approximation or cubic splines can reduce linearity error to 1-2%, at the cost of bulky formulas. It would be much better if the whole curve were covered by a single smooth function.

Several functions of different complexity were tested. The best results were achieved with a composite function of the form:

where N is the number to be generated by the microcontroller and Vs is the output voltage of the sensor circuit. The presence of four coefficients, A to D, provides a lot of flexibility to fit the desired set of points.

The Solver tool of MS Excel found the proper values of the unknown coefficients:

A = 10525.4, B = -4.49563, C = 9103.05 and D = -1.36567.

As Figure 1 shows, passing the sensor voltage through this function provides a highly linear relation between the number N to be displayed and the flow rate. Figure 2 presents the deviation between the discrete points of the response and the best fit linear equation. The error is within the ±2.5 fpm range, which is 0.125 % of the maximum flow rate. This is 80 times better than the hardware solutions in the reference paper. An important feature is that the error will affect only the last digit in the displayed number.

Figure 1 The calculation approach provides a highly linear, 1:1 relation between the displayed number and the airflow rate.

 Figure 2 A closer look reveals a very small nonlinearity in the overall response.

In real applications, the error may not be that small due to errors in the A-to-D conversion, the limited size of numbers, and rounding errors during the calculations; however, it will still be much better than the hardware solutions.

If the proposed function looks too complicated to you, feel free to try any other function you may wish. A good tutorial on how to use the Solver tool is available here. Uncheck the “Make Unconstrained Variables Non-negative” box, so the unknown coefficients can get negative values.
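
As an alternative to Excel’s Solver, the same four-coefficient least-squares fit can be prototyped in a few lines of Python with SciPy. The sketch below is only illustrative: model_fn is a hypothetical stand-in (the actual composite function and its A-D values are the ones given in this article), and the data here is synthetic demonstration data, not the 46 measured points.

import numpy as np
from scipy.optimize import curve_fit

# Placeholder for the article's composite function N = f(Vs; A, B, C, D).
# This two-exponential form is only a stand-in; substitute the real expression.
def model_fn(vs, A, B, C, D):
    return A * np.exp(B * vs) + C * np.exp(D * vs)

# Synthetic demonstration data standing in for the measured (Vs, flow) points.
rng = np.random.default_rng(0)
vs_data = np.linspace(1.0, 2.5, 46)              # sensor output voltage, V
true_coeffs = (900.0, 1.2, -850.0, 0.9)          # arbitrary demo values
flow_data = model_fn(vs_data, *true_coeffs) + rng.normal(0, 2.0, vs_data.size)

# Least-squares fit: the Python counterpart of the Excel Solver step.
coeffs, _ = curve_fit(model_fn, vs_data, flow_data, p0=(1000, 1.1, -900, 0.8))
residuals = model_fn(vs_data, *coeffs) - flow_data
print("Fitted A, B, C, D:", np.round(coeffs, 3))
print("Worst-case residual: %.2f" % np.max(np.abs(residuals)))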

Jordan Dimitrov is an electrical engineer & PhD with 30 years of experience. Currently he teaches electrical and electronics courses at a Toronto community college.

 Related Content


The post Proper function linearizes a hot transistor anemometer with less than 0.2 % error appeared first on EDN.

Analog of a thyristor with a controlled switching threshold

Mon, 12/18/2023 - 16:27

Thyristors are semiconductor devices with an S-shaped current-voltage characteristic. They are, in effect, controlled switches with state memory that turn on when a control signal is applied to the thyristor’s input. When operating on DC, the thyristor can be turned off by removing the supply voltage. A big and unavoidable disadvantage of thyristors is that they have a low input impedance and, most importantly, no way to adjust the switching threshold.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 shows a simple thyristor-analog circuit with high input resistance and an adjustable switching voltage. Both devices described here use comparator U1.1 of an LM339 chip as the active component.

Figure 1 A comparator analog of a thyristor with an adjustable switching threshold.

It works as follows. A control signal is applied to the input of the device (the inverting input of the comparator); its voltage exceeds the comparator’s switching threshold, which is set by potentiometer R2 over the range of 0 to 4 V. If the voltage at the comparator output is close to the device’s supply voltage (a logic-high level) before the input signal is applied, then after switching, the comparator output drops to zero and transistor Q1 (a 2N7000) turns off. The voltage on its drain switches from zero to the supply voltage. This voltage is applied through resistor R7 to the comparator input and latches its state. You can return the thyristor analog to its original state by briefly disconnecting the supply voltage with the S1 button.

Figure 2 shows a simplified version of the comparator analog of the thyristor. The pulse of the input control signal is fed through Schottky diode D1 (a 1N5817) to the non-inverting input of comparator U1.1. In the initial state, before the control signal is applied, the voltage at the comparator output is zero. If the input voltage exceeds the comparator’s switching threshold, set by potentiometer R2 over the range of 0 to 1 V, the comparator switches. A high-level voltage appears at its output, which is fed back to the comparator input through resistor R5 and latches its state.

Figure 2 A variant of the comparator analog of the thyristor.

You can return the device to its original off state by pressing the S1 button.

Michael A. Shustov is a doctor of technical sciences, candidate of chemical sciences and the author of over 800 printed works in the field of electronics, chemistry, physics, geology, medicine, and history.

Related Content


The post Analog of a thyristor with a controlled switching threshold appeared first on EDN.

Application of machine learning in power management systems

Mon, 12/18/2023 - 13:39

Data processing these days is exhibiting a split personality. ‘Cloud’ computing grabs the headlines for sheer scale and computing power, while ‘edge’ computing puts the processing at the ‘coal face’ where electronics interfaces with the real world. In the cloud, data is stored in vast quantities and processing is queued and scheduled, while at the edge, processing is targeted and immediate.

This enables rapid response to local commands and feedback from the application, while keeping the process more secure with reduced data flows. The two areas interact of course, with data passed back to the cloud for consolidation and analysis across devices or locations, with global commands and firmware updates passing the other way.

Both processing environments benefit from the latest developments in artificial intelligence (AI) and machine learning (ML). In data centers, for example, thousands of servers incorporating tens of thousands of processors, mainly GPUs, perform massively parallel computing to generate and operate large language models (LLMs) such as ChatGPT. By some measures, these platforms now perform better than humans.

At the edge, processing reacts to feedback sensors and commands according to an operating algorithm; with machine learning, however, algorithms can now also learn from that feedback. This can then instigate changes to the algorithm and its calculation coefficients to make the controlled process more accurate, efficient, and safe.

Energy use difference between cloud and edge

A major practical difference exists between cloud and edge computing when it comes to the scale of energy used. In both cases, consumption must be minimized, but in a data center, it is huge, estimated by the International Energy Agency (IEA) at 240-340 TWh or 1% to 1.3% of global demand. Artificial intelligence and machine learning will only accelerate energy consumption, with the IEA predicting an increase of 20-40% in the coming years, compared with historical figures of around 3%.

Unlike on-demand data processing such as gaming and video streaming, AI has learning and inference phases. Learning uses datasets to train the model, and ChatGPT reportedly consumed over 1.2 TWh to do this. On the other hand, inference, or the operational phase of the LLM, might amount to 564 MWh per day, according to de Vries.

At the other end of the scale, edge computing incorporated in an Internet of Things (IoT) node or a wearable device might have to consume no more than milliwatts. Even industrial and electric vehicle (EV) applications, such as motor control and battery management, have a small budget for losses in the control circuitry and cannot afford large percentage increases to accommodate AI and machine learning.

Consequently, Tiny Machine Learning or tinyML has been developed as a field of applications and technologies to implement on-device sensor data analytics, optimized for extremely low power consumption.

TinyML and power management

Applying machine learning to an application such as battery management using tinyML techniques is a multi-dimensional problem, with goals to add charge as rapidly, safely, and efficiently as possible, while controlling discharge with minimum stress. Management also monitors battery health and might actively balance cells to ensure that they age equally for maximum reliability and service lifetime.

Monitored parameters are individual cell voltages, current and temperature, and the management system is typically required to predict state of charge (SOC) and state of health (SOH). These are dynamic quantities that have a complex and changing relationship to the history of use of the battery and the measured parameters.

Despite the complexity of the task, an expensive GPU implementation of AI processing is not necessary. Modern microcontrollers such as the ARM Cortex M0 and M4 families are easily up to the task of machine learning in battery management, consume little power, and have been incorporated into system-on-chips (SoCs) dedicated to the application.

Battery management ICs are common, but when the management system is run by an MCU implementing machine learning, information and patterns in historical and current sensor data can be used to make better predictions about SOC and SOH, while ensuring high levels of safety. As with other ML applications, there is a learning phase from training data, and this data can be logged across different environmental conditions and across multiple batteries with their manufacturing tolerances. Where field data is not available, synthetic data from modelling could be used.

As is the essence of AI, the model can then be updated as field data accumulates for scaling up or down of the application or for use in other similar systems. Although learning is typically an exercise before the application goes live, it can then be a background task using sensor data, processed off-line either locally or through the cloud for continuous performance improvement. This is set up by automated ML (AutoML) tools in conjunction with evaluation kits for battery management SoCs.

Machine learning models

A wide choice of models is available for machine learning and for edge applications such as battery management. A simple classification decision tree might suffice, as it uses few resources, perhaps up to a few kilobytes of RAM. The approach results in a simple classification of a data collection set into ‘normal’ or ‘abnormal’, and an example is shown in Figure 1.

Figure 1 An example decision tree classifier shows Class 1 = normal and Class 0 = abnormal. Source: Qorvo

Here, two parameters are used to characterize a multi-cell battery during discharge: SOC for the strongest cell and voltage difference between the strongest and weakest cells. The blue and white cells represent normal data, and the classification areas are represented in blue (Class 0 = normal) and grey (Class 1 = abnormal).
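
As a concrete illustration of this kind of two-feature classifier, a small decision tree can be trained in a few lines with scikit-learn. This is only a minimal sketch, not Qorvo’s implementation: the feature values, labels, and tree depth below are invented for demonstration, and the 0/1 label convention is arbitrary to this sketch.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [SOC of strongest cell (%), strongest-weakest cell voltage delta (V)].
# The numbers are made up purely for illustration.
X = np.array([
    [90, 0.02], [75, 0.03], [60, 0.04], [45, 0.05], [30, 0.06],   # normal pack
    [85, 0.15], [70, 0.20], [55, 0.28], [40, 0.35], [25, 0.45],   # pack with a weak cell
])
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # 0 = normal, 1 = abnormal (sketch convention)

# A shallow tree keeps the model tiny (a few kB), suitable for a Cortex-M class MCU.
clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(clf, feature_names=["soc_strongest", "v_delta"]))

# Classify a new sample taken partway through a discharge cycle.
print(clf.predict([[65, 0.22]]))   # expected: abnormal (1)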

To evaluate continuous values of output data, rather than just categories, a more complex regression decision tree could be used. Other common ML models include support vector machines (SVMs), kernel approximation classifiers, nearest-neighbor classifiers, naïve Bayes classifiers, logistic regression, and isolation forests. Neural network modelling can also be included in AutoML tools, where improved performance is achieved at the expense of complexity.

The whole process of developing an ML application is collectively called ML operations, or ‘MLOps’, and includes data collection and curation, model training, analysis, deployment, and monitoring. The process is shown graphically in Figure 2 for a battery management application using the PAC25140 chip, which can monitor, control, and balance up to 20 cells in a string using Li-ion, Li-polymer, or LiFePO4 chemistries.

Figure 2 The above design example highlights the tinyML development flow. Source: Qorvo

Case study: Weak battery cell detection

Part of SOH monitoring for a battery is detection of degraded cells. These cells might be characterized by an abnormally low cell voltage under load. However, the voltage is also affected by actual discharge current, state of charge and temperature as shown in Figure 3, which highlights example curves for strong and weak cells at different temperatures and load currents.

Figure 3 Cell discharge curves are shown for both strong and weak cells. Source: Qorvo

Figure 3 shows that the significant difference between strong and weak cell voltages occurs when the cells are nearly depleted. Detection of the weak cell at this point may be too late to avoid overheating and safety issues, so a solution is to implement ML to look for patterns in the data earlier in the discharge cycle.

The effectiveness of the ML approach was highlighted in an experiment performed by Qorvo where a weak cell was inserted in a 10-cell battery pack and compared with a good pack. Training data was generated for both types of cells as they were discharged at different constant current rates and temperatures. Monitored parameters were series current, temperature, difference between strongest and weakest cell voltage, and SOC for the strongest cell.

The parameters were sampled simultaneously every 10 seconds over 20 discharge cycles and analyzed using different models as listed in Table 1. Results were compared with independent test data over 20 discharge cycles, showing close agreement between the two methods, which would improve further with more training samples.

Table 1 Example results are extracted from training and test data for different ML models. Source: Qorvo
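
The kind of train-versus-test comparison summarized in Table 1 can be prototyped offline before committing to an on-device model. The sketch below is only an illustration of that workflow under stated assumptions: the synthetic features stand in for the logged current, temperature, cell-voltage delta, and SOC samples, and the model list mirrors the common options named earlier rather than Qorvo’s exact configurations.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the logged samples: [current (A), temperature (C),
# strongest-weakest voltage delta (V), SOC of strongest cell (%)].
rng = np.random.default_rng(1)
n = 400
normal = np.column_stack([rng.uniform(1, 5, n), rng.uniform(0, 45, n),
                          rng.uniform(0.0, 0.08, n), rng.uniform(10, 100, n)])
weak = np.column_stack([rng.uniform(1, 5, n), rng.uniform(0, 45, n),
                        rng.uniform(0.10, 0.5, n), rng.uniform(10, 100, n)])
X = np.vstack([normal, weak])
y = np.array([0] * n + [1] * n)   # 0 = good pack, 1 = pack containing a weak cell

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(max_depth=3),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "nearest neighbor": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "naive Bayes": GaussianNB(),
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression()),
}
for name, model in models.items():
    acc = model.fit(X_train, y_train).score(X_test, y_test)
    print(f"{name:20s} test accuracy: {acc:.3f}")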

SoCs suffice for ML support

While current AI headlines focus on large-scale, high-power applications, its implementation ‘at the edge’ using MCUs and tinyML techniques for targeted applications such as battery monitoring can be part of a high-performance but low-power solution. Here, SoC solutions have all the processing power needed and can incorporate a wide variety of machine learning algorithms.

All necessary sensor and communications interfaces are built-in, and SoCs are additionally supported by a rich ecosystem of evaluation and design tools.

John Carpenter is a product development manager at Qorvo.

Related Content


The post Application of machine learning in power management systems appeared first on EDN.

How to control a qubit? With a quantum error correction stack

Fri, 12/15/2023 - 14:49

There are many challenges in building a useful quantum computer. The core issue is that the quantum bits (qubits) that run the quantum calculations are incredibly unreliable, breaking down when they encounter the slightest environmental noise, causing errors.

It’s a complex, multi-disciplinary challenge and one that’s vital to tackle if we ever want to scale quantum computers to the point where they can do something useful for society. It’s also a fascinating problem to work on; it may surprise you to know that we rely on many classical engineering skills to build our quantum control systems.

Classical hardware design verification is one of the tools to ensure that we build the system right. Design verification is where you prove or test that the system meets its specifications. In other words, given the input, you get the output you expected.

Needless to say, without design verification, we cannot ensure that we’re controlling the qubits in the right way. It’s a necessary tool to ensure that we are building the right quantum error correction stack for hardware companies. It’s the tool that will ensure that we are engineering tomorrow’s fault-tolerant systems.

In a paper published at this year’s DVCon Europe in Munich, and made available on arXiv, Riverlane explains how classical device verification techniques are used to verify the control system: Deltaflow.Control.

The quantum error correction stack will verify the control system in quantum computers. Source: Riverlane

Riverlane is building a quantum error correction stack to help correct qubit errors. The effort encompasses building a scalable control and calibration system to reduce errors and create reliable qubits.

Next-generation control system

Going back to other industries and to previous endeavors, many of the components used in such control systems have been built before. These include:

  • Radio frequency (RF) signal generation (currently used across 5G networks)
  • Distributed computing (used for large-scale networks such as the Internet)
  • Real-time systems (a vital component in industrial control, autonomous vehicles, and aerospace/defense applications)

But what we have never done is to build a system where all these components must work together at the same time and place. This is exactly the challenge that we face when building a quantum control system.

The quantum error correction stack requires an entirely new system architecture to be designed and built—one that is scalable as qubit numbers increase. That’s a huge undertaking, and the new arXiv paper focuses on the classical hardware verification methodologies that we need to verify the Deltaflow.Control system as we move from our current Control System (called DC1, which is capable of controlling tens of qubits) to the next generation system DC2.

The paper describes how the new DC2 system balances tight power, memory, and latency constraints to create control signals that enable high-precision manipulation of the amplitude, frequency and phase of the waveforms. The more accurately we can manipulate these parameters, the better we can control the quantum state.

DC2 is a system architecture for a distributed system that offers compute at different levels of the quantum error correction stack. It enables developers to integrate their systems at the appropriate levels. Moreover, DC2 is portable across different quantum hardware types.

When it comes to verifying DC2, we use all the classical verification techniques that we have in our armoury: universal verification methodology (UVM), SystemC modeling environments, golden model-based verification, formal verification, and in-lab testing.

In the paper, we also describe how we use more modern “shift left” agile software approaches, such as continuous integration, to do “full stack” testing.

Samin Ishtiaq is head of software at Riverlane, a quantum computing company based in Cambridge, UK.

Related Content


The post How to control a qubit? With a quantum error correction stack appeared first on EDN.
