EDN Network

Voice of the Engineer

Updating an unsanctioned PC to Windows 11

Mon, 06/24/2024 - 15:37

Speaking of Windows 11

Back in September 2021, I wrote about how, although Microsoft was offering free upgrades to its latest Windows 11 operating system, the new O/S's (seemingly overly) strict system requirements were effectively obsoleting perfectly good hardware (not a Windows-only phenomenon, mind you). This particular instance of obsolescence by design affected my Surface Pro 5 (SP5) tablet/laptop hybrid, which (at the time) had been unveiled only four years earlier:

More recently, last November I covered what I’d be replacing both my primary and spare SP5 with: a pair of Surface Pro 7+ (SP7+) systems:

along with their in-advance acquired successors, two Surface Pro 8s (SP8s):

both generations of which are Windows 11-compatible.

After getting the primary SP7+ fully up and running, including installing the full application software suite I needed, I donated the spare SP5 first, at the beginning of this year. A bit more than a month later, the primary SP5 followed it to my local charity; I decided I had no further need of it, as the SP7+ was working fine (and I had a just-in-case spare for it, too), and I figured I’d gift someone else the maximum amount of usable time with it until Windows 10 times out next October. But before I did, to satisfy my curiosity (and because I’d be doing a full factory reset pre-donation anyway), I decided to see what’d happen if I tried updating it to Windows 11 in spite of Microsoft’s warnings that I couldn’t…or more accurately in this case, shouldn’t.

Let me explain. To install Windows 11 "fresh" on a Microsoft-claimed Windows 11-incompatible computer you've either built from scratch or previously "wiped", you need to jump through some hoops: first obtaining an O/S installation ISO either from Microsoft's Media Creation Tool or the third-party site UUP Dump, then using another unsanctioned tool, Rufus, to modify that ISO—bypassing the checks for specific CPU suppliers, families and models, the presence of TPM (trusted platform module) v2.x-generation support, Secure Boot capabilities and the like—and burn it to a USB flash drive. But in my case, since I already had Windows 10 installed on the system (a system that was already Secure Boot-capable, an important nuance), all I had to do was add a Registry entry documented by Microsoft, believe it or not:

noting the all-important warning:

Microsoft recommends against installing Windows 11 on a device that does not meet the Windows 11 minimum system requirements. If you choose to install Windows 11 on a device that does not meet these requirements, and you acknowledge and understand the risks, you can create the following registry key values and bypass the check for TPM 2.0 (at least TPM 1.2 is required) and the CPU family and model.

and then I could upgrade to Windows 11 from Windows Update via the Installation Assistant.
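For the record, the bypass Microsoft documents is widely reported to be a single DWORD value under HKEY_LOCAL_MACHINE\SYSTEM\Setup\MoSetup. Here's a minimal Python sketch (Windows-only, standard-library winreg, run from an elevated prompt) that writes it; treat the key and value names below as assumptions and verify them against Microsoft's current guidance before trying this yourself:

```python
# Hedged sketch, Windows-only: writes the upgrade-bypass value widely reported
# as the Microsoft-documented Registry entry. Run from an elevated
# (administrator) Python prompt, and confirm the key/value names against
# Microsoft's current documentation first.
import winreg

KEY_PATH = r"SYSTEM\Setup\MoSetup"                    # assumed location
VALUE_NAME = "AllowUpgradesWithUnsupportedTPMOrCPU"   # assumed value name

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 1)

print("Bypass flag written; re-run the Windows 11 upgrade check.")
```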

Before plunging in, I ran WinSAT (Microsoft’s Windows System Assessment Tool) under Windows 10 to see how the system benchmarked. Here’s what I got:

CPUScore | D3DScore | DiskScore | GraphicScore | MemoryScore
8.7 | 9.9 | 8.55 | 6.7 | 8.7
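Incidentally, if you'd rather pull those numbers programmatically than read them off winsat's console output, the cached scores live in the Win32_WinSAT WMI class. Here's a small Windows-only Python sketch (note that WMI spells the graphics entry "GraphicsScore"):

```python
# Windows-only sketch: read the cached WinSAT scores via the Win32_WinSAT WMI
# class instead of parsing "winsat formal" console output. Run "winsat formal"
# once (elevated) if the machine has never been assessed.
import json
import subprocess

raw = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-CimInstance Win32_WinSAT | ConvertTo-Json"],
    capture_output=True, text=True, check=True).stdout
scores = json.loads(raw)

for key in ("CPUScore", "D3DScore", "DiskScore", "GraphicsScore", "MemoryScore"):
    print(f"{key}: {scores.get(key)}")
```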

Prior to the Registry hack, here’s what I’d seen when attempting to update to Windows 11:

One note on the above: you’ll see that the only disqualifier noted this time was a too-old CPU. Back in September 2021, I’d also mentioned that its TPM generation was too immature. I’m guessing that between then and now, its software-based TPM had been updated by a firmware upgrade, although I very well could have just been mistaken roughly three years back. Anyhoo, post-Registry hack, and after being subjected to one more warning:

The upgrade process began:

When it was done, I was staring at a stable Windows 11 desktop, one even absent the “nag” watermark (I assume this was because TPM 2.x and Secure Boot support existed and only the CPU generation was officially too geriatric):

and judging from my admittedly only modest testing results, both the upgraded O/S and all the previously installed apps on top of it still ran just fine (uniquely identifying info greyed out):

The WinSAT results were even identical to what I’d seen before:

CPUScore

D3DScore

DiskScore

GraphicScore

MemoryScore

8.7

9.9

8.55

6.7

8.7

Last, but not least, I reverted it back to Windows 10:

then factory-reset it before donating it.

So, am I saying you all should go ahead, ignore Microsoft’s warning regarding minimum system requirements, and put Windows 11 on your computers? Not exactly…not even close, actually. It’s always been a bit of a mystery how and why Microsoft came up with the dividing line between what CPU suppliers, families, and models were deemed Windows 11-worthy and not. The oldest Intel CPU family fully supported by Windows 11 is the 8000-series “Kaby Lake Refresh” generation; 7th-generation “Kaby Lake” chips like the one in my Surface Pro 5 (aka Surface Pro 2017) are generally not on the supported list albeit with a few exceptions, including one (the Core i7-7820HQ) which seems to only be included for self-serving Microsoft reasons.

More generally, less than a week after I did my "unsanctioned upgrade" experiment, new test builds of Windows 11 started explicitly blocking install or upgrade attempts (even if the Registry hack was in place) if the system's CPU didn't support the arcane POPCNT instruction…and importantly, if other legacy workarounds were used to get around the block, the system subsequently refused to boot. More recently, the explicitly blocked CPU list expanded further, to encompass any processor that didn't support the full SSE4.2 instruction set. I'm guessing this has something to do with Windows 11's burgeoning AI support, which in the absence of a dedicated-function deep learning acceleration chip or CPU-integrated core presumably can alternatively be passably implemented via SSE (Streaming SIMD Extensions):

Streaming SIMD Extensions (SSE) is a single instruction, multiple data (SIMD) instruction set extension to the x86 architecture, designed by Intel and introduced in 1999 in their Pentium III series of central processing units (CPUs) shortly after the appearance of Advanced Micro Devices (AMD’s) 3DNow!. SSE contains 70 new instructions (65 unique mnemonics using 70 encodings), most of which work on single precision floating-point data. SIMD instructions can greatly increase performance when exactly the same operations are to be performed on multiple data objects. Typical applications are digital signal processing and graphics processing [editor note: and deep learning training and inference, a key reason why dedicated DSPs and GPUs are now also used for these additional functions].

 Intel’s first IA-32 SIMD effort was the MMX instruction set. MMX had two main problems: it re-used existing x87 floating-point registers making the CPUs unable to work on both floating-point and SIMD data at the same time, and it only worked on integers. SSE floating-point instructions operate on a new independent register set, the XMM registers, and adds a few integer instructions that work on MMX registers. SSE was subsequently expanded by Intel to SSE2, SSE3, SSSE3 and SSE4. Because it supports floating-point math, it had wider applications than MMX and became more popular. The addition of integer support in SSE2 made MMX largely redundant, though further performance increases can be attained in some situations by using MMX in parallel with SSE operations.
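If you're curious whether a given machine clears these newer instruction-set gates, the flags are easy to check. Here's a minimal, Linux-only Python sketch that reads /proc/cpuinfo (flag names follow that file's conventions; a Windows check would need a CPUID helper instead):

```python
# Linux-only sketch: check whether the CPU advertises the instruction-set
# flags that newer Windows 11 builds reportedly gate on.
def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for needed in ("popcnt", "sse4_2"):
    print(f"{needed}: {'present' if needed in flags else 'MISSING'}")
```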

Granted, SSE4.2 has been widely supported in x86 CPUs for at least a decade. But this reality misses the big-picture point. The bottom line: any particular system whose specifications aren’t fully encompassed by Microsoft’s Windows 11 requirements documentation is fair game for abrupt no-boot cutoff at any point in the future. At minimum, you’ll end up with a “stuck” system, incapable of being further upgraded to newer Windows 11 releases, therefore doomed to fall off the support list at some point in the future. And if you try to hack around the block, you’ll end up with a system that may no longer reliably function, if it even boots at all. So no, don’t follow in my (experiment, to reiterate) footsteps. And I candidly don’t suggest you pay the Microsoft Windows 10 extended support extortion tax, either. But don’t just toss that legacy system in the trash. Wipe Windows and put Linux or ChromeOS Flex on it, instead.

Does this suck? Sure, especially for those of us long used to legacy hardware able to run newer Windows releases with, at most, only an upgrade license purchase. Except, that is (to reiterate the "purchase" point I first noted three years back), for Microsoft and its system and CPU (and other PC "building block") partners, who are likely salivating at the replacement-PC-acquisition uptick to come in a bit more than a year. But them's the breaks, I guess. Let me know your thoughts on this in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content



Automotive switch serves as smart fuse

Mon, 06/24/2024 - 15:01

An automotive high-side driver from ST, the VNF9Q20F combines the company’s VIPower M0-9 MOSFET with Sti2Fuse intelligent fuse protection. Aimed at automotive power distribution applications, the VNF9Q20F drives resistive or inductive loads directly connected to ground. The part also provides a serial peripheral interface (SPI) to communicate diagnostic data, further enhancing functional safety.

The Sti2Fuse, based on I²t (current squared × time-to-fuse) functionality, responds within 100 µs to turn off the MOSFET if excessive current is detected. Performing as an intelligent circuit breaker, the VNF9Q20F improves boardnet voltage stability and prevents PCB traces, connectors, and wire harnesses from overheating.
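As a rough illustration of the I²t principle (not ST's actual implementation, and with made-up numbers), an electronic fuse integrates current-squared over time and opens the switch once that integral exceeds the wiring's rating:

```python
# Conceptual I-squared-t electronic-fuse sketch (illustrative only, not ST's
# Sti2Fuse algorithm): integrate i^2 * dt and open the switch once the
# accumulated value exceeds the wiring's I^2t rating. All numbers are made up.
def i2t_trip_index(current_samples_a, dt_s, i2t_rating_a2s):
    """Return the sample index at which the switch should open, or None."""
    accumulated = 0.0
    for n, i in enumerate(current_samples_a):
        accumulated += i * i * dt_s           # A^2 * s
        if accumulated > i2t_rating_a2s:
            return n
    return None

# Example: an 80 A overload sampled every 10 us against a 0.5 A^2s rating
# trips at index 7, i.e., within ~80 us.
print(i2t_trip_index([80.0] * 1000, 10e-6, 0.5))
```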

The VNF9Q20F is outfitted with four outputs controlled via SPI or two OTP-assignable direct inputs. Real-time diagnostics, including open load, output short to VCC, overtemperature, communication error, power limitation, and latch-off, are available via the SPI bus.

Prices for the VNF9Q20F in 6×6-mm QFN packages start at $2.85 each in lots of 1000 pieces.

VNF9Q20F product page

STMicroelectronics 




MCU offers on-chip AI/ML acceleration

Mon, 06/24/2024 - 15:01

The Ensemble E1C microcontroller from Alif combines numerous digital and analog capabilities with an on-chip neural processing unit (NPU). Its Arm Ethos-U55 NPU delivers up to 46 GOPS, enabling the E1C to perform AI/ML tasks and application control functions at ultra-low power levels. The company anticipates that the E1C MCU will unlock new opportunities for machine learning in highly compact and power-efficient embedded systems and IoT devices.

Along with the NPU, the E1C packs a 160-MHz Arm Cortex-M55 CPU core with a Helium vector processing extension and up to 2 Mbytes of tightly coupled SRAM in its tiny 3.9×3.9-mm WLCSP. The MCU’s power management unit dynamically powers only the logic and associated memory that are in use at any given time to achieve the lowest overall system power consumption. It also implements four system-level power modes, including a stop mode that draws just 700 nA.

The E1C microcontroller offers two 12-bit SAR ADCs, a 24-bit sigma-delta ADC, a 12-bit DAC, and an internal reference voltage, allowing rapid signal processing from external sensors. Serial communication interfaces include USB 2.0, SDIO, two CAN FD, and I3C. In addition to the 90-bump WLCSP, Alif offers the E1C in 64-lead TQFP and 120-bump FBGA packages.

Ensemble E1C microcontrollers and DK-E1 development kits will be available to lead customers in August 2024, with production ramping in 4Q 2024.

Ensemble E1C product page

Alif Semiconductor 




SP4T RF switch is assembly-friendly

Mon, 06/24/2024 - 15:00

pSemi’s PE42548 SP4T RF switch allows faster assembly in test and measurement, 5G, mmWave, and SATCOM applications up to 30 GHz. Housed in a 20-lead, 3×3-mm LGA package, the switch provides an alternative to conventional flip-chip devices. According to the company, it offers contract manufacturers a streamlined, turnkey solution that simplifies the assembly process.

The PE42548 is a reflective switch that covers a wideband frequency range of 9 kHz to 30 GHz. Based on the company’s HaRP and UltraCMOS technologies, the switch offers low insertion loss of 2.0 dB at 26 GHz, a fast switching time of 60 ns, power handling capability of 33 dBm, and isolation performance of 41 dB. It also operates efficiently in extreme environments, with a temperature range of -40°C to +105°C.

Samples of the PE42548 SP4T RF switch are available now, with commercial availability anticipated in late 2024. To request a sample, contact sales@psemi.com.

PE42548 product page

pSemi




LTE Cat-1 bis modules enable global deployment

Mon, 06/24/2024 - 14:59

Swiss provider u-blox has added a global variant to its Lexi-R10 series of LTE Cat-1 bis modules to allow worldwide cellular connectivity. A second series of LTE Cat-1 bis modules, the Sara-R10 global series offers an easy migration path from legacy Sara 2G/3G designs and includes a combo variant with a GNSS receiver.

At 16×16×2 mm, the Lexi-R10 global is one of the smallest single-mode modules of its type. It can be used in space-constrained IoT applications, such as people or pet trackers and wearables, and features indoor location and a US MNO-certified core.

Similarly, the single-mode 16×26×2.2-mm Sara-R10 is also among the smallest, combining LTE Cat-1 bis and stand-alone GNSS. Capable of concurrent communication and tracking, this module is suitable for applications requiring continuous connectivity and indoor/outdoor positioning.

Both wireless communication modules cover multiple LTE frequency bands and operate over a temperature range of -40°C to +85°C. They also feature Wi-Fi scan capability and support the u-blox CellLocate positioning service.

For sales information and availability of the Lexi-R10 and Sara-R10 modules, contact u-blox using the product page links below.

Lexi-R10 product page

Sara-R10 product page

u-blox




Foundry readies mmWave GaN-on-SiC process

Mon, 06/24/2024 - 14:59

Taiwanese pure-play foundry WIN Semiconductors has announced the beta release of its robust mmWave GaN-on-SiC technology, NP12-0B. At the core of this platform is a 0.12-µm gate RF GaN HEMT process with enhancements for DC and RF ruggedness and die-level moisture resistance.

Transistor improvements in NP12-0B boost ruggedness when operated under deep-saturation, high-compression pulsed and continuous-wave (CW) conditions. NP12-0B eliminates the pulse droop behavior observed in GaN HEMT power amplifiers. This improves the range and sensitivity of pulsed-mode radar systems. The Enhanced Moisture Ruggedness option ensures strong resistance to humidity in plastic packages.

NP12-0B supports full MMICs, enabling customers to develop compact pulsed or CW saturated power amplifiers for applications up to 50 GHz. The process is qualified for 28-V operation. It delivers saturated output power of 4.5 W/mm in the 29-GHz band, with 12-dB linear gain and over 40% power-added efficiency.

The beta release of NP12-0B is now available for early-access multi-project wafer (MPW) runs. Qualification testing is complete, and final modeling and PDK generation are expected to conclude in August 2024. The full production release is scheduled for late Q3 2024.

WIN Semiconductors 




Examining an environmental antonym: Tile’s Slim

Fri, 06/21/2024 - 14:00

Within my mid-2021 teardown of Tile’s Mate tracker, I admitted:

I periodically go through bouts of misplacing my keys. And my wallet. Sometimes at the same time.

That forgetfulness, as you later learned, has among other things encompassed dropping the keys in the driveway for their eventual digestion (and subsequent disgorgement) by a snowblower. But I digress. I'd bought the 2020-version Tile Mate late that same year (2020), accompanied by a same-model-year Tile Slim for my wallet:

The Tile Slim, unlike its thicker Mate sibling, doesn't have a user-replaceable battery, for non-coincidental svelteness reasons. Tile claims a 3-year average operating life before the device needs to be replaced, and mine lasted a few months more than that; its Bluetooth beacon beamed its final signal last month (as I write these words in late April), whereupon I replaced it with a 2022-model successor:

The two versions look near-identical, aside from an imprinted QR code on the back of the newer variant which assists in tracking down the owner if someone else finds it:

More generally, here’s how the three Slim versions compare per company documentation (which has typos I’ve corrected in the following table), beginning with the original 2016 edition:

Model | Dimensions (length x width x thickness) | Weight | Environmental resistance | Bluetooth range | Loudness | Battery life (non-replaceable)
2016 (T2001) | 54 x 54 x 2.4 mm | 9.3 g | IP57 | 100 ft | 82 dB | 1 year
2020 (T7001) | 86 x 54 x 2.4 mm | 14 g | IPX7 | 200 ft | 108 dB | 3 years
2022 (T1601S) | 85.5 x 53.8 x 2.5 mm | 14 g | IP67 | 250 ft | "Louder" | 3 years

A few comparative comments before continuing:

  • Presumably, since the Slim line was intended from the beginning to be the same wallet-friendly thickness as a couple of credit cards, Tile decided to expand the length of the 2020 and 2022 models to full credit card size, too, thereby enabling larger internal batteries and consequent longer battery life before required device replacement.
  • The Tile-documented slight dimensional differences between the 2020 and 2022 models are, I suspect, “rounding errors”; the two models seem visually identical to me.
  • All three models are capable of tolerating 3 ft/1 m immersion in water for up to 30 minutes (per the final digit "7" in their IP ratings). However, whereas the 2016 and 2022 models also document accompanying dust tolerances (respectively "5" and "6"), the 2020 model's "X" means it takes a pass on that particular certification spec, for unknown reasons. (A short IP-code decoder sketch follows this list.)
  • I can’t find a dB rating for the transducer in the 2022 model; Tile only trumpets that it’s “Louder” (than what?). But I’m betting that it’s comparable to the 2020 version’s 108 dB.
  • Tile's claimed 50 m longer maximum Bluetooth transmission and reception range for the 2022 model, part of the company's rationalization for its $5-higher price tag versus the 2020 precursor, doesn't seem to be borne out by reviews I've seen.
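For reference, here's the promised minimal IP-code decoder, covering only the digits that appear in the table above (the full IEC 60529 tables define more levels and the associated test conditions):

```python
# Minimal IP-code decoder covering only the digits in the table above; the
# full IEC 60529 tables define more levels and the associated test methods.
SOLIDS = {"5": "dust-protected", "6": "dust-tight", "X": "not rated/tested"}
WATER = {"7": "immersion to 1 m for 30 minutes"}

def decode_ip(code):
    solids_digit, water_digit = code[2], code[3]   # e.g., "IP67" -> "6", "7"
    return SOLIDS.get(solids_digit, "unknown"), WATER.get(water_digit, "unknown")

for rating in ("IP57", "IPX7", "IP67"):
    print(rating, decode_ip(rating))
```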

One other note on battery life before proceeding with the teardown. Often, even with devices that come with user-replaceable batteries, I find a slim piece of paper or plastic that needs to be removed (thereby completing the circuit between one of the battery terminals and the device’s electronics) prior to as-needed setup and subsequent operation. The Tile Slim doesn’t offer any sort of similar physical barrier, which is understandable given its fully sealed nature, but problematic from warehouse and retail shelf-life standpoints. Instead, you press the front button, whereupon the device emits a little ditty and first-time setup can then proceed.

Presumably, therefore, there's a non-zero constant current draw from the battery while the device is sitting in the box, unless that initial button press also "makes" an in-parallel permanent connection within the switch between the battery and the bulk of the device circuitry (thoughts, readers?). Again, an out-of-box partially-to-fully drained battery is not so problematic with something based on a user-replaceable battery, but in this case a Tile customer would likely be unhappy to buy a usable-life-compromised Slim that had been sitting in the store for a long time. And given that the majority of Tile's 2022 devices (save for the Pro) have non-user-replaceable batteries, the potential for consumer uproar is all the greater.

Enough with the prep, let’s get to tearing down. Given that the Tile Slim packaging is long gone, I’ll instead dive right in with some overview shots from various perspectives, accompanied by a 0.75″ (19.1 mm) diameter, 1.52 mm thick U.S. penny for size comparison purposes:

See the interstitial seam in that last shot? I bet you know what comes next:

Pop the seam apart, peel away some glue, and the back panel comes right off:

Flip back a metal flap, and the battery also appears:

At this point, the PCB-plus-battery assembly lifts right out, too:

In the upper right is the piezoelectric transducer (aka “speaker”) whose silver- and gold-colored regions press-mate to contacts coming from the PCB. Below it is the inside of the front panel button, which when depressed will presumably press down on a PCB-mounted switch (note the “dimple”; we haven’t yet seen the switch itself, so hold that thought). Here’s a closeup of both:

And zooming back out…so what’s with that metal shield surrounding the battery?

My guess is that it serves dual purposes. Since the Tile Slim is intended for use in a wallet, which might be in the owner's pocket, it reinforces the battery integrity should the Tile Slim become cracked (would that be a butt crack? Sorry-not-really…) due to environmental stress. And were the battery's integrity to be compromised anyway, it helps shield the owner's body (or purse contents, etc.) from being exposed to the resultant "brief-but-intense burst of heat, puff of smoke, and acrid stench". There may also be a Faraday cage angle, given that we are talking about an RF (Bluetooth, to be exact)-based product here, but the lack of any sort of electrical ground between it and the rest of the system leaves me skeptical. Readers?

Before going further, I decided to re-place the PCB (with its other side, containing the aforementioned switch and transducer contacts, among other interesting bits, now visible) back in the case’s bottom half absent the metal shield so you can see how it’s oriented:

And now let’s take the PCB-plus-battery back out and give it a closer look, beginning with the just-seen front side:

Zooming in on the PCB itself:

Note again the previously mentioned switch and transducer contacts. Note too that the areas containing test point contacts are shinier than the rest (again, hold that thought). And finally, note the “ANT2” mark along the left side. Flipping the PCB over…

And zooming in…

The PCB-embedded antenna (i.e., ANT2…although I can’t find an ANT1 reference; can you?) is obvious. Notice how much blurrier the PCB markings (along with the various components themselves, with the notable exception of the antenna) are on this side? And notice the square-border translucent piece on top of the largest IC? At this point, I’ll let you in on the surprise (which at least some of you probably already figured out). Not only is the battery bendable (for likely already-obvious reasons, given the already-noted dominant use case):

So too is the PCB itself:

What we’ve essentially got here, aside from the flex-PCB and non-coin-battery variances, is a clone of the hardware design found in the 2020-model Tile Mate I tore down three years ago, whose front and back PCB closeups I’ll again show for easy-comparison purposes:

The main system chip underneath the square protective border this time is once again Nordic Semiconductor’s nRF52810 Bluetooth 5.2/BLE control SoC, based on an Arm Cortex-M4. And although I can’t discern the other primary chip’s identity from the flex PCB’s murky translucency, I’d be willing to bet that it’s once again Micro Analog Systems Oy’s MAS6240 piezo driver IC.

In closing, the 2020-model Tile Slim's FCC ID is 2ABXLT7001, for anyone who'd like to delve further into it in an absolute sense and/or relative to its Tile Mate sibling (FCC ID: 2ABXLT9001) and/or 2022-model Tile Slim successor (FCC ID: 2ABXLT1601S). And with that, I'll await readers' thoughts in the comments!

 Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content



Five technologies reshaping electronics manufacturing

Fri, 06/21/2024 - 11:22

Electronics keep getting smaller in both consumer and commercial applications. As the demand for minuscule form factors rises, electronics designers face the increasingly difficult task of embracing this trend while ensuring manufacturability.

Smaller electronics leave less room for error. Their materials may also be more prone to breaking and contamination at this scale. However, this doesn’t mean the microelectronics trend is unsustainable. Several technological innovations have arisen to meet these growing challenges.

  1. 3D printed circuits

Conventional machining poses challenges on the micro and nano scales, thanks to its vibrations, friction and general lack of precision. 3D printing is a promising alternative, especially now that it’s possible to print circuitry.

3D printing doesn’t risk breaking any fragile materials because it doesn’t cut any item away. It’s also mostly automated—removing human error—and can print structures a fraction of the width of a human hair. Newer printing materials make it possible to lay traces directly instead of cutting channels to then fill with a conductor. Consequently, they reduce production steps, leaving fewer chances for mistakes.

  2. Roller transfer printing

Other printing methods have emerged as promising micro-manufacturing solutions, too. Researchers at the University of Strathclyde found it’s possible to use roller transfer printing to adhere micro-LEDs to semiconductors at scale with minimal errors.

Roller transfer printing itself is far from new but applying it to electronics manufacturing can yield significant accuracy and production scale improvements. The researchers successfully aligned over 75,000 devices with deviations no larger than a micrometer through this continuous rolling process.

  3. Electrical discharge machining

Electrical discharge machining (EDM) is another production method with vast potential in electronics manufacturing. Unlike conventional machining, EDM involves no physical contact with the cutting surface, instead using electrical arcs to cut material. This lack of friction makes it ideal for manufacturing microscale electronics components out of sensitive materials.

Micro-EDM wires can be as small as 20 microns in diameter, enabling precise cutting tolerances. That scale is difficult to achieve with conventional machining or even laser-cutting, making this an optimal micro-engineering method.

  4. Onsite nanocrystal growth

In other microelectronics applications, machining isn’t as much of a concern as component alignment. Placing materials onto microscale semiconductors and PCBs can be difficult, given tight tolerances and the risk of breaking them through unnecessary pressure. Researchers at MIT found a solution in growing nanocrystals directly on the device.

By fostering onsite perovskite growth, the researchers positioned these materials with sub-50-nanometer accuracy and no risk of breaking the fragile nanocrystals. LEDs, lasers and solar panels would all benefit from this production method.

  5. Automation and AI

Across all these innovations, automation and artificial intelligence (AI) play an increasingly central role in electronics design. Eliminating errors is the key to overcoming many micro-machining challenges, and automating mistake-prone tasks is often the best way to do so.

3D printing, EDM and roller transfer printing are all highly automated processes. In the design stages, AI can suggest changes or simulate real-world performance to ensure manufacturability and functionality. As demands for smaller electronics rise, these technologies will become standard in the industry.

New technology makes micro-machining electronics possible

Today’s smaller electronics require ultra-precise measurements and control. The only way to manage these challenges effectively is to capitalize on new technologies. These innovations showcase how the electronics industry is evolving to meet these new demands.

Staying abreast of changes like this is key to remaining competitive in this industry.

Ellie Gabel is a freelance writer as well as an associate editor at Revolutionized.


Related Content



Solar day-lamp with active MPPT and no ballast resistors

Thu, 06/20/2024 - 14:00

When the Sun is shining and illumination is needed inside a dimly lit interior space, a popular, proven, and highly efficient solution is to utilize the energy of available sunlight in the simplest and most sustainable fashion conceivable: Opening a window!

Wow the engineering world with your unique design: Design Ideas Submission Guide

Sometimes, however, details of access to the outdoors make this traditional solution inconvenient, impractical, or downright impossible. Then, a more topologically flexible approach may be needed, even if it’s more complex and less efficient than the window gambit. Enter the solar day lamp.

A solar day lamp is an illumination system comprising a solar photovoltaic panel mounted outside—sustainably converting sunlight into electrical power—a run of wire to conduct said power into the interior, then suitable circuitry and LEDs to re-convert the delivered power back into a useful light source. 

It’s admittedly more complicated than a window, but still better than stumbling around in the dark!

For such a double-conversion scheme, converting light into electricity and then back into light, to work with a reasonable size (and cost!) solar panel and still be bright enough to be useful, puts a premium on achieving high efficiency for both conversion steps. This design idea (see the figure) presents some ways to achieve these design imperatives.

Solar day lamp with maximum power point tracking and high voltage, constant-current LED drive.

By definition photovoltaic panels work by converting light into electrical power. It follows that the amount of power a panel can produce depends on the amount of light shining on it. Duh! What’s perhaps less obvious is that a panel’s power output also depends on the voltage to which it’s loaded, and that the voltage of maximum conversion efficiency and power output (maximum power point voltage = MPPV) varies significantly with the amount of light and (to a lesser degree) temperature.

For example, the spec’ sheet for the panel illustrated rates it for “30 Watts” and “12 Volts”. But this should never be read as saying it can source 30 W into a 12 V load, because it won’t—not even in full direct sunlight. In fact, the most it could ever deliver into 12 V is barely 20 W. To hope to get the rated 30 W, the load voltage must be allowed to rise to 156% of the nominal 12 V rating—to 18.7 V (the so-called maximum power voltage = MPV). What’s going on?

This situation is actually typical of solar panel specifications. The rated output voltage is usually deliberately underrated. This accommodates the fact that panels seldom get to bask in full perpendicular sunlight, and that a user would rather get something than nothing in the way of usable output (e.g., enough to charge his 12 V battery) in less-than-perfect conditions.

And in fact, nothing is just about all this panel would actually output into an 18.7 V load if, for example, anything less than about 20% of full Sun were shining on it.

In order to extract maximum power from the panel, optimum loading must vary with incident illumination and temperature.  This stratagem is typically called maximum power point tracking (MPPT) and is the purpose of U2, A1 and surrounding components. 

U2a and U2b oscillate to generate a ~100 Hz “perturbation” square-wave that is summed with the duty-cycle control signal applied to U1. This results in periodic variation of the solar panel loading voltage. Panel power efficiency therefore also varies, generating a signal at synchronous rectifier U2c pin 4, where it is sampled and applied to feedback integrator A1. The resulting MPPT signal is accumulated, becoming feedback to 25 kHz voltage-multiplier oscillator U1 that increases or decreases U1’s duty cycle in the correct direction to maximize power accepted from the solar panel.

A generalized description of how “perturb-and-observe” active MPPT works is detailed in “Solar-array controller needs no multiplier to maximize power”.
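For readers who prefer pseudocode to op-amps, here's a minimal digital perturb-and-observe sketch of the same idea; panel_power() is a hypothetical measurement stand-in, not part of the circuit described here:

```python
# Digital perturb-and-observe sketch of the same MPPT idea the analog loop
# implements; panel_power() is a hypothetical measurement stand-in.
def mppt_step(v_setpoint, dv, p_now, p_prev):
    """Return the next load-voltage setpoint and perturbation direction."""
    if p_now < p_prev:          # power fell, so the last perturbation was wrong
        dv = -dv                # reverse direction
    return v_setpoint + dv, dv  # keep climbing toward the maximum power point

# Usage outline (panel_power() would measure panel voltage x current):
# v, dv, p_prev = 15.0, 0.1, 0.0
# while True:
#     p_now = panel_power(v)
#     v, dv = mppt_step(v, dv, p_now, p_prev)
#     p_prev = p_now
```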

The power extracted from the panel must then, of course, be input to the LED array and used to generate useful light. The way this is usually done is to connect the LEDs in a low-voltage serial/parallel matrix. This topology unfortunately incurs inherent inefficiency due to the need for current-balancing ballast resistors that compensate for unavoidable mismatch between LED forward voltages. About 10% or more of total available power is typically lost in this way.

The circuitry shown avoids this inefficiency by boosting panel voltage to a value high enough (~90 V) to accommodate a pure series connection of thirty 1-W LEDs. Hence the need for ballast resistors is eliminated along with their undesirable power losses, resulting in a significant further improvement in lamp efficiency.

A complication arises, however. What if continuity of the LED series string is lost and the current delivered by D1 has nowhere to go?

If this should happen and nothing were provided to safely control the accumulation of charge on C8, the voltage there would rise dangerously (theoretically without limit) until destruction, perhaps violent, of many components including Q1, D1, and C8, became inevitable. Voltage comparator transistor Q2 is configured to prevent this catastrophe, setting U1’s RESET input low and shutting down Q1 drive should a hazardous overvoltage condition threaten to occur.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content



Overcoming V2G implementation challenges

Wed, 06/19/2024 - 14:00

Vehicle-to-grid (V2G) technology is touted as the next frontier for electric vehicles (EV) but turning cars into an extension of the power grid creates new engineering challenges across the energy ecosystem. Engineers from automotive original equipment manufacturers (OEM) and grid operators must overcome various implementation challenges to capitalize on V2G growth drivers. Breaking away from traditional test methods will play a pivotal role in bringing this transformative technology to market.

Recognizing the driving forces behind V2G

Achieving net zero emissions is at the core of government initiatives worldwide, with most countries aiming to reach this goal by 2050. To achieve this objective, the transportation and energy industries will need to overcome various business and technical challenges as they transition to EVs and renewable energy generation. With EV adoption increasing rapidly, these activities are already having a ripple effect throughout the power grid.

For example, EVs are expected to add significant load on the power grid, especially late in the day as owners plug them in to recharge. The International Energy Agency (IEA), a reference for insights into the energy sector, estimates that electricity demand from EVs will reach almost 2,200 TWh by 2035 taking into consideration the current policies and measures put in place by governments around the world. The demand could be 23% higher (2,700 TWh) if accounting for the ambitions and targets announced by these entities [1]. By comparison, the global cumulative electricity consumed by charging EVs only totaled 130 TWh in 2023.

Greater electricity consumption from EVs, combined with the variability of wind and solar—the world’s two fastest-growing sources of energy generation—and the surge in electricity consumption from compute-intensive data centers for AI creates a perfect storm for grid operators who face the challenge of balancing electricity supply with demand.  

V2G is emerging as the answer to this challenge and more. The technology provides grid operators with immense capacity of dispatchable energy resources to stabilize the grid. By turning EVs into intelligent, communication-capable mobile energy storage assets, aggregate pools of EVs can become virtual power plants (VPP) and export power back to the grid when it is most needed, helping to balance the supply and demand of electricity, regulate grid voltage and frequency, and increase the overall reliability and resiliency of energy infrastructure while also reducing electricity costs for consumers.

V2G can also provide a slew of additional cost-saving benefits to utilities such as enabling the deferral of expensive grid infrastructure upgrades otherwise required to meet their forecasted load growth. The technology also supports decarbonization initiatives by providing a robust storage medium for accommodating higher penetration of variable renewable energy generation.

Tapping into the energy stored in EV batteries when vehicles are connected to the grid but idle is a game changer for grid operators. The storage capacity of millions of EVs can provide the energy needed to balance electricity supply with demand and avoid dreaded power outages. A case study for the city of Munich in Germany found that V2G technology could provide 200 MW of power to the city in 2030, representing 20% of its peak load during the summer [2]. Such power levels could help the city reduce its use of non-renewable energy sources, generate infrastructure savings, and help it achieve its sustainability goals.

Utilities foresee the integration of renewable energy sources as a major benefit from the mass adoption of V2G. Overall, many industry players expect the adoption rate of V2G to grow in the future. Earlier this year, a poll of senior decision makers in the automotive and power industries revealed that they expect V2G adoption to reach 20% to 50% in the next decade, putting the onus on research and development now [3].

The future of V2G technology looks bright, but its seamless integration is crucial for fulfilling its potential. Transforming EVs into mobile distributed energy resources (DER) for the power grid requires extensive conformance testing of communications and power flow.

Understanding V2G implementation challenges

Strained power grids, financial incentives, and the mass adoption of EVs are the primary catalysts for V2G growth and implementation. Benefiting from government support, V2G is witnessing significant uptake in China, Europe, and the U.S.

Connecting V2G-enabled EVs to the power grid does increase complexity, though. EV and EV supply equipment (EVSE) engineers must ensure conformance to various cross-domain standards for charging, communication, and grid interconnection purposes.

Key V2G standards to know

The combination of a V2G-capable EV with a V2G-capable EVSE creates a DER. DERs must comply with multiple standards and undergo lengthy and expensive certification processes to be allowed to export power to the grid. The requirements span electrical/power transfer and communications with DER managing entities such as utilities, aggregators, and V2G charging network operators (CNO). Standards compliance is the most difficult technical challenge facing engineers working on V2G today as standards evolve rapidly and differ by region, country, and even state.

In North America, compliance with IEEE standards 1547.1-2020 and 1547-2018 is essential. Most U.S. states have adopted these standards or announced their intent to adopt them. These standards provide the technical requirements and conformance test procedures for equipment interconnecting DERs with the power grid and the specifications and testing requirements for interconnection and interoperability with the power grid.

IEEE standard 1547/1547.1 specifies additional standards that need to be implemented for compliance, including the communication protocols IEEE 2030.5, SunSpec Modbus, and IEEE 1815 (DNP3). The standard requires the implementation and testing of at least one of these protocols for interoperability. IEC 61850 and the Open Charge Point Protocol (OCPP) are other protocols under consideration.

In Europe, EN 50549 is the reference for national standards. EN 50549-1/-2 provides the technical requirements while EN 50549-10 covers the test requirements. This standard does not cover interoperability/communications test but specifies the protection functions and capabilities for DERs to operate with the grid.

Being familiar with OCPP is also important when working on EV charging stations and/or charging network management software. This standard is gaining momentum as a preferred medium of communication between CNOs and EVSE for charging infrastructure management. The most recent version of the standard does not support bidirectional charging, but the next one (OCPP 2.1) is expected to support it as well as harmonize with IEEE 1547 for V2G / EV-as-a-DER use cases. With CNOs playing the role of a DER aggregator, OCPP 2.1 can potentially serve as another method for ensuring efficient communications between charging stations from different vendors and grid management systems.

In addition, being knowledgeable about IEC 63110-1:2022 is helpful when working on V2G applications as this standard establishes a common communication framework for the EV ecosystem. Managing both EV and EVSE charging and discharging, it covers various aspects including energy transfer management, EVSE asset management, as well as payment and cybersecurity to ensure all systems involved in the V2G process can communicate effectively and securely.

For China, the China Compulsory Certificate (CCC) mark represents product compliance with standards. There are different standards for DERs, depending on the category. For example, GB/T 34708-2019 refers to photovoltaic grid-connected inverters while GB/T 36547-2018 and GB/T 36548-2018 cover electromechanical energy storage systems. These standards include sections on communications tests for interoperability purposes.
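Grid codes such as IEEE 1547-2018 and EN 50549 also require autonomous frequency-response (frequency-droop, or frequency-watt) behavior from DERs. The following generic Python sketch illustrates the shape of such a curve; the deadband and droop parameters are purely illustrative and are not values taken from any of the standards above:

```python
# Generic frequency-watt (droop) curve sketch for a DER such as a V2G
# inverter exporting power. Deadband and droop values are illustrative only,
# not settings taken from IEEE 1547-2018, EN 50549, or any other standard.
def freq_watt_setpoint(f_hz, p_rated_kw, f_nom=60.0, deadband_hz=0.036, droop=0.05):
    """Return the active-power export setpoint (kW) for a measured frequency."""
    df = f_hz - f_nom
    if df <= deadband_hz:
        return p_rated_kw                  # under-frequency support omitted for brevity
    excess = df - deadband_hz              # over-frequency beyond the deadband
    delta = (excess / (droop * f_nom)) * p_rated_kw
    return max(0.0, p_rated_kw - delta)    # curtail export proportionally

print(freq_watt_setpoint(60.10, 10.0))     # ~9.79 kW at 60.10 Hz
```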

V2G test and certification procedures

In addition to various standards, multiple V2G architectures are possible depending on the location of the grid-connected inverter and its controller, either onboard the EV or EVSE. The V2G architecture then defines the applicable standards for certification [4].

The DC-V2G architecture (Figure 1) adopts a configuration with the smart inverter as well as the control and communications located in the EVSE. This configuration requires engineers to verify that the EVSE meets grid code requirements.

Figure 1 DC-V2G architecture adopts a configuration with the smart inverter as well as the control and communications located in the EVSE. In this architecture,  the EVSE must meet grid code requirements. Source: Keysight

This is in contrast with the AC-V2G architecture seen in Figure 2, where these components reside in the EV. As a result, engineers need to ensure that the EV meets grid code requirements for this architecture.

Figure 2 AC-V2G architecture where the smart inverter as well as the control and communications reside within the EV. In this case, the EV must meet grid code requirements. Source: Keysight

The split AC-V2G architecture in Figure 3 presents yet another configuration with the inverter in the EV and the control and communications in the EVSE. This hybrid approach requires evaluation of the paired system.

Figure 3 Split AC-V2G architecture with the smart inverter in the EV and the control/communications in the EVSE, requiring evaluation of the paired system to meet grid requirements. Source: Keysight

Overcoming V2G implementation challenges

Engineers can use traditional methods to test DERs using real EVs and charging stations, but this approach is cumbersome and time-consuming. Typically, it requires months of testing.

Emulation is an appealing alternative. These systems can mimic the communications and power flow of V2G-capable EV, EVSE and the grid, enabling engineers to test their EV or EVSE against various standards and verify communications and power transfer between the two much faster.

Using a DC power source with an emulator to function as an EV or EVSE, engineers can assess the charging and communications performance between their EV and any EVSE and vice versa. An AC emulator can replace the utility power grid, enabling them to test various interconnection standards.

The emulation method affords engineers the flexibility they need while enabling them to conduct the testing faster. With this approach, they can repeat and iterate tests more rapidly compared to using traditional methods.

A more sustainable future with V2G

V2G technology stands at the cusp of revolutionizing the automotive and energy industry by transforming EVs into dynamic energy storage solutions and presenting a promising avenue for their integration into the energy ecosystem.

The potential benefits for grid operators are immense, offering a means to stabilize the grid and increase its resilience. However, the path to widespread V2G implementation is not without challenges. The breadth of standards and the existence of multiple V2G architectures to cater to different use cases require considerable efforts from engineers to ensure standards compliance.

With the right investment in research, development and testing, V2G will play a pivotal role in achieving a sustainable, low-carbon future.

Jessy Cavazos is part of Keysight’s Industry Solutions Marketing team.

Related Content

References

  1. Global EV Outlook 2024, International Energy Agency
  2. Smoothing the Wave: EVs Enable Significant Peak Shaving, Siemens
  3. Exploring the Future Vehicle-to-Grid (V2G) World: Driving Forces, Challenges, and Strategic Insights for Automotive and Energy Leaders, Reuters in partnership with Keysight Technologies
  4. Electric Vehicles As Distributed Energy Resources eBook, Keysight Technologies


The advent of all-solid-state battery for wearables

Wed, 06/19/2024 - 10:08

While promising to revolutionize energy storage, all-solid-state battery technology has been facing massive challenges in large-scale mass production. Now, a new battery material breakthrough at TDK could pave the way for the widespread adoption of solid-state technology.

TDK has developed a new material for solid-state batteries with a significantly higher energy density than conventional mass-produced solid-state batteries. This material boasts an energy density of 1,000 Wh/L, approximately 100 times greater than the energy density of TDK’s conventional solid-state battery.


TDK's new solid-state battery, developed with all-ceramic materials, aims to replace coin cell batteries in small portable devices such as smartwatches, wearables, and wireless earphones. Built with multilayer technology akin to that used for multi-layer ceramic chip capacitors, the solid-state battery offers high energy density, miniaturization, and greater safety without a risk of electrolyte leakage.

The all-ceramic material, including an oxide-based solid electrolyte and a lithium-alloy anode, enables smaller battery sizes and longer operating times. The oxide-based solid-state electrolyte eliminates the safety risks associated with flammable electrolytes, which is a vital consideration in wearable and other devices that come in direct contact with the human body.

What TDK is doing here is enhancing the capacity of the batteries through multi-layer lamination technology and expanding their operating temperature range by applying the production engineering technology that the Japanese company has accumulated in the electronic components business.

As a result, TDK has managed to develop a material for the new solid-state battery with a significantly higher energy density than its conventional mass-produced solid-state batteries, CeraCharge. The battery’s intricate layered structure and charge storage mechanism show astute phase transitions within its active materials.

Compared to traditional liquid electrolyte batteries, all-solid-state batteries are safer, lighter, and offer longer life and faster charging. They could also be potentially cheaper in the future, paving the way for their use in smartphones and even electric vehicles.

However, using ceramic material in these solid-state batteries means that larger batteries could be more fragile. That, in turn, will lead to insufficient performance, poor durability, and safety issues. Still, TDK’s design breakthrough represents an important step in the commercial realization of solid-state batteries.

For now, TDK is moving ahead to develop the battery cells and package structure design and then advance toward mass production of solid-state batteries that will replace existing coin-shaped batteries found in wearables and other small portable devices. Meanwhile, we’ll keep an eye on the development of larger solid-state batteries for smartphones and electric vehicles.

Related Content



Ultra-low distortion oscillator, part 2: the real deal

Tue, 06/18/2024 - 14:00

Editor’s Note: This DI is a two-part series.

Part 1 discusses audio oscillators, namely the Wien bridge and biquad, state-variable, or two-integrator-loop configuration.

Part 2 will add distortion-free feedback to the biquad to produce a pure sine wave.

In Part 1 of this article, we briefly looked at the Wien bridge oscillator before homing in on the bi-quad filter as our best candidate for an oscillator capable of giving <0.0001%/-120dB distortion, and showed its full circuit. Treating it as a module, it’s ready for adding distortion-free feedback in the form of a linear limiter.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Trying a J-FET for that limiting proved disappointing. Even when the circuit was optimised to minimise its inherent non-linearity, it added about -92dB/0.0025% of third harmonic to the feedback signal. As noted in Part 1, the attenuation of a third harmonic from the input to the low-pass output is ~22 dB, so we ended up with 0.0002% or -114 dB distortion at the output. Close, but no cigar.

Let’s return to the photoconductive opto-isolator which we used to stabilise the Wien bridge circuit in Part 1. The LDR or photo-resistor part of it is of course linear, but the LED needs careful driving to prevent any significant feed-through of ripple which would modulate the feedback and thus add distortion. Figure 1, showing the control loop added to the basic bi-quad module, incorporates a neat way of minimising ripple while keeping reasonable loop dynamics.

Figure 1 Using a decent control loop for the feedback stabilises the oscillation level without adding significant distortion.

Because the bi-quad has two outputs (HP and LP) which are in anti-phase, we can easily get full-wave rectification, but we can do much better. The BP output is at 90°/270° to those, so we can also use both that and its inverse to get 4-phase rectification, cutting the ripple to a quarter of the single-phase value. The ripple will also be at four times the base frequency, so we are (roughly) sixteen times better off than we were with the Wien bridge. 

With accurately-matched time constants in the bi-quad, all three outputs have identical signal levels at resonance, but any offsets or mismatches will introduce ripple as sub-harmonics of the 4× component (if that makes sense). The diodes must be well-matched, and the op-amps need to have low voltage offsets, or at least lower than any diode mismatches. Good tracking between the tuning pot sections is needed; often an extra resistor paralleled with the higher-value half gives adequate results.

R16, C3, and C4 form the loop filter needed for stable operation, while R17 and C5 give extra filtering of the 4× component. These values are compromises; the loop is somewhat underdamped but gives decent performance over the entire tuning range and takes less than 500 ms to stabilise. A5 translates the filtered voltage into a current to drive the LED, thus controlling the LDR’s resistance. The opto-isolator used was a Silonex NSL-32SR3; a home-brew device made from a (recycled) NSL-19M51, a clear white T-1 LED, and thick black heatshrink worked well; though with about half the sensitivity. (I used that when experimenting with squashed tri-waves, even though it wasn’t needed in the final cut.) R18—the only adjustment needed—sets the LED drive, and thus the AF output level.

The feedback loop is closed through the network of R10, R11, and the LDR. At startup, the LDR has a high resistance, but there is enough feedback to start oscillation, after which it progressively shorts R11 to give the desired signal level.

LDRs are fairly sluggish in their response times. This one has a resistance of about 1.7k at our drive level, responding to light in ~6 ms and to dark in ~30 ms (measured 63% figures). This gives us some useful extra ripple filtering.

All critical op-amps are shown as LM4562s, which are my current favourites for general audio work, given their balance of low noise, distortion, and offset figures, coupled with easy availability as DIP-8s. (But what do they sound like, you say? Dunno; can’t even hear eight of them, chained between phono input and mixer output.) Their quoted THD+N of 0.00003%/-130dB will set the limit for our performance: time to look at some results (Figure 2).

Figure 2 The spectrum from the low-pass output, after unity-gain buffering.

Not very impressive! But remember from Part 1: I don’t trust my FFT if the input dynamic range is >~90+ dB, so try to remove most of the fundamental first. (Is it a coincidence that 96 dB ≈ 216:1?) Passing the signal through the—now deeper—notch filter, shows Figure 3.

Figure 3 The spectrum after notching out most of the fundamental, showing the harmonics much more cleanly.

That’s better! Note that these spectra involved very long runs, averaging the signal over tens of thousands of samples. This was needed to avoid missing valid peaks or eliminate spurious ones as well as just letting us see what would otherwise have stayed buried in noise. All tests were run with power from a 12-V accumulator—no mains hum or other nasties—with an op-amp as a rail-splitter, and in an earthed Faraday cake-tin.

I chose to use a working level of -20 dBV as being a good compromise between distortion and usability. My final unit has extra output gain, given by a virtual-earth/pseudo-log-pot stage (LM4562, of course). Figure 4 shows the notched spectrum from that, measured at +6 dBu (~+4 dBV, or ~1.54 V RMS, or ~4.4 V pk-pk), showing a THD of close to -120 dB, or 1 ppm, most of that being second harmonic (source as yet unidentified).

I think we’re there, as far as distortion goes.

Figure 4 Spectrum (notched) after amplification to +6dBu. Note the altered scale.
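
Those level conversions are easy to check from the standard dBu and dBV definitions; a few throwaway lines (nothing circuit-specific here):

import math

v_rms = math.sqrt(0.6) * 10 ** (6 / 20)           # +6 dBu; 0 dBu = 0.7746 V RMS (1 mW into 600 ohms)
print(f"{v_rms:.2f} V RMS")                       # ~1.55 V
print(f"{20 * math.log10(v_rms):+.1f} dBV")       # ~+3.8 dBV, i.e. roughly +4 dBV
print(f"{2 * math.sqrt(2) * v_rms:.1f} V pk-pk")  # ~4.4 V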

Because I used sockets for A1‒A4, this being a re-build of a defunct unit, trying some other op-amps was easy. Figure 5 shows the results for the KA5532 (formerly well-regarded for audio work), the TL072/TL082 (or TL0n4 quad-packs), the LM358 (with extra 10k resistors to Vs- tacked onto the outputs), and even the venerable MC1458—essentially twin 741s. Frequency and output level were trimmed for each run to allow proper comparisons. The LM358 surprised me; had to double-check it. Never did like the sound of them, and now I know why.

Figure 5 Distortion spectra for various other devices.

All this work was at a nominal 1 kHz (actually 1003.4 Hz). I cannot speak for other frequencies, lacking suitable notch filters, though their un-notched spectra look much the same as that for 1 kHz. As drawn, the oscillator will tune from <500 Hz to >5 kHz in a single range, which makes it a useful bit of kit in its own right. For other ranges, the loop filters would need to be changed to maintain adequate loop stability while retaining good filtering.

These results may show THD levels below -140 dBc, or 0.00001%, or 100 ppb, but they will still be buried in noise, and the THD+N figure—which has conveniently been ignored up to now—looks much worse than the simple THD one. Calculations using the datasheet figures for the LM4562 under our conditions imply noise from the output buffer (inverting, unity-gain) of ~-114 dBV or -112 dBu in a 20 kHz bandwidth, with (resistive) Johnson noise dominant, so we may be left with a THD+N of “only” about -92 dB, or 0.0025%, or 25 ppm. An AC microvoltmeter (BW = 10 kHz) connected to the output, with R5/6 in the bi-quad disconnected and C2 shorted, measured -113 dBu, which is in line with the calculations. Using different op-amps may help slightly with current noise but can never reduce the resistors' noise. Analog Devices publish a good, basic tutorial on op-amp noise, as well as several much more detailed analyses.
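
As a rough cross-check of that figure, here's the Johnson-noise arithmetic; the 10-kΩ effective source resistance is an assumed round number for illustration, not a value read off the schematic:

import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # roughly room temperature, K
R = 10e3             # assumed effective source resistance, ohms (illustrative only)
B = 20e3             # audio noise bandwidth, Hz

v_n = math.sqrt(4 * k_B * T * R * B)   # RMS Johnson (thermal) noise voltage
print(f"{v_n * 1e6:.2f} uV RMS = {20 * math.log10(v_n):.1f} dBV")   # ~1.8 uV, ~-115 dBV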

Obviously, when using this as a source to measure the THD in an audio chain, averaging will be needed to extract the harmonics from the noise, exactly as we did throughout this DI—but make sure you can trust your FFT or use notch filtering to reduce the fundamental.
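
To make the averaging point concrete, here's a toy sketch, again my own illustration rather than the measurement conditions used above: a third harmonic at -100 dBc is swamped by noise in a single short FFT, but emerges after coherently averaging many records that each hold a whole number of cycles:

import numpy as np

fs, f0 = 48_000, 1_000                  # sample rate and fundamental, Hz
n_per_rec = (fs // f0) * 50             # 50 whole cycles per record, so coherent averaging works
n_records = 2_000

rng = np.random.default_rng(0)
t = np.arange(n_records * n_per_rec) / fs
x = np.sin(2 * np.pi * f0 * t) + 1e-5 * np.sin(2 * np.pi * 3 * f0 * t)  # -100 dBc third harmonic
x += 1e-3 * rng.standard_normal(x.size)                                 # noise well above it

recs = x.reshape(n_records, n_per_rec)
bin3 = 3 * f0 * n_per_rec // fs                                         # FFT bin of the third harmonic
for label, y in (("single record", recs[0]), ("coherent average", recs.mean(axis=0))):
    spec = np.abs(np.fft.rfft(y)) * 2 / n_per_rec     # scaled so a full-scale sine reads 1.0
    print(f"{label}: third-harmonic bin = {20 * np.log10(spec[bin3]):.1f} dB")
# The averaged run settles near -100 dB; the single record is noise-limited around -90 dB.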

We now have an oscillator capable of delivering a sinewave with distortion (alone) measurable in parts per billion—OK, lots and lots of them, but hey! Who's counting? That sounds good—and it's one which, given the appropriate parts, can be built in an afternoon.

Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.

Related Content


The post Ultra-low distortion oscillator, part 2: the real deal appeared first on EDN.

Modern UPSs: Their creative control schemes and power sources

Mon, 06/17/2024 - 15:00

Within my mid-2023 teardown of a malcontent APC Back-UPS ES BE550G 550VA UPS, I wrote:

What I like about these units (and conceptually similar ones from both APC and competitors) is that the batteries are user-replaceable, and they’re standard-size and -spec. A SLA (sealed lead acid) [editor note: a correction from the original “Ni-Cd” per reader feedback] cell will sooner-or-later go kaput, whether it’s due to repeated premises power loss dependency on it or just extended trickle-charging-induced storage-capability decay, but you can then just pop in a replacement battery, and it’ll be good as new.

Increasingly nowadays, however, I’m learning that this longstanding assumption of battery standardization is unfortunately falling by the wayside. Take the Amazon Basics ABST600 600VA UPS that I replaced the BE550G with:

I got it as a backup in the first place because it was an Amazon Warehouse-sourced open box unit sold at a $20 discount from the brand-new price and ended up looking (and working) like brand new, too. But, although Internet research confirmed that it was a rebranded CyberPower unit, it turned out to use a non-standard 12V 5Ah battery (a similar issue to one I’d discovered a few years earlier with another in-service CyberPower UPS below and to my right as I type this). I ended up finding one, labeled as being intended for a garage door opener (believe it or not):

but it wasn't easy, and Amazon/CyberPower don't make replacement easy, either. There's no user-accessible slot devoted to the battery; instead, you need to take the entire UPS apart, which is presumably intentional. When the battery dies a few years down the road, they'd prefer that you buy a brand-new UPS instead (with obvious environmental and landfill impacts).

Where else do I have UPSs begging for replacement? Well, long-term readers may recall that the furnace room beneath my office is the home network nexus. An ancient APC Back-UPS 650, the BK650MC, historically provided backup power for my QNAP TS-328 and TS-453Be NASs:

Yes, that's a COM (serial) port for to-computer connectivity on the back panel, along with legacy POTS pass-through connectivity for surge protection purposes!

I honestly don’t know how long I’ve owned this thing, but it’s still chugging along (periodically fed by replacement backup batteries, of course). Reflective of its likely geriatric status, here’s a two-part review of it from the Ars Technica website archive, published in December 1999.

The other UPS historically in the furnace room, providing backup power for my cable modem, router and main switch, was a twin of the APC Back-UPS ES BE550G that had died last year:

Part of the motivation for (proactively, this time) replacing it—both of them, in fact—can be found in the last sentence of the paragraph from which the earlier quote came:

That said, eventually the internal circuitry itself may fail, as seems to be the case with my device, with the UPS then destined only for dissection-then-discard.

More specifically, we’ve recently gone through a spate of brownouts and longer-duration blackouts here. I don’t know if anything specific is going on with Xcel Energy of late, or if it’s just coincidence, but given that my wife and I both work full-time from home, keeping the WAN and at least key portions of the LAN “up” as long as possible is a big deal. With the 550VA UPS, broadband would typically drop within around an hour. And I was also getting tired of rushing downstairs (inevitably in the middle of the night, awakened by a multi-UPS beeping chorus) to stably power off the NASs before backup power drained, potentially corrupting their multi-HDD RAID arrays as a result of the subsequent abrupt power loss.

I thought I’d hit pay dirt (and I actually did; keep reading) when I came across APC’s BX1500M 1500VA UPS:

APC sells the BX1500M for $219.99 on its website, although retailers such as Amazon typically list the UPS for around $184. And notably, Woot! recently had it for $149.99. Even better, the retailer was offering a few open box units for $124.87. And better still, a one-day 10%-off promotion further dropped the open-box price to $112.38. I grabbed the last two. When they arrived, one of them had something rattling around inside. Although it still seemed to work fine, I sent it back for a full refund, among other reasons because, as I later realized, I only needed one.

All was not perfect with the BX1500M, however, at least at first. Its replacement battery pack, the APCRBC124, sure looks proprietary (not to mention pricey), doesn’t it?

Not to worry, it turns out, as this video demonstrates:

Turns out the APCRBC124 is just two conventional 12V 9Ah SLA cells connected in series (for a 24V result) by a plastic bracket incorporating three wires and a connector, along with some tape to hold the whole thing together. I snagged the following photos from a bracket-only for-sale post on eBay:

The setup’s pretty slick, actually. You can insert the batteries-plus-harness assemblage upside-down (with the red-color sticker “up”), which doesn’t connect the battery pack to the UPS, for storage. Pull it out, flip it so the green-color sticker is “up”, put it back in and you’re good to go.

So why didn’t I end up needing two UPSs? Here are the BX1500M-displayed stats when both NASs, plus the networking gear, are all powered up:

86W of total power load, 9% of the total possible power supplied by the battery pack, which would only last a bit more than an hour on battery power alone.

Now, let’s shut down the NASs:

15W of total load. Only 1% of the total possible power supplied by the battery pack. And nearly four hours of estimated operating life on battery power alone.
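
A quick back-of-envelope check on those readouts, using the nameplate energy of the two 12-V, 9-Ah cells in the APCRBC124 pack: the gap between the "ideal" numbers and the displayed estimates is plausibly explained by inverter losses, the UPS's own overhead, and lead-acid capacity derating at higher discharge rates (my assumptions, not APC's figures).

energy_wh = 24 * 9.0   # two 12-V, 9-Ah SLA cells in series: 216 Wh nameplate
for load_w, displayed_h in ((86, "~1"), (15, "~4")):   # displayed runtimes, approximate
    print(f"{load_w:>2} W load: ideal {energy_wh / load_w:4.1f} h vs displayed {displayed_h} h")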

Ok, so I still need to frantically run downstairs and stably power off the NASs each time premises power goes down, right? Nope. Turns out both NASs run QNAP-developed and -supported implementations of the Network UPS Tools (NUT) software suite. I've got the TS-453Be connected to the UPS via an APC-provided USB Type A-to-RJ50 cable and configured as the NUT server. Five (user-configurable) minutes after premises power goes down and the BX1500M switches to battery backup, it signals the TS-453Be to initiate a stable shutdown sequence. And the TS-453Be-as-NUT server then also sends a command to the TS-231 NUT client, LAN-connected over the same (BX1500M battery-backed) GbE switch, to stably shut itself down, too (static IP assignments for both NASs are obviously necessary to ensure the desired outcome).

From then on, only the LAN equipment is pulling (much less than before) power from the UPS. Slick, huh? By the way, both QNAP NASs alternatively support something called “auto-protection” mode, which spins down and parks the HDDs and holds the NAS in standby while on battery power, auto-rebooting it when premises AC is restored. As QNAP's documentation notes, this option is “recommended for business and enterprise users”…which doesn't keep the NAS from recommending that I switch to it each time I log into its web browser-based UI. But it's not necessary in my more modest setup, and I'll take the extra battery life afforded by the full-shutdown alternative. Now I just need to set up similar USB-cabled schemes for the other UPS-backed computers in my stable, whose O/Ss either support UPS control natively or in conjunction with a UPS vendor-supplied utility…
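
For anyone who'd rather script their own monitoring than lean on an O/S-native or vendor utility, a NUT server can also be queried directly over its simple text protocol on TCP port 3493. Here's a minimal sketch; the IP address and the UPS name are placeholders for illustration (QNAP's packaged NUT build may expose different defaults), not details of the setup described above.

import socket

def get_var(host: str, ups: str, var: str, port: int = 3493, timeout: float = 5.0) -> str:
    """Fetch one variable from a NUT server, e.g. ups.status."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(f"GET VAR {ups} {var}\n".encode())
        reply = sock.recv(4096).decode().strip()
    # A successful reply looks like: VAR <ups> <var> "<value>"
    if reply.startswith("VAR "):
        return reply.split('"')[1]
    raise RuntimeError(f"Unexpected NUT reply: {reply}")

if __name__ == "__main__":
    status = get_var("192.168.1.10", "qnapups", "ups.status")   # placeholder address and UPS name
    print("UPS status:", status)   # e.g. "OL" on line power, "OB DISCHRG" on battery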

Questions? Other thoughts? Let me know in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Modern UPSs: Their creative control schemes and power sources appeared first on EDN.

Alphawave Semi’s quest for open chiplet ecosystem

Mon, 06/17/2024 - 13:48

The open chiplet ecosystem is steadily taking shape, one design demonstration at a time. Take, for instance, Alphawave Semi, which has announced the tape-out of what it claims to be the industry’s first off-the-shelf multi-protocol I/O connectivity chiplet on TSMC’s 7-nm process node.

This multi-standard I/O chiplet employs an IP portfolio compliant with Ethernet, PCIe, CXL, and Universal Chiplet Interconnect Express (UCIe) Revision 1.1 standards. It delivers a total bandwidth of up to 1.6 Tbps with up to 16 lanes of multi-standard PHY supporting silicon-proven PCIe 6.0, CXL 3.x, and 800G Ethernet IPs.

Figure 1 The tape-out of the off-the-shelf, multi-protocol I/O connectivity chiplet demonstrates the integration of advanced interfaces. Source: Alphawave Semi

A couple of months ago, Alphawave Semi announced the development of a chiplet connectivity platform on TSMC's 3-nm process node. It's a UCIe subsystem comprising a PHY and controller that can deliver 24-Gbps data rates. The 24-Gbps UCIe subsystem is compliant with the UCIe Revision 1.1 specification and includes a highly configurable die-to-die controller that supports streaming, PCIe/CXL, AXI-4, AXI-S, CXS, and CHI protocols.

Figure 2 The UCIe subsystem features bit error rate (BER) health monitoring to ensure reliable operation. Source: Alphawave Semi

Alphawave Semi demonstrated the above two designs at the Chiplet Summit 2024 in Santa Clara, California, earlier this year.

In its quest to pave the way for open chiplet ecosystems, Alphawave Semi has also joined hands with Arm on the compute side. It has recently announced the development of a compute chiplet built on Arm Neoverse Compute Subsystems (CSS) for artificial intelligence (AI) and machine learning (ML), high-performance compute (HPC), data center, and 5G/6G networking infrastructure applications.

Such a collaboration brings a chiplet connectivity specialist like Alphawave Semi a portfolio that includes IO extension chiplets, memory chiplets, and compute chiplets. Combining Arm’s compute building blocks with Alphawave Semi’s connectivity IP will also bolster the creation of an open chiplet ecosystem.

Figure 3 The compute chiplet combines the Arm Neoverse CSS platform with Alphawave Semi’s connectivity IPs for UCIe, 112/224G Ethernet, and HBM subsystems. Source: Alphawave Semi

The chiplet design examples outlined above mark a clear trend: key chiplet building blocks are steadily taking shape, bolstering the broader multi-protocol chiplet ecosystem. With the maturation of chiplet standards like UCIe and the availability of silicon-proven chiplet subsystems, design engineers can reduce development time, lower costs, and create greater synergy with their existing hardware ecosystems.

Related Content


The post Alphawave Semi’s quest for open chiplet ecosystem appeared first on EDN.

What GAA and HBM restrictions mean for South Korea

Fri, 06/14/2024 - 18:14

The next frontier in U.S. semiconductor restrictions for Chinese companies is gate-all-around (GAA) chip manufacturing technology. According to a Bloomberg report, measures are being discussed to limit China's access to this advanced technology, widely considered a successor to the FinFET technology currently used in manufacturing cutting-edge semiconductor devices.

GAA, also known as gate-all-around field-effect transistor (GAAFET), replaces the vertical fins used in FinFET technology with a stack of horizontal sheets. This GAA structure further reduces leakage while increasing drive current, thus bolstering transistor density and delivering power and performance benefits.

Figure 1 GAA turns FinFET transistors sideways to make channels horizontal instead of vertical to extend semiconductor device scaling and reduce power consumption. Source: Samsung

In March this year, the U.K. imposed controls over GAA transistor technology on companies in China. Now, a source in the Bloomberg report claims that the United States and other allies are expected to follow the U.K. in imposing controls on GAA technology this summer.

However, these access controls haven't been finalized yet, mainly because the early version is considered very broad. It doesn't make clear whether these restrictions are aimed at stopping China from developing its own GAA technology or at blocking chipmakers from the United States and its allies from selling GAA-based chips to companies in China.

Among the U.S. allies, South Korea is notable in this affair because Samsung Foundry is a pioneer in commercializing the GAA manufacturing technology in its 3-nm process node. Intel is expected to implement GAA transistor architecture in its 20A node which will be unveiled later this year. TSMC plans to employ GAA technology in its 2-nm process node to be made available in 2026.

That shows Samsung is ahead of the curve in GAA chip manufacturing architecture, so it’ll be interesting to see South Korea’s take on this matter. It’s worth noting that the Bloomberg report quotes anonymous sources and stresses that deliberations are private.

While South Korea and its tech star Samsung are likely to be at the center of this affair, the Bloomberg report also revealed some early-stage discussions about limiting exports of high-bandwidth memory (HBM) chips to China. That will put South Korea at the center of another technology export conflict as two of the three companies supplying HBM chips are from South Korea.

Figure 2 HBM is a high-end memory that stacks DRAMs using vertical channels called through-silicon vias (TSVs). Source: Samsung

Samsung and SK hynix, along with U.S. memory chip maker Micron, currently produce these high-end memory chips, which are considered crucial in artificial intelligence (AI) applications, where they are paired with AI processors. New restrictions on HBM chips, like those on GAA, could significantly impact South Korean tech-related exports.

The U.S. semiconductor technology export restrictions imposed on companies in China have mostly impacted chip vendors in the United States, with the exception of lithography expert ASML, which is based in the Netherlands. Now, South Korea could bear the brunt of these potential restrictions on GAA and HBM technologies.

As the Bloomberg report points out, no final decision has been made yet. But it'd be interesting to see how South Korean technology and trade officials respond to such export restrictions, especially regarding the export of HBM chips, for which Samsung and SK hynix command nearly 90% of the market.

Related Content


The post What GAA and HBM restrictions mean for South Korea appeared first on EDN.

A look at spot welding

Fri, 06/14/2024 - 16:03

Joining one flat piece of metal to another flat piece of metal is a common requirement, but sometimes the choice of method lies open. For a case in point, please consider these two kitchen spatulas in Figure 1 and Figure 2.

Figure 1 One side of a pair of kitchen spatulas, one with two rivets and one with two spot welds. Source: John Dunn

Figure 2 The back side of the pair of kitchen spatulas. Source: John Dunn

Handle attachment to the spatula blade is made with two rivets in the tool on the left, while in the other tool it is made with spot welds. The fixture used for doing spot welding can be roughly sketched as shown in Figure 3.

Figure 3 A diagram of the spot-welding fixture, in which a large current is passed through the junction of two pieces of metal; resistive heating at the contact between the electrodes melts some of the metal, which then cools off and solidifies, fusing the flat pieces of metal to each other in a specific “spot”. Source: John Dunn

The welding process passes a very large current, AC or DC, through the junction of the two pieces of metal being joined, so that resistive (I²R) heating across the contact resistance between the two electrodes gets the metal hot enough to melt; the melt then cools off and solidifies, fusing the two pieces of metal to each other at that “spot”, hence the name “spot welding”.

This process can be scaled for very small pieces of work like the two welds on this flashlight D-cell (Figure 4):

Figure 4 Two very small spot welds on a flashlight D-cell. Source: John Dunn

to very large pieces, as in the automotive spot welding of Figure 5:

Figure 5 Two spot welds on an automobile door. Source: John Dunn

The more detailed scenario in spot welding involves deciding:

  • When to apply the physical force
  • When to turn on the welding current
  • When to turn it off
  • How long to let the work pieces cool before releasing the physical force
  • Whether the two contacting electrodes need to be given extra cooling measures such as water flow within

The technology is quite sophisticated.
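
To make that sequence concrete, here's a toy timing sketch of the squeeze/weld/hold/release cycle implied by the list above; the millisecond values are purely illustrative placeholders, not figures for any real welder.

from dataclasses import dataclass
import time

@dataclass
class WeldSchedule:
    squeeze_ms: int = 300   # apply electrode force and let the parts seat
    weld_ms: int = 150      # welding current on
    hold_ms: int = 500      # current off; keep the force on while the molten nugget solidifies

    def run(self, set_force, set_current):
        set_force(True)
        time.sleep(self.squeeze_ms / 1000)
        set_current(True)
        time.sleep(self.weld_ms / 1000)
        set_current(False)
        time.sleep(self.hold_ms / 1000)   # cooling time before releasing the physical force
        set_force(False)

if __name__ == "__main__":
    WeldSchedule().run(lambda on: print("force:", on), lambda on: print("current:", on))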

There are also personal cautions to bear in mind. One is that this procedure makes some very strong magnetic fields. Another is that, if/when the work pieces melt, molten metal can be sprayed out.

“Danger, Will Robinson!”

Those magnetic fields can also do a destructive number on some wristwatches, as well as on credit card strips and the like, so if you are operating such a fixture, pay attention to what you may be wearing or carrying on your person.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

 Related Content


The post A look at spot welding appeared first on EDN.

GaN power module improves inverter efficiency

Thu, 06/13/2024 - 23:59

A 650-V GaN intelligent power module (IPM) from TI enables up to 99% inverter efficiency for major home appliances and HVAC systems. The DRV7308 IPM integrates 650-V, 205-mΩ e-mode GaN FETs in a half H-bridge configuration, capable of driving three-phase BLDC/PMSM motors with up to 450-V DC rails.

Worldwide efficiency standards for appliances and HVAC systems, such as SEER, MEPS, Energy Star, and Top Runner, are becoming increasingly stringent. TI reports that the DRV7308 helps engineers meet these standards by leveraging GaN technology to deliver enhanced efficiency and thermal performance, with 50% reduced power losses compared to existing solutions. It also achieves low dead time and low propagation delay, both less than 200 ns. This allows higher PWM switching frequencies, which reduce audible noise and system vibration.

Housed in a 12×12-mm, 60-pin QFN package, the DRV7308 is one of the industry’s smallest IPMs for motor drive applications ranging from 150 W to 250 W. Its high efficiency eliminates the need for an external heatsink, shrinking motor drive inverter PCB size by up to 55%. The integrated current sense amplifier, protection features, and inverter stage further reduce solution size and cost.

Preproduction quantities of the DRV7308 IPM are available for purchase on TI.com. Prices start at $5.50 each in lots of 1000 units. An evaluation module is also available for $250.

DRV7308 product page 

Texas Instruments  

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post GaN power module improves inverter efficiency appeared first on EDN.

Reference design covers GaN e-mobility charger

Thu, 06/13/2024 - 23:58

Transphorm has released a 300-W DC/DC GaN-based reference design for 2-wheel and 3-wheel electric vehicle (EV) battery chargers. The TDDCDC-TPH-IN-BI-LLC-300W-RD design guide employs four TP65H150G4PS 650-V, 150-mΩ SuperGaN FETs in TO-220 packages to form an isolated bidirectional battery charger. Although the power level of this reference design is only 300 W, it features a full-bridge LLC topology with the potential for significantly higher power levels.

The reference design illustrates a fully analog implementation without processor firmware for the power stage. Simple jumpers manage power flow in this design, but these will need to be replaced with controls that a battery management system can operate in a real product.

Key specifications of the reference design include:

In addition to EV onboard chargers, the reference design can be used for renewable energy systems, backup power supplies, and vehicle-to-everything (V2X) applications. The design guide and BOM can be downloaded by following the product page link below.

TDDCDC-TPH-IN-BI-LLC-300W-RD product page

Transphorm

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Reference design covers GaN e-mobility charger appeared first on EDN.

MATLAB library ignites 6G innovation

Thu, 06/13/2024 - 23:58

The 6G Exploration Library from MathWorks empowers engineers to explore, model, and simulate 6G-enabling technologies with MATLAB. Extending the capabilities of the company’s 5G Toolbox, the add-on library allows users to configure and generate 6G waveform candidates. It efficiently bridges MATLAB to RF instruments or software-defined radios, facilitating over-the-air waveform transmission and precise signal quality measurements.

The 6G Exploration Library contains reference designs and examples, enabling users to:

  • Generate waveforms with parameters extending beyond the limits of 5G NR specifications.
  • Simulate 6G candidate links, including transmitter operations, channel models, RF impairments, and reference receiver algorithms.
  • Explore the impact of hardware impairments at sub-THz carrier frequencies.
  • Model reconfigurable intelligent surfaces (RIS) and experiment with propagation scenarios with and without the presence of blockages.
  • Apply AI techniques to solve 6G wireless communications problems.

MathWorks will showcase the 6G Exploration Library at next week’s IMS 2024 conference. It will also participate in multiple workshops and educational seminars focused on 6G and artificial intelligence.

6G Exploration Library product page

MathWorks

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post MATLAB library ignites 6G innovation appeared first on EDN.

Isolated probe limits common-mode noise

Thu, 06/13/2024 - 23:57

An isolated probing system, the R&S RT-ZISO measures fast switching signals in environments with high common-mode voltages and currents. Its power-over-fiber architecture galvanically isolates the DUT from the measurement setup, providing a higher common-mode rejection ratio (CMRR) than conventional differential probes.

The oscilloscope probe delivers accurate differential measurements, offering an input and offset range of ±3 kV, a common-mode range of ±60 kV, and a rise time of less than 450 ps. It suppresses fast common-mode signals that would otherwise distort measurements and degrade accuracy, achieving a CMRR of greater than 90 dB (30,000:1) at 1 GHz. Upgradeable bandwidth options for the probing system include 100 MHz, 200 MHz, 350 MHz, 500 MHz, and 1 GHz.

The RT-ZISO probing system connects to any oscilloscope with a BNC or SMA interface. However, it offers seamless operation and control when connected to an R&S oscilloscope. With a CAT III 1000-V safety rating, the RT-ZISO ensures reliable electrical measurements. Additionally, its safe-attach feature allows for easy and secure swapping of probe tips.

To request a price quote for the RT-ZISO isolated probing system, click on the link to the product page below.

RT-ZISO product page

Rohde & Schwarz 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Isolated probe limits common-mode noise appeared first on EDN.
