The advent of co-packaged optics (CPO) in 2025
Co-packaged optics (CPO)—the silicon photonics technology promising to transform modern data centers and high-performance networks by addressing critical challenges like bandwidth density, energy efficiency, and scalability—is finally entering the commercial arena in 2025.
According to a report published in Economic Daily News, TSMC has successfully integrated CPO with advanced semiconductor packaging technologies, and sample deliveries are expected in early 2025. Next, TSMC is projected to enter mass production in the second half of 2025 with 1.6T optical transmission offerings.
Figure 1 CPO facilitates a shift from electrical to optical transmission to address the interconnect limitations such as signal interference and overheating. Source: TrendForce
The report reveals that TSMC has successfully trialled a key CPO technology—micro ring modulator (MRM)—at its 3-nm process node in close collaboration with Broadcom. That’s a significant leap from electrical to optical signal transmission for computing tasks.
The report also indicates that Nvidia plans to adopt CPO technology, starting with its GB300 chips, which are set for release in the second half of 2025. Moreover, Nvidia plans to incorporate CPO in its subsequent Rubin architecture to address the limitations of NVLink, the company’s in-house high-speed interconnect technology.
What’s CPO
CPO is a crucial technology for artificial intelligence (AI) and high-performance computing (HPC) applications. It enhances a chip’s interconnect bandwidth and energy efficiency by integrating optics and electronics within a single package, which significantly shortens electrical link lengths.
Here, optical links offer multiple advantages over traditional electrical transmission; they lower signal degradation over distance, reduce susceptibility to crosstalk, and offer significantly higher bandwidth. That makes CPO an ideal fit for data-intensive AI and HPC applications.
Furthermore, CPO offers significant power savings compared to traditional pluggable optics, which struggle with power efficiency at higher data rates. The early implementations show 30% to 50% reductions in power consumption, claims an IDTechEx study titled “Co-Packaged Optics (CPO): Evaluating Different Packaging Technologies.”
This integration of optics with silicon—enabled by advancements in chiplet-based technology and 3D-IC packaging—also reduces signal degradation and power loss and pushes data rates to 1.6T and beyond.
Figure 2 Optical interconnect technology has been gaining traction due to the growing need for higher data throughput and improved power efficiency. Source: IDTechEx
Heterogeneous integration, a key ingredient in CPO, enables the fusion of the optical engine (OE) with switch ASICs or XPUs on a single package substrate. Here, the optical engine includes both photonic ICs and electronic ICs. CPO packaging generally employs two approaches: the first involves packaging of the optical engine itself, while the second focuses on system-level integration of the optical engine with ICs such as ASICs or XPUs.
A new optical computing era
TSMC’s approach involves integrating CPO modules with advanced packaging technologies such as chip-on-wafer-on-substrate (CoWoS) or system-on-integrated-chips (SoIC). It eliminates the speed limitations of traditional copper interconnects and puts TSMC at the forefront of a new optical computing era.
However, challenges such as low yield rates in CPO module production might lead TSMC to outsource some optical-engine packaging orders to other advanced packaging companies. This suggests that the complex packaging process surrounding CPO will inevitably require considerable fine-tuning before commercial realization.
Still, it’s a breakthrough that highlights a tipping point for AI and HPC performance, wrote Jeffrey Cooper in his LinkedIn post. Cooper, a former sourcing lead for ASML, also sees a growing need for cross-discipline expertise in photonics and semiconductor packaging.
Related Content
- Optical interconnects draw skepticism, scorn
- TSMC crunch heralds good days for advanced packaging
- Intel and FMD’s Roadmap for 3D Heterogeneous Integration
- Heterogeneous Integration and the Evolution of IC Packaging
- CEA-Leti Develops Active Optical Interposers to Connect Chiplets
- Road to Commercialization for Optical Chip-to-Chip Interconnects
PWM power DAC incorporates an LM317
Instead of the conventional approach of backing up a DAC with an amplifier to boost output, this design idea charts a path less traveled to power. It integrates an LM317 positive regulator with a simple 8-bit PWM DAC topology to obtain a robust 11-V, 1.5-A capability. It thus preserves simplicity while exploiting the built-in fault protection features (thermal and overload) of that time-proven Bob Pease masterpiece. Its output accuracy rests on the guaranteed 2% precision of the LM317’s internal voltage reference, making it securely independent of the vagaries of both the 5-V logic supply rail and the incoming raw DC supply.
Figure 1 diagrams how it works.
Figure 1 LM317 regulator melds with HC4053 CMOS switch to make a 16-W PWM power DAC.
CMOS SPDT switches U1b and U1c accept a 10-kHz PWM signal to generate a 0 V to 9.75 V “ADJ” control signal for the U2 regulator via the feedback network R1, R2, and R3. The incoming PWM signal is AC-coupled so that U1 can “float” on U2’s output. U1c provides an inverse of the PWM signal, implementing active ripple cancellation as described in “Cancel PWM DAC ripple with analog subtraction.” Note that R1||R2 = R3 to optimize ripple subtraction and DAC accuracy.
This feedback arrangement does, however, make the output voltage a nonlinear function of PWM duty factor (DF) as given by:
Vout = 1.25 / (1 – DF(1 – R1/(R1 + R2)))
= 1.25 / (1 – 0.885*DF)
This is graphed in Figure 2.
Figure 2 The Vout (1.25 V to 11 V) versus PWM DF (0 to 1) where Vout = 1.25 / (1 – 0.885*DF).
Figure 3 plots the inverse of Figure 2, yielding the PWM DF required for any given Vout.
Figure 3 The inverse of Figure 2 where PWM DF = (1 – 1.25/Vout)/0.885.
The corresponding 8-bit PWM setting works out to: Dbyte = 255 * (1 – 1.25 / Vout) / 0.885
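To make the arithmetic concrete, here is a minimal Python sketch (my own illustration, not part of the original design idea) that applies the two formulas above: it picks the nearest 8-bit PWM setting for a requested output voltage and then back-calculates the voltage that setting actually produces. The 0.885 constant is the article's rounded value of 1 – R1/(R1 + R2).

```python
# Illustrative sketch of the article's transfer function (K = 1 - R1/(R1 + R2) ≈ 0.885)
K = 0.885

def pwm_byte(vout_target):
    """Nearest 8-bit PWM setting (0..255) for a requested output voltage."""
    df = (1 - 1.25 / vout_target) / K            # required duty factor, 0..1
    return max(0, min(255, round(255 * df)))

def vout_from_byte(dbyte):
    """Output voltage actually produced by a given 8-bit PWM setting."""
    return 1.25 / (1 - K * dbyte / 255)

for target in (1.25, 2.5, 5.0, 9.0):
    b = pwm_byte(target)
    print(f"target {target:5.2f} V -> Dbyte {b:3d} -> actual {vout_from_byte(b):5.3f} V")
```

Because the transfer function is nonlinear, the step size grows from a few millivolts near 1.25 V to a few hundred millivolts approaching full scale.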
Vfullscale = 1.25 / (R1/(R1 + R2)), so design choices other than 11 V are available. 11 V is the maximum consistent with HC4053’s ratings, but up to 20 V is feasible if the metal gate CD4053B is substituted for U1. Don’t forget, however, the requirement that R3 = R1||R2.
The supply rail V+ can be anything from a minimum of Vfullscale+3V to accommodate U2’s minimum headroom dropout requirement, up to the LM317’s absmax 40-V limit. DAC accuracy will be unaffected due to this chip’s excellent PSRR, although of course efficiency may suffer.
U2 should be heatsunk as dictated by its heat dissipation, which is the required output current multiplied by the V+ to Vout differential. Up to double-digit watts is possible at high currents and low Vout.
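As a quick worked example (the supply and load numbers below are my own, chosen only for illustration), the dissipation that drives the heatsink choice is simply:

```python
# Worst-case LM317 (U2) dissipation estimate -- illustrative values only
v_supply = 15.0    # V+, a few volts above the 11-V full scale
v_out    = 1.25    # worst case: minimum output voltage
i_out    = 1.5     # amps, the LM317's rated output current
print((v_supply - v_out) * i_out, "W")   # ~20.6 W -> a substantial heatsink is needed
```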
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Cancel PWM DAC ripple with analog subtraction
- A faster PWM-based DAC
- Parsing PWM (DAC) performance: Part 1—Mitigating errors
- Cancel PWM DAC ripple with analog subtraction but no inverter
2024: A year’s worth of interconnected themes galore
As any of you who’ve already seen my precursor “2025 Look Ahead” piece may remember, we’ve intentionally flipped the ordering of my two end-of-year writeups once again this year. This time, I’ll be looking back over 2024: for historical perspective, here are my prior retrospectives for 2019, 2021, 2022 and 2023 (we skipped 2020).
As I’ve done in past years, I thought I’d start by scoring the topics I wrote about a year ago in forecasting the year to come:
- Increasingly unpredictable geopolitical tensions
- The 2024 United States election
- Windows (and Linux) on Arm
- Declining smartphone demand, and
- Internal and external interface evolutions
Maybe I’m just biased, but I think I nailed ‘em all, albeit with varying degrees of impactfulness. To clarify, by the way, it’s not that whether the second one would happen was difficult to predict; the outcome, which I discussed a month back, is what was unclear at the time. In the sections that follow, I’m going to elaborate on one of the above themes, as well as discuss other topics that didn’t make my year-ago forecast but ended up being particularly notable (IMHO, of course).
Battery transformations
I’ve admittedly written quite a lot about lithium-based batteries and the devices they fuel over the past year, as I suspect I’ll also be doing in the year(s) to come. Why? My introductory sentence to a recent teardown of a “vape” device answers that question, I think:
The ever-increasing prevalence of lithium-based batteries in various shapes, sizes and capacities is creating a so-called “virtuous circle”, leading to lower unit costs and higher unit volumes which encourage increasing usage (both in brand new applications and existing ones, the latter as a replacement for precursor battery technologies), translating into even lower unit costs and higher unit volumes that…round and round it goes.
Call me simple-minded (as some of you already may have done a time or few over the years!) but I consistently consult the same list of characteristics and tradeoffs among them when evaluating various battery technologies…a list that was admittedly around half its eventual length when I first scribbled it on a piece of scrap paper a few days ago, until I kept thinking of more things to add in the process of keyboard-transcribing it (thereby eventually encouraging me to delete the “concise” adjective I’d originally used to describe it)!
- Volume manufacturing availability, translating to cost (as I allude to in the earlier quote)
- Form factor implementation flexibility (or not)
- The required dimensions and weight for a given amount of charge-storage capacity
- Both peak and sustained power output
- The environmental impacts of raw materials procurement, battery manufacturing, and eventual disposal (or, ideally, recycling)
- Speaking of “environmental”, the usable operating temperature range, along with tolerance to other environment variables such as humidity, shock and vibration
- And recharge speed (both to “100% full” and to application-meaningful percentages of that total), along with the number of recharge cycles the battery can endure until it no longer can hold enough anode electrons to be application-usable in a practical sense.
Although plenty of lithium battery-based laptops, smartphones and the like are sold today, a notable “driver” of incremental usage growth in the first half of this decade (and beyond) has been various mobility systems—battery-powered drones (and, likely in the future, eVTOLs), automobiles and other vehicles, untethered robots, and watercraft (several examples of which I’ll further elaborate on later in this writeup, for a different reason). Here, the design challenges are quite interconnected and otherwise complex, as I discussed back in October 2021:
Li-ion battery technology is pretty mature at this point, as is electric motor technology, so in the absence of a fundamental high-volume technology breakthrough in the future, to get longer flight time, you need to include bigger batteries…which leads to what I find most fundamentally fascinating about drones and their flying kin: the fundamental balancing act of trading off various contending design factors that is unique to the craft of engineering (versus, for example, pure R&D or science). Look at what I’ve just said. Everyone wants to be able to fly their drone as long as possible, before needing to land and swap out battery packs. But in order to do so, that means that the drone manufacturer needs to include larger battery cells, and more of them.
Added bulk admittedly has the side benefit of making the drone more tolerant of wind gusts, for example, but fundamentally, the heavier the drone the beefier the motors need to be in order to lift it off the ground and fly it for meaningful altitudes, distances, and durations. Beefier motors burn more juice, which begs for more batteries, which make the drone even heavier…see the quagmire? And unlike with earth-tethered electricity-powered devices, you can’t just “pull over to the side of the road” if the batteries die on you.
Now toss in the added “twist” that everyone also wants their drone to be as intelligent as possible so it doesn’t end up lost or tangled in branches, and so it can automatically follow whatever’s being videoed. All those image and other sensors, along with the intelligence (and memory, and..) to process the data coming off them, burns juice, too. And don’t forget about the wireless connectivity between the drone and the user—minimally used for remote control and analytics feedback to the user…How do you balance all of those contending factors to come up with an optimum implementation for your target market?
Although the previous excerpt was specifically about drones, many of the points I raised are also relevant at least to a degree in the other mobility applications I mentioned. That said, an electric car’s powerplant size and weight constraints aren’t quite as acute as an airborne system’s might be, for example. This application-defined characteristics variability, both in an absolute sense and relative to others on my earlier list, helps explain why, as Wikipedia points out, “there are at least 12 different chemistries of Li-ion batteries” (with more to come). To wit, developers are testing out a diversity of both anode and cathode materials (and combinations of them), increasingly aided by AI (which I’ll also talk more about later in this piece) in the process, along with striving to migrate away from “wet” electrolytes, which among other things are flammable and prone to leakage, toward safer solid-state approaches.
Another emerging volume-growth application, as I highlighted throughout the year, is battery generators, most frequently showcased by me in their compact portable variants. Here, while form factor and weight remain important, since the devices need to be hauled around by their owners, they’re stationary while in use. Extrapolate further and you end up with even larger home battery-backup banks that never get moved once installed. And extrapolate even further, to a significant degree in fact, and you’re now talking about backup power units for hospitals, for example, or even electrical grid storage for entire communities or regions. One compelling use case is to smooth out the inherent availability variability of renewable energy sources such as solar and wind, among other reasons to “feed” the seemingly insatiable appetites of AI workload-processing data centers in a “green”-as-possible manner. And in all these stationary-backup scenarios, installation space is comparatively abundant and weight is also of lesser concern; the primary selection criteria are factors such as cost, invulnerability, and longevity.
As such, non-lithium-based technologies will likely become increasingly prominent in the years to come. Sodium-ion batteries (courtesy of, in part, sodium’s familial proximity to lithium in the Periodic Table of Elements) are particularly near-term promising; you can already buy them on Amazon! The first US-based sodium-ion “gigafactory” was recently announced, as was the US Department of Energy’s planned $3 billion in funding for new sodium-ion (and other) battery R&D projects. Iron-based batteries such as the mysteriously named (but not so mysterious once you learn how they work) iron-air technology tout raw materials abundance (how often do you come across rust, after all?) translating into low cost. Vanadium-based “flow” batteries also hold notable promise. And there’s one other grid-scale energy storage candidate with an interesting twist: old EV batteries. They may no longer be sufficiently robust to reliably power a moving vehicle, but stationary backup systems still provide a resurrecting life-extension opportunity.
For ongoing information on this topic, in addition to my and colleagues’ periodic coverage, market research firm IDTechEx regularly publishes blog posts on various battery technology developments which I also commend to your inspection. I have no connection with the firm aside from being a contented consumer of their ongoing information output!
Drones as armaments
As a kid, I was intrigued by the history of warfare. Not (at all) the maiming, killing and other destruction aspects, mind you, instead the equipment and its underlying technologies, their use in conflicts, and their evolutions over time. Three related trends that I repeatedly noticed were:
- Technologies being introduced in one conflict and subsequently optimized (or in other cases disbanded) based on those initial experiences, with the “success stories” then achieving widespread use in subsequent conflicts
- The oft-profound advantages that adopters of new successful warfare technologies (and equipment and techniques based on them) gained over less-advanced adversaries who were still employing prior-generation approaches, and
- That new technology and equipment breakthroughs often rapidly obsoleted prior-generation warfare methods
Re point #1, off the top of my head, there’s (with upfront apologies for any United States centricity in the examples that follow):
- Chemical warfare, considered (and briefly experimented with) during the US Civil War, with widespread adoption beginning in World War I (WWI)
- Airplanes and tanks, introduced in WWI and extensively leveraged in WWII (and beyond)
- Radar (airplanes), sonar (submarines) and other electronic surveillance, initially used in WWII with broader implementation in subsequent wars and other conflicts
- And RF and other electronics-based communications methods, including cryptography (and cracking), once again initiated in WWII
And to closely related points #2 and #3, two WWII examples come to mind:
- I still vividly recall reading as a kid about how the Polish army strove, armed with nothing but horse cavalry, to defend against invading German armored brigades, although the veracity of at least some aspects of this propaganda-tainted story are now in dispute.
- And then there was France’s Maginot Line, a costly “line of concrete fortifications, obstacles and weapon installations built by France in the 1930s” ostensibly to deter post-WWI aggression by Germany. It was “impervious to most forms of attack” across the two countries’ shared border, but the Germans instead “invaded through the Low Countries in 1940, passing it to the north”. As Wikipedia further explains, “The line, which was supposed to be fully extended further towards the west to avoid such an occurrence, was finally scaled back in response to demands from Belgium. Indeed, Belgium feared it would be sacrificed in the event of another German invasion. The line has since become a metaphor for expensive efforts that offer a false sense of security.”
I repeatedly think of case studies like these as I read about how the Ukrainian armed forces are, both in the air and sea, now using innovative, often consumer electronics-sourced approaches to defend against invading Russia and its (initially, at least) legacy warfare techniques. Airborne drones (more generally: UAVs, or unmanned aerial vehicles) have been used for surveillance purposes since at least the Vietnam War as alternatives to satellites, balloons, manned aircraft and the like. And beginning with aircraft such as the mid-1990s Predator, UAVs were also able to carry and fire missiles and other munitions. But such platforms were not only large and costly, but also remotely controlled, not autonomous to any notable degree. And they weren’t in and of themselves weapons.
That’s all changed in Ukraine (and elsewhere, for that matter) in the modern era. In part hamstrung by its allies’ constraints on what missiles and other weapons it was given access to and how and where they could be used, Ukraine has broadened drones’ usage beyond surveillance into innate weaponry, loading them up with explosives and often flying them hundreds of miles for subsequent detonation, including all the way to Moscow. Initially, Ukraine retrofit consumer drones sourced from elsewhere, but it now manufactures its own UAVs in high volumes. Compared to their Predator precursors, they’re compact, lightweight, low cost and rugged. They’re increasingly autonomous, in part to counteract Russian jamming of wireless control signals coming from their remote operators. They can even act as flamethrowers. And as the image shown at the beginning of this section suggests, they not only fly but also float, a key factor in Ukraine’s to-date success both in preventing a Russian blockade of the Black Sea and in attacking Russia’s fleet based in Crimea.
AI (again, and again, and…)
AI has rapidly grown beyond its technology-coverage origins and into the daily clickbait headlines and chyrons of even mainstream media outlets. So it’s probably no surprise that this particular TLA (with “T” standing for “two” this time, versus the usual) is a regular presence in both my end-of-year and next-year-forecast writeups, along with plenty of ongoing additional AI coverage in-between each year’s content endpoints. A month ago, for example, I strove to convince you that multimodal AI would be ascendant in the year(s) to come. Twelve months ago, I noted the increasing importance of multimodal models’ large language model (LLM) precursors over the prior year, and the month(-ish) before that, I’d forecasted that generative AI would be a big deal in 2023 and beyond. Lather, rinse and repeat.
What about the past twelve months; what are the highlights? I could easily “write a book” on just this topic (as I admittedly almost already did earlier re “Battery Transformations”). But with the 3,000-word count threshold looming, and always mindful of Aalyia’s wrath (I kid…maybe…), I’ll strive to practice restraint in what follows. I’m not, for example, going to dwell on OpenAI’s start-of-year management chaos and ongoing key-employee-shedding, nor on copyright-infringement lawsuits brought against it and its competitors by various content-rights owners…or for that matter, on lawsuits brought against it and its competitors (and partners) by other competitors. Instead, here’s some of what else caught my eye over the past year:
- Deep learning models are becoming more bloated with the passage of time, despite floating point-to-integer conversion, quantization, sparsity and other techniques for trimming their size. Among other issues, this makes it increasingly infeasible to run them natively (and solely) on edge devices such as smartphones, security cameras and (yikes!) autonomous vehicles. Imagine (a theoretical case study, mind you) being unable to avoid a collision because your car’s deep learning model is too dinky to cover all possible edge and corner cases and a cloud-housed supplement couldn’t respond in time due to server processing and network latency-and-bandwidth induced delays…
- As the models themselves grow, the amount of processing horsepower (not to mention consumed power) and time needed to train them increases as well…exponentially so.
- Resource demands for deep learning inference are also skyrocketing, especially as the trained models referenced become more multimodal and otherwise complex, not to mention the new data the inference process is tasked with analyzing.
- And semiconductor supplier NVIDIA today remains the primary source of processing silicon for training, along with (to a lesser but still notable market segment share degree) inference. To the company’s credit, decades after kicking off its advocacy of general-purpose graphics processing (GPGPU) applications, its longstanding time, money and headcount investments have borne big-time fruit for the company. That said, competitors (encouraged by customers aspiring for favorable multi-source availability and pricing outcomes) continue their pursuit of the “Green Team”.
- To my earlier “consumed power” comments, along with my even earlier “seemingly insatiable appetites of AI workload-processing data centers” comments, and as my colleague (and former boss) Bill Schweber also recently noted, “AI-driven datacenter energy demand could expand 160 percent over the next two years, leaving 40 percent of existing facilities operationally constrained by power availability,” to quote recent coverage in The Register. In response to this looming and troubling situation, in the last few days alone I’ve come across news regarding Amazon (“Amazon AI Data Centers To Double as Carbon Capture Machines”) and Meta (“Meta wants to use nuclear power for its data centers”). Plenty of other recent examples exist. But will they arrive in time? And will they only accelerate today’s already worrying global warming pace in the process?
- But, in spite of all of this spiraling “heavy lifting”, researchers continue to conclude that AI still doesn’t have a coherent understanding of the world, not to mention that the ROI on ongoing investments in what AI can do may be starting to level off (at least to some observers, albeit not a universally held opinion).
- One final opinion: deep learning models are seemingly already becoming commodities, a trend aided in part by increasingly capable “open” options (although just what “open” means has no shortage of associated controversy). If I’m someone like Amazon, Apple, Google, Meta or Microsoft, whose deep learning investments reap returns in associated AI-based services and whose models are “buried” within these services, this trend isn’t so concerning. Conversely, however, for someone whose core business is in developing and licensing models to others, the long-term prognosis may be less optimistic, no matter how rosy (albeit unprofitably so) things may currently seem to be. Heck, even AMD and NVIDIA are releasing open model suites of their own nowadays…
I’m writing this in early December 2024. You’ll presumably be reading it sometime in January 2025. I’ll split the difference and wrap up by first wishing you all a Happy New Year!
As usual, I originally planned to cover a number of additional topics in this piece, such as (in no particular order save for how they came out of my noggin):
- Matter and Thread’s misfires and lingering aspirations
- Much discussed (with success reality to follow?) chiplets
- Plummeting-cost solar panels
- Iterative technology-related constraints on China (and its predictable responses), and
- Intel’s ongoing, deepening travails
But (also) as usual I ended up with more things that I wanted to write about than I had a reasonable wordcount budget to do so. Having now passed through 3,000 words, I’m going to restrain myself and wrap up, saving the additional topics (as well as updates on the ones I’ve explored here) for dedicated blog posts to come in the coming year(s). Let me know your thoughts on my top-topic selections, as well as what your list would have looked like, in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- 2023: Is it just me, or was this year especially crazy?
- A tech look back at 2022: We can’t go back (and why would we want to?)
- A 2021 technology retrospective: Strange days indeed
- 10 consumer technology breakthroughs from 2019
- 2025: A technology forecast for the year ahead
Ternary gain-switching 101 (or 10202, in base 3)
This design idea is centered on the humble on/off/on toggle switch, which is great for selecting something/nothing/something else, but can be frustrating when three active options are needed. One possibility is to use the contacts to connect extra, parallel resistors across a permanent one (for example), but the effect is something like low/high/medium, which just looks wrong.
That word “active” is the clue to making the otherwise idle center position do some proper work, like helping to control an op-amp stage’s gain, as shown in Figure 1.
Figure 1 An on/off/on switch gives three gain settings in a non-inverting amplifier stage and does so in a rational order.
I’ve used this principle many times, but can’t recall having seen it in published circuits, and think it’s novel, though it may be so commonplace as to be invisible. It’s certainly obvious when you think about it.
A practical application
That’s the basic idea, but it’s always more satisfying to convert such ideas into something useful. Figure 2 illustrates just that: an audio gain-box whose amplification is switched in a ternary sequence to give precise 1-dB steps from 0 to +26 dB. As built, it makes a useful bit of lab kit.
Figure 2 Ternary switching over three stages gives 0–26 dB gain in precise 1-dB steps.
Three gain stages are concatenated, each having its own switch. C1 and C2 isolate any DC, and R1 and R12 are “anti-click” resistors, ensuring that there’s no stray voltage on the input or output when something gets plugged in. A1d is the usual rail-splitter, allowing use on a single, isolated supply.
The op-amps are shown as common-or-garden TL074/084s. For lower noise and distortion, (a pair of) LM4562s would be better, though they take a lot more current. With a 5-V supply, the MCP6024 is a good choice. For stereo use, just duplicate almost everything and use double-pole switches.
All resistor values are E12/24 for convenience. The resistor combinations shown are much closer to the ideal, calculated values than the assumed 1% tolerance of actual parts, and give a better match than E96s would in the same positions.
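To see why ternary switching over three stages yields contiguous 1-dB steps, consider the short Python sketch below. The per-stage gain sets shown ({0, 1, 2} dB, {0, 3, 6} dB, and {0, 9, 18} dB) are my assumption of a ternary weighting consistent with the stated 0-to-26-dB range, not values read off the schematic:

```python
# Assumed ternary stage weighting (not taken from the schematic): each switch selects
# 0 dB, 1x, or 2x its stage's unit gain, with unit gains of 1, 3, and 9 dB.
from itertools import product

stage_options = [(0, 1, 2), (0, 3, 6), (0, 9, 18)]       # dB per switch position
gains = sorted(sum(combo) for combo in product(*stage_options))
assert gains == list(range(27))                           # 27 settings cover 0..26 dB with no gaps
print(gains)
```

It's the same positional-notation trick as binary gain switching, just with three symbols per digit, which is why three switches suffice here where a binary scheme needs five.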
Other variations on the theme
The circuit of Figure 2 could also be built for DC use but would then need low-offset op-amps, especially in the last stage. (Omit C1, C2, and other I/O oddments, obviously.)
Figure 1 showed the non-inverting version, and Figure 3 now employs the idea in an inverting configuration. Beware of noise pick-up at the virtual-earth point, the op-amp’s inverting input.
Figure 3 An inverting amplifier stage using the same switching principle.
The same scheme can also be used to make an attenuator, and a basic stage is sketched in Figure 4. Its input resistance changes depending on the switch setting, so an input buffer is probably necessary; buffering between stages and of the output certainly is.
Figure 4 A single attenuation stage with three switchable levels.
Conclusion: back to binary basics
You’ve probably been wondering, “What’s wrong with binary switching?” Not a lot, except that it uses more op-amps and more switches while being rather obvious and hence less fun.
Anyway, here (Figure 5) is a good basic circuit to do just that.
Figure 5 Binary switching of gain from 0 to +31 dB, using power-of-2 steps. Again, the theoretical resistor values are much closer to the ideal than their actual 1% tolerances.
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- To press on or hold off? This does both.
- Latching power switch uses momentary pushbutton
- A new and improved latching power switch
- Latching power switch uses momentary-action pushbutton
A Bluetooth receiver, an identity deceiver
In mid-October 2015, EDN ran my teardown of Logitech’s Bluetooth Audio Adapter (a receiver, to be precise) based on a CSR (now Qualcomm) BlueCore CSR8630 Single Chip Audio ROM.
The CSR module covers the bulk of the bottom half of the PCB topside, with most of the top half devoted to discretes and such for implementing the audio line-level output amp and the like:
A couple of weeks later, in a follow-up blog post, I mentioned (and briefly compared) a bunch of other Bluetooth adapters I’d come across. Some acted as both receivers and transmitters, for example, while others embedded batteries for portable usage. They implemented varying Bluetooth profiles and specification levels, and some even supported aptX and other optional audio codecs. Among them were three different Aukey models; here’s what I said about them:
I recently saw Aukey’s BR-C1 on sale for $12.99, for example (both black and white color scheme options are available), while the BR-C2 was recently selling for $1 less, and the even fuller-featured BT-C2 was recently special-priced at $24.99.
Logitech’s device is AC-powered via an included “wall wart” intermediary and therefore appropriate for adding Bluetooth input-source capabilities to an A/V receiver, as discussed in my teardown. Aukey’s products conversely contain built-in rechargeable batteries and are therefore primarily intended for mobile use, such as converting a conventional pair of headphones into wireless units. Recharging of the Aukey devices’ batteries occurs via an included micro-USB cable and not-included 5V USB-output power source.
All of the Aukey products can also act as hands-free adapters, by virtue of their built-in microphones. The BR-C1 and BR-C2’s analog audio connections are output-only, thereby classifying them as Bluetooth receivers; the more expensive BT-C2 is both a Bluetooth transmitter and receiver (albeit not both at the same time). But the Bluetooth link between all of them and a wirelessly tethered device is bi-directional, enabling not only speakerphone integration with a vehicle audio subsystem or set of headphones (via analog outputs) but also two-way connectivity to a smartphone (via Bluetooth).
The fundamental difference between the BR-C1 and BR-C2, as far as I can tell, is the form factor; the BR-C1 is 2.17×2.17×0.67 inches in size, while the BR-C2 is 2×1×0.45 inches. All other specs, including play and standby time, seem to be identical. None of Aukey’s devices offer dual RCA jacks as an output option; they’re 3.5 mm TRS-only. However, as my teardown writeup notes, the inclusion of a TRS-to-dual RCA adapter cable in each product’s kit makes multiple integrated output options a seemingly unnecessary functional redundancy.
As time passed, my memory of the specifics of that latter piece admittedly faded, although I’ve re-quoted the following excerpt a few times in comparing a key point made then with other conceptually reminiscent product categories: LED light bulbs, LCDs, and USB-C-based devices:
Such diversity within what’s seemingly a mature and “vanilla” product category is what prompted me to put cyber-pen to cyber-paper for this particular post. The surprising variety I encountered even during my brief period of research is reflective of the creativity inherent to you, the engineers who design these and countless other products. Kudos to you all!
Fast forward to early December 2023, when I saw an Aukey Bluetooth audio adapter intended specifically for in-vehicle use (therefore battery powered, and with an embedded microphone for hands-free telephony), although usable elsewhere too. It was advertised at bargains site SideDeal (a sibling site to same-company Meh, who I’ve also mentioned before) for $12.99.
No specific model number was documented on the promo page, only some features and specs:
Features
- Wireless Audio Stream
- The Bluetooth 5 receiver allows you to wirelessly stream audio from your Bluetooth enabled devices to your existing wired home or car stereo system, speakers, or headphones
- Long Playtime
- Built-in rechargeable battery supports 18 hours of continuous playback and 1000 hours of standby time
- Dual Device Link
- Connect two bluetooth devices simultaneously; free to enjoy music or answer phone call from either of the two paired devices
- Easy Use
- Navigate your music on the receiver with built-in controls which can also be used to manage hands-free calls or access voice assistant
Specifications
- Type: Receiver
- Connectivity: 3.5mm
- Bluetooth standard: Bluetooth v5.0
- Color: Black
- To fit: Audio Receivers
- Ports: 3.5 mm Jack
I bit. I bought three, actually; one each for my and my wife’s vehicles, and a third for teardown purposes. When they arrived, I put the third boxed one on the shelf.
Fast forward nearly a year later, to the beginning of November 2024 (and a couple of weeks prior to when I’m writing these words now), when I pulled the box back off the shelf and prepared for dissection. I noticed the model number, BR-C1, stamped on the bottom of the box but didn’t think anything more of it until I remembered and re-read that blog post published almost exactly nine years earlier, which had mentioned the exact same device:
(I’ve saved you from the boring shots of the blank cardboard box sides)
Impressive product longevity, eh? Hold that thought. Let’s dive in:
The left half of the box contents comprises three cables: USB-A to micro-USB for recharging, plus 3.5 mm (aka, 1/8”) TRS to 3.5 mm, and 3.5 mm to dual RCA for audio output connections:
And a couple of pieces of documentation (a PDF of the user manual is available here):
On the right, of course, is our patient (my images, this time, versus the earlier stock photos), as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:
The other three device sides, like the earlier box sides, are bland, so I’ve not included images of them. You’re welcome.
Note, among other things, the FCC ID, 2AFHP-BR-C1. Again, hold that thought. By the way, it’s 2AFHP-BR-C1, not the 2AFHPBR-C1 stamped on the underside, which as it turns out is a different device, albeit, judging from the photos, also an automobile interior-tailored product.
From past experience, I’ve learned that the underside of a rubber “foot” is often a fruitful path inside a device, so once again I rolled the dice:
Bingo: my luck continues to hold out!
With all four screws removed (or at least sufficiently loosened; due to all that lingering adhesive, I couldn’t get two of them completely out of the holes), the bottom popped right off:
And the first thing I saw staring back at me was the 3.7-V, 300 mAh Li-polymer “pouch” cell. Why they went with this battery form factor and formulation versus the more common Li-ion “can” is unclear; there was plenty of room in the design for the battery, and flexibility wasn’t necessary:
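As a back-of-the-envelope aside (my arithmetic, not the author's, combining this cell's 300-mAh rating with the playtime and standby claims in the earlier spec list):

```python
# Does a 300-mAh cell plausibly support the claimed runtimes?
capacity_mah = 300
print(capacity_mah / 18, "mA average during playback")   # ~16.7 mA over the claimed 18 h
print(capacity_mah / 1000, "mA average in standby")      # 0.3 mA over the claimed 1000 h
```

Both averages look plausible for a Bluetooth audio SoC of this class, so the capacity and runtime claims at least hang together.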
In pulling the PCB out of the remaining top half of the case:
revealing, among other things, the electret microphone above it:
I inadvertently turned the device on, wherein it immediately went into blue-blinking-LED standby mode (I fortuitously quick-snapped the first still photo while the LED was illuminated; the video below it shows the full blink cadence):
Why standby, versus the initial alternating red/blue pairing-ready sequence that per the user manual (not to mention common sense) it was supposed to first-time power up in? I suspect that since this was a refurbished (not brand new) device, it had been previously paired to something by the prior owner and the factory didn’t fully reset it before shipping it back out to me. A long-press of the topside button got the device into the desired Bluetooth pairing mode:
And another long-press powered the PCB completely off again:
The previously seen bottom side of the PCB was bare (the glued-on battery doesn’t count, in my book) and, as usual for low cost, low profit margin consumer electronics devices like this one, the PCB topside isn’t very component-rich, either. In the upper right is the 3.5 mm audio output jack; to its left, and in the upper left, is the micro-USB charging connector, with the solder sites for the microphone wiring harness between them. Below them is the system’s multi-function power/mode switch. At left is the three-wire battery connector. Slightly below and to its right (and near the center) is the main system processor, Realtek’s RTL8763BFR Bluetooth dual mode audio SoC with integrated DAC, ADC (for the already-seen mic), DSP and both ROM and RAM.
To the right of the Realtek RTL8763BFR is its companion 40-MHz oscillator, with a total of three multicolor LEDs in a column both above and below it. In contrast, you may have previously noted five light holes in the top of the device; the diffusion sticker in the earlier image of the inside of the top half of the chassis “bridges the gaps”. Below and to the left of the Realtek RTL8763BFR is the HT4832 audio power amplifier, which drives the aforementioned 3.5 mm audio output jack. The HT4832 comes from one of the most awesome-named companies I’ve yet come across: Jiaxing Heroic Electronic Technology. And at the bottom of the PCB, perhaps obviously, is the embedded Bluetooth antenna.
After putting the device back together, it seemingly still worked fine; here’s what the LEDs look like displaying the pairing cadence from the outside:
All in all, a seemingly straightforward teardown, right? So, then, what’s with the “Identity Deceiver” mention in this writeup’s title? Well, before finishing up, I as-usual hit up the FCC certification documentation, final-action dated January 29, 2018, to see if I’d overlooked anything notable…but the included photos showed a completely different device inside. This time, the bottom side of the PCB was covered with components. And one of them, the design’s area-dominant IC, was from ISSC Technologies, not Realtek. See for yourself.
Confused, I hit up Google to see if anyone else had done a teardown of the Aukey BR-C1. I found one, in video form, published on October 30, 2015. It shows the same design version as in the FCC documentation:
The Aukey BR-C1 product review from the same YouTube creator, published a week-plus earlier, is also worth a view, by the way:
Fortuitously, the YouTube “thumbnail” video for the first video showcases the previously mentioned ISSC Technologies chip:
It’s the IS1681S, a Bluetooth 3.0+EDR multimedia SOC. Here’s a datasheet. ISSC Technologies was acquired by Microchip Technology in mid-2014 and the IS1681S presumably was EOL’d sometime afterward, thereby prompting Aukey’s redesign around Realtek silicon. But how was Aukey able to take the redesign to production without seeking FCC recertification? I welcome insights on this, or anything else you found notable about this teardown, in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Teardown: Bluetooth audio dongle keeps it simple
- Bluetooth audio adapters and their creative developers
- Teardown: Tile Mate Bluetooth tracker relies on software
- Teardown: Bluetooth-enhanced LED bulb
- Teardown: Bluetooth smart dimmer
- Teardown: OBD-II Bluetooth adapter
Software-defined vehicle (SDV): A technology to watch in 2025
Software-defined vehicle (SDV) technology has been a prominent highlight in the quickly evolving automotive industry. But how much of it is hype, and where is the real and tangible value? CES 2025 in Las Vegas will be an important venue for gauging the actual progress this technology has made in bringing code to the road.
Elektrobit will demonstrate its cloud-based virtual development, prototyping, testing, and validation platform for digital cockpits and in-vehicle infotainment (IVI) at the show. The company’s SDV solutions encompass AMD’s automotive-grade hardware, Google’s Android Automotive and Gemini AI, Epic Games’ Unreal Engine for 3D rendering, and Here navigation.
Figure 1 SDV promises a future-proof cockpit that is agnostic of the underlying hardware and software. Source: Elektrobit
Moreover, at CES 2025, Sony Honda Mobility will showcase its AFEELA prototype for electric vehicles (EVs), which employs Elektrobit’s digital cockpit built around a software-defined approach. Elektrobit’s other partners demonstrating their SDV solutions at the show include AWS, Cognizant, dSPACE, Siemens, and Sonatus.
SDV’s 2024 diary
Earlier, in April 2024, leading automotive chipmaker Infineon joined hands with embedded software specialist Green Hills to jointly develop SDV architectures for EV drivetrains. Infineon would combine its microcontroller-based processing platform AURIX TC4x with safety-certified real-time operating system (RTOS) µ-velOSity from Green Hills.
Figure 2 Real-time automotive systems are crucial in SDV architectures. Source: Infineon Technologies
Green Hills has already ported its µ-velOSity RTOS to the AURIX TC4x microcontrollers. The outcome of this collaboration will be safety-critical real-time automotive systems capable of serving SDV designs and features.
Next, Siemens EDA has partnered with Arm and AWS to accelerate the creation of virtual cars in the cloud. The toolmaker has announced the availability of its PAVE360-based solution for automotive digital twin on AWS cloud services.
Figure 3 The digital twin solution on the AWS platform aims to create a virtual car in the cloud. Source: Siemens EDA
“The automotive industry is facing disruption from multiple directions, but the greatest potential for growth and new revenue streams is the adoption of the software-defined vehicle,” said Mike Ellow, executive VP of EDA Global Sales, Services and Customer Support at Siemens Digital Industries Software. “The hyper-competitive SDV industry is under immense pressure to quickly react to consumer expectations for new features.”
That’s driving the co-development of parallel hardware and software and the move toward the holistic digital twin, he added. Dipti Vachani, senior VP and GM of Automotive Line of Business at Arm, went a step further, saying that the software-defined vehicle is a matter of survival for the automotive industry.
Hype or reality
The above recap of 2024 activities shows that a lot is happening in the SDV design space. A recent IDTechEx report titled “Software-Defined Vehicles, Connected Cars, and AI in Cars 2024-2034: Markets, Trends, and Forecasts” claims that the cellular connectivity within SDVs can provide access to Internet of Things (IoT) features such as over-the-air (OTA) updates, personalization, and entertainment options.
It also explains how artificial intelligence (AI) within an SDV solution can work as a digital assistant to communicate and respond to the driver and make interaction more engaging using AI-based visual characters appearing on the dashboard. BMW is already offering a selection of SDV features, including driving assistants and traffic camera information.
Figure 4 SDV is promising new revenue streams for car OEMs. Source: IDTechEx
At CES 2025, automotive OEMs, Tier 1s, chip vendors, and software suppliers are expected to present their technology roadmaps for SDV products. This will offer good visibility into how ready present SDV technology is for the cars of today and tomorrow.
Related Content
- Redefining Mobility with Software-Defined Vehicles
- Unveiling the Transformation of Software-Defined Vehicles
- The Future of Radar Architecture in Software-Defined Vehicles
- Understanding the Architecture of Software-Defined Vehicles (SDVs)
- The Role of Edge Computing in Evolving Software-Defined Vehicle Architectures
2024: The year when MCUs became AI-enabled
Artificial intelligence (AI) and machine learning (ML) technologies, once synonymous with large-scale data centers and powerful GPUs, are steadily moving toward the network edge via resource-limited devices like microcontrollers (MCUs). Energy-efficient MCU workloads are being melded with AI power to leverage audio processing, computer vision, sound analysis, and other algorithms in a variety of embedded applications.
Take the case of STMicroelectronics and its STM32N6 microcontroller, which features a neural processing unit (NPU) for embedded inference. It’s ST’s most powerful MCU and carries out tasks like segmentation, classification, and recognition. Alongside this MCU, ST offers software and tools to lower the barrier to entry for developers to take advantage of AI-accelerated performance in real-time operating systems (RTOSes).
Figure 1 The Neural-ART accelerator in STM32N6 claims to deliver 600 times more ML performance than a high-end STM32 MCU today. Source: STMicroelectronics
Infineon, another leading MCU supplier, has also incorporated a hardware accelerator in its PSOC family of MCUs. Its NNlite neural network accelerator aims to facilitate new consumer, industrial, and Internet of Things (IoT) applications with ML-based wake-up, vision-based position detection, and face/object recognition.
Next, Texas Instruments, which calls its AI-enabled MCUs real-time microcontrollers, has integrated an NPU inside its C2000 devices to enable fault detection with high accuracy and low latency. This will allow embedded applications to make accurate, intelligent decisions in real-time to perform functions like arc fault detection in solar and energy storage systems and motor-bearing fault detection for predictive maintenance.
Figure 2 C2000 MCUs integrate edge AI hardware accelerators to facilitate smarter real-time control. Source: Texas Instruments
The models that run on these AI-enabled MCUs learn and adapt to different environments through training. That, in turn, helps systems achieve greater than 99% fault detection accuracy to enable more informed decision-making at the edge. The availability of pre-trained models further lowers the barrier to entry for running AI applications on low-cost MCUs.
Moreover, the use of a hardware accelerator inside an MCU offloads the burden of inferencing from the main processor, leaving more clock cycles to service embedded applications. This marks the beginning of a long journey for AI hardware-accelerated MCUs; for a start, it will thrust MCUs into applications that previously required MPUs, which in the embedded design realm are not fully capable of handling control tasks in real time.
Figure 3 The AI-enabled MCUs replacing MPUs in several embedded system designs could be a major disruption in the semiconductor industry. Source: STMicroelectronics
AI is clearly the next big thing in the evolution of MCUs, but AI-optimized MCUs have a long way to go. For instance, software tools and their ease of use will go hand in hand with these AI-enabled MCUs; they will help developers evaluate the embeddability of AI models for MCUs. Developers should also be able to test AI models running on an MCU in just a few clicks.
The AI party in the MCU space started in 2024, and 2025 is very likely to witness more advances for MCUs running lightweight AI models.
Related Content
- Smarter MCUs Keep AI at the Edge
- Profile of an MCU promising AI at the tiny edge
- 32-bit Microcontrollers Need a Major AI Upgrade
- AI algorithms on MCU demo progress in automated driving
- An MCU approach for AI/ML inferencing in battery-operated designs
Wide-creepage switcher improves vehicle safety
A wide-creepage package option for Power Integrations’ InnoSwitch 3-AQ flyback switcher IC enhances safety and reliability in automotive applications. According to the company, the increased primary-to-primary creepage and clearance distance of 5.1 mm between the drain and source pins of the InSOP-28G package eliminates the need for conformal coating, making the IC compliant with the IEC 60664-1 reinforced isolation standard in 800-V vehicles.
The new 1700-V CV/CC InnoSwitch3-AQ devices feature an integrated SiC primary switch delivering up to 80 W of output power. They also include a multimode QR/CCM flyback controller, secondary-side sensing, and a FluxLink safety-rated feedback mechanism. This high level of integration reduces component count by half, simplifying power supply implementation. The wider drain pin enhances durability, making the ICs well-suited for high-shock and vibration environments, such as eAxle drive units.
These latest members of the InnoSwitch3-AQ family start up with as little as 30 V on the drain without external circuitry, critical for functional safety. Devices achieve greater than 90% efficiency and consume less than 15 mW at no-load. Target automotive applications include battery management systems, µDC/DC converters, control circuits, and emergency power supplies in the main traction inverter.
Prices for the 1700 V-rated InnoSwitch3-AQ switching power supply ICs start at $6 each in lots of 10,000 units. Samples are available now, with full production in 1Q 2025.
R&S boosts GMSL testing for automotive systems
Rohde & Schwarz expands testing for automotive systems that employ Analog Devices’ Gigabit Multimedia Serial Link (GMSL) technology. Designed to enhance high-speed video links in applications like In-Vehicle Infotainment (IVI) and Advanced Driver Assistance Systems (ADAS), GMSL offers a simple, scalable SerDes solution. The R&S and ADI partnership aims to assist automotive developers and manufacturers in creating and deploying GMSL-based systems.
Physical Medium Attachment (PMA) testing, compliant with GMSL requirements, is now fully integrated into R&S oscilloscope firmware, along with a suite of signal integrity tools. These include LiveEye for real-time signal monitoring, advanced jitter and noise analysis, and built-in eye masks for forward and reverse channels.
To verify narrowband crosstalk, the offering includes built-in spectrum analysis on the R&S RTP oscilloscope. In addition, cable, connector, and channel characterization can be performed using R&S vector network analyzers.
R&S will demonstrate the application at next month’s CES 2025 trade show. To learn more about ADI’s GMSL technology click here.
Gen3 UCIe IP elevates chiplet link speeds
Alphawave Semi’s Gen3 UCIe Die-to-Die (D2D) IP subsystem enables chiplet interconnect rates up to 64 Gbps. Building on the successful tapeout of its Gen2 36-Gbps UCIe IP on TSMC’s 3-nm process, the Gen3 subsystem supports both high-yield, low-cost organic substrates and advanced packaging technologies.
At 64 Gbps, the Gen3 IP delivers over 20 Tbps/mm in bandwidth density with ultra-low power and latency. The configurable subsystem supports multiple protocols, including AXI-4, AXI-S, CXS, CHI, and CHI-C2C, enabling high-performance connectivity across disaggregated systems in HPC, data center, and AI applications.
The design complies with the latest UCIe specification and features a scalable architecture with advanced testability, including live per-lane health monitoring. UCIe D2D interconnects support a variety of chiplet connectivity scenarios, including low-latency, coherent links between compute chiplets and I/O chiplets, as well as reliable optical I/O connections.
“Our successful tapeout of the Gen2 UCIe IP at 36 Gbps on 3-nm technology builds on our pioneering silicon-proven 3-nm UCIe IP with CoWoS packaging,” said Mohit Gupta, senior VP & GM, Custom Silicon & IP, Alphawave Semi. “This achievement sets the stage for our Gen3 UCIe IP at 64 Gbps, which is on target to deliver high performance, 20-Tbps/mm throughput functionality to our customers who need the maximization of shoreline density for critical AI bandwidth needs in 2025.”
UWB radar SoC enables 3D beamforming
Hydrogen, an ultra-wideband (UWB) radar SoC from Aria Sensing, delivers 3D MIMO beamforming with programmable pulse bandwidths ranging from 500 MHz to 1.8 GHz. Its advanced waveforms support single-pulse and pulse-compression modes, enabling precise depth perception and spatial resolution. The chip optimizes signal-to-noise ratios for various detection tasks while maintaining low radiated power.
Equipped with two integrated RISC-V microprocessors, Hydrogen accommodates up to four transmitting and four receiving antenna channels with flexible and scalable array configurations to enhance cross-range resolution. Offering 1D, 2D, and 3D sensing, the SoC detects presence, position, vital signs, and gestures, serving automotive, industrial automation, and smart home markets.
“Hydrogen represents a paradigm shift in radar technology, combining cutting-edge UWB advancements with compact SoC design. We are excited to see how this innovation will redefine radar sensing applications,” said Alessio Cacciatori, Aria founder and CEO.
The Hydrogen UWB radar SoC supports multiple center frequencies for global operation without sacrificing resolution. It consumes 90 mA at 1.8 V and is housed in a 9×9-mm QFN64 package.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post UWB radar SoC enables 3D beamforming appeared first on EDN.
GPU IP powers scalable AI and cloud gaming
Vitality is VeriSilicon’s latest GPU IP architecture targeting cloud gaming, AI PCs, and both discrete and integrated graphics cards. According to the company, Vitality offers advancements in computation performance and scalability. With support for Microsoft DirectX 12 APIs and AI acceleration libraries, the GPU architecture suits performance-intensive applications and complex workloads.
Vitality integrates a configurable Tensor Core AI accelerator and 32 Mbytes to 64 Mbytes of Level 3 cache. Capable of handling up to 128 cloud gaming channels per core, it meets demands for high concurrency and image quality in cloud-based entertainment while enabling large-scale desktop gaming and Windows applications.
“The Vitality architecture GPU represents the next generation of high-performance and energy-efficient GPUs,” said Weijin Dai, chief strategy officer, executive VP and GM of VeriSilicon’s IP Division. “With over 20 years of GPU development experience across diverse market segments, the Vitality architecture is built to support the most advanced GPU APIs. Its scalability enables widespread deployment in fields such as automotive systems and mobile computing devices.”
A datasheet was not available at the time of this announcement.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post GPU IP powers scalable AI and cloud gaming appeared first on EDN.
Metamaterial’s mechanical maximization enhances vibration-energy harvesting
The number of ways to harvest energy that would otherwise go unused and wasted is extraordinary. To cite a few of the many examples, there’s the heat given off during almost any physical or electronic process, ambient light which is “just there,” noise, and ever-present vibration. Each of these has different attributes along with pros and cons which are fluid with respect to consistency, reliability, and, of course, useful output power in a given situation.
For example, the harvesting of vibration-sourced energy is attractive (when available), as it is unaffected by weather or terrain conditions. However, most manifestations of such energy are quite small, and extracting a useful amount requires careful attention to detail and design throughout the energy chain, from the raw source to the harvesting transducer.
Most vibrations in daily life are tiny and often not “focused” but spread across a wide area or volume. To overcome this significant issue, numerous conversion devices, typically piezoelectric elements, are often installed in multiple locations that are exposed to relatively large vibrations.
Addressing this issue, a research effort led by a team at KRISS—the Korea Research Institute of Standards and Science in the Republic of Korea (South Korea)—has developed a metamaterial that traps and amplifies micro-vibrations in small areas. The behavior of the metamaterial enhances and localizes the mechanical-energy density at the local spot where a harvester is installed.
The metamaterial has a thin, flat structure roughly the size of an adult’s palm, allowing it to be easily attached to any surface where vibration occurs, Figure 1. The structure can be easily modified to fit the object to which it will be attached. The researchers expect that the increase in power output will accelerate its commercialization.
Figure 1 The metamaterial developed by the KRISS-led team is flat and easy to position. Source: KRISS
The metamaterial developed by KRISS traps and accumulates micro-vibrations within it and amplifies them. This allows the generation of large-scale electrical power relative to the small number of piezoelectric elements used. By applying vibration harvesting with the developed metamaterial, the research team succeeded in generating more than four times more electricity per unit area than conventional technologies.
Their metasurface structure can be divided into three finite regions, each with a distinct role: metasurface, phase-matching, and attaching regions. Their design used what is called “trapping” physics with carefully designed defects in structure to simultaneously achieve the focusing and accumulation of wave energy.
They validated their metasurface experimentally, with results showing amplification of the input flexural-vibration amplitude by a factor of twenty. They achieved this significant amplification largely thanks to the intrinsic, negligible damping of their metallic structure, Figure 2.
Figure 2 (right) Schematic of the proposed metasurface attachment and (left) a conceptual illustration of the attachment installed on a vibrating rigid structure for flexural wave energy amplification. Source: KRISS
Their phase-gradient metasurfaces (also called metagratings in the acoustic field) feature intrinsic wave-trapping behavior. (Here, the term “metasurfaces” refers to structures that diffract waves, primarily through spatially-varying phase accumulations within the constituent wave channels.)
Analysis and modeling are one thing, but a proposal such as theirs demands experimental validation. Their setup used a vibration shaker and a laser Doppler vibrometer (LDV) sensor to excite and then measure the flexural vibration inside the specimen, Figure 3. For convenience, the specimen was firmly clamped to the shaker using a jig and a bolted joint rather than being attached directly to it.
Figure 3 (a) Schematic illustration and (b) photographs to demonstrate the experimental setup in order to validate the flexural-vibration amplifying performance of the fabricated metasurface attachment. Using a specially-configured jig and a bolted joint, the metasurface structure is firmly clamped to a vibration shaker. The surface region covering a unit supercell (denoted as M1) and the interfacial line (M2) between the metasurface strips and phase-matching plate are measured using laser Doppler vibrometer equipment. Source: KRISS
The shaker was set to vibrate continuously at frequencies between 3 kHz and 5 kHz at arbitrary, weak amplitudes set by a function generator and an RF power amplifier. The phase-matching plate (somewhat analogous to an impedance-matching circuit) was another essential component of the structure. It dramatically improved the amplifying performance by helping the scattered wave fields develop coherent phases within the metasurface strips in the steady state.
It would be nice to have a summary of before-and-after performance using their design. Unfortunately, their published paper is too much of a good thing: it has a large number of such graphs and tables under different conditions, but no overall summary other than a semi-quantitative image, Figure 4 (top right).
Figure 4 This conceptual illustration graphically demonstrates the nature of the vibration amplification performance of the metamaterial developed by the KRISS-led team. Source: KRISS
If you want to see more, check out their paper “Finite elastic metasurface attachment for flexural vibration amplification” published in Elsevier’s Mechanical Systems and Signal Processing. But I’ll warn you that at 32 pages, the full paper (main part, appendix, and references) is the longest I have seen by far in an academic journal!
Have you had any personal experience with vibration-based energy harvesting? Was the requisite modeling difficult and valid? Did it meet or exceed your expectations? What sort of real-world problems or issues did you encounter?
Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.
Related Content
- Nothing new about energy harvesting
- Clever harvesting scheme takes a deep dive, literally
- Energy harvesting gets really personal
- Lightning as an energy harvesting source?
- What’s that?…A fuel cell that harvests energy from…dirt?
The post Metamaterial’s mechanical maximization enhances vibration-energy harvesting appeared first on EDN.
Save, recall, and script oscilloscope settings
Digital oscilloscopes have a great thing going for them: they are digital. Instrument settings, waveforms, and screen images can be saved as digital files either internally or to external devices. Not only can they be saved, but they can be recalled to the oscilloscope or an offline program to review the data and, in some cases, for additional analysis and measurements.
The ability to save setups is one of the great benefits of digital oscilloscopes. It saves lots of time setting up measurements, allowing settings of previous work sessions to be recalled and work resumed in seconds. A series of recalled settings can even be the basis for a comprehensive test procedure.
Digital oscilloscopes preserve their last settings when powered down and restore them when power returns. That can be a problem if that state is not what you need. For instance, if the previous user set the oscilloscope to trigger on an external signal and you want to trigger on one of the internal channels, there will be a problem unless you check first and update the settings. The easiest way to ensure the state of the oscilloscope when it is first powered on is to recall its default setup, a known state defined by the manufacturer. The default state is generally helpful in getting data on the screen; it usually places the instrument in an auto-trigger mode so there will be a trace displayed. Starting from the default state, the instrument can be set up to make the desired measurement. Once that state is reached, simply saving the setup means it can be recalled as needed.
Setup files
Setup file formats vary between oscilloscope suppliers. Teledyne LeCroy uses Visual Basic for setup files. Most other suppliers use Standard Commands for Programmable Instruments (SCPI) for settings. Both use ASCII text which is easy to read and edit.
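For SCPI-based instruments, saving and recalling setups is typically exposed through the IEEE-488.2 common commands *SAV and *RCL, with vendor-specific commands for transferring the setup file itself. Here is a minimal sketch using Python and pyvisa; the VISA address and memory slot are placeholders, and the exact commands vary by vendor:

```python
# Minimal sketch: save and recall an instrument setup over SCPI with pyvisa.
# The VISA address and memory slot below are placeholders; consult your
# oscilloscope's programming manual for its exact setup-transfer commands.
import pyvisa

rm = pyvisa.ResourceManager()
scope = rm.open_resource("TCPIP0::192.168.1.50::INSTR")  # placeholder address

scope.write("*SAV 1")   # store the current front-panel state in setup memory 1
# ... change settings, run other tests ...
scope.write("*RCL 1")   # restore the previously saved state
print(scope.query("*IDN?"))  # confirm the instrument is still responding
scope.close()
```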
Figure 1 shows part of a typical setup file for a mid-range Teledyne LeCroy oscilloscope.
Figure 1 Part of a setup file for a Teledyne LeCroy Windows-based oscilloscope using ASCII text-based Visual Basic script. The command for setting the vertical scale of channel 1 is highlighted. Source: Art Pini
The setup files in this oscilloscope are complete Visual Basic scripts. Such a script can be thought of as a program that, when executed, sets up the oscilloscope in the state described. When a setting file is saved, it contains a Visual Basic program that restores the instrument settings upon execution. Visual Basic scripts allow the user to incorporate all the power and flexibility of the Visual Basic programming language, including looping and conditional branching.
The control statements for each function of the oscilloscope are based on a hierarchical structure of oscilloscope functions, which is documented in the automation and remote-control manual as well as in a software application called Maui Browser (formerly XStream Browser), which is included with every Windows oscilloscope. The manual includes detailed instructions on using the Maui Browser. The browser connects to the oscilloscope, either locally or remotely, and exposes the automation components as seen in Figure 2.
Figure 2 A view of the Maui Browser, connected locally to an oscilloscope, showing the control selections for channel C1 under the Acquisition function. The vertical scale setting is highlighted. Source: Art Pini
Each functional category of the oscilloscope’s operation is listed in the left-hand column. Acquisition, one of the high-level functions, has been selected in this example. Under that selection is a range of sub-functions related to the acquisition function, including Channel 1 (C1), which has been selected. The table on the right lists all the controls associated with channel 1. Note that the Vertical Scale (Ver Scale) setting has been selected and highlighted. The current setting of 200 mV per division is shown. To the right is a summary of the range of values available for the vertical scale function. The value can be changed on the connected oscilloscope by highlighting the numeric value and changing it to one of the appropriate values within the range.
An example of a simple command is setting the vertical scale of channel 1 (C1) to 200 mV per division. The command structure for the selected command is at the bottom of the figure. All that has to be added is the parameter value, 0.2 in this case: “app.Acquisition.C1.VerScale=0.2”
The Maui Browser is a tool for looking up the desired setting command without the need for a programming manual. It is also helpful for verifying selected commands and associated parameters. The browser program is updated with the oscilloscope firmware and is always up to date, unlike a paper manual.
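The same automation hierarchy can also be exercised remotely; the remote-control interface on these Windows-based scopes accepts automation statements wrapped in a VBS passthrough command (see the remote-control manual for specifics). A minimal Python/pyvisa sketch, with a placeholder VISA resource string, might look like this:

```python
# Minimal sketch: send the automation command from Figure 2 remotely via the
# VBS passthrough accepted by Teledyne LeCroy Windows-based oscilloscopes.
# The VISA resource string is a placeholder for your scope's address.
import pyvisa

rm = pyvisa.ResourceManager()
scope = rm.open_resource("TCPIP0::192.168.1.60::INSTR")  # placeholder address

# Set channel 1 to 200 mV/div using the same hierarchy shown in the Maui Browser.
scope.write("VBS 'app.Acquisition.C1.VerScale = 0.2'")

# Read the setting back to confirm it took effect.
print(scope.query("VBS? 'return = app.Acquisition.C1.VerScale'"))
scope.close()
```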
Scripting
With Visual Basic scripts being used internally to program the oscilloscope and automate the settings operations, the logical step is to have Visual Basic scripts control and automate scope operations. This operation happens within the oscilloscope itself; there is no need for an external controller. Visual Basic scripting uses Windows’ built-in text editor (Notepad) and the Visual Basic Script interpreter (VBScript), which is also installed in this family of oscilloscopes.
The Teledyne LeCroy website hosts many useful scripts for its oscilloscopes; they perform tasks like setting up a data-logging operation, saving selected measurements to spreadsheet files, or using cursors to set measurement gate limits. These can be used as written, but they can also serve as examples on which to base your own script. Consider the following example. Figure 3 shows a settings script that allows a zoom trace to be dynamically centered on the position of the absolute horizontal cursor. As the cursor is moved, the zoom tracks the movement.
Figure 3 A Visual Basic script that centers a zoom trace on the current horizontal cursor location. Source: Art Pini
The script is copied to the oscilloscope and either recalled using the recall setup function of the oscilloscope or executed by highlighting the script file in Windows File Explorer and double-clicking on it. The script turns on the cursor and the zoom trace and adjusts the center of the zoom trace to match the current cursor’s horizontal location as seen in Figure 4.
Figure 4 The script centers the zoom trace on the absolute horizontal cursor location and tracks it as it is moved. Source: Art Pini
The script operates dynamically; as the cursor is moved, the zoom trace tracks the movement instantly. The script runs continuously and is stopped by turning off the cursor. The message, “Script running; turn off cursor to stop,” appears in the message field in the lower left corner of the screen.
CustomDSO
Teledyne LeCroy oscilloscopes incorporate the advanced customization option, including the CustomDSO feature, which allows user-defined graphical interface elements to call Visual Basic scripts. The basic mode of CustomDSO creates a simple push-button interface used to run setup scripts. Scripts can be recalled with the touch of a single button within the oscilloscope user interface. The recalled setups can include other nested setups, allowing users to create a complex series of setups. The CustomDSO Plug-In mode enables users to create an ActiveX plug-in designed in an environment like Visual Studio and merge this graphical user interface with the scope’s user interface.
Figure 5 shows the CustomDSO user interface.
Figure 5 The CustomDSO basic mode setup links a user interface push button to a specific setup script file. Source: Art Pini
In basic mode, CustomDSO links eight user interface push buttons with setup scripts. A checkbox enables showing the CustomDSO menu on powerup when no other menu is being displayed.
Figure 6 shows the CustomDSO user interface with the first pushbutton linked to the script to have the zoom center track the cursor.
Figure 6 The user interface for the basic CustomDSO mode with the leftmost pushbutton linked to the zoom tracking script. Source: Art Pini
The basic user interface has eight push buttons that can be linked with setup scripts. In this example, the leftmost push button, which is highlighted, has been linked to the script “Track Zoom.lss”. The oscilloscope uses the root name of the script as the push-button label. This capability lets test designers make it possible for users with less training to recall all the elements of a test procedure.
Some other oscilloscopes can store several setups and then sequence through them as a macro program. This is similar but lacks any flow control when executing the macro.
The Plugin mode of CustomDSO is an even more powerful feature that allows user-programmed ActiveX controls to create a custom graphical user interface. The plugins are powered by routines written in Visual Basic, Visual C++, or other ActiveX-compatible programming languages. Many interactive devices are available, including buttons, a check box, a radio button, a list box, a picture box, and a common dialogue box. A detailed description of plugin generation is beyond the scope of this article.
Recall instrument setups
The use of Visual Basic scripts enables these oscilloscopes to recall instrument setups easily and enhances this process with the ability to program a series of setups into a test procedure. It also offers the ability to use custom graphical user interfaces to simplify operations.
Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.
Related Content
- Customize your oscilloscope to simplify operations
- The scope…from Hell!
- Oscilloscope articles by Arthur Pini
- Basic jitter measurements using an oscilloscope
The post Save, recall, and script oscilloscope settings appeared first on EDN.
Perceiving the insides of a wireless camera flash receiver
In a recent teardown, I disassembled and documented the insides of a Godox wireless camera flash transmitter that ended up being in much better shape than had been advertised when I’d first acquired it. I was therefore highly motivated to return it to fully functional shape afterwards, albeit not for personal-usage reasons—it supported Fujifilm cameras, which I don’t own—but instead so that I could subsequently donate it for another’s use, keeping it out of the landfill in the process.
This time around, the situation’s reversed. Today we’ll be looking at an “as-is” condition wireless camera flash receiver, from the same manufacturer (Godox’ X1R-C). And this time, I do have a personal interest, because it supports Canon cameras (“-C”), several of which I own. But given the rattling I heard inside whenever I picked it up, I was skeptical that it’d work at all, far from deluding myself that I could fix whatever ailed it. That said, it only cost $4.01 pre-15% promo discount, $3.40 after, in March 2024 from KEH Camera on the same order as its X1T-F sibling.
Here’s the sticker on the baggie that it came shipped in:
And here are a few stock photos of it:
Stepping back for a minute before diving into the teardown minutia: why would someone want to buy and use a standalone wireless camera receiver at all? Assuming a photographer wanted to sync up multiple camera flashes (implementing the popular three-point lighting setup or other arrangement, for example), as I’ve written about before, why wouldn’t they just leverage the wireless connectivity built into their camera supplier’s own flash units, such as (in my case) Canon’s EOS flash system?
Part of the answer might be that with Canon’s system, for example, “wireless” only means RF-based for newer units; older implementations were infrared- (also sometimes referred to as “optical”-) based, which requires line-of-sight between a transmitter and each receiver, has limited range, and is also prone to ambient light interference. Part of the reason might be that a given flash unit doesn’t integrate wireless receiver functionality (Godox’s entry-level flashes don’t support the company’s own 2.4 GHz X protocol, for example), or there might be a protocol mismatch between the separate transmitter and the built-in receiver. And part of the reason might be that the strobe illumination source you want to sync to doesn’t even have a hot shoe; you’ll shortly see how the Godox receiver handles such situations…normally, at least.
Let’s dive in, beginning with some overview shots, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes (per B&H Photo’s website, the Godox X1R-C has dimensions of 2.8 x 2.6 x 1.9″ / 70 x 65 x 47 mm and weighs 2.5 oz / 70.9 g). Back:
Rattling aside, it still powers up and outputs seemingly meaningful display info!
Left (as viewed from the front) side, including the power switch:
Bland front (no need for an infrared optical module with this particular receiver!):
Right side (you’ll see what’s importantly behind, and not behind, that rubberized panel shortly):
Top; you can tell from the extra contacts that this hot shoe’s not only actually “hot” but also Canon control protocol-cognizant:
And bottom; this particular shoe’s “cold”, intended only for mounting purposes:
Underneath that removable panel, unsurprisingly, is the two-AA battery compartment:
Look closely and you’ll see two screw heads inside it at the top corners, along with two more holes at the lower device corners in the photo. You know what comes next, right?
And inside we go:
Disconnect the cable harness mating the topside hot shoe to the PCB, and the separation of the two halves is complete:
Here’s a standalone overview of the inside of the top half, along with a hot shoe closeup:
And now for the (bottom) half we all care more about, because it contains the PCB:
Remember that rubberized flap I earlier mentioned? It got jostled out of position at this point, and eventually fell out completely. Notice anything odd behind it? If not, don’t feel bad; I still hadn’t, either:
Those two screws holding the PCB in place within the chassis are next to depart:
Before continuing, I’ll highlight a few notable (to me, at least) aspects of this side of the PCB. The connector in the lower left corner, again, goes to the cable harness which ends up at the hot shoe. The large IC at center is, perhaps obviously, the system “brains”, but as with other Godox devices I’ve already torn apart, its topside marking has been obliterated, so I unfortunately can’t ID it (I can’t help but wonder, though, if it’s an FPGA?). Above it is Texas Instruments’ CC2500, a “low cost, low-power 2.4 GHz RF transceiver designed for low-power wireless apps in the 2.4 GHz ISM band”: translation, Godox’s X wireless sync protocol. And above that, at the very top of the PCB, is the associated embedded antenna.
Onward. As I began to lift the PCB out of the chassis, the display popped out of position:
And at this point, I was also able to dislodge what had been rattling around underneath the PCB. Do you recognize it?
It’s the 2.5 mm sync connector, which acts as a comparatively “dumb” but still baseline functional alternative to the hot shoe for connecting the receiver to a strobe or other flash unit. It’s normally located next to the USB-C connector you recently saw behind the rubberized flap.
At this point, after all the shaking to get the sync connector out of the chassis, the power switch’s plastic piece also went flying:
I was initially only able to lift the PCB partway out of the chassis before it got “stuck”…that is, until I remembered (as with the earlier Godox transmitter) the two battery tabs connected to the PCB underside and sticking through the chassis to the battery compartment underneath:
Pushing them through the chassis from the battery compartment got to the desired end point:
The 2.5-mm sync connector site in the lower right corner of the PCB, below the USB-C connector, is obvious now that I knew what to look for! Rough handling by the Godox X1R-C’s prior owner had apparently snapped it off the board. I could have stopped at that point, but those screw heads visible atop the smaller PCB for the monochrome LCD were beckoning to me:
Removing them didn’t get me anywhere, until I got the bright idea to look underneath the ribbon cable, where I found one more:
That’s more like it:
The two halves of the display assembly also came apart at this point:
That pink-and-black strip is an elastomeric connector (also known by the ZEBRA trademark). They’re pretty cool, IMHO. Per the Wikipedia summary, they…:
…consist of alternating conductive and insulating regions in a rubber or elastomer matrix to produce overall anisotropic conductive properties. The original version consisted of alternating conductive and insulating layers of silicone rubber, cut crosswise to expose the thin layers. They provide high-density redundant electrical paths for high reliability connections. One of the first applications was connecting thin and fragile glass liquid-crystal displays (LCDs) to circuit boards in electronic devices, as little current was required. Because of their flexibility, they excel in shock and anti-vibration applications. They can create a gasket-like seal for harsh environments.
Image: a calculator’s ZEBRA elastomeric connector, with black conductors on a 180-micron (7-mil) center-to-center pitch; the ruler markings are in centimeters. Source: public-domain photo by Wikipedia user Caltrop, May 13, 2009
Here’s a standalone view of the backplane (with LEDs and switches alongside it), once again showing the contacts that the elastomeric connector’s conductive layers mate up with:
And here are a few shots of the remainder of the monochrome LCD, sequentially ordered as I disassembled it, and among other things faintly revealing the contacts associated with the other end of the elastomeric connector:
Last, but not least, I decided to try reversing my teardown steps to see if I could reassemble the receiver back to its original seeming-functional (sync connector aside) condition:
Huzzah! The display backlight even still works. I’ll hold onto the sync connector, at least for now:
I might try soldering it back in place, although I don’t anticipate using anything other than the alternative hot shoe going forward. For now, I welcome your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Scrutinizing a camera flash transmitter
- The Godox V1 camera flash: Well-“rounded” with multiple-identity panache
- Multi-source vs proprietary: more “illuminating” case studies
- Disclosing the results of a webcam closeup
The post Perceiving the insides of a wireless camera flash receiver appeared first on EDN.
Zig-zag transformers
Three-phase power transformer secondaries that are set up in a delta configuration do not have an earthing or grounding point. By contrast, a wye configuration of windings would provide such a point, but delta windings are frequently the transformer designer’s choice (Figure 1).
Figure 1 Wye versus delta transformer secondaries.
Where the three coils of the wye configuration meet, a ground or earth connection can be established, but the three secondary coils of the delta configuration offer no such point.
In such cases, an earthing point can be established using a zig-zag transformer as in the following sketch in Figure 2.
Figure 2 A zig-zag transformer with an established earthing point.
The origin of the phrase “zig-zag” would seem to be self-evident. The underlying theory of zig-zag transformers, and additional discussion of their characteristics, has been written up extensively, as shown on the topic’s Wikipedia page.
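The phasor bookkeeping behind the connection is easy to check numerically. The short sketch below is a simplified, per-unit illustration of the textbook description, not a design calculation: each output leg places a winding segment on one core in series with an inverted segment on the adjacent core, which yields a balanced set of line-to-neutral voltages and lets equal zero-sequence currents cancel the net ampere-turns on each core.

```python
# Simplified phasor illustration of a zig-zag connection (not a design calculation).
import cmath, math

def phasor(mag, deg):
    return mag * cmath.exp(1j * math.radians(deg))

# Each core carries two identical winding segments of 1/sqrt(3) per-unit voltage.
seg = 1 / math.sqrt(3)
a, b, c = phasor(seg, 0), phasor(seg, -120), phasor(seg, 120)

# Each output leg = segment on its own core in series with the *inverted*
# segment on the next core; the common point of the three legs is the neutral.
legs = {"A": a - b, "B": b - c, "C": c - a}
for name, v in legs.items():
    print(f"Phase {name}: {abs(v):.3f} pu at {math.degrees(cmath.phase(v)):+.0f} deg")

# Zero-sequence check: an equal in-phase current in each leg flows through one
# segment on a core and back through the other, so the ampere-turns cancel.
i0 = 1.0
print("Net zero-sequence ampere-turns per core:", i0 - i0)
```

Running it prints three 1.000-pu phase voltages spaced 120 degrees apart and zero net zero-sequence ampere-turns per core, which is why the derived neutral presents a low impedance to ground-fault current.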
Looking at this device feeding just a single load (Figure 3), we can see how earthing can be achieved when power is fed from delta secondaries.
Figure 3 A zig-zag transformer with an earthed load with power fed from delta secondaries.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Method sets voltage in multiple-output converters
- Old transformer repurposed with new windings
- Autotransformers – Part 2
- Homebrew loop gain test transformer
- A pole, a zero and a transformer
The post Zig-zag transformers appeared first on EDN.
Who will get Altera in 2025?
While the news about Altera being up for grabs isn’t new, there are fresh bytes on its sale to either an FPGA industry player like Lattice Semiconductor or private equity firms such as Francisco Partners, Silver Lake Management, and Bain Capital. Altera’s transition from Intel’s lap to an independent entity is all set, and the only hiccup is money.
Start with Lattice, whose market value is $8 billion. So, to acquire Altera, Lattice will inevitably need a financial partner. On the other hand, proposals from private equity firms value Altera at $9 billion to $12 billion, far below the $17 billion Intel paid to acquire it.
Altera, which once formed the FPGA duopoly along with Xilinx, was acquired by then-cash-rich Intel in 2015. This sparked a guessing game in the semiconductor industry regarding why the CPU kingpin had grabbed an FPGA player. Archrival AMD followed suit, announcing its acquisition of Xilinx in 2020.
However, while industry watchers were mulling over the ultimate objectives of CPU makers acquiring the FPGA business and how it could potentially relate to their server and data center roadmaps, trouble started brewing at Intel. Next, we heard about Intel considering spinning off Altera to deal with its capital crunch. The decision was made by then-CEO Pat Gelsinger.
Figure 1 Sandra Rivera has been named the CEO of Altera. Source: Intel
According to a new Bloomberg report, Intel has shortlisted several buyout firms for the next phase of bids and has set a deadline of the end of January for bidders to formalize their offers. However, while the Santa Clara, California-based chipmaker seems committed to executing Altera’s spin-off, the price tag has become a stumbling block.
Intel’s co-CEO and former CFO David Zinsner has hinted about a way out if Intel doesn’t get a financially viable offer. He mentioned the possibility of a deal like IMS Nanofabrication, an industry leader in multi-beam mask writing tools required to develop extreme ultraviolet lithography (EUV).
In June 2023, Intel sold 20% of its stake in IMS to Bain Capital in a deal that valued IMS at around $4.3 billion. Three months later, Intel sold an additional 10% stake in IMS to TSMC. We’ll see in 2025 which way things go, but it’s worth remembering that Intel doesn’t have an enviable history regarding acquisitions.
Figure 2 Altera continues to launch an array of FPGA hardware, software, and development tools to make its programmable solutions more accessible across a broader range of use cases and markets. Source: Intel
Founded in 1983, Altera is an important company. So, at a time when the AMD-plus-Xilinx combo is doing well, it’s crucial to watch how the future of Altera 2.0 is shaped in 2025. A successful outcome will provide Intel with a much-needed cash boost and offer Altera greater independence to proactively innovate in the FPGA design realm.
Related Content
- Intel to Buy Altera for $16.7B
- Intel, Altera End Acquisition Talks
- Intel to Accelerate Altera, Says CEO
- The State of the FPGA Union is Uncertain
- How will Intel’s purchase of Altera affect embedded space?
The post Who will get Altera in 2025? appeared first on EDN.
The Energizer 200W portable solar panel: A solid offering, save for a connector too fragile
Last time, I covered one half of the Energizer Ultimate Powersource Pro Solar Bundle that I first introduced you to back at the beginning of August and purchased for myself at the beginning of September (and, ironically, is for sale again as I write these words on November 6, at Meh’s companion SideDeal site):
If you haven’t yet read that premier post in this series, where I detailed the pros and cons of the Energizer PowerSource Pro Battery Generator, I encourage you to pause here and go back and peruse it first before proceeding with this one. This time I’ll be discussing the other half of the bundle, Energizer’s 200W Portable Solar Panel. And as before, I’ll start out with “stock” images from the bundle’s originator, Battery-Biz (here again is the link to the user manual, which covers both bundled products…keep paging through until you get to the solar panel section):
Here’s another relevant stock image from Meh:
Candidly, there’s a lot to like about this panel, model number ENSP200W (and no, I don’t know who originally manufactured it, with Energizer subsequently branding it), reflective of the broader improvement trend in solar panels that I previously covered back in mid-September. The following specs come straight from the user manual:
Solar Cells
- Solar Cell Material: Monocrystalline PERC M6-166mm
- Solar Cell Efficiency: 22.8%
- Solar Cell Surface Coating: PET
Output Power
- Max Power Output – Wattage (W): 200W
- Max Power Output – Voltage(Vmp): 19.5V
- Max Power Output – Current (Imp): 10.25A
- Power Tolerance: ±3%
- Open Circuit Voltage (Voc): 23.2V
- Short Circuit Current (Isc): 11.38A
Operating Temperatures
- Operating Temp (°C): -20 to 50°C / -4 to 122°F
- Nominal Operating Cell Temp (NOCT): 46°± 2° C
- Current Temp Coefficient: 0.05% / °C
- Voltage Temp Coefficient: – 0.35% / °C
- Power Temp Coefficient: – 0.45% / °C
- Max Series Fuse Rating: 15A
Cable
- Anderson Cable Length: 5 m / 16.5 ft
- Cable Type: 14AWG dual conductor, shielded
- Output Connector: Anderson Powerpole 15/45
Dimensions and Weight
- Product Dimensions – Folded: 545 x 525 x 60 mm/21.5″ x 20.7″ x 2.4″
- Product Dimensions – Open: 2455 x 525 x 10 mm/96.7″ x 20.7″ x 0.4″
- Product Net Weight: 5.9 kgs/ 13.0 lbs
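A quick back-of-the-envelope check, using only the figures above: the maximum-power-point voltage and current multiply out to the rated 200 W, and the listed power temperature coefficient suggests how far the output droops as the cells warm toward their nominal operating temperature. (The 25°C standard-test-condition reference in the sketch is my assumption, not a number from the manual.)

```python
# Back-of-the-envelope check using the panel's published specs only.
vmp, imp = 19.5, 10.25          # max-power-point voltage (V) and current (A)
p_rated = vmp * imp
print(f"Vmp x Imp = {p_rated:.1f} W")  # ~199.9 W, i.e. the rated 200 W

# Estimate derating at the nominal operating cell temperature (NOCT, ~46 C),
# assuming the -0.45 %/C power coefficient applies above a 25 C test condition.
temp_coeff = -0.45 / 100        # per degree C
cell_temp, stc_temp = 46, 25
p_derated = p_rated * (1 + temp_coeff * (cell_temp - stc_temp))
print(f"Estimated output at {cell_temp} C cell temperature: {p_derated:.0f} W")
```

That works out to roughly 180 W at the nominal operating cell temperature, a reminder that nameplate wattage is a best-case figure.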
As you can see from the last set of specs, the “portable” part of the product name is spot-on; this solar panel is eminently tote-able and folds down into an easily stowed form factor. Here’s what mine looked like unfolded:
Unfortunately, as with its power station bundle companion, the solar panel arrived with scuffed case cosmetics and ruffled-through contents indicative of pre-used, not brand new, condition:
Although I was able to clip a multimeter to the panel’s Anderson Powerpole output connector and, after optimally aligning the panel with the cloud-free direct sunlight, got close to the spec’d max open circuit output voltage out of it:
the connector itself had also arrived pre-mangled by the panel’s prior owner (again: brand new? Really, Battery-Biz?), a situation that others had also encountered, and which prevented me from as-intended plugging it into the PowerSource Pro Battery Generator:
Could I have bought and soldered on a replacement connector? Sure. But in doing so, I likely would have voided the factory warranty terms. And anyway, after coming across not-brand-new evidence in the entire bundle’s constituents, I was done messing with this “deal”; I was offered an exchange but requested a return-and-refund instead. As mentioned last time, Meh was stellar in their customer service, picking up the tab on return shipping and going as far as issuing me a full refund while the bundle was still enroute back to them. And to be clear, I blame Battery-Biz, not Meh, for this seeming not-as-advertised bait-and-switch.
A few words on connectors, in closing. Perhaps obviously, the connector coming out of a source solar panel and the one going into the destination power station need to match, either directly or via an adapter (the latter option with associated polarity, adequate current-carrying capability and other potential concerns). That said, in my so-far limited to-date research and hands-on experiences with both solar panels and power stations, I’ve come across a mind-boggling diversity of connector options. That ancient solar panel I mentioned back in September, for example:
uses these:
to interface between it and the solar charge controller:
The subsequent downstream connection between the controller and my Eurovan Camper’s cargo battery is more mainstream SAE-based:
The more modern panel I showcased in that same September writeup:
offered four output options: standard and high-power USB-A, USB-C and male DC5521.
My SLA battery-based Phase2 Energy PowerSource Power Station, on the other hand:
(Duracell clone shown)
like the Lithium NMC battery-based Energizer PowerSource Pro Battery Generator:
expects, as already mentioned earlier in this piece, an Anderson Powerpole (PP15-45, to be precise) connector-based solar panel tether:
To adapt the male 5521 to an Anderson Powerpole required both the female-to-female DC5521 that came with the Foursun F-SP100 solar panel and a separate male DC5521-to-Anderson adapter that I bought off Amazon:
What other variants have I encountered? Well, coming out of the EcoFlow solar panels I told you about in the recent Holiday Shopping Guide for Engineers are MC4 connectors:
Conversely, the EcoFlow RIVER 2:
and DELTA 2 portable power stations:
both have an orange-color XT60i solar input connector:
the higher current-capable (100 A vs 60 A), backwards-compatible successor to the original yellow-tint XT60 used in prior-generation EcoFlow models:
EcoFlow sells both MC4-to-XT60 and MC4-to-XT60i adapter cables (note the connector color difference in the following pictures):
along with MC4 extension cables:
and even a dual-to-single MC4 parallel combiner cable, whose function I’ll explore next time:
The DELTA 2 also integrates an even higher power-capable XT150 input, intended for daisy-chaining the power station to a standalone supplemental battery to extend runtime, as well as for recharging via the EcoFlow 800W Alternator Charger:
Ok, now what about another well-known portable power station supplier, Jackery? The answer is, believe it or not, “it depends”. Older models integrated an 8 mm 7909 female DC plug:
which, yes, you could mate to a MC4-based solar panel via an adapter:
Newer units switched to a DC8020 input; yep, adapters to the rescue again:
And at least some Jackery models supplement the DC connector with a functionally redundant, albeit reportedly more beefy-current, Anderson Powerpole input:
How profoundly confusing this all must be to the average consumer (not to mention this techie!). I’m sure if I did more research, I’d uncover even more examples of connectivity deviance from other solar panel and portable power station manufacturers alike. But I trust you already get my point. Such non-standardization might enable each supplier to keep its customers captive, at least for a while and to some degree, but it also doesn’t demonstrably grow the overall market. Nor is it a safe situation for consumers, who then need to blindly pick adapters without understanding terms such as polarity or maximum current-carrying capability.
Analogies I’ve made before in conceptually similar situations, such as:
- It’s better to have a decent-size slice of a sizeable pie versus a tiny pie all to yourself, and
- A rising tide lifts all boats
remain apt. And as with those conceptually similar situations on which I’ve previously opined, this’ll likely all sort itself out sooner or later, too (via market share dynamics, would be my preference, versus heavy-handed governmental regulatory oversight). The sooner the better, is all I’m saying. Let me know your thoughts on this in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Experimenting with a modern solar cell
- SLA batteries: More system form factors and lithium-based successors
- Experimenting with a modern solar cell
- Then and Now: Solar panels track the sun
- Solar-mains hybrid lamp
- Solar day-lamp with active MPPT and no ballast resistors
- Beaming solar power to Earth: feasible or fantasy?
The post The Energizer 200W portable solar panel: A solid offering, save for a connector too fragile appeared first on EDN.
Innovative manufacturing processes herald a new era for flexible electronics
New and repurposed fabrication techniques for flexible electronic devices are proliferating rapidly. Some may wonder if they are better than traditional methods and at what point they’ll be commercialized. Will they influence electronics design engineers’ future creations?
Flexibility is catching on. Experts forecast the flexible electronics market value will reach $63.12 billion by 2030, achieving a compound annual growth rate of 10.3%. As its earning potential increases, more private companies and research groups turn their attention to novel design approaches.
Flexible electronics is a rapidly developing area. Source: Institute of Advanced Materials
As power densification and miniaturization become more prominent, effective thermal management grows increasingly critical—especially for implantable and on-skin devices. So, films with high in-plane thermal conductivity are emerging as an alternative to traditional thermal adhesives, greases, and pads.
While polymer composites with high isotropic thermal conductivity (k) are common thermal interface materials, their high cost, poor mechanics, and unsuitable electrical properties leave much to be desired.
Strides have been made to develop pure polymer films with ultrahigh in-plane k. Electronics design engineers use stretching or shearing to enhance molecular chain alignment, producing thin, flexible sheets with desirable mechanical properties.
Here, it’s important to note that the fabrication process for pure polymer films is complex and uses toxic solvents, driving up costs and impeding large-scale production. A polyimide and silicone composite may be the better candidate for commercialization, as silicone offers high elasticity and provides better performance at high temperatures.
Novel manufacturing techniques for flexible electronics
Thermal management is not the only avenue for research. Electronics engineers and scientists are also evaluating novel techniques for transfer printing, wiring, and additive manufacturing.
Dry transfer printing
The high temperatures at which quality electronic materials are processed effectively remove flexible or stretchable substrates from the equation, forcing manufacturers to utilize transfer printing. And most novel alternatives are too expensive or time-consuming to be suitable for commercial production.
A research team has developed a dry transfer printing process that enables the transfer of thin metal and oxide films to flexible substrates without risk of damage. They adjusted the sputtering parameters to control the amount of stress, eliminating the need for post-processing. As a result, transfer times were shortened. This method works with both microscale and larger patterns.
Bubble printing
As electronics design engineers know, traditional wiring is too rigid for flexible devices. Liquid metals are a promising alternative, but the oxide layer’s electrical resistance poses a problem. Excessive wiring size and patterning restrictions are also issues.
One research group overcame these limitations by repurposing bubble printing. It’s not a novel technique but has only been used on solid particles. They applied it to liquid metal colloidal particles—specifically a eutectic gallium-indium alloy—to enable high-precision patterning.
The heat from a femtosecond laser beam creates microbubbles that guide the colloidal particles into precise lines on a flexible substrate. The result is wiring lines with a minimum width of 3.4 micrometers that maintain stable conductivity even when bent.
4D printing
Four-dimensional (4D) printing is an emerging method that describes how a printed structure’s shape, property or function changes in response to external stimuli like heat, light, water or pH. While this additive manufacturing technique has existed for years, it has largely been restricted to academics.
4D-printed circuits could revolutionize flexible electronics manufacturing by improving soft robotics, medical implants, and wearables. One proof-of-concept sensor converted pressure into electric energy despite having no piezoelectric parts. These self-powered, responsive, flexible electronic devices could lead to innovative design approaches.
Impact of innovative manufacturing techniques
Newly developed manufacturing techniques and materials will have far-reaching implications for the design of flexible electronics. So, industry professionals should pay close attention as early adoption could provide a competitive advantage.
Ellie Gabel is a freelance writer as well as an associate editor at Revolutionized.
Related Content
- Flexible electronics tech shows progress
- Fab-in-a-Box: Flexible Electronics Scale Up
- Printed electronics enhance device flexibility
- Flexible electronics stretch the limits of imagination
- Printed Electronics to Enhance both Exteriors and Interiors in EVs
The post Innovative manufacturing processes herald a new era for flexible electronics appeared first on EDN.
Touch controller eases user interface development
Microchip’s MTCH2120 turnkey touch controller offers 12 capacitive touch sensors configured via an I2C interface. Backed by Microchip’s unified ecosystem, it simplifies design and streamlines transitions from other turnkey and MCU-based touch interface implementations.
The MTCH2120 delivers reliable touch performance, unaffected by noise or moisture. Its touch/proximity sensors can work through plastic, wood, or metal front panels. The controller’s low-power design enables button grouping, reducing scan activity and power consumption while keeping buttons fully operational.
Easy Tune technology eliminates manual threshold tuning by automatically adjusting sensitivity and filter levels based on real-time noise assessment. An MPLAB Harmony Host Code Configurator plug-in eases I2C integration with Microchip MCUs and allows direct connection without host-side protocol implementation. Design validation is facilitated through the MPLAB Data Visualizer, while built-in I2C port-expander capability allows three or more unused touch input pins to be repurposed.
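To give a sense of what “configured via an I2C interface” looks like from the host side, here is a minimal sketch using Python’s smbus2 library on a Linux host. The device address and register offsets are purely hypothetical placeholders (the real register map lives in Microchip’s MTCH2120 documentation), so treat this as an illustration of the access pattern rather than working driver code:

```python
# Illustrative host-side access pattern only: the I2C address and register
# offsets below are hypothetical placeholders, not the MTCH2120 register map.
from smbus2 import SMBus

I2C_BUS = 1
DEV_ADDR = 0x20          # hypothetical 7-bit device address
REG_BUTTON_STATE = 0x00  # hypothetical register: bitmask of touched sensors
REG_SENSITIVITY = 0x10   # hypothetical register: per-sensor sensitivity

with SMBus(I2C_BUS) as bus:
    # Read a 2-byte bitmask covering the 12 capacitive touch channels.
    lo, hi = bus.read_i2c_block_data(DEV_ADDR, REG_BUTTON_STATE, 2)
    touched = (hi << 8) | lo
    for ch in range(12):
        if touched & (1 << ch):
            print(f"Sensor {ch} is touched")

    # Example configuration write: adjust sensitivity for sensor 0.
    bus.write_i2c_block_data(DEV_ADDR, REG_SENSITIVITY, [0x40])
```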
In addition, access to Microchip’s touch library minimizes firmware complexity, helping to shorten design cycles. For rapid prototyping, the MTCH2120 evaluation board includes a SAM C21 host MCU for out-of-the-box integration.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Touch controller eases user interface development appeared first on EDN.