SoC simplifies smart home connectivity
Leveraging Qorvo’s ConcurrentControl technology, the QPG6200L SoC supports Matter, Zigbee, and Bluetooth LE networks. As the first product based on this low-power wireless technology, it enables coexistence across different radios and channels while offering antenna diversity.
The SoC ensures seamless communication and interoperability between wireless standards, meeting the needs of evolving smart home environments. It supports multiple protocols on separate channels simultaneously, serving a wide range of IoT applications, such as smart lighting, sensors, and home hubs. Additionally, the chip has a built-in secure element and is PSA Certified to Level 2 for enhanced IoT security.
Qorvo’s QPG6200L is powered by an Arm Cortex-M4F processor with a floating-point unit (FPU) running at up to 192 MHz. The chip includes 2 MB of nonvolatile memory and 336 kB of RAM. With a sleep current below 1 µA, the SoC is well-suited for battery-operated sensors and energy-harvesting devices. It also delivers transmit power of 10 dBm with a transmit current of 21 mA.
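To put that sub-microamp sleep figure in context, here is a rough back-of-the-envelope battery-life estimate for a duty-cycled sensor node. Only the 21 mA transmit current and the sub-1-µA sleep current come from the article; the burst length, reporting interval, and coin-cell capacity are illustrative assumptions, not Qorvo specifications.

```python
# Rough battery-life estimate for a duty-cycled sensor node built on a low-power
# radio SoC. Duty-cycle and battery numbers are illustrative assumptions; only
# the 21 mA TX and <1 uA sleep figures come from the article.

SLEEP_CURRENT_A = 1e-6      # sleep current, from the article (upper bound)
TX_CURRENT_A = 21e-3        # transmit current, from the article
TX_TIME_S = 0.005           # assumed 5 ms radio burst per report
REPORT_PERIOD_S = 60.0      # assumed one report per minute
BATTERY_CAPACITY_MAH = 220  # assumed CR2032-class coin cell

duty = TX_TIME_S / REPORT_PERIOD_S
avg_current_a = TX_CURRENT_A * duty + SLEEP_CURRENT_A * (1 - duty)
life_hours = (BATTERY_CAPACITY_MAH / 1000) / avg_current_a
print(f"Average current: {avg_current_a * 1e6:.2f} uA")
print(f"Estimated life:  {life_hours / 24 / 365:.1f} years")
```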
Samples of the QPG6200L SoC are available now, with full production planned for early next year.
The Colpitts oscillator
Consider the illustration in Figure 1 of a circuit which, as we shall see, is an oscillator, specifically, a Colpitts oscillator.
Figure 1 The Colpitts oscillator where the passive components are arranged on the right-hand side for easier viewing.
There is an R-L-C network of passive components and an active gain block. For purposes of this model, the input impedance of the gain block is high enough for its loading effect to be ignored, the output impedance of the gain block is zero and the value of its gain, A, is nominally unity or perhaps a little less than unity. The resistance R1 models the output impedance that a real-world gain block might present.
To analyze this circuit, we take the passive components, redraw them as on the right and begin using node analysis (Figure 2). The term G1 = 1 / R1 and the term S = j * 2 * π * F.
Figure 2 Node analysis used for circuit presented on the right-hand side of Figure 1.
Having gotten the relationship between Eo and E1 to the exclusion of E2, we do several algebraic steps to get that relationship into a useful form as shown in Figure 3.
Figure 3 An algebraic rearrangement where the transfer function is brought into a more useful form.
Note that the denominator of this last equation is cubic. It is a third order polynomial because there are three independent reactive elements in the circuit, L1, C1 and C2.
Please also note that the order of the polynomial MUST match the number of independent reactive elements in the circuit. If we had come up with an algebraic expression of some other order, we would know we’d made a mistake somewhere.
Graphing the ratio of E1/Eo versus frequency, we see the following in Figure 4.
Figure 4 A plot of E1/Eo versus frequency from algebraic analysis in Figure 3.
The transfer function of the passive R-L-C network has a pronounced peak at a frequency of 1.59 MHz. When we run a SPICE simulation of that transfer function, we find the same result (Figure 5).
Figure 5 A SPICE analysis of the passive R-L-C network showing E1/Eo versus frequency with a defined peak of almost 40 dB at 1.59 MHz.
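For readers who prefer a numerical check to SPICE, here is a short Python sweep of the same kind of third-order transfer function. The node arrangement assumed here (a source with output resistance R1 driving the C2/C1 tap, with L1 from the feedback node to ground) is one common way to draw the follower-driven Colpitts network, and the component values are purely illustrative, chosen to put the peak near 1.59 MHz with roughly a 40-dB step-up; they are not the component values used in the article.

```python
import numpy as np

# Illustrative values only, chosen to put the peak near 1.59 MHz with a ~40 dB
# step-up; they are not the component values used in the article.
R1 = 50.0        # follower output resistance, ohms
L1 = 100e-6      # henries
C1 = 101e-12     # farads (feedback-node capacitor)
C2 = 10.1e-9     # farads (capacitor at the driven tap)
G1 = 1.0 / R1

f = np.linspace(1.4e6, 1.8e6, 20001)
s = 1j * 2 * np.pi * f

# E1/Eo of the follower-driven Colpitts network: source with output resistance R1
# drives the C2/C1 tap, L1 ties the feedback node E1 to ground. The denominator
# is cubic, matching the three reactive elements.
H = (G1 * s**2 * L1 * C1) / (s**3 * L1 * C1 * C2 + s**2 * L1 * C1 * G1
                             + s * (C1 + C2) + G1)

peak = np.argmax(np.abs(H))
print(f"Peak: {20*np.log10(abs(H[peak])):.1f} dB at {f[peak]/1e6:.3f} MHz, "
      f"phase {np.degrees(np.angle(H[peak])):.1f} deg")
# Peak magnitude is roughly 1 + C2/C1 (about 40 dB here), near
# 1/(2*pi*sqrt(L1*C1*C2/(C1+C2))).
```

Note that the phase of E1/Eo passes through zero right at the peak, which is why a follower with a gain of only about unity is enough to sustain oscillation there.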
When we let our gain block be a voltage follower, a JFET source follower as shown in Figure 6, we see oscillation at very nearly the frequency of that transfer function peak.
Figure 6 Colpitts oscillator built from the passive network shown in Figure 5 by introducing a JFET source follower; it oscillates at very nearly the frequency of the transfer function peak.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Emitter followers as Colpitts oscillators
- Simulation trouble: Bode plotting an oscillator
- Oscillator has voltage-controlled duty cycle
- JFETs offer LC oscillators with few components
The power of practical positive feedback to perfect PRTDs
Frequent contributor Nick Cornford recently published a delightfully clever design idea using a platinum RTD calibrated to output a 1 mV/°C signal that’s perfect for direct readout via a standard DMM…
Wow the engineering world with your unique design: Design Ideas Submission Guide
I thought Nick’s idea was so cool I just had to try and cobble up my own version of it. The initial effort is shown in Figure 1.
Figure 1 PRTD circuit shamelessly copies Nick C’s idea for making an ordinary DMM into a precision digital thermometer.
Figure 1’s circuit is conceptually identical to Nick’s in putting the PRT into a basic bridge topology with constant-current excitation of the PRT. It differs, however, in one detail: only the PRT half of the bridge is actively regulated with constant current, while the other (zero adjust) half is just a passive voltage divider. This ploy reduces the parts count somewhat (saving two transistors, an op-amp, and maybe a resistor or two), but it doesn’t make the circuit work significantly worse or better. The calibration process is the same very-well-explained procedure described in Nick’s DI, as is the achievable accuracy. I certainly won’t try to compete with Nick’s well-written writeup in that regard.
In fact, I suppose you might legitimately ask if such a similar circuit really merits separate publication in the first place. Fortunately, this is not quite the end of our story.
Because of the 10% attenuation of the PRT signal inflicted by the passive side of my bridge, in order to duplicate Nick’s terrific feature of a 1 mV/°C direct readout, I had to boost the PRT excitation current Iprt by that same 10% to make the bigger signal. So, I made Iprt = 110% x 1 mV/°C / 0.3851 Ω/°C = 2.857 mA instead of the 2.597 mA used by Nick in his double-constant-current-source circuit. So far, so good.
But then this got me musing about what effect further multiples of Iprt might have. This was very interesting, of course, because platinum’s tempco is not exactly constant with temperature, a fact described by the Callendar-Van Dusen polynomial. It predicts platinum’s tempco declines steadily from the 0°C value as temperature T increases. Note the pesky quadratic ‘B’ term.
R(T) = R(0) [1 + (A × T) – (B × T²)]
A = 3.9083 × 10⁻³ °C⁻¹
B = 5.775 × 10⁻⁷ °C⁻²
So, I calculated the circuit’s output over 0°C to 100°C while gradually bumping Iprt. The interesting results are plotted in Figure 2. X axis = actual temp, red = reading error in degrees.
Figure 2 The Callendar-Van Dusen polynomial used here to predict that for any given temperature, an excitation current increment exists that will give an accurate readout, e.g., 0.5% for 33°C, 1% for 67°C, and 1.5% for 100°C.
All that’s required to utilize this effect to continuously and automatically fix the reading is the addition of R8 and R9 to generate the positive feedback provided in Figure 3. Now:
Iprt(T) = Iprt(0) × (1 + 0.15 × (Vprt(T) – Vprt(0)))
Thus, as the readout voltage goes from 0 to 100 mV, the Iprt excitation current increases by the 0 to +1.5% needed to accurately linearize the reading. The residual error with Figure 3’s positive feedback can be seen in Figure 4.
Figure 3 The 40 mV of positive feedback via R8 to reference U1 increases PRT excitation current with increasing temperature and thus linearizes the temperature reading, making the thermometer accurate to ±0.1°C.
Figure 4 Residual error with Figure 3’s positive feedback.
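For anyone who wants to reproduce Figure 2 and Figure 4 qualitatively, the sketch below models the idea numerically. It assumes a PT100 element obeying the Callendar-Van Dusen equation above, a readout scaled to 1 mV/°C at 0°C, and that the feedback raises the bridge reference so the whole differential reading scales by (1 + 0.15 × Vreading); the actual Figure 3 component values and bridge details are abstracted away.

```python
import numpy as np

# Numerical sketch of the linearization idea. Assumptions (not taken from the
# schematics): a PT100 element, a readout scaled to 1 mV/C at 0 C, and positive
# feedback that scales the whole differential reading by (1 + 0.15 * Vreading),
# per the Iprt(T) equation above.
R0, A, B = 100.0, 3.9083e-3, 5.775e-7          # Callendar-Van Dusen, 0-100 C

def reading_no_fb(t):
    """Bridge differential in volts with fixed excitation, zeroed at 0 C."""
    i0 = 1e-3 / (R0 * A)                        # current giving 1 mV/C slope at 0 C
    return i0 * R0 * (A * t - B * t * t)

T = np.linspace(0.0, 100.0, 101)
v0 = reading_no_fb(T)
err_fixed = v0 * 1000 - T                       # error in degrees, fixed excitation

# With feedback the reading satisfies v = (1 + 0.15*v) * v0; solve for v.
v_fb = v0 / (1 - 0.15 * v0)
err_fb = v_fb * 1000 - T

print(f"Max error, fixed excitation : {np.max(np.abs(err_fixed)):.2f} C")
print(f"Max error, with feedback    : {np.max(np.abs(err_fb)):.3f} C")
# Expect roughly 1.5 C at 100 C without feedback, and well under 0.1 C with it.
```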
And that, I thought, was worth its own writeup. I hope Nick will agree.
Postscript: As per my usual habit, I did research on PRTD linearization with positive feedback only AFTER I’d already blundered my way to this solution on my own. But having done it, I wanted to see if anybody else was using the method. Yes. They are.
Guess who? I’m actually now kind of glad I didn’t look before I leaped. If I’d already seen the complexity of Jim’s circuit, I might not have attempted it!
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- DIY RTD for a DMM
- Designing with temperature sensors, part three: RTDs
- Switch mode hotwire thermostat
Diagnosing and resuscitating a set of DJI drone batteries
A couple of months ago, spurred into action by U.S. House of Representatives passage of legislation banning ongoing sales of DJI drones (and the still-unclear status of corresponding legislation in the Senate), along with aggressively-priced promotions on latest-generation products (likely related to the aforementioned legislative action), I decided to splurge on an upgrade to the Mavic Air that I mentioned back in October 2021 that I’d recently bought.
Truth be told, the Mavic Air was still holding its own feature set-wise, more than six years after its January 2018 introduction. It supports, for example, both front and rear collision avoidance and accompanying auto-navigation to dodge objects in its flight path (APAS, the Advanced Pilot Assistance System), along with a downward-directed camera to aid in takeoff and landing.
And its 3-axis gimbal-augmented front camera shoots video at up to 4K resolution at a 30-fps frame rate with a 100-Mbps bitrate.
That all said, its diminutive image sensor leads to notable image quality shortcomings in low ambient light settings, for example. Newer drones also extend collision avoidance capabilities to the sides, along with augmenting conventional cameras with LiDAR. And other recent government regulatory action, details of which I’ll save for a dedicated writeup another day, has compelled me to purchase additional hardware in order to continue legally flying the Mavic Air in a variety of locations, along with needing to officially register it with the FAA per its >249 g weight.
Still, I plan for the Mavic Air to remain in my aviation stable for some time to come, alongside its newer sibling, details of which I’ll also save for a dedicated writeup (or few) another day (or few). After all, with greater weight frequently also comes greater wind resistance. And as regular readers already know, I’m loath to ditch perfectly good hardware. As such, after pressing “purchase” on my new toy, I decided to pull the Mavic Air out of storage, dust it off and fire it up again (I’d admittedly barely used it since acquiring it more than three years back). At the time I bought it, I’d also purchased three factory-refurbished spare batteries (a key addition for long operating sessions given its ~15 minute per-battery flight time) from a reputable eBay dealer.
The original battery installed in the drone still held a modicum of charge, I was happily surprised to see when I fired it up, and it also recharged back to “full” just fine. Here’s what it looks like:
Here’s a closeup of those markings:
See: still alive!
And here’s where it fits in the drone body cavity:
The three additional batteries, on the other hand (knowledgeable readers are already shaking their heads in shared sorrow at this point)…dead as a doornail (or, if you prefer, a parrot), all three of ‘em, with no LED illumination whatsoever in response to front button presses. And, for what it’s worth, I’m not alone in my woe.
My subsequent research (PDF) informed me that per Battery Storage list entry #3, “the battery will enter hibernation mode if depleted and stored for a long period…leave the battery unattended for 5 minutes, and then…recharge the battery to bring it out of hibernation.” Unfortunately, that didn’t work for me, even after an hour’s worth of charger tethering. Which led to Battery Storage list entry #2: “DO NOT store the battery for an extended period after fully discharging it. Doing so may over-discharge the battery and cause irreparable battery cell damage.” Readers of my recent post on SLA and lithium-based batteries may be feeling more than a bit of déjà vu at this point.
What I initially figured had happened was something analogous to a situation I’d diagnosed more than six years ago, involving a wireless charging-capable battery case for my wife’s iPhone that wasn’t able to recharge itself over Qi because its integrated battery cells had drained beyond the point where they could even power the charging circuitry itself. Reality, it turns out, was even more complicated than that due to the DJI battery pack’s multi-cell structure and the consequent need for an integrated battery management system (BMS) analogous to, albeit somewhat simpler than, the one discussed in a recent EDN writeup by another author.
How did I learn about this quirk? Reluctant to just toss the batteries and replace them with (pricey) new(er) ones, I came across someone online who was offering a resurrection service. My eventual savior, who I’ve agreed to only refer to by his first name Erik (and I’m referencing anonymously otherwise because, in his own words, “I don’t want to bring any attention to what I do because I don’t want to risk DJI unleashing their might upon me”), explained it this way:
The cells have likely been discharged below the threshold where the BMS sets a fault and keeps you from charging them. I don’t know how low they’ve drained until I open them up but, more often than not, they end up being just fine after a nice balanced charge and a few charge/discharge cycles before the BMS is reprogrammed. There’s no design fault in these batteries. The BMS is pretty standard and the cells are actually very high quality.
Upfront, I paid $34.99 per battery (minus a modest quantity discount; also note that he offers a full per-battery fee refund for any batteries that he can’t fix), plus $9 for quantity-independent return shipping, along with $22.80 to ship the batteries to him in the first place. They arrived on a Friday, and he’d successfully resuscitated, tested and shipped them back to me by the next Monday. Here’s how he described the background and the process in his own words (with only light-touch editing by yours truly…note that he in-advance agreed that I could republish them):
They all have been successfully repaired, bench tested and test-flown.
Getting a bit of the battery’s history and the problem doesn’t hurt, but it’s unnecessary. The BMS stores all the information I need. It’s like pulling codes from your car with an OBD2 computer, and like repairing a car, it’s up to the technician to interpret them.
When a battery arrives, I physically inspect it and make sure I didn’t get a battery that works fine. It’s happened before.
After that, I heat the back cover using a heating mat to loosen up the glue that secures it. I use 40°C for 20 or 30 minutes. The cover is also secured with a half dozen tabs that need to be worked on. An X-Acto knife and a flat plastic spatula do the job. I try to use plastic tools in order to not puncture the battery contents or short-circuit anything.
Once it’s all opened, the cells need to be tested. I made an adapter so I can connect all three cells to a LiPo balance/charger, bypassing the circuit board. I use it to cycle the cells twice by charging them at 0.25C the first time and 1C the second. Discharges are done at 1C the first time and 4C the second time. I use 4C the second time because it’s how much current the drone “pulls” while at flight. [Editor note: he’s referring here to the so-called C-rate]
If the cells check out ok, I proceed to working on the BMS. I look to see if the cells balance correctly within 0.2–0.3 V and if there isn’t much voltage sag. These cells seem to be of good quality and are therefore very resilient, but again remember that your and others’ cells haven’t been cared for and are not brand new, so I do not expect them to work as if they were.
To connect, gain access, and communicate with the BMS, I use the EV2400 from Texas Instruments. Communication is done via SMBus. That said, any SMBus-capable interface adapter could conceivably be used.
Access to the proprietary BMS is protected by passwords. You either already have them or you need to “break in” in order to gain access. I can’t tell you my passwords, but I know of programs out there that will “generate” codes and let you in.
I use BQStudio, also by Texas Instruments, to tweak parameters and code as needed to “erase” the permanent faults (PF). Doing so enables the associated MOSFETs to open or close, thereby once again successfully enabling charges and discharges.
After reprogramming it’s time to charge the battery, now using the DJI charger, to see if it’s successful. If it charges as it should, good!
Now, I put the battery in the drone and check for firmware updates via the relevant DJI application.
Finally, it’s time for the first flight. I like to fly low and slow for the first couple of minutes. After that, I fly normally and, if it all checks out, I fly enough until the battery reaches 50 – 60% charge left. I like to ship these batteries at about 50% charge which is about 3.85 V per cell. That’s the manufacturer’s recommended storage voltage.
It’s also recommended that these batteries be recharged every 4 or 6 months when not in use so that the voltage doesn’t fall below the BMS’s threshold, which would cause a PF flag to once again be set by the BMS. This goes for all “smart batteries”, i.e., batteries with built-in BMSs.
Safety is a must. Lithium batteries are flammable, so great care must be taken and common sense must be used when attempting to repair, charge, or use these batteries, and they must never be charged unattended. I’ve had two incidents happen, and it’s a little scary. I didn’t get hurt, but I had to buy a few new instruments for my lab.
I’ll augment Erik’s wise words with two additional quotes from the earlier referenced DJI documentation:
Discharge the battery to 40%-65% if it will NOT be used for 10 days or more. This can greatly extend the battery life. The battery automatically discharges to below 65% when it is idle for more than 10 days to prevent it from swelling. It takes approximately one day to discharge the battery to 65%. It is normal that you may feel moderate heat emitting from the battery during the discharge process.
and
Fully charge and discharge the battery at least once every 3 months to maintain battery health.
Likely unsurprisingly to many of you from what you’ve already read, Erik also noted in a subsequent message:
The components on the [battery] board are off-the-shelf. Both the system chip and microcontroller are from Texas Instruments.
And Erik even shared some photos with me, to subsequently reshare with all of you, of the various disassembly, rejuvenation and reassembly steps:
As mentioned upfront, I’m not the only person who’s been struck by “dead DJI battery” lightning. And, as it turns out, Erik’s not the only one who’s figured out how to revive ‘em. Check out, for example, this chronologically ordered series of videos from an outfit in New Zealand who specializes in such services (among other drone-related things):
In one of the videos, the technician even uses TI’s EV2400; the others, consistent with Erik’s comment that “any SMBus-capable interface adapter could conceivably be used,” showcase an inexpensive USB-to-SMBus bridge board based on Silicon Labs’ CP2112 IC. And as far as software goes, the most common approach leverages a freeware utility called “DJI Battery Killer”. The tool’s lineage is unknown, although it seems to have originally come from Ukraine, and it’s seemingly disappeared from sites that originally hosted it (although in at least one case, I happen to know that it’s still downloadable via the Internet Archive Wayback Machine intermediary), so consider yourself duly warned, and no, I’m not going to provide any direct links. I’ll only note that the utility’s documentation (which, yes, I now have a copy of) specifically mentions that it supports several battery management controllers: the BQ30Z55, BQ40Z50 and BQ40Z307 (BQ9003).
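For the curious, the standard Smart Battery System (SBS) registers that gauges like the BQ30Z55 and BQ40Z50 expose can be read with a few lines of code. The sketch below is a minimal example assuming a Linux host whose SMBus adapter shows up as /dev/i2c-1 (the CP2112 has a mainline hid-cp2112 driver that does exactly that) and a gauge at the standard SBS address 0x0B; it reads only public SBS registers and does not touch the password-protected, DJI-specific permanent-fault data.

```python
# Minimal sketch of reading standard Smart Battery System (SBS) registers over
# SMBus. Assumptions: a Linux host, an adapter exposed as /dev/i2c-1, and a
# gauge at the standard SBS address 0x0B. Register numbers are from the SBS 1.1
# spec; clearing vendor-specific permanent faults is a separate, protected step.
from smbus2 import SMBus

SBS_ADDR = 0x0B           # standard smart-battery gauge address
REG_TEMPERATURE = 0x08    # 0.1 K units
REG_VOLTAGE = 0x09        # mV
REG_CURRENT = 0x0A        # mA, signed
REG_RELATIVE_SOC = 0x0D   # percent

def read_signed(bus, reg):
    raw = bus.read_word_data(SBS_ADDR, reg)
    return raw - 0x10000 if raw & 0x8000 else raw

with SMBus(1) as bus:     # /dev/i2c-1: adjust for your adapter
    temp_c = bus.read_word_data(SBS_ADDR, REG_TEMPERATURE) / 10.0 - 273.15
    volt_mv = bus.read_word_data(SBS_ADDR, REG_VOLTAGE)
    curr_ma = read_signed(bus, REG_CURRENT)
    soc_pct = bus.read_word_data(SBS_ADDR, REG_RELATIVE_SOC)
    print(f"{volt_mv} mV, {curr_ma} mA, {temp_c:.1f} C, {soc_pct}% SoC")
```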
In closing, I’ll repeat for emphasis one phrase from Erik’s earlier in-depth commentary:
It’s also recommended that these batteries be recharged every 4 or 6 months when not in use so that the voltage doesn’t fall below the BMS’s threshold, which would cause a PF flag to once again be set by the BMS. This goes for all “smart batteries”, i.e., batteries with built-in BMSs.
That last-sentence reality check is both validating and deeply disturbing. To my right as I type these words, in addition to various drones’ batteries, is my expensive set of multi-cell D-Tap video batteries that I mentioned at the end of last year and again a month later, along with a variety of cameras and proprietary batteries for them. In front of me, of course, is my integrated multi-cell battery-based laptop computer. And to my left are multiple wireless headphone sets. More generally, scattered around the house (including in long-term storage) are any number of devices that run on Li-ion, LiPo and other battery chemistries, with most of those cells fully integrated and difficult-to-impossible to replace.
All of them, it seems based on this and past experiences, are essentially ticking time bombs, just sitting there waiting to die if they’re not regularly topped off. While I’m not so cynical as to knee-jerk label them all as examples of “obsolescence by design”, I’m also not a Pollyanna inclined to completely discount this possibility in all cases. Regardless of whether it’s an upfront intention or just an unfortunate side effect of today’s dominant power source technology, it’s a sooner-or-later source of outrage for consumers, not to mention a root reason for premature, excessive landfill donations. I welcome others’ perspectives, including insights into up-and-coming battery technologies and implementation techniques that don’t exhibit this Achilles heel, in the comments.
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Oh little drone, how you have grown…
- SLA batteries: More system form factors and lithium-based successors
- Solving a wireless charging mystery
- Automate battery management system (BMS) test with a digital twin
- An assortment of tech-hiccup tales
- Prosumer and professional cameras: High quality video, but a connectivity vulnerability
- Battery life management: the software assistant
- Obsolescence by design: The earbuds edition
The life and chip works of Marvell co-founder Sehat Sutardja
Amid the steadily increasing talk about Moore’s Law of transistor scaling hitting the wall, Marvell co-founder and CEO Sehat Sutardja presented the idea of modular chips (MoChi) at the International Solid State Circuits Conference (ISSCC) in 2015. This eventually culminated in what’s now widely known as chiplets.
He later discussed the idea of daisy-chaining multiple chips with AMD CTO Mark Papermaster, who thought the MoChi name was too complicated and called them chiplets instead. Sehat’s passion and dedication to cobbling different pieces of silicon into a single package eventually led him to co-found the first chiplet foundry, Silicon Box, in 2021.
Sehat Sutardja, co-founder of Marvell and Silicon Box and investor and backer of several semiconductor startups, passed away on 18 September 2024. He was known as one of the pioneers of the modern semiconductor industry. “I am a bit narrow-minded. I only see things in terms of electronics,” he was quoted in a profile story, “Sehat Sutardja: An Engineering Marvell,” published in IEEE Spectrum in October 2010.
Born into a Chinese family in Jakarta, Indonesia, in 1961, he was drawn to the wonders of electronics early in his childhood. While in sixth grade, he visited his younger brother Pantas Sutardja, who lived with their grandparents in Singapore. During this time, he got hold of Pantas’ hobbyist DIY books and magazines and was fascinated by the idea of building a Van de Graaff generator. The two siblings ended up developing a crude but functioning device.
From Van de Graaff generator to storage chips
Back in Jakarta, Sehat started building a miniature version of the Van de Graaff generator. After a little research at a bookstore, he discovered that the improved device would require replacing mechanical switches with transistors. That led him to a nearby radio shop, and within a year or so, he’d earned a radio repair license.
Figure 1 Sehat was very fond of the radio repair license he received at the tender age of 13; his wife kept a copy in her purse in case he wanted to show it to people. Source: Marvell
While playing with transistors, he often encountered company names such as Fairchild, National Semiconductor, Motorola, and Texas Instruments. These were all U.S. companies, which led to his inclination toward studying in the United States. A friend of his brother was enrolled at the University of San Francisco, and that connection took him there in the summer of 1980.
What happened next clearly shows Sehat’s intimate bond with electronics. After discovering that the university didn’t have an electrical engineering program, he moved to Iowa State University and earned his bachelor’s in electrical engineering in 1983. Then he moved to the University of California, Berkeley, where he completed his master’s in 1985 and Ph.D. in 1988 in electrical engineering and computer science.
That’s where he also met his wife, Weili Dai, who was a computer science major. Sehat began his professional career as an analog circuit designer with two Bay Area companies: Micro Linear and Integrated Information Technology. At Micro Linear, he worked on digital-to-analog converters (DACs) and other chips for hard disk drives (HDDs).
Next, at Integrated Information Technology, he worked on circuits for digital video compression and decompression, a technology that ended up in AT&T’s infamous VideoPhone. Meanwhile, his wife Weili wanted them to start their own company, so in 1995, they founded Marvell Technology Group along with Sehat’s brother Pantas Sutardja. The name Marvell came from their quest to create “marvelous” things; it ended with “el” following the names of successful tech companies like Intel, Novell, and Nortel.
Birth of Marvell
Pantas, who had recently left IBM’s Almaden Research Center, had worked on hard drive technology there. That, combined with Sehat’s stint at Micro Linear and expertise in mixed-signal chips, led them to develop digital read channels for hard disk drives. At that time, analog read channels from companies like Infineon, STMicroelectronics, and TI depended on amplitude peaks to decode HDD data.
On the other hand, digital technology could utilize the newly arrived CMOS technology scaling to define bit patterns on a hard disk track. So, Marvell used high-speed sampling and DSP filtering to introduce digital read channels that significantly increased disk drive data densities. That put TI out of the read-channel business.
Figure 2 Weili Dai, Sehat Sutardja, and Pantas Sutardja founded Marvell in February 1995 with their savings and money from Weili’s parents. Source: IEEE Spectrum
They had working chips by Christmas 1995, and Seagate became Marvell’s first customer. Marvell has dominated the disk drive controller market since then. The timing was impeccable from two standpoints. First, the fabless design movement was just taking off, and Marvell became one of the early success stories in the emerging fabless semiconductor business model.
Second, by adopting CMOS technology for its debut chip for a hard disk drive, Marvell became one of the early adopters and beneficiaries of the historic transition from bipolar to CMOS chip manufacturing. Marvell followed the CMOS-centric approach on other products like Ethernet switches and transceivers to create faster and more power-efficient chips.
However, Marvell and Sehat kept a relatively low profile while staying laser-focused on the company’s product and technology roadmaps. Sehat, known as humble and down to earth, didn’t make splashes in trade media like many other founders and CEOs of successful chip companies.
Then, in 2016, Marvell’s intensely quiet world was hit by an accounting scandal. Though president and CEO Sutardja and his wife, Dai, chief operating officer, were cleared of any financial misconduct, the pressure on sales teams to meet revenue targets led both Sutardja and Dai to leave their respective positions. Sutardja remained the chairman of the board.
The chiplets man
In the aftermath of this accounting investigation, Sutardja and Dai remained highly respected in semiconductor industry circles. After turning a scrappy little startup into a formidable semiconductor outfit, the husband-wife duo engaged in over a dozen startups, including Alphawave and DreamBig.
They also co-founded a specialized fab built around chiplet and advanced packaging technologies. Silicon Box, after building a fab in Singapore, is setting up another chiplet fab in Northern Italy to better serve European chip companies.
Figure 3 Sehat Sutardja, known for his humility, kindness and generosity, made significant gifts to the University of California, Berkeley. He is seen here with his wife and two sons at the grand opening of UC Berkeley’s Sutardja Dai Hall on 27 February 2009. Source: University of California, Berkeley
Sehat’s focus on chiplets shows his foresight on the future of semiconductors. To express his relationship with semiconductor technology and how it kept him going, he once said, “I don’t know anything else.”
Related Content
- Marvell Acquires Radlan
- Marvell’s President & CEO Resign
- Marvell CEO: The Tinkerer at The Top
- Ethernovia, Marvell Aim to Revamp Car Networks
- Marvell Unleashes High-Performance Preamplifier
New i.MX MCU serves TinyML applications
NXP has extended its ultra-low power i.MX line with the RT700. The device incorporates AI processing via the integrated eIQ® Neutron neural processing unit (NPU) and enhanced compute with five cores in total, including two Arm® Cortex®-M33 cores plus Cadence® Tensilica® HIFI1 and HIFI4 DSP blocks. The chip is designed to maximize time spent in sleep mode for up to a 50% improvement in power efficiency. With over 7.5 MB of SRAM, designers can partition the memory, either locking regions to a particular core or sharing them between the two. The large memory ensures users do not have to prune their NPU model or real-time operating system (RTOS) to fit, easing the design process. The RT700 supports the embedded USB (eUSB) standard in order to connect to other USB peripherals at the lower 1.2 V I/O voltage instead of the traditional 3.3 V. Finally, an integrated DC-DC converter allows users to power the onboard peripherals. A block diagram can be seen in Figure 1.
Figure 1 Block diagram of the new i.MX RT700 crossover MCU with an upgrade in the number of cores, amount of memory, advanced peripherals, as well as a new NPU. Source: NXP
The crossover MCUs
NXP’s crossover family of MCUs was created to offer the performance of an applications processor (a higher-end core running at higher frequencies) with the simplicity of an MCU. It is a direct alternative for customers that purchase low-end microprocessors with memory management units (MMUs) to run rich OSes, where external DDR is often necessary, but who would rather use an RTOS. Crossover MCUs streamline this task by bumping up the performance of the MCU and including high-speed peripherals such as GPUs. In essence, it is a microprocessor chassis with an RTOS running on an MCU core as the engine.
Enhanced performance
While the 4-digit category of this crossover lineup concentrates on performance, running from 500 MHz to 1 GHz, the 3-digit subcategory is specialized for battery-powered, portable applications. The RT500 was optimized for low-power 2D graphics capabilities while the RT600 introduced higher-performance DSP capabilities; the RT700 combines the power efficiency and performance of these two predecessors (Figure 2). With five cores, the RT700 lets the M33 handle the RTOS work while the two DSPs and the 325-MHz eIQ Neutron NPU run alongside to accelerate complex, multi-modal AI tasks in hardware.
Figure 2: The i.MX RT700 family combines both existing RT500 and RT600 families, offering even lower power consumption while adding more performance through the increase of cores and other architectural enhancements. Source: NXP
Power optimization
The design revolves around NXP’s energy flex architecture, using heterogeneous domain computing to match power consumption to the application’s specific compute needs, all optimized for the RT700’s specific process technology. Two different power domains, the compute subsystem and the sense subsystem, serve high-speed processing and low-power compute scenarios, respectively.
The RT700 can use as little as 9 µW in sleep mode while retaining more than 5 MB of memory contents, ensuring that the device consumes as little power as possible in a deep sleep state, wakes up quickly, and keeps its SRAM contents as if it had stayed on. Run-mode power consumption has been reduced to 12 mW from the RT500’s previous 17 mW (Figure 3).
Figure 3: The i.MX RT700 exhibits a 30% improvement in power consumption while in run mode and a 70% improvement in sleep mode.
The aptly named sense subsystem is generally geared towards sensor-hub type applications that are “always on”. The eIQ NPU will further optimize power consumption by minimizing time spent in run mode and maximizing sleep mode. Figure 4 shows the power consumption executing a typical ML use case on the Arm Cortex-M33 alone and after the algorithm has been accelerated with the eIQ Neutron NPU, with the duty cycle adjusted accordingly.
Figure 4: eIQ Neutron NPU acceleration will maximize the amount of time the device spends in sleep mode, ensuring processing is done as rapidly as possible to switch back into low power sleep modes. Source: NXP
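The arithmetic behind that duty-cycle argument is simple enough to sketch. The 12 mW run and 9 µW sleep figures are from above; the per-inference time, wake period, and the assumed 100x NPU speedup are illustrative placeholders, not NXP data.

```python
# Back-of-envelope illustration of why cutting inference time cuts average power.
# The 12 mW run and 9 uW sleep figures are from the article; the inference time,
# wake period, and 100x NPU speedup are illustrative assumptions.
P_RUN_W = 12e-3
P_SLEEP_W = 9e-6
PERIOD_S = 1.0            # assumed: one inference per second
T_CPU_S = 50e-3           # assumed: 50 ms per inference on the Cortex-M33 alone
T_NPU_S = T_CPU_S / 100   # assumed: 100x faster on the eIQ Neutron NPU

def avg_power(t_active, period):
    duty = t_active / period
    return P_RUN_W * duty + P_SLEEP_W * (1 - duty)

print(f"CPU-only: {avg_power(T_CPU_S, PERIOD_S)*1e6:.0f} uW average")
print(f"With NPU: {avg_power(T_NPU_S, PERIOD_S)*1e6:.0f} uW average")
```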
Benchmarks
Benchmarks performed with the MLPerf Tiny benchmark suite for anomaly detection, keyword spotting, visual wakewords, and image classification on the Arm Cortex-M33 and the eIQ NPU can be seen in Figure 5. The contrast is immediate, showing up to 172x acceleration on models run on the NPU.
Figure 5: MLPerf tiny benchmark showing improvements in standard ML models for anomaly detection, keyword spotting, visual wakewords, and image classification. Source: NXP
This is a critical enhancement in the RT700 over previous generations, as use cases for smart AI-enabled edge devices are multiplying rapidly. This can be seen in the growth of worldwide shipments of TinyML devices, that is, ML optimized to run on less powerful hardware, often at the edge. TinyML marks a large shift from the conventional view of AI hardware, where beefy datacenter GPUs handle data-intensive deep learning tasks and model training. The rise of edge computing shares the processing burden between the cloud and the edge device, allowing much lower latencies while also removing the bandwidth burden of constantly shuttling data to the cloud and back. This opens up many opportunities; however, it places a higher burden on smart data processing to optimize power management. The RT700 attempts to meet this demand with its integrated NPU while also easing the burden on developers by using common software languages for simpler programmability.
Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for nearly a decade. She holds a Bachelor’s degree in electrical engineering, and has published works in major EE journals.
Related Content
- Ultra-wideband tech gets a boost in capabilities
- The AI-centric microcontrollers eye the new era of edge computing
- Industrial IoT SOM pairs edge processing with wireless connectivity
- AI hardware acceleration needs careful requirements planning
- System-on-module enables machine learning at the edge
Long delay timer sans large capacitors
There are several applications at home or industry for long delay timers (ON or OFF delay). Time delays on the order of seconds can be generated using 555 timer circuits, provided the timing capacitor values do not exceed the limit specified by 555 datasheets. When the time delay needed goes to minutes and hours, these circuits will not help.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Figure 1’s circuit can generate time delays of seconds to hours without using high-value capacitors. This simple, inexpensive circuit is basically an oscillator followed by a divide-by-4096 counter and then a divide-by-10 counter. The equation below gives the time delay in seconds. The circuit draws very little current and is therefore suitable for battery-powered gadgets.
Time delay = (4096 × 10) / F
Where F is the U1 (555) oscillator frequency in Hz.
Figure 1 The above circuit produces an OFF delay of 30 minutes. The switch SW1 can be thrown on the other side at starting to get a 30 minute ON delay. The capacitor C2 can be changed to get various timings. The required load can be connected in place of LED D1.
The circuit in Figure 1 generates a time delay of 25 minutes for the component values selected. In one position of switch SW1, an “ON” delay is generated; in the other position, an “OFF” delay is generated. The load can be connected between Q1 and the power supply (across LED + R6), which can be switched “ON” or “OFF” after the time delay. U1 (555) is connected as an astable multivibrator whose output is fed to U3 (4020 counter); its Q12 output is fed to U4 (4017 counter and decoder). U4’s Q9 output goes “HIGH” (up until this point, it was “LOW”) after receiving 10 cycles from Q12 of U3. It is inverted by U2C and fed to U2A. This “LOW” output inhibits pulses from reaching U3, causing the counters to stop after the required time delay. The Q9 output of U4 and the output of U2C are connected to switch SW1 to select the “ON” delay or “OFF” delay mode. In “ON” delay mode, the load/LED gets energized after the time delay. In “OFF” delay mode, the load/LED is energized as soon as the timer is switched on and gets de-energized after the time delay.
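As a quick sanity check on the timing equation, the snippet below combines the standard 555 astable approximation with the 4096 × 10 counter chain. The RA, RB, and C2 values are illustrative assumptions, not the values used in Figure 1.

```python
# Sketch of the timing math: the 555 runs as an astable oscillator and the
# 4020 (divide by 4096) plus 4017 (divide by 10) counters stretch its period
# into the final delay. The R/C values below are illustrative, not the ones
# in Figure 1.
RA = 10e3        # ohms, 555 pin 7 to Vcc (assumed)
RB = 100e3       # ohms, 555 pin 7 to pin 6 (assumed)
C2 = 0.22e-6     # farads, timing capacitor (assumed)

f_555 = 1.44 / ((RA + 2 * RB) * C2)     # standard 555 astable approximation
delay_s = (4096 * 10) / f_555           # equation from the article
print(f"555 frequency: {f_555:.1f} Hz -> delay {delay_s/60:.1f} minutes")
```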
Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.
Related Content
- Design low-duty-cycle timer circuits
- 555 timer draws zero off current
- Inductor-based astable 555 timer circuit
- RLD-based astable 555 timer circuit
- A long pulse detector circuit
- Single phase mains cycle skipping controller sans harmonics
- Microcontroller’s single I/O-port line drives a bar-graph display
Overcoming challenges in dynamic electronics design landscape
The electronics engineering landscape has experienced significant changes in the past half-decade, driven by technological advancements and global trends and disruptions. Significant limitations faced by electronics engineers and organizations include talent shortages, shortened product lifecycles, and a shift toward unpredictability in global trade and supply chains.
These macro-challenges impact the readiness and effectiveness of development teams in addressing complex electronic system requirements. To overcome these challenges, next-generation electronic design solutions must possess key components such as intuitiveness, artificial intelligence (AI) infusion, cloud connectivity, integration, and security.
By prioritizing usability, leveraging AI capabilities, embracing cloud connectivity, adopting an integrated approach, and ensuring robust security measures, engineers can navigate the dynamic landscape more effectively.
What modern electronics engineers want
In examining the evolution of electronic systems design and the role of electronic design automation (EDA) in meeting these needs, the focus has traditionally been on delivering design tools and solutions that address critical requirements such as increasing product complexity, cost management, and meeting tight schedules.
As design activities became more globalized in the 1990s, the need for automation and solutions to support geographically dispersed team collaboration and integrated verification became increasingly vital.
Over the past decade, there has been an expanded focus on managing the growing complexity of systems. This has necessitated support for multi-board development and collaboration across various engineering disciplines, with integration into product lifecycle management (PLM) systems for effective data management.
Figure 1 The above image provides a sneak peek of the evolution of electronics system design solutions. Source: Siemens EDA
However, the current reality is that complexity is now outpacing the capacity of organizations to effectively meet the challenges posed by modern electronic systems. As readers of this article, you are already well-informed about the reasons behind the increasing complexity of electronic systems.
Factors such as the convergence of electronics and software-driven products, the need for higher processing speeds, advanced IC packaging devices, edge-connectivity, and higher density are all examples of contributing factors.
This article will focus on macro-challenges that significantly impact engineering and organizational capacity. These macro challenges extend beyond specific design complexities and address broader issues that affect an organization’s overall effectiveness in tackling modern electronic system design.
The big picture: Macro challenges
As engineers, we often prioritize technology challenges, but it is crucial to recognize that these challenges are intensified by the macro realities of the global environment we operate in. Nowadays, we are tasked with designing extraordinarily complex products with a limited number of engineers, while also ensuring that these products stand out in a competitive market.
Moreover, we must navigate an environment of growing unpredictability, where the assumptions of a globalized supply chain system are no longer dependable and are susceptible to volatility. Engineers and their organizations are expected to “build the plane while flying it.”
One of the major challenges facing electronics development teams is the critical impact of long-anticipated engineering talent shortages on organizational readiness to meet market demands. Projections indicate that at least one-third of all engineering roles will remain unfilled due to an insufficient talent pipeline throughout this decade.
As a result, today’s electronics engineers are shouldering additional responsibilities, ranging from layout to high-speed simulation. Industry executives frequently express concerns about recruitment and workforce retention, recognizing it as a high-priority issue. The ongoing talent shortages and intense competition for skilled professionals will continue to significantly affect an organization’s ability to maintain competitiveness.
Figure 2 Workforce shortages will put acute stress on development organizations. Source: Boston Consulting Group
Another challenge arises from the fact that lifecycles of electronic products have significantly shortened. Factors such as innovation, connectivity, emerging economies, mass urbanization, and an abundance of consumer options are driving faster product replacement and upgrade cycles. In the past, consumers were content with maintaining their goods and services for several years. However, the landscape has shifted dramatically.
While trends in consumer markets are well known, it is important to acknowledge that this phenomenon is also present in the B2B space. A notable example is the rapid growth of the Internet constellation market, which did not exist just a decade ago. This market, along with other emerging services, has placed increased demands on electronics development teams.
To thrive in this dynamic environment, product differentiation has become essential. The ability to offer unique features and capabilities that set products apart from those of competitors is crucial for motivating the adoption of new products. Consequently, electronics development teams face even greater pressure to continually innovate and deliver products within tight timelines that are not only technologically advanced but also meet the market demand of the day.
Figure 3 Electronics innovation is driving down the lifecycle availability of products. Source: Pew Research
The last major challenge having significant impact on product development that design engineers should address is the shift toward unpredictability in the global electronics ecosystem. Unpredictability is the new normal, and there is no end in sight, requiring resilience across organizations.
Since the late 1980s, and particularly in the 1990s, we witnessed the development of a globalized supply chain that facilitated a “design-anywhere, build-anywhere” system. This system heavily relied on global cooperation fostered through trade agreements. But trade conflicts have become prominent over time.
Additionally, governments are now making significant investments to develop domestic capabilities, such as the US Department of Commerce’s CHIPS Act, which aims to ensure a robust American-based design and manufacturing capacity for semiconductors.
Contributing to this scenario, increasing regulations for sustainability in many countries and regions mean that to remain compatible and competitive, development teams from “across the border” must contend with new and often onerous requirements. The recent pandemic further exposed critical vulnerabilities, as supply chain shortages exposed the weak links of globalized networks.
In recent years, organizations like Siemens have emphasized the importance of designing for resilience, particularly to ride out supply chain volatility. However, resilience extends beyond the supply chain and applies to all aspects of electronics development. Organizational resilience is the ability to adapt and thrive in the face of constant change. Given the current landscape of unpredictability, the need for strategic resilience has become even more critical.
To navigate this terrain successfully, development teams must prioritize strategic planning, flexibility, and the ability to quickly adapt to unforeseen circumstances. They need to anticipate potential disruptions and build resilience into their processes, systems, and partnerships.
Road signs to successful system design
To thrive in this volatile environment, today’s electronics engineers require a next-generation electronics system design solution. Five components are essential for delivering such a solution.
Intuitive
In the past, engineers and their organizations operated with a focus on specialization, where each individual or team had a specific area of expertise. However, due to talent shortages, engineers are now taking on additional tasks and expanding their skillset.
The traditional approach to EDA tools prioritized delivering specialized tools for specific tasks, often minimizing user-friendliness. But times have changed, and there is now a growing demand for highly intuitive tools. To address these and other workforce changes, it’s critical to ensure that engineers can quickly become productive with minimal learning curves, especially for tasks they do infrequently.
The ability to achieve productivity quickly is crucial, and engineers require tools that allow them to execute operations effectively and work in an environment that is logical and easy to navigate. By prioritizing usability, these tools enable engineers to work more efficiently, work with greater confidence, and increase their satisfaction.
AI-infused
AI has emerged as a powerful tool that can bridge the gap between the complexity of engineering tasks, talent shortages, and the rapid acceleration of design complexity. Rather than replacing engineers, AI is designed to enhance their capabilities by enabling intelligent human-machine interaction, providing on-demand assistance through deep learning, and offering surrogate modeling for comprehensive simulation and analysis. Just as AI has become prevalent in our digital experiences, its integration into next-generation engineering solutions is crucial to empower and extend the capabilities of engineers.
The potential for AI applications in electronics system design is profound. AI can provide in-design assistance, predictive engineering capabilities, and the ability to perform space exploration analysis. It can also extract and utilize actionable data from component suppliers, facilitating more efficient decision-making.
By leveraging AI, engineers can streamline their workflows, gain valuable insights, and optimize their designs for improved performance and efficiency. The integration of AI into electronics system design holds immense promise for advancing the field and pushing the boundaries of what is possible.
Cloud-connected
In today’s electronics system landscape, collaboration across the value chain is essential. From component suppliers to design service providers and manufacturing contractors, purposeful collaboration is crucial for maximizing development opportunities, staying synchronized, reacting to supply chain changes, and evaluating alternatives.
Cloud connectivity offers a wide range of potential services that can benefit electronics system design collaboration and more. These include manufacturing analysis, circuit exploration, component research, and scaling services, such as high-end simulation, which require powerful compute resources.
The ability to access services and resources in the cloud fosters agility, enabling engineers to rapidly adapt to changing requirements and leverage specialized expertise as needed. It also facilitates seamless collaboration, as multiple stakeholders can work on the same project simultaneously, regardless of their physical location.
Integrated
An integrated and multidisciplinary approach is essential for maximizing efficiency and productivity. This approach eliminates silos and fosters collaboration throughout the development process.
At the core of digital transformation initiatives lies the concept of digital threads, which, by definition, embody integration. These threads enable the seamless flow of data and information across various stages of processes, systems, and organizations. Examples of these threads include architectural, component lifecycle, electromechanical, verification, and manufacturing data.
By collecting, integrating, and managing data across various stages of a product’s lifecycle, digital threads provide a comprehensive view of the product. This, in turn, enables informed decision-making, fosters collaboration, and optimizes designs.
To enable this integration, a next-generation solution must include an electronics design data management environment that supports critical domain-specific data. This environment should seamlessly integrate with PLM systems for requirements management, digital mock-ups, configuration management, change management, and bill of materials management.
By embracing this integrated and multi-disciplinary approach, organizations can leverage digital threads to enhance their electronic systems design processes and be better positioned to meet their digital transformation goals.
Secure
Security is a critical concern across the electronics industry, with a focus on adhering to government regulations and protecting organizational IP from cybersecurity risks. As design activity becomes more connected in the cloud, controlling access to data in specific locations and at specific times becomes increasingly vital.
A next-generation electronics system design solution must prioritize security as a core principle. This includes implementing safeguards for IP protection and enforcing data-access restrictions. Additionally, it’s important to ensure that cloud service providers adhere to the strictest security protocols when you partner with them.
Figure 4 The next-generation system design solutions must prioritize core principles such as AI, cloud, and security. Source: Siemens EDA
Toward a brighter future
The challenges faced by electronics engineers and organizations today are significantly different from what they were just a few years ago. Recognizing this changing global landscape, Siemens understands that our customers’ challenges have accelerated since the beginning of this decade.
Therefore, we have undertaken the development of a next-generation electronics system design solution that not only addresses current needs but also anticipates future challenges. Set to launch in the second half of 2024, this solution has been developed following extensive customer testing and validation.
Our aim is to empower engineers, optimize workflows, foster collaboration, and enhance overall product development efficiency while increasing user satisfaction.
AJ Incorvaia is senior VP of electronic board systems at Siemens Digital Industries Software.
Related Content
- Cloud Computing for EDA
- AI features in EDA tools: Facts and fiction
- Altair eying a place in EDA’s shifting landscape
- Synopsys Explores Chip Industry Trends and AI in EDA
- To Conquer SoC Design Challenges, Change EDA Methods and Tools
Nano-batteries may enable mega possibilities
Bigger batteries are getting a lot of attention these days, where “bigger” is defined in terms of capacity, density, charging times, lifetime cycles, and other desirable attributes.
However, all this “big-battery” attention tends to obscure the significant but literally nearly invisible activity at the other end of the physical and energy scale with ever-smaller batteries. These could be used to power the electronics associated with microsensors, tiny actuators, and even nano-robots. If the batteries were small and light enough yet offered adequate capacity, they could power medical micro-implants or free those swarming robo-insects from tethers or the need for laser beams focused on their minuscule solar cells for transmitted power (interestingly, those configurations are known as “marionettes” because they are powered by an external source).
Creating such batteries is the project undertaken by an MIT-led multi-university research team. They have developed and fabricated a battery which is 0.1 millimeters long and 0.002 millimeters thick that can capture oxygen from air and use it to oxidize zinc, creating a current at a potential of up to 1 volt.
Their battery consists of a zinc electrode connected to a platinum electrode, embedded into a strip of a polymer called SU-8, a high-contrast, epoxy-based photoresist designed for micromachining and other microelectronic applications where a thick chemically and thermally stable image is desired. When these electrodes interact with oxygen molecules from the air, the zinc becomes oxidized and releases electrons that flow to the platinum electrode, creating a current.
To fabricate these batteries, they photolithographically patterned a microscale zinc/platinum/SU-8 system to generate the highest energy-density microbattery at the picoliter (10⁻¹² liter) scale, Figure 1.
Figure 1 The fabrication and release of Zn/Pt/SU8 picoliter Zn-air batteries. (a) Side view schematic of a Zn-air picoliter battery placed in a droplet of electrolyte. (b) Height profile and (c) optical micrograph of an open-circuit Zn-air picoliter battery after fabrication. Scale bar: 40 μm. From a to c, the SU-8 base has a side length of 100 μm. d) Image of a Si wafer with a 100 × 100 array of picoliter batteries. (e)(f)(g) (h) Optical micrographs of picoliter batteries at different stages of the fabrication, as indicated by the annotation. (i) Optical micrograph of picoliter battery arrays patterned for Cu etching. Scale bar: 200 μm. (j) Schematics of batteries with loads (memristors in this case) released into solution. (k) Image of a bottle of dispersion containing 100 μm batteries. (l) Optical micrographs of open circuit and short-circuited Zn-air picoliter batteries, both are 100 μm. (m) Central image: optical micrographs of picoliter batteries deposited onto a glass slide. Scale bar: 200 μm. Side images: optical micrographs of individual batteries that were facing down (left), and up (right). Scale bar: 50 μm. (n) Optical micrographs of short-circuited batteries with various sizes. Scale bar: 50 μm. (o) Optical micrographs of 20 μm batteries after releasing and re-depositing onto a glass slide. (The dust in the leftmost image was residual from the sacrificial substrate.) The rightmost image showed a 20 μm battery that was facing downward.
The device scavenges ambient or solution-dissolved oxygen for a zinc oxidation reaction, achieving an energy density ranging from 760 to 1070 watt-hours per liter at scales below 100 micrometers laterally and 2 micrometers in thickness. Similar to IC fabrication, the inherent “parallel” nature of photolithography allowed them to fabricate 10,000 devices per wafer.
Within a volume of only 2 picoliters each, these primary (non-rechargeable) microbatteries delivered open-circuit voltages of 1.05 ± 0.12 volts, with total energies ranging from 5.5 ± 0.3 to 7.7 ± 1.0 microjoules and a maximum power of nearly 2.7 nanowatts, Figure 2.
Figure 2 Performance summary and comparison. (a) Ragone plot of energy and power of individual batteries with 2 pL volume. The theoretical Gibbs free energy of the cell reaction is shown as the red dashed line. (b) Ragone plot of the average energy and power densities under 4 current densities. The error bars represent the standard deviation across multiple devices. The red squares are data of Li-MnO2 primary microbatteries from the literature. (c) Master plot of the energy density versus cell volume for various microbatteries reported in the literature (electrolyte volume excluded for all entries). This work is shown as a red asterisk.
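Those figures are self-consistent: dividing the reported total energies by the 2-picoliter cell volume reproduces the quoted 760 to 1070 Wh/L energy-density range. A minimal back-of-the-envelope check, using only numbers quoted above:

```python
# Sanity check: energy density of a 2-pL Zn-air microbattery from the
# reported total-energy range of 5.5 to 7.7 microjoules per cell.

volume_liters = 2e-12                # 2 picoliters
energies_joules = (5.5e-6, 7.7e-6)   # reported total energy range

for e in energies_joules:
    wh_per_liter = (e / 3600.0) / volume_liters  # 1 Wh = 3600 J
    print(f"{e * 1e6:.1f} uJ in 2 pL -> {wh_per_liter:.0f} Wh/L")

# Prints roughly 764 Wh/L and 1069 Wh/L, matching the reported
# 760 to 1070 Wh/L range.
```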
While this doesn’t sound like much energy or power—and it isn’t, clearly—it’s enough for the diverse applications with which they tested it, such as powering a micrometer-sized memristor circuit for providing access to nonvolatile memory. They also cycled power to drive the reversible bending of microscale bimorph actuators at 0.05 hertz for mechanical functions of colloidal robots, powered two distinct nanosensor types, and supplied a clock circuit. In this study, the researchers used wires to connect their battery to the external powered device, but they plan to build robots in which the battery is incorporated into a device, analogous to an integrated circuit.
I could go into details of what they have done, how they did it, and their tests and results, but that would be duplicative of their paper “High energy density picoliter-scale zinc-air microbatteries for colloidal robotics” published in Science Robotics; while that paper is unfortunately behind a paywall, an identical preprint is fortunately posted here.
For their next phase, the researchers are working on increasing the voltage of the battery, which may enable additional applications. The research was funded by the U.S. Army Research Office, the U.S. Department of Energy, the National Science Foundation, and a MathWorks Engineering Fellowship.
Will these microbatteries become meaningful in the real world? Do they provide adequate useful power with enough energy capacity for projects you might like to explore? Can you think of situations where you would use them? Could they lead to new types of powered devices that are so tiny that new applications become realistic? Or are they just another eye-catching, head-turning topic which is well-positioned to get more research grants?
Related content
- 3D microbatteries: A future option for ultralow-power applications?
- Hearing aids and batteries: a challenge beyond words and music
- Get ready for even smaller batteries
- Is this the smallest battery in widespread use?
The post Nano-batteries may enable mega possibilities appeared first on EDN.
Crypto modules gain the latest FIPS certification
ST’s STSAFE-TPM cryptographic modules for PCs, servers, and embedded systems are among the first to receive FIPS 140-3 certification. These Trusted Platform Modules (TPMs) protect sensitive data by securely managing cryptographic keys and operations, ensuring compliance with security and regulatory requirements for critical information systems.
FIPS 140-3 is the most recent Federal Information Processing Standard (FIPS) for cryptographic modules, superseding FIPS 140-2. It defines four security levels to address various applications and environments, covering secure design, implementation, and operation. FIPS 140-2 certificates expire in September 2026.
The newly certified TPMs include the ST33KTPM2X, ST33KTPM2XSPI, ST33KTPM2XI2C, ST33KTPM2I, and ST33KTPM2A. The ST33KTPM2I is qualified for long lifetime industrial systems, while the ST33KTPM2A leverages an AEC-Q100 qualified hardware platform required for automotive integration.
STSAFE-TPM devices comply with multiple security standards, including Trusted Computing Group TPM 2.0, Common Criteria EAL4+ (AVA_VAN.5), and FIPS 140-3 level 1 with physical security level 3. They provide cryptographic services—including ECDSA, ECDH (up to 384 bits), RSA (up to 4096 bits), AES (up to 256 bits), and SHA1, SHA2, and SHA3—all standardized by TCG and compatible with FIPS 140-3-certified software stacks.
ST also offers provisioning services to load device keys and certificates, speeding time to market and ensuring supply chain security.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Crypto modules gain the latest FIPS certification appeared first on EDN.
MathWorks improves MATLAB and SIMULINK
MathWorks’ MATLAB and SIMULINK Release 2024b simplifies development for wireless communication, control systems, and digital signal processing. This second of twice-yearly releases provides major updates to popular MATLAB and Simulink tools, as well as new features and bug fixes.
The major updates found in Release 2024b include:
- 5G Toolbox now supports 6G waveform generation and 5G signal quality assessments.
- DSP HDL Toolbox adds an interactive DSP HDL IP Designer app for configuring DSP algorithms and generating HDL code and verification components.
- Simulink Control Design offers the ability to design and implement nonlinear and data-driven control techniques, such as sliding mode and iterative learning control.
- System Composer allows users to edit subsetted views and define system behavior with activity and sequence diagrams.
In addition, a new hardware support package for Qualcomm’s Hexagon NPU, embedded in Snapdragon processors, leverages Simulink and model-based design to deploy production-quality C code across various Snapdragon platforms for DSP applications.
To learn more about what’s new in MATLAB and SIMULINK Release 2024b, click here.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post MathWorks improves MATLAB and SIMULINK appeared first on EDN.
SiC power modules elevate energy efficiency
Six 2300-V baseplate-less power modules from Wolfspeed boost energy efficiency in renewable energy, energy storage, and fast charging applications. These half-bridge modules, optimized for 1500-V DC bus systems, are built on advanced 200-mm SiC wafers.
The 2300-V power modules not only enhance system efficiency, but also reduce the need for passive components. According to the manufacturer, they provide 15% more voltage headroom than comparable SiC modules, improved dynamic performance with stable temperature characteristics, and a substantial reduction in EMI filter size. Wolfspeed also reports a 77% decrease in switching losses compared to IGBTs and a 2x to 3x reduction in switching losses for SiC devices used for 1500-V applications.
The modules support a two-level topology, simplifying design and reducing driver count compared to IGBT-based three-level systems. This building-block approach enables scalable power from kilowatts to megawatts and reduces the potential single points of failure in a two-level implementation.
Datasheets for the 2300-V SiC power modules are available here.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post SiC power modules elevate energy efficiency appeared first on EDN.
PON-X chipset enables FTTR deployments
Joining Semtech’s PON-X lineup are a combo chip and a burst-mode TIA, designed for 2.5G PON Fiber to the Room (FTTR) applications. FTTR is regarded as the next step in fixed broadband technology, gaining traction in both residential and business markets. As demand for higher speeds grows, Semtech’s FTTR chipset can be easily upgraded to 10G PON without recabling.
The GN25L81 integrates a 2.5-Gbps directly modulated laser (DML) driver and a dual-rate 2.5/1.25-Gbps burst-mode limiting amplifier into a single combo chip, suited for both FTTR and GPON optical line terminal (OLT) applications. The laser driver features dual-loop extinction ratio control and eye shaping.
Complementing the GN25L81, the GN25L42 is a single-channel, reset-less 2.5-Gbps burst-mode TIA that offers low-noise performance and sensitivity better than -30 dBm when used with a PIN photodiode. It also integrates a burst-mode received signal strength indicator (RSSI) output for cost-effective diagnostics of receiver input power.
The GN25L81 combo chip is in production and available in a QFN package. The GN25L42 burst-mode TIA is sampling now and supplied as bare die.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post PON-X chipset enables FTTR deployments appeared first on EDN.
Module delivers satellite and cellular comms
Swiss provider u-blox announced its first combined 3GPP-compliant terrestrial network (TN) and non-terrestrial network (NTN) IoT module. The SARA-S528NM10 module supports global coverage with accurate, low-power, and concurrent positioning.
Most satellite systems require proprietary hardware and software, locking users to a specific operator and forcing terminal replacement when switching providers. The u-blox device, based on global 3GPP standards, offers interoperability with multiple satellite providers, giving customers greater flexibility.
Powered by the UBX-S52 cellular/satellite chipset and M10 GNSS receiver, the module adheres to the 3GPP Release 17 NB-NTN specification. This standards-based approach ensures extended connectivity via LTE-M and NB-IoT on terrestrial cellular networks, as well as NB-IoT on GEO satellite constellations, with readiness for LEO satellites.
The SARA-S528NM10 module supports the two NTN bands introduced in 3GPP Release 17—n255 (L-band global) and n256 (S-band Europe)—as well as the n23 band (US). It is currently being certified by Skylo, a global NTN service provider, for its satellite network. This certification will ensure seamless connectivity with both cellular and Skylo satellite networks.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Module delivers satellite and cellular comms appeared first on EDN.
Not all oscilloscopes are made equal: Why ADC and low noise floor matter
In the world of engineering, precision is paramount. Whether it’s performing quality assurance on cutting-edge electronics or debugging complex systems, the accuracy of measurements can make or break a project. This is where the concept of vertical accuracy in oscilloscopes becomes crucial: it refers to how closely the voltage readings match the actual voltage of the signal being measured. Achieving high vertical accuracy depends on two factors: the number of analog-to-digital converter (ADC) bits and the noise floor of the oscilloscope.
The role of ADC bits
The horizontal axis of an oscilloscope represents the time base (seconds per division or s/div), while the vertical axis shows the voltage (volts per division or V/div). Vertical accuracy is about how accurately the oscilloscope displays the voltage of the signal, which is vital for visual representation and precise measurements. The closer the voltage reading on the oscilloscope screen is to the actual signal voltage, the higher the vertical accuracy.
To achieve the optimal reading, engineers need oscilloscopes with the highest number of ADC bits and the lowest noise floor. Higher ADC bits provide more vertical resolution, leading to more precise signal visualization, while a lower noise floor minimizes the oscilloscope’s impact on the signal. This combination ensures the oscilloscope provides the most accurate representation of the signal, minimizing any distortion or noise that could affect the measurements.
To look at this in more detail, an oscilloscope with an 8-bit ADC can encode an analog input into 256 unique levels of conversion (2⁸ = 256). Each additional bit doubles the number of levels of conversion. Therefore, 9 bits provide 512 levels (2⁹ = 512), 10 bits provide 1,024 levels (2¹⁰ = 1,024), and so on.
Oscilloscopes with a 14-bit ADC can encode the analog input into 16,384 levels (2¹⁴ = 16,384), which is 4x the resolution of an average 12-bit ADC oscilloscope and 64 times the resolution of an 8-bit ADC. This higher resolution allows the oscilloscope to capture finer details of the signal, providing a more accurate representation.
Now consider how this applies to an oscilloscope with a vertical setting of 100 mV per division and 8 vertical divisions. The oscilloscope’s full screen equals 800 mV (100 mV/div * 8 divisions). With an 8-bit ADC, the full screen (800 mV) is divided into 256 levels, resulting in a resolution of 3.125 mV per level. In comparison, a 14-bit ADC divides the same 800 mV into 16,384 levels, achieving a resolution of 48.8 µV per level. This significant increase in resolution allows engineers to detect and measure much smaller changes in the signal, as shown in Figure 1.
Figure 1 As the number of ADC bits increases, so does the number of levels of conversion. This results in a higher vertical resolution that enables engineers to measure much smaller changes in the signal. Source: Keysight
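Those per-level figures follow directly from dividing the full-screen voltage by the number of ADC codes. A minimal sketch of the arithmetic, using the 100 mV/div, 8-division example above (the helper function name is just illustrative):

```python
# Vertical resolution (smallest representable voltage step) for a given
# vertical setting and ADC bit depth.

def lsb_size(volts_per_div, divisions, adc_bits):
    """Full-screen voltage divided by the number of ADC levels."""
    full_scale = volts_per_div * divisions
    return full_scale / (2 ** adc_bits)

# 100 mV/div with 8 vertical divisions -> 800 mV full screen
for bits in (8, 10, 12, 14):
    step = lsb_size(0.1, 8, bits)
    print(f"{bits:>2}-bit ADC: {2 ** bits:>6} levels, {step * 1e6:8.1f} uV per level")

# 8-bit  ->   256 levels, ~3125 uV (3.125 mV) per level
# 14-bit -> 16384 levels,  ~48.8 uV per level
```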
The importance of a low noise floor
While a high number of ADC bits is essential for vertical accuracy, it is not the only factor. The noise floor of the oscilloscope also plays a critical role. This refers to the intrinsic noise generated by the oscilloscope itself, which can interfere with the signal being measured, leading to inaccurate readings.
All electronic devices, including oscilloscopes, generate some level of noise. However, the goal is to minimize this as much as possible. A lower noise floor means that the oscilloscope has less impact on the signal, resulting in more accurate measurements. Furthermore, you won’t be able to see signal detail smaller than the noise of the oscilloscope. This is especially important when measuring very small voltages, where even a small amount of noise can significantly distort the readings.
For example, Figure 2 shows an oscilloscope measuring a 53 µV signal. At 2 mV/div, this oscilloscope has a noise floor of less than 50 µVRMS. Using this oscilloscope, you can capture the very small 53 µV signal because the noise floor is low enough. This signal would be lost in the noise floor of other general-purpose oscilloscopes, which tend to exceed 100 µVRMS.
Figure 2 An oscilloscope with a noise floor of <50 µVRMS captures a small 53 µV signal that would be lost in the noise floor of other general-purpose oscilloscopes. Source: Keysight
Combining high ADC bits and low noise floor
The combination of a high number of ADC bits and a low noise floor results in the highest vertical accuracy. This ensures that the oscilloscope provides the most accurate representation of the signal, allowing engineers to make precise measurements and avoid costly errors.
For instance, an oscilloscope featuring a 14-bit ADC, a noise floor of less than 50 µVRMS at 2 mV/div, and 1 GHz of bandwidth on a 50-Ω input would provide exceptional vertical accuracy, enabling engineers to detect even the smallest changes in a signal. This difference directly affects an engineer’s ability to gain insight and to debug and characterize designs. Conversely, inaccurate oscilloscope results add risk to development cycle times, production quality, and component selection. Engineers need tools and technology they can rely on for the best possible insight and accuracy.
Achieving a high vertical accuracy
It’s critical to recognize that not all oscilloscopes are made equal. Engineers should opt for the highest ADC bit count, combined with a low noise floor, to achieve the highest vertical accuracy. This combination ensures the oscilloscope accurately represents the signal, minimizing any distortion or noise that could affect the measurements. High vertical accuracy is essential for precise measurements, reducing errors, and saving time and resources. By investing in oscilloscopes with high vertical accuracy, engineers can trust their measurements, leading to more efficient debugging.
Michelle Tate is currently a Product Marketing Manager at Keysight Technologies for InfiniiVision Oscilloscopes. She previously worked in the semiconductor industry with Texas Instruments’ wireless connectivity and brushless-DC motor drivers and received her Bachelor of Electrical Engineering from The University of Texas at Austin.
Related Content
- Maximize the accuracy of your oscilloscope measurements
- Understanding FFT vertical scaling
- How to make better measurements with your oscilloscope or digitizer
- Oscilloscope cursors complement other measurement tools
- FFTs and oscilloscopes: A practical guide
The post Not all oscilloscopes are made equal: Why ADC and low noise floor matter appeared first on EDN.
Electric vehicles, my perspective
A great deal of engineering effort and a great deal of political effort are being put forth to bring electric vehicles (EVs) into the modern world. EV advantages are asserted and re-asserted and re-re-asserted all over mass media, but I myself have grave doubts about those assertions. Frankly, I see many negative attributes of EVs, some of which have been written up as “debunked”, but I do not agree with those authors. I hold that there are real issues involved about which I’ve read and still do read from time to time.
Please consider the following and where shown, a related link:
- Zero CO2 emissions are claimed as a desirable EV trait, but the electrical energy EVs consume often comes from power plants that burn fossil fuels and put out CO2 emissions anyway.
- EVs are extremely heavy, which leads to faster tire wear. Roadway accumulations of tire dust from EVs are greater than from conventional vehicles and are a worse source of particulate air pollution.
- The acceleration and deceleration traits of EVs differ from those of conventional vehicles, and those movement traits have been reported to induce car sickness in vehicle occupants.
- EV batteries have finite service lives and must eventually be replaced at costs that can run to multiple thousands of dollars. Some buyers of used EVs have discovered, to their horror, that their EV battery was near end-of-life and failed shortly after purchase. Also, disposal issues for old EV batteries (landfill, recycling) do not yet appear to have been successfully addressed.
- EVs can be costly to repair. There are several reasons for high repair costs, as cited in this article. For example, to cut manufacturing costs, some EVs are assembled with a single-piece steel body. In such cars, one cannot simply replace a damaged fender; even bodywork from a minor “fender bender” can require metal repair work costing thousands of dollars.
- EV charging stations are vulnerable to vandalism. Charging cables are reputed (true or not) to contain enough copper to make cutting and stealing those cables profitable, thus leaving some charging stations inoperable and leaving some drivers stranded.
- EV batteries have erupted in uncontrollable (rapid and intense) fires, sometimes even in parked vehicles. Such fires have been observed to re-ignite even when they appear to have been quelled. Such fires tend to erupt with great rapidity thus inhibiting vehicle escape and making occupant survival less likely. Although the likelihood of an EV fire is said to be less than the likelihood of a conventional car fire, the consequence of an EV fire when it does occur seems to be very much worse.
To me, the battery and battery fire issues are especially troubling. As shown in the screenshot from my cell phone below (Figure 1), in just one afternoon I came across all of these headlines and one particularly distressing image (Figure 2).
Figure 1 A screenshot of headlines found in a single afternoon related to EVs.
Figure 2 Screenshot image of an EV battery car fire.
The underlying truth is that present day EV and EV battery technologies have many shortcomings. While each of those headlines suggests that great efforts are being made toward overcoming those shortcomings, we ain’t there yet.
Maybe if I someday become personally convinced that success has been achieved and all of the above issues have been resolved, if I become convinced that my family will not be riding in danger of immolation, I might consider the purchase of an EV, but under present day circumstances, ab-so-lute-ly not.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Hurricane Ian and EV fires
- Lithium-ion battery fires: 7 solutions for improved safety
- Battery monitors improve safety in EVs and HEVs
- Who killed the electric car— the patent system, thats who
The post Electric vehicles, my perspective appeared first on EDN.
New design frontiers in BMS hardware and software
Battery management system (BMS) hardware and software continue to evolve as electric vehicles (EVs) transition to 800-V Li-ion battery systems comprising around 200 individual cells connected in series. Cell measurement accuracy and lifetime design robustness enhance BMS performance to maximize the usable capacity and safety of EV batteries and other energy storage systems.
BMS—essential for managing safe and healthy battery usage—employs battery-related data such as current, voltage, and temperature to ensure optimal performance. Yole Intelligence estimates that the BMS market is poised to surge from US$5 billion in 2022 to almost US$12 billion in 2028.
Figure 1 New BMS designs increasingly incorporate system-level semiconductor solutions while adding software-centric features and capabilities. Source: NXP
Below is a sneak peek at some notable developments in BMS hardware and software, demonstrating how BMS solutions are advancing to help extend a vehicle’s battery life.
Battery junction box
The new battery management ICs increasingly aim to offer system-level solutions to more accurately perform voltage measurements for state-of-charge (SOC) and state-of-health (SOH) calculations. Take the case of NXP’s MC33777 battery management IC, which integrates sense, think and act capabilities on a single chip.
While conventional pack-level monitoring solutions require multiple discrete components, external actuators and processing support, this new BMS chip integrates everything needed to monitor a battery pack and react quickly to safety-critical events into a single device. NXP plans to launch this BMS chip at Electronica 2024.
The MC33777, which NXP calls a battery junction box IC, integrates critical pack-level functions into a single device, reducing design complexity, qualification and software development effort, and cost for OEMs and Tier 1s. NXP claims the MC33777 chip reduces the component count by up to 80%.
Figure 2 MC33777 claims to be the first BMS chip to integrate critical pack-level functions into a single IC. Source: NXP
The battery pack monitoring IC aims to better protect high-voltage batteries from overcurrent by constantly monitoring the battery current and slope every eight microseconds. According to NXP, MC33777 detects and reacts to a wide matrix of configurable events up to 10 times faster than conventional ICs without waiting for specific current thresholds to be exceeded.
The faster reaction times help to provide additional safety capabilities, like reducing the risk of electric shocks to passengers in case of a crash. Then, there is fuse-emulation capability, which removes expensive and low-reliability melting fuses from the system. That, in turn, leads to significant cost savings for OEMs and Tier 1s and enhances safety for the vehicle occupants.
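NXP has not published the detection algorithm itself, but the general idea of reacting to both current magnitude and its rate of change on a fast, fixed sampling tick can be sketched as follows. The thresholds, names, and tick handling below are illustrative placeholders, not MC33777 parameters:

```python
# Illustrative sketch of pack-level overcurrent supervision that reacts to
# both current magnitude and current slope on a fixed sampling tick.
# All thresholds are placeholders, not MC33777 register values.

TICK_S = 8e-6              # sampling period (the article cites 8 microseconds)
I_TRIP_A = 400.0           # hard overcurrent threshold (illustrative)
SLOPE_TRIP_A_PER_S = 2e6   # di/dt threshold (illustrative)

def supervise(samples_a):
    """Return the sample index at which a fault is flagged, or None."""
    prev = samples_a[0]
    for n, i_now in enumerate(samples_a[1:], start=1):
        slope = (i_now - prev) / TICK_S
        # React to a fast-rising current before the hard threshold is
        # crossed -- the point of slope-based detection.
        if i_now > I_TRIP_A or slope > SLOPE_TRIP_A_PER_S:
            return n
        prev = i_now
    return None

# Example: a short-circuit-like ramp is caught on its slope at the very
# first tick, long before the current would reach the 400 A hard limit.
ramp = [100.0 + 30.0 * n for n in range(20)]  # amps, +30 A per 8-us tick
print("fault flagged at sample:", supervise(ramp))
```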
AI algorithms in BMS
On the software front, while BMS chips like MC33777 claim to reduce software development effort due to hardware implementations, EV battery outfits like LG Energy Solution are employing artificial intelligence (AI) algorithms to bolster BMS capabilities. They are doing this by partnering with chipmakers to improve BMS diagnostics.
Presently, BMS software mostly operates on dedicated hardware, and battery diagnostic technologies are developed based on virtual conditions rather than real-world battery data. Moreover, traditional BMS solutions cannot measure the exact temperature inside an individual battery cell in real time.
LG Energy Solution is teaming up with Qualcomm to bolster its battery diagnostic software with AI hardware and software solutions featured on Snapdragon Digital Chassis, Qualcomm’s BMS solution. The enhanced BMS solution could perform real-time battery health diagnosis by employing sophisticated battery algorithms while utilizing the computing power of semiconductor platforms like Snapdragon Digital Chassis.
Figure 3 ADI and LG Energy Solution are co-developing solutions for precisely measuring battery cells’ internal temperature.
LG Energy Solution is also joining hands with Analog Devices Inc. to co-develop algorithms that could precisely measure the internal temperatures of EV battery cells. The two companies will employ electrochemical impedance spectroscopy (EIS) technology to precisely estimate the internal temperature of individual battery cells without needing a separate temperature measuring device.
That will open the door to improving charging speeds. The EIS technology, primarily used to analyze defects in used batteries, has yet to be commercialized. The success of this joint effort could thrust this promising technology into the commercial mainstream.
Related Content
- How to design a battery management system
- Battery Management Systems for Electric Scooters
- The ins and outs of battery management system (BMS)
- Battery Management System for efficiency of sustainable mobility
- Optimizing Energy Storage: The Importance of Battery Management Systems
The post New design frontiers in BMS hardware and software appeared first on EDN.
Experimenting with a modern solar cell
For about two decades now, I’ve owned (and occasionally used) an early 24-element solar cell from SunPower. It’s kept my deep cycle secondary “coach” battery topped up through two generations of Volkswagen camper vans and a multitude of multi-day music festivals and other extended “disconnected” situations. Here’s what I wrote about it back in mid-2005:
We didn’t have access to AC power at our campsite this time, so we brought along the SunPower 24-element solar cell array for some more testing. The refrigerator in Bertha, our ’81 VW Adventurewagen, is an ancient and inefficient Norcold unit. When I exposed the SunPower panel to direct sunlight, it cranked out around 5 amps peak (per the display on the Morningstar regulator), which seemed to be enough to power the fridge and prevent Bertha’s dual-6V ‘house’ marine battery pack from draining (and even, when the current output was especially strong, to simultaneously charge up Bertha’s battery pack, and the one in my laptop, a bit). Direct sunlight, however, is an impermanent phenomenon; the SunPower array (as is the case with any solar cell) is very sensitive to position versus the sun’s orientation. Even if I only slightly tilted the solar cell towards or away from an optimum vertical angle, or adjusted its ‘compass heading’ by just a few degrees, its current output would increase or decrease by several amps.
Here’s what it looks like (after hosing it off to remove the accumulated dust, spider webs, etc.); it has protective frame-inclusive dimensions of 40 ¾” x 20 ¾” (so probably 40”x20” standalone) and weighs around 14.5 lbs.:
And here’s the accompanying Morningstar ProStar PS-15M solar charge controller, which adds another ~2.5 lbs. to the total kit payload:
The solar cell array-to-charge controller intermediary connection looks like this (there’s another mating pair on the other end of the cable that then goes to the PS-15M):
And the controller then tethers to the battery via an in-between connector pair like this one:
The setup has served me long and well, but it’s obviously quite bulky and heavy. Plus, it’s intended pretty much exclusively for bulk battery-charging purposes; there aren’t direct USB outputs for topping off a smartphone, tablet, laptop or other portable piece of equipment, for example. So, shortly after purchasing my solar recharge-compatible portable power station, I came across an $80.57 (inclusive of tax, shipping and insurance) promotion on a 100W (150W peak, supposedly) mobile solar array. I decided to pull the “purchase” trigger, among other motivations as a means of assessing how far solar technology has progressed in efficiency, cost, and other metrics over the past 20-or-so years.
Here’s what it looks like, first using a “stock” photo:
And now from a personal smartphone snap:
Fully unfolded, it’s a bit larger than its predecessor, at ~43” x 23”. But it’s quite a bit thinner:
And quite a bit lighter, too, at ~8 lbs. total. Regarding size, you’ve likely already noticed that it folds to a quarter of its fully unfurled length, where it then fits neatly in an included carry case:
Here’s the electronics module in its entirety, complete with both conventional and QC 3.0 enhanced-power USB-A outputs, along with a USB-C connection, and an 18-V DC output implemented using a recessed 5521 male plug (one of the many things I learned in researching this piece was that “55” refers to the 5.5 mm outer contact diameter while “21” is the 2.1 mm inner contact diameter, with both specs key for successful cabling mating …and of course, polarity is also key for connected-equipment compatibility purposes…):
And here’s what the backside looks like, accompanied by the remainder of the kit:
Although labeled as coming from a company called Foursun (the model F-SP100, to be precise), my research suggests that this solar cell was also (previously?) sold as the iMars SP-B100. Both companies, unsurprisingly, are based in China, reflective of that country’s increasing dominance (albeit with unclear-at-best profitability) as a supplier of solar cells and products based on them.
The F-SP100, as the earlier photo shows, comes with two DC connection cable options. The first, roughly 5 feet long (AWG unknown), has a female 5521 connector on one end and dual terminal clamps on the other, for direct connection to a battery. And as such, you might think that that’s how I connect it to my portable power station, which has battery terminals on its backside too:
But recharging the internal battery is not what those terminals are for! Instead, they’re intended to daisy-chain the portable power station to an external in-parallel battery for longer effective aggregate runtime. For solar recharging purposes, conversely, you leverage (referring again back to the most recent “stock” photo) the “Anderson” connectors (two polarity-appropriate-colored Anderson Powerpole PP15-45s, to be exact):
So how did I get from a male 5521 DC connector to a pair of Andersons? In-between them, in order to physically separate the solar cell from the portable power station (in the hopes of keeping the latter in the shade) is the other included-in-kit DC extension cable, this one also ~5 ft. long (although the specs claim only 1 m in length…mebbe I got a gratis bonus?), AWG again unknown, and with female 5521 connectors on both ends. One end goes to the solar cell (duh). The other mates to a nifty inexpensive 2 ft 14 AWG adapter cable I found on Amazon:
with a recessed male 5521 on one end and dual also-polarity-appropriate-colored Anderson connectors on the other…which plug into the portable power station. Voila!
I admittedly haven’t spent much time yet with the setup, but so far it looks like it’ll address my needs nicely. Yeah, the solar cell only outputs 100 W or so, but as you already know from my previous coverage, that’s all the juice the portable power station will accept as intake anyway (and I’m being generous in even saying that). Its portability is key considering that, as with its predecessor, I’ll need to regularly reorient it for proper sun orientation and peak consequent efficiency. And I still can’t wrap my head around the fact that it only cost me $80 and change at retail. Let me know your thoughts in the comments, please!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Post-Mortem: The High-Tech Holiday Weekend
- Cellular and Solar Musings
- Solar fan with dynamic battery backup for constant speed of operation
- Home solar-supply topologies illustrate tradeoff realities
- Solar-mains hybrid lamp
- Solar day-lamp with active MPPT and no ballast resistors
The post Experimenting with a modern solar cell appeared first on EDN.
Switch mode hotwire thermostat
In recent EDN design ideas, we’ve seen thermostat designs that meld the functions of sensor and heater into a single device: FET, BJT, or even a simple length of fine gauge copper wire. A virtue inherent in thermostat designs that use a transistor as combined sensor and heater is that independent of whether it’s operated in linear or pulse mode, high efficiency is virtually guaranteed.
This happens simply because when the power pass device and heater are combined, power dissipated isn’t wasted. Instead, by definition, it’s simply more heat. Result: near 100% efficiency is inevitable! Sadly, life isn’t so simple for a hotwire thermostat. While it too melds sensor and heater, they remain separate from the pass device. The power the pass device dissipates when operating in linear mode therefore contributes nothing to heating. It’s totally wasted, thus eroding efficiency. The potential for avoiding this inefficiency makes switch mode an interesting possibility.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Figure 1 shows a design idea that achieves it.
Figure 1 A switch mode thermostat efficiently heats melded copper wire sensor/heater.
Figure 1 shares much in common with a linear sibling: “Hotwire thermostat: Using fine copper wire as integrated sensor and heater for temperature control” whose schematic is found in Figure 2.
Figure 2 Linear mode hot wire thermostat that uses the tempco and I2R heating of 40 AWG copper wire as a melded sensor/heater.
Their respective interfaces with a copper wire melded heater/sensor are essentially identical. Where they differ is the way op-amp A1a controls Q1.
In Figure 2, temperature-dependent voltage differences between R1 and R5+R6 are linearly amplified by A1a and applied to Q1’s gate to force hotwire heating to match the setpoint dialed in on R5. The result is good temperature control, but also up to 10 W of dissipation in Q1.
In Figure 1, by contrast, positive feedback around A1a via R7 forces the amplifier to latch Q1 fully ON or OFF in response to the same error signals. This simple difference improves heating efficiency enough that, unlike Figure 2, Figure 1’s Q1 needs no heatsink and the overall circuit runs from only half the supply voltage.
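Functionally, A1a now acts as a comparator with hysteresis: Q1 latches fully on when the sensed wire temperature is below the setpoint band and fully off above it, with R7 setting the width of that band. The behavioral sketch below illustrates the resulting bang-bang action; the thermal constants, wire resistance, and dead band are illustrative assumptions, not values taken from Figure 1:

```python
# Behavioral sketch of a bang-bang hotwire thermostat: the copper wire's
# resistance tracks its temperature (copper tempco ~0.393%/degC), and a
# comparator with hysteresis drives the heater fully on or off.
# All constants are illustrative, not Figure 1 component values.

R20 = 10.0        # wire resistance at 20 degC, ohms
TEMPCO = 0.00393  # copper temperature coefficient, per degC
T_SET = 45.0      # setpoint, degC
HYST = 0.5        # hysteresis dead band, degC (set by R7 in the circuit)
P_ON = 5.0        # heater power with Q1 on, watts
R_TH = 10.0       # thermal resistance to ambient, degC per watt
TAU = 20.0        # thermal time constant, seconds
T_AMB = 20.0      # ambient temperature, degC
DT = 0.1          # simulation step, seconds

def wire_resistance(temp_c):
    return R20 * (1.0 + TEMPCO * (temp_c - 20.0))

temp, heater_on = T_AMB, False
for step in range(3000):
    # Comparator with hysteresis: latch on below the band, off above it.
    if temp < T_SET - HYST:
        heater_on = True
    elif temp > T_SET + HYST:
        heater_on = False
    # First-order thermal model: settle toward P*R_TH above ambient.
    p_in = P_ON if heater_on else 0.0
    temp += DT * ((p_in * R_TH - (temp - T_AMB)) / TAU)
    if step % 500 == 0:
        print(f"t={step * DT:5.1f}s  T={temp:5.1f} degC  "
              f"R={wire_resistance(temp):5.2f} ohm  heater={'ON' if heater_on else 'off'}")
```

With these numbers the wire temperature rises to the setpoint and then chatters within the ±0.5°C dead band, which is the behavior the R7 positive feedback buys at the cost of a small ripple.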
Heating efficiency depends on hotwire length, and ranges from 83% for 5 feet to 94% for 15. These numbers compare well to the linear version, which maxes out at about 50%.
Meanwhile, the calibration sequence remains the same for both switcher and linear:
- Before first power up, allow sensor/heater to fully equilibrate to room temperature.
- Set R4 and R5 fully counter-clockwise (CCW).
- Push and hold the CAL NC pushbutton.
- Turn power on.
- Slowly turn R4 clockwise until LED first flickers on.
- Release CAL.
Thanks for the suggestion, Konstantin Kim!
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Using a MOSFET as a thermostatic heater
- Hotwire thermostat: Using fine copper wire as integrated sensor and heater for temperature control
- Self-heated ∆Vbe transistor thermostat needs no calibration
- Take-back-half thermostat uses ∆Vbe transistor sensor
- Square-root function improves thermostat
- Fixing a fundamental flaw of self-sensing transistor thermostats
- ∆Vbe differential thermometer needs no calibration
The post Switch mode hotwire thermostat appeared first on EDN.
How microchannel liquid cooling trims electronic designs
Compact electronics present a unique challenge when it comes to cooling. Thermal management is a growing concern as chip functionality increases, yet smaller devices leave less room for conventional heatsinks. Recent breakthroughs in microchannel liquid cooling could change that.
Fluids are 50 to 1,000 times more efficient than air at transferring heat, but the necessary infrastructure has often been too large for small Internet of Things (IoT) gadgets. Now, however, advances in precision manufacturing have made liquid heatsinks smaller than ever: some cold plates measure just 2 cm x 2 cm while dissipating up to 1,000 watts per square centimeter.
Microchannel liquid cooling facilitates more compact device form factors. Source: Sinda Thermal Technology
Such performance comes from a combination of innovative materials and a network of microchannels—fluid channels just a few microns across—that enable small-scale liquid cooling. On-chip cooling takes this potential further. Manufacturers can etch microchannels directly onto the semiconductor substrate, bringing thermal fluids as close as possible to the chip. The design minimizes losses from radiated heat and enables more compact device form factors.
Research shows on-chip liquid heatsinks exhibit 50 times greater cooling performance than conventional microchannels. Such designs also use water as the fluid. Consequently, device manufacturers could see even greater improvements using on-chip microchannels with chemical coolants.
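To put the heat-flux figure cited above in perspective, the coolant flow needed to carry a given load follows from the basic energy balance Q = m_dot x c_p x dT (mass flow times specific heat times allowed temperature rise). A rough estimate, assuming water as the coolant and a 20°C coolant temperature rise (both assumptions, not figures from the article):

```python
# Rough energy-balance estimate: water flow needed to carry away the heat
# from a 2 cm x 2 cm cold plate at 1000 W/cm^2 (figures from the article),
# for an assumed 20 degC coolant temperature rise.

C_P_WATER = 4186.0        # specific heat of water, J/(kg*K)
RHO_WATER = 1000.0        # density of water, kg/m^3

flux_w_per_cm2 = 1000.0   # peak heat flux
area_cm2 = 2.0 * 2.0      # cold-plate area
delta_t_c = 20.0          # allowed coolant temperature rise (assumption)

heat_load_w = flux_w_per_cm2 * area_cm2          # worst-case 4000 W
m_dot_kg_s = heat_load_w / (C_P_WATER * delta_t_c)
flow_l_min = m_dot_kg_s / RHO_WATER * 1000.0 * 60.0

print(f"heat load: {heat_load_w:.0f} W")
print(f"required water flow: {flow_l_min:.1f} L/min")
# -> about 4000 W and roughly 2.9 L/min, a manageable flow rate that
#    hints at why liquid loops scale down so well.
```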
Stacked components
The advent of on-chip microchannel liquid cooling paves the way for other optimizations. Component stacking is among the most promising of these. Removing the need for bulkier cooling infrastructure makes it possible to stack components, instead of placing them next to each other, without risking excessive heat.
This packaging technique reduces signal latency to improve performance and enables even more compact circuit designs. Manufacturers could use it to their advantage to overcome conventional barriers related to dark silicon.
Future challenges and considerations
A few remaining obstacles may hinder progress for microchannel liquid cooling. Manufacturing costs and complexity are the most prevalent. As effective as these solutions are, etching microscopic channels into sensitive components is inherently risky and difficult. Moreover, doing so at scale may require factories to upgrade to newer micromachining equipment, which could impact the cost-effectiveness of new devices.
However, as this technology matures, its expenses will fall. In the meantime, device manufacturers can take matters into their own hands instead of looking for a semiconductor fab that offers such cooling systems. Studies have found it’s possible to achieve a 44.4% reduction in thermal resistance by etching channels into off-the-shelf consumer-grade chips.
As microchannels enable higher functionality at lower temperatures, the industry may eventually face another challenge. Once thermal constraints no longer hold chip performance back, power delivery might. So, manufacturers could come to the point of designing chips so powerful that powering them is no longer cost-effective.
Such a conflict is likely far away, and new energy delivery technologies could emerge to address it in the meantime. However, the possibility deserves attention. Electronics manufacturers should consider these long-term implications and seek potential solutions as they capitalize on microchannel liquid cooling.
As gadgets keep getting smaller, cooling techniques will have to evolve. Here, etching microchannels directly onto the substrate is a promising solution, especially when combined with component stacking. While some challenges remain, electronics manufacturers can gain a lot by considering these methods and learning to implement them.
Ellie Gabel is a freelance writer as well as an associate editor at Revolutionized.
Related Content
- AI Heats Up Data Center Cooling
- Heat Removal with Microchannel Heat Sinks
- IBM, GIT demo 3D die with microchannel cooling
- Dutch Liquid Cooling Startup ‘Turbocharges’ Gigabyte Servers
- Temperature-Monitoring Systems Optimize Cooling in Power Designs
The post How microchannel liquid cooling trims electronic designs appeared first on EDN.