EDN Network

Voice of the Engineer

Laser party lights

Fri, 11/08/2024 - 16:56

This is about a festive event accessory meant for lots of happy people with good cheer all around, and, in my opinion, a public safety hazard.

We were at a gala party one day where there were several hundred people. There were all kinds of food, there was music, and there was this rotating orb in the center of the room, which emitted decorative beams of light in constantly changing directions (Figure 1).

Figure 1 The party light at several different moments, emitting beams in several different directions.

Those beams of light were generated by moving lasers. They produced tightly confined light in whatever direction they were being aimed, just like the laser pointers you’ve undoubtedly seen being used in lecture settings.

I was not at ease with that (Figure 2).

Figure 2 A Google search of the potential dangers of a laser pointer.

I kept wondering: when one of those decorative light beams shone directly into someone’s eye, would that someone be in danger of visual injury? Might the same question be raised with respect to laser-based price-checking kiosks in stores (Macy’s or King Kullen, for example), or for cashiers at their price-scanning checkout stations?

Everyone at the party went home happy.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).


Investigating a vape device

Thu, 11/07/2024 - 16:54

The ever-increasing prevalence of lithium-based batteries in various shapes, sizes, and capacities is creating a so-called “virtuous circle”: lower unit costs and higher unit volumes encourage increasing usage (both in brand-new applications and in existing ones, the latter as a replacement for precursor battery technologies), which translates into even lower unit costs and higher unit volumes…and round and round it goes. Conceptually similarly, usage of e-cigarettes, aka “vape” devices, is rapidly growing, both by new and existing users of cigarettes, cigars, pipes, and chewing tobacco. The latter are often striving to wean themselves off these conventional “nicotine delivery platforms” and away from their well-documented health risks, but aren’t yet able or ready to completely “kick the habit” due to nicotine’s potent addictive characteristics (“vaping” risks aren’t necessarily nonexistent, of course; being newer, however, they’re to date less thoroughly studied and documented).

What’s this all got to do with electronics? “Vapes” are powered by batteries, predominantly lithium-based ones nowadays. Originally, the devices were disposable, with discard-and-replacement tied to when they ran out of oft (but not always) nicotine-laced, oft-flavored “juice” (which is heated, converting it into an inhalable aerosol) and translating into lots of perfectly good lithium batteries ending up in landfills (unless, that is, the hardware hacker community succeeds in intercepting and resurrecting them for reuse elsewhere first). Plus, the non-replaceable and inherently charge-“leaky” batteries were a retail shelf life issue, too.

More recent higher-end “vape” devices’ batteries are capable of being user-recharged, at least. This characteristic, in combination with higher capacity “juice” tanks, allows each device to be used longer than was possible previously. But ultimately, specifically in the absence of a different sort of hardware hacking which I’ll further explore in the coming paragraphs, they’re destined for discard too…which is how I obtained today’s teardown victim (a conventional non-rechargeable “vape” device is also on my teardown pile, if I can figure out how to safely crack it open). Behold the Geek Bar Pulse, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

One side is bland:

The other is also seemingly so:

at least until you flip the “on” switch at the bottom, at which time it turns into something reminiscent of an arcade video game (thankfully not accompanied by sounds):

The two-digit number at the top indicates that the battery is still a bit more than halfway charged. Its two-digit counterpart at the bottom however, reports that its “juice” tank is empty, therefore explaining why it was discarded and how it subsequently ended up in my hands (not exactly the result of “dumpster diving” on my part, but I did intercept it en route to the trash). To that latter point, and in one of those “in retrospect I shouldn’t have been surprised” moments, when researching the product prior to beginning my dissection, I came across numerous web pages, discussion group threads and videos about both it and alternatives:

with instructions on how to partially disassemble rechargeable “vape” devices, specifically to refill their “juice” tanks with comparatively inexpensive fluid and extend their usable life. Turns out, in fact, that this device’s manufacturer has even implemented a software “kill switch” to prevent such shenanigans, which the community has figured out how to circumvent by activating a hidden hardware switch.

Anyhoo, let’s conclude our series of overview shots with the top, containing the mouthpiece nozzle from which the “vape” aerosol emits:

and the bottom, encompassing the aforementioned power switch, along with the USB-C recharging connector:

That switch, you may have already noticed, is three-position. At one end is “off”. In the middle is normal “on” mode, indicated in part by a briefly visible green ring around the display:

And at the other end is “pulse” mode, which emits more aerosol at the tradeoffs of more quickly draining the battery and “juice” tank, and is differentiated by both a “rocket” symbol in the middle of the display and a briefly illuminated red ring around it:

By the way, the power-off display sequence is entertaining, too:

And now, let’s get inside this thing. No exposed screws, of course, but that transparent side panel seems to be a likely access candidate:

It wasn’t as easy as I thought it would be. But, thanks to a suggestion within the first video shown earlier (pop off the switch cover so that the entire internal assembly can then move forward), I made progress:

I finally got it off, complete with case scratches (and yes, a few minor curses) along the way:

Quick check: yep, still works!

Now to get those insides out. Again, my progress was initially stymied:

until I got the bright (?) idea of popping the mouthpiece off (again, kudos to the creator of that first video shown earlier for the to-do guidance):

That’s better (the tank is starting to come into view)…

Success!

Front view of the insides, which you’ve basically already seen:

Left side, with our first unobstructed view of the tank:

Back (and no, it wasn’t me who did that symbol scribble):

Right side:

Top, showing the aerosol exit port:

And bottom, again reminiscent of a prior perspective photo:

Next, let’s get that tank off:

One of those contacts is obviously, from the color, ground. I’m guessing that one of the others feeds the heating element (although it’s referred to on the manufacturer’s website as being a “dual mesh coil” design, I suspect that “pulse” mode just amps—pun intended—up the output versus actually switching on a second element) and the third routes to a moisture or other sensor to assess how “full” the “tank” is.

To clarify (or maybe not), let’s take the “tank” apart a bit more:

More (left, then right) side views of the remainder of the device, absent the tank:

And now let’s take a closer look at that rubber “foot”, complete with a sponge similar to the one earlier seen with the mouthpiece, that the tank formerly mated with:

Partway through, another check…does it still work?

Yep! Now continuing…

Next, let’s again use the metal “spudger”, this time to unclip the display cover from the chassis:

Note the ring of multicolor LEDs around the circumference of the display (which I’m guessing is OLED-fabricated: thoughts, readers?):

And now let’s strive to get the “guts” completely out of the chassis:

Still working?

Amazing! Let’s next remove the rest of the plastic covering for the three-position switch:

Bending back the little plastic tab at the bottom of each side was essential for further progress:

Mission accomplished!

A few perspectives on the no-longer-captive standalone “guts”:

It couldn’t still be working, after all this abuse, could it?

It could! Last, but not least, let’s get that taped-down battery out of the way and see if there’s anything interesting behind it:

That IC at the top of the PCB that does double-duty as the back of the display is the Arm Cortex-M0+- and flash memory-based Puya F030K28. I found a great writeup on the chip, which I commend to your attention, with the following title and subtitle:

The cheapest flash microcontroller you can buy is actually an Arm Cortex-M0+

Puya’s 10-cent PY32 series is complicating the RISC-V narrative and has me doubting I’ll ever reach for an 8-bit part again.

“Clickbait” headlines are often annoying. This one, conversely, is both spot-on and entertaining. And given the ~$20 retail price point and ultimately disposable fate for the device that the SoC powers, $0.10 in volume is a profitability necessity! That said, one nitpick: I’m not sure where Geek Bar came up with the “dual core” claim on its website (not to mention I’m amazed that a “vape” device supplier even promotes its product’s semiconductor attributes at all!).

And with that, one final check; does it still work?

This is one rugged design! Over to you for your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


Test solutions to confront silent data corruption in ICs

Thu, 11/07/2024 - 14:57

While semiconductor design engineers become more aware of silent data corruption (SDC) or silent data errors (SDE) caused by aging, environmental factors, and other issues, embedded test solutions are emerging to address this subtle but critical challenge. One such solution applies embedded deterministic test patterns in-system via industry-standard APB or AXI bus interfaces.

Siemens EDA’s in-system test controller—designed specifically to work with the company’s Tessent Streaming Scan Network (SSN) software—performs deterministic testing throughout the silicon lifecycle. Tessent In-System Test is built on the success of Siemens’ Tessent MissionMode technology and Tessent SSN software.

Figure 1 The Tessent In-System Test software with embedded on-chip in-system test controller (ISTC) enables the test and diagnosis of semiconductor chips throughout the silicon lifecycle. Source: Siemens EDA

Tessent In-System Test enables seamless integration of deterministic test patterns generated with Siemens’ Tessent TestKompress software. That allows chip designers to apply embedded deterministic test patterns generated using Tessent TestKompress with Tessent SSN directly to the in-system test controller.

The resulting deterministic test patterns are applied in-system to provide the highest test quality level within a pre-defined test window. They also offer the ability to change test content as devices mature or age through the silicon lifecycle.

Figure 2 Tessent In-System Test applies high-quality deterministic test patterns for in-system/in-field testing during the lifecycle of a chip. Source: Siemens EDA

These in-system tests with embedded deterministic patterns also support the reuse of existing test infrastructure. They allow IC designers to reuse existing IJTAG- and SSN-based patterns for in-system applications while improving overall chip planning and reducing test time.

“Tessent In-System Test technology allows us to reuse our extensive test infrastructure and patterns already utilized in our manufacturing tests for our data center fleet,” said Dan Trock, senior DFT manager at Amazon Web Services (AWS). “This enables high-quality in-field testing of our data centers. Continuous monitoring of silicon devices throughout their lifecycle helps to ensure AWS customers benefit from infrastructure and services of the highest quality and reliability.”

The availability of solutions like the Tessent In-System Test shows that silent data corruption in ICs is now on designers’ radar and that more solutions are likely to emerge to counter this issue caused by aging and environmental factors.


Negative time-constant and PWM program a versatile ADC front end

Wed, 11/06/2024 - 15:57

A variety of analog front-end functions typically assist ADCs to do their jobs. These include instrumentation amplifiers (INA), digitally programmable gain (DPG), and sample and holds (S&H). The circuit in Figure 1 is a bit atypical in merging all three of these functions into a single topology controlled by the timing from a single (PWM) logic signal.

Figure 1 Two generic chips and five passives make a versatile and unconventional ADC front end

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1’s differential INA style input starts off conventionally, consisting of tera-ohm impedance and picoamp bias CMOS followers U1a and U1b. The 916x family op-amps are pretty good RRIO devices for this job, with sub-mV input offset, 110 dB CMR, 11 MHz gain-bandwidth, 33 V/µs slew rate, and sub-microsecond settling time. They’re also inexpensive. Turning this into a high CMR differential input, however, is where the circuit starts to get unconventional. The ploy in play is the “flying capacitor”.

During the logic-0 interval of the PWM, through switches U2a and U2b both ends of capacitor C are driven by the unity-gain follower amplifiers with CMR limited only by the amplifier’s 110 dB = 300,000:1. Unlike a typical precision differential INA input, no critical resistor matching is involved. A minimum duration interval of a microsecond or two is adequate to accurately capture and settle to the input signal. When the PWM input transitions to logic-1, one end of C is grounded (via U2b) while the other becomes the now single-ended input to U1c (via U2a). Then things get even less conventional.

The connection established from U1c’s output back to C through R1 (and a U2 switch) creates positive feedback that causes the voltage captured on C to multiply exponentially with a (negative) time-constant of:

Tc = (R1 + (U2 on-resistance)) × C
   = (14.3 kΩ + 130 Ω) × 0.001 µF = 14.43 µs
   = 10 µs / ln(2)

Due to U1c’s gain of (R3 / R2) + 1 = 2, the current through R1 from Vc is:

IR1 = (Vc – 2Vc) / R1
    = -Vc / R1

Thus, R1 is made effectively negative, which makes the R1C time-constant negative. For any time T after the 0-to-1 transition of PWM, the familiar exponential decay of:

V(T) = V(0) e^(-T/RC)

becomes, with a negative R1:

V(T) = V(0) e^(-T/(-R1C)) = Vc(0) e^(T / 14.43 µs)
     = Vc(0) 2^(T / 10 µs)

Therefore, taking U1c’s gain of 2.00 into account:

Vout = Vc(0) 2^((T / 10 µs) + 1)

For example, if a 7-bit 1-MHz PWM is used, then each 1-µs increment in the duration of the logic-1 period equates to a gain increment of 2^0.1 = 1.072 = 0.60 dB. So, a PWM logic-1 count of 100 would create a gain of 2^((100 µs / 10 µs) + 1) = 2048 = 66.2 dB. Having 100 available programmable gain settings is a useful and unusual feature.
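
To make the gain-programming arithmetic concrete, here is a minimal C sketch (an illustration, not part of the original design) that maps a PWM logic-1 duration to the resulting gain and, conversely, computes the count needed for a desired gain. The 1-µs-per-count granularity assumes the 7-bit, 1-MHz PWM mentioned above, and the helper names are hypothetical.

#include <math.h>
#include <stdio.h>

/* Gain = 2^((T_high / 10 us) + 1), per the expression above. */
static double gain_from_high_time_us(double t_high_us)
{
    return pow(2.0, (t_high_us / 10.0) + 1.0);
}

/* Invert the relationship: T_high = 10 us * (log2(gain) - 1); 1 count = 1 us. */
static unsigned counts_for_gain(double desired_gain)
{
    double t_high_us = 10.0 * (log2(desired_gain) - 1.0);
    return (unsigned)(t_high_us + 0.5);
}

int main(void)
{
    for (unsigned counts = 0; counts <= 100; counts += 20) {
        double g = gain_from_high_time_us((double)counts);
        printf("PWM high %3u us -> gain %7.1f (%.1f dB)\n",
               counts, g, 20.0 * log10(g));
    }
    printf("Counts for a gain of 100: %u\n", counts_for_gain(100.0));
    return 0;
}

Running it reproduces the endpoints quoted above: zero counts gives the follower gain of 2, and 100 counts gives 2048 (66.2 dB).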

Note that R1 and C should be precision, low-tempco types such as metal film and C0G so that the gain/time relationship will be accurate and stable. The 14.43-µs (11-kHz) roll-off of R1C interacts with the 11-MHz gain-bandwidth of U1c to provide ~60 dB of closed-loop gain. This is adequate for 10-bit acquisition accuracy.

During this PWM = 1 exponential gain phase, the U2c switch causes the output capacitor and U1d to track Vc, which is captured and held as the input to the connected ADC during the subsequent PWM = 0 phase, while the front end of the circuit is acquiring the next sample.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.


Smart TV power-ON aid

Tue, 11/05/2024 - 16:59

Editor’s note: This design idea offers a solution to the common issue of a TV automatically restarting after a power outage. Oftentimes, power may be restored when the consumer is not present, and the TV is unknowingly left running. This could be due to several reasons, including the HDMI-CEC settings on the TV or simply an automatic-restore factory setting. While it is a useful setting to have enabled, it would be helpful to ensure the TV will not be automatically turned on when power is restored after a power outage.

Introduction

Present-day TV designers take ample care in power supply design such that the TV comes “ON” automatically after a power shutdown and resumption, if the TV was “ON” before the shutdown. If the TV was “OFF” before the power shutdown, it continues to be “OFF”, even after power resumption. This is an excellent feature; one can continue to watch TV after a brief shutdown and resumption without any manual intervention.

At times, this can lead to certain inconveniences in case of long power shutdowns. The last time this happened to us, we were watching TV, and the power suddenly went off. At some point during this power outage, we had to leave and came back home after two days. The power may have resumed a few hours after we left. However, as per its design, the TV turned “ON” automatically and was “ON” for two days. This caused discomfort to neighbors until we returned and switched the TV off. What a disturbance to others!

Wow the engineering world with your unique design: Design Ideas Submission Guide

TV power-ON aid

I designed the “TV Power-ON aid” gadget in Figure 1 to overcome this problem. Mains power is fed to this gadget, and power is fed to the TV through it. Once the SW1 switch/button is pushed, the TV receives power for as long as mains power is present. If power goes “OFF” and resumes within, say, a half hour, the TV will receive power from the mains without any manual intervention, like the original design. If the power resumes after a half hour, when it is likely you may not be near the TV, the power will not be extended to the TV automatically. Instead, you will have to push button SW1 once to feed power to the TV. This gadget saves us from causing discomfort to the neighbors from an unattended TV blasting shows: a problem anybody can face during a long power outage when they are not present in the house.

Figure 1 TV power-ON aid circuit. Connect mains power to J1. Take power to TV from J2. Connect power supply pins of U2, U3, and U4 to V1. The grounds of these ICs must be connected to the minus side of the battery. These connections are not shown in the image.

Circuit description

The first time, you will have to press momentary push button SW1 once. Relay RL2 gets energized and its “NO” contact closes, shorting SW1. Hence, the relay continues to be “ON” and power is extended to TV.

When mains power goes off, the RL2 relay de-energizes. Through the “NC” contact of relay RL2, the battery (3x 1.5-V alkaline batteries) becomes connected to the OFF-delay timer circuit formed by U1 (555), U2 (4011), U3 (4020), and U4 (4017). As soon as the battery gets connected, this circuit switches “ON” the relay RL1 through MOSFET Q1 (IRLZ44N). Its “NO” contact closes and shorts SW1.

The timer circuit holds this relay for approximately a half hour (the time can be adjusted by suitable selection of C2). If power resumes within this half-hour period, the power is fed to the TV automatically, since SW1 is shorted by the RL1 contact. If the power resumes after a half hour, RL1 has been de-energized by the OFF-delay timer action, its contact across SW1 is open, and power is not extended to the TV. This is a safe condition. When you come back, you can push button SW1 to feed power to the TV. The RL1 coil voltage is 5 V, and the RL2 coil voltage is either 230 V AC or 110 V AC, as needed.

The U1 circuit works as an oscillator, and the U3 circuit works as a frequency divider. The divided frequency is counted by the U4 circuit. When the time delay reaches around 30 minutes, its Q9 output goes high. Hence, the U2C output goes “LO” and RL1 gets de-energized. Whenever power goes off, the timer circuit gets battery voltage through the “NC” contact of RL2. When power resumes, the battery is disconnected from the timer circuit, thus saving battery power. A lithium-ion battery and charger circuit can be added in place of the alkaline batteries, if desired. A rough, assumption-laden calculation of how the oscillator period and divider chain could add up to the ~30-minute delay appears below.
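
For readers who want to estimate the hold time, here is a rough C calculation. The 555 timing components, the 4020 tap (assumed to be the divide-by-16384 output), and the 4017 reaching Q9 after nine clocks are all assumptions chosen for illustration; the article itself only notes that C2 sets the delay.

#include <stdio.h>

int main(void)
{
    double RA = 10e3, RB = 82e3, C2 = 0.1e-6;     /* assumed 555 timing parts */
    double t555 = 0.693 * (RA + 2.0 * RB) * C2;   /* 555 astable period */
    double divider = 16384.0;                     /* 4020: 2^14, assumed tap */
    double q9_clocks = 9.0;                       /* 4017 Q9 after 9 clocks */
    double delay_s = t555 * divider * q9_clocks;

    printf("555 period: %.1f ms, total delay: %.1f minutes\n",
           t555 * 1e3, delay_s / 60.0);
    return 0;
}

With these assumed values the delay works out to roughly 30 minutes; scaling C2 up or down scales the delay proportionally, which matches the author’s note.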

Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.


Apple’s fall 2024 announcements: SoC and memory upgrade abundance

Mon, 11/04/2024 - 17:33

Two years ago, Apple skipped its typical second second-half-of-year event, focusing solely on a September 2022 unveil of new iPhones, smartwatches, and earbuds. Last year I thought the company might pull the same disappearing act, but it ended up rolling out the three-member M3 SoC family, along with an M3-based subset of its systems suite. And this year? Mid-September predictably brought us new iPhones, smartwatches, and earbuds. But as October’s end drew near with nothing but silence from Cupertino (tempting leaks from Russian vloggers aside), I wondered if this year would be a repeat of the prior or a reversion to 2022 form.

Turns out, October 2024 ended up being a conceptual repeat of 2023 after all…well, sort of. Apple waited until last Thursday (as I write these words on Halloween) to cryptically “tweet” (or is that “X”?) an “exciting week of announcements ahead, starting on Monday morning”. And indeed, the company once again unveiled new SoCs and systems (plus software updates) this year. But the news virtually (of course) dribbled out over three days this time, versus dropping at one big (online) event. That said, there were in-depth (albeit, as usual, with a dollop of hype) videos accompanying many of the releases. Without further ado, and in chronological order:

The iPad mini 7

The first unveil in the October 2024 new-product sequence actually happened two weeks ago, when Apple rolled out its latest-generation tiny tablet. That the iPad mini 7 would sooner-or-later appear wasn’t a surprise, although I suppose Apple could have flat-out killed the entire iPad mini line instead, as it’s already done with the iPhone mini. The iPad mini 6 (an example of which I own) is more than four years old at this point, as I’d mentioned back in May. And specifically, it’s based on the archaic A15 Bionic SoC and only embeds 4 GBytes of RAM, both of which are showstoppers to the company’s widespread Apple Intelligence support aspirations.

SoC update (to the A17 Pro) and memory update (to 8 GBytes, reflective of deep learning model storage requirements) aside, the iPad mini 7 pretty much mirrors its predecessor, although it does now also support the Apple Pencil, and the USB-C bandwidth has been boosted to 10 Gbps. Claimed improvements to the “jelly scrolling” display behavior seen by some iPad mini 6 users (truth be told, I never noticed it, although I mostly hold mine in landscape orientation) are muddled by iFixit’s teardown, which suggests the display controller location is unchanged.

And by the way, don’t be overly impressed with the “Pro” qualifier in the SoC’s moniker. That’s the only version of the A17 that’s been announced to date, after all. And even though it’s named the same as the SoC in the iPhone 15 Pros, it’s actually a defeatured variant, with one less (5, to be precise) GPU core, presumably for yield maximization reasons.

O/S updates

Speaking of Apple Intelligence, on Monday the company rolled out “.1” updates to all of its devices’ operating systems, among other things adding initial “baby step” AI enhancements. That said, European users can’t access them at the moment, presumably due to the European Union’s privacy and other concerns, which the company hopes to have resolved by next April. And for the rest of us, “.2” versions with even more enabled AI capabilities are scheduled for release this December.

A couple of specific notes on MacOS: first off, in addition to iterating its latest-generation MacOS 15 “Sequoia” O/S, Apple has followed longstanding extended-support tradition by also releasing patches for the most recent two prior-generation operating system major versions, MacOS 13 (“Ventura”) and MacOS 14 (“Sonoma”). And in my case, that’s a great thing, because it turns out I won’t be updating to “Sequoia” any time soon, at least on my internal storage capacity-constrained Mac mini. I’d been excited when I read that MacOS 15.1 betas were enabling the ability to selectively download and install App Store software either to internal storage (as before) or to an external drive (as exists in my particular setup situation).

But as it turns out, that ability is only enabled for App Store-sourced programs 1 GByte or larger in size, which is only relevant to one app I use (Skylum’s Luminar AI which, I learned in the process of writing this piece, has been superseded by Luminar Neo anyway). Plus, the MacOS “Sequoia” upgrade from “Sonoma” is ~12 GBytes, and Apple historically requires twice that available spare capacity before it allows an update attempt to proceed (right now I have not 25 GBytes, but only 7.5 GBytes, free on the internal SSD). And the Apple Intelligence capabilities aren’t relevant to Intel-based systems, anyway. So…nah, at least for now.

By the way, before proceeding with your reading of my piece, I encourage you to watch at least the second promo video above from Apple, followed by the perusal of an analysis (or, if you prefer, take-down) of it by the always hilarious (and, I might add, courageous) Paul Kafasis, co-founder and CEO of longstanding Apple developer Rogue Amoeba Software, whose excellent audio and other applications I’ve mentioned many times before.

The 24” iMac

Apple rolled out some upgraded hardware on Monday, too. The company’s M1-based 24” iMac, unveiled in April 2021, was one of its first Apple Silicon-based systems. Apple then skipped the M2 SoC generation for this particular computing platform, holding out until last November (~2.5 years later), when the M3 successor finally appeared. But it appears that the company’s now picking up the pace, since the M4 version just showed up, less than a year after that. This is also the first M4-based computer from Apple, following in the footsteps of the iPad Pro tablet-based premier M4 hardware surprisingly (at least to me) released in early May. That said, as with the defeatured A17 Pro in the iPad mini 7 mentioned earlier in this writeup, the iMac’s M4 is also “binned”, with only eight-core CPU and GPU clusters in the “base” version, versus the 9- or 10-core CPU and 10-core GPU in the iPad Pros and other systems to come that I’ll mention next.

The M4 24” iMac comes first-time standard with 16 GBytes of base RAM (to my earlier note about the iPad mini’s AI-driven memory capacity update…and as a foreshadow, this won’t be the last time in this coverage that you encounter it!), and (also first-time) offers a nano-texture glass display option. Akin to the Lightning-to-USB-C updates that Apple made to its headphones back in mid-September, the company’s computer peripherals (mouse, keyboard and trackpad) now instead recharge over USB-C, too. The front camera is Center Stage-enhanced this time. And commensurate with the SoC update, the Thunderbolt ports are now gen4-supportive.

The Mac mini and M4 Pro SoC

Tuesday brought a more radical evolution. The latest iteration of the Mac mini is now shaped like a shrunk-down Mac Studio or, if you prefer, a somewhat bigger spin on the Apple TV. The linear dimensions and overall volume are notably altered versus its 2023 precursor, from:

  • Height: 1.41 inches (3.58 cm)
  • Width: 7.75 inches (19.70 cm)
  • Depth: 7.75 inches (19.70 cm)
  • Volume: 84.7 in³ (1,389.4 cm³)

to:

  • Height: 2.0 inches (5.0 cm)
  • Width: 5.0 inches (12.7 cm)
  • Depth: 5.0 inches (12.7 cm)
  • Volume: 50 in³ (806.5 cm³)

Said another way, the “footprint” area is less than half of what it was before, at the tradeoff of nearly 50% increased height. And the weight loss is notable too, from 2.6 pounds (1.18 kg) or 2.8 pounds (1.28 kg) before to 1.5 pounds (0.67 kg) or 1.6 pounds (0.73 kg) now. I also should note that, despite these size and weight decreases, the AC/DC conversion circuitry is still 100% within the computer; Apple hasn’t pulled the “trick” of moving it to a standalone PSU outside. That said, legacy-dimension “stacked” peripherals won’t work anymore:

And the new underside location of the power button is, in a word and IMHO, “weird”.

The two “or” qualifiers in the earlier weight-comparison sentence beg for clarification which will simultaneously be silicon-enlightening. Akin to the earlier iMac conversation, there’s been a SoC generation skip, from the M2 straight to the M4. The early-2023 version of the Mac mini came in both M2 and M2 Pro (which I own) SoC variants. Similarly, while this year’s baseline Mac mini is powered by the M4 (in this case the full 10 CPU/10 GPU core “spin”), a high-end variant containing the brand new M4 Pro SoC is also available. In this particular (Mac mini) case, the CPU and GPU core counts are, respectively, 12 and 16. Memory bandwidth is dramatically boosted, from 120 GBytes/sec with the M4 (where once again, the base memory configuration is 16 GBytes) to 273 GBytes/sec with the M4 Pro. And the M4 Pro variant is also Apple’s first (and only, at least for a day) system that supports latest-generation Thunderbolt 5. Speaking of connectors, by the way, integrated legacy USB-A is no more, though. Bring on the dongles.

MacBook Pros, the M4 Max SoC and a MacBook Air “one more thing”

Last (apparently, we shouldn’t take Apple literally when it promises an “exciting week of announcements ahead”) but definitely not least, we come to Wednesday and the unveil of upgraded 14” and 16” MacBook Pros. The smaller-screen version comes in variants based on M4, M4 Pro and brand-new M4 Max SoC flavors. This time, if you dive into the tech specs, you’ll notice that the M4 Pro is “binned” into two different silicon spins, one (as before in the Mac mini) with a 12-core CPU and 16-core GPU, and a higher-end variant with a 14-core CPU and 20-core GPU. Both M4 Pro versions deliver the same memory bandwidth—273 GBytes/sec—which definitely can’t be said about the high-end M4 Max. Here, at least on the 14” MacBook Pro, you’ll again find a 14-core CPU, although this time it’s mated to a 32-core GPU, and the memory bandwidth further upticks to 410 GBytes/sec.

If you think that’s impressive (or maybe just complicated), wait until you see the 16” MacBook Pro’s variability. There’s no baseline M4 option in this case, only two M4 Pro and two M4 Max variants. Both M4 Pro base “kits” come with the M4 Pro outfitted with a 14-core CPU and 20-core GPU. The third variant includes the aforementioned 14-core CPU/32-core GPU M4 Max. And as for the highest-end M4 Max 16” MacBook Pro? 16 CPU cores. 40 GPU cores. And 546 GBytes/sec of peak memory bandwidth. The mind boggles at the attempt at comprehension.

Speaking (one last time, I promise) of memory, what about that “one more thing” in this section’s subhead? Apple has bumped up (with one notable Walmart-only exception) the baseline memory of its existing MacBook Air mobile computers to 16 GBytes, too, at no price increase from the original 8 GByte MSRPs (or, said another way, delivering a 16 GByte price cut), bringing the entire product line to 16 GBytes minimum. I draw two fundamental conclusions:

  • Apple is “betting the farm” on memory-demanding Apple Intelligence, and
  • If I were a DRAM supplier previously worried about filling available fab capacity, I’d be loving life right about now (although, that said, I’m sure that Apple’s purchasing department is putting the screws on your profit margins at the same time).

With that, closing in on 2,000 words, I’ll sign off for now and await your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


Intel: Gelsinger’s foundry gamble enters crunch

Mon, 11/04/2024 - 14:58

Intel is at the crossroads, again, and so is its charismatic chief, Pat Gelsinger, who was brought in more than three years ago to turn around this American semiconductor giant. Is trouble brewing at the Santa Clara, California-based chip industry icon? According to a recent Reuters story that chronicles Gelsinger’s three years at the helm, it certainly looks that way.

The Reuters story titled “Inside Intel, CEO Pat Gelsinger fumbled the revival of an American icon” comes soon after the news about a possible patch-up between the foundry operations of Intel and Samsung, two outfits competing with fab market leader TSMC at cutting-edge semiconductor manufacturing processes.

While the hookup between these two TSMC rivals isn’t without merits, industry watchers mostly see it as a non-starter. Samsung, which entered the foundry business in 2017, has been able to grab an 11.5% fab market share compared to TSMC’s 62.3%. Intel, on the other hand, is just at the starting gate when it comes to the foundry business it set up in 2021.

While Gelsinger sought to transform Intel by venturing into the foundry business, the chipmaker steadily lost ground to AMD in the lucrative data center processors business. Meanwhile, its bread-and-butter PC processors business is still reeling from the post-pandemic glut. But Intel’s troubles don’t end here. Another elephant in the room, besides Intel Foundry, is the struggling artificial intelligence (AI) chips business.

Apparently, Intel is late to the AI party, and just like data center processors, that puts it behind companies like AMD and Nvidia. Intel, which launched three AI initiatives in 2019, including a GPU, hasn’t much to show so far and its Gaudi AI accelerator manufactured at TSMC seems to be falling short of expectations.

Figure 1 Gaudi was touted as an alternative to Nvidia’s GPUs. Source: Intel

While Gelsinger declined to be interviewed for this Reuters story, Intel’s statements published in this special report seem to have come straight from Gelsinger’s corner office. “Pat is leading one of the largest, boldest and most consequential corporate turnarounds in American business history,” said the Intel statement. “3.5 years into the journey, we have made immense progress—and we’re going to finish the job.”

Is Gelsinger in trouble?

Intel Foundry seems to be all Gelsinger is betting on, but this bet has proven easier said than done. As Sandra Rivera, now CEO of Altera and then head of Intel’s data center business, said while talking about Intel’s GPU foray, “It’s a journey, and everything looks simpler from the outside.” The observation fits Intel’s fab gambit perfectly as well.

Soon after taking charge, Gelsinger vowed to form a foundry business to compete with TSMC and promised to develop five manufacturing nodes in five years. However, the 18A process node has been facing delays, and one of its early customers, Broadcom, has reportedly encountered yield issues: a mere 20% of its chips passed the early tests.

Intel maintains that 18A is on track for launch in 2025. But as Goldman Sachs analyst Toshiya Hari notes, semiconductor vendors have little incentive to bet on Intel’s manufacturing when TSMC continues to serve them well.

Figure 2 The news about problems with the launch of the 18A process node doesn’t bode well for the company’s foundry ambitions. Source: Intel

When a large company becomes an acquisition target, it generally spells doom. So, in another statement in the Reuters story, Intel said that it won’t let merger speculation distract it from executing its five-year turnaround plan. That clearly shows the pressure, and how Gelsinger is asking for more time to put the house in order.

Will Gelsinger get more time? He acknowledges a lot of work ahead but is confident that Intel will pull it off. But if the foundry business betting on Intel’s chip manufacturing prowess takes longer to bear fruit, Gelsinger’s rocky tenure may end sooner rather than later.


“Half & Half” piezo drive algorithm tames overshoot and ringing

Fri, 11/01/2024 - 16:40

Piezoelectric actuators (benders, stacks, chips, etc.) are excellent fast and precise means for generation and control of micro, nano, and even atomic scale movement on millisecond and faster timescales. Unfortunately, they are also excellent high-Q resonators. Figure 1 shows what you can expect if you’re in a hurry to move a piezo and simply hit it with a unit step. Result: a massive (nearly 100%) overshoot with prolonged follow-on ringing.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 Typical piezo actuator response to squarewave drive with ringing and ~100% overshoot.

 Don’t worry. It’ll get there. Eventually. But don’t hold your breath. Clearly something has to be done to modify the drive waveshape if we’re at all interested in speed and settling time. Many possibilities exist, but Figure 2 illustrates a remarkably simple yet effective trick that actually takes advantage of the piezo’s natural 2x overshoot: Half and Half step drive.

Figure 2 Half & Half drive step with half amplitude and half resonance period kills overshoot and ringing.

The surprisingly simple trick is to split the drive step into an initial step with half the desired movement amplitude and a duration of exactly half the piezo resonance period. Hence: “Half & Half” (H&H) drive. The half-step is then followed by application of the full step amplitude to hold the actuator in its new position.

The physics underlying H&H rely on the kinetic energy imparted to the mass of the actuator during the first quarter cycle being just sufficient to overcome actuator elasticity during the second quarter, thus bringing the actuator to a graceful stop at the half cycle’s end. The drive voltage is then stepped to the full value, holding the actuator stationary at the final position.

Shown in Figure 3 is how H&H would work for a sequence of arbitrary piezo moves.

Figure 3 Example of three arbitrary H&H moves: (T2 – T1) = (T4 – T3) = (T6 – T5) = ½ piezo resonance period.

If implemented in software, the H&H algorithm would be simplicity itself and look something like this:

Let DAC = current contents of DAC output register
N = new content for DAC required to produce desired piezo motion
Step 1: replace DAC = (DAC + N) / 2
Step 2: wait one piezo resonance half-period
Step 3: replace DAC = N
Done
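
On a microcontroller, those same three steps might look like the following C sketch. The dac_write() and delay_us() stubs and the 500-µs half-period are placeholders (hypothetical, not from the article); in a real system they would be the actual DAC driver, a hardware delay, and half of the measured piezo resonance period.

#include <stdint.h>
#include <stdio.h>

#define PIEZO_HALF_PERIOD_US  500u      /* assumed: half the resonance period */

static uint16_t dac_now;                /* shadow of the DAC output register */

static void dac_write(uint16_t code)    /* stand-in for the real DAC driver */
{
    printf("DAC <= %u\n", code);
}

static void delay_us(uint32_t us)       /* stand-in for a hardware delay */
{
    printf("wait %u us\n", (unsigned)us);
}

static void piezo_move(uint16_t new_code)
{
    dac_write((uint16_t)((dac_now + new_code) / 2u));  /* Step 1: half step  */
    delay_us(PIEZO_HALF_PERIOD_US);                    /* Step 2: half period */
    dac_write(new_code);                               /* Step 3: full value  */
    dac_now = new_code;
}

int main(void)
{
    piezo_move(3000);   /* a couple of arbitrary moves, per Figure 3 */
    piezo_move(1000);
    return 0;
}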

If implemented in analog circuitry, H&H might look like Figure 4. Here’s how it works.

Figure 4 The analog implementation of H&H.

 The C1, R1, C2, R2||R3 voltage divider performs the half-amplitude division function of the H&H algorithm, while dual-polarity comparators U2 detect the leading edge of each voltage step. Step detection triggers U3a, which is adjusted via the TUNE pot to have a timeout equal to half the piezo resonance period, giving us the other “half”.

U3a timeout triggers U3b, which turns on U1, outputting the full step amplitude and completing the move. The older metal-gate CMOS 4066 is used due to its superior low-leakage Roff spec, while the parallel connection of all four of its internal switches yields an adequately low Ron.

U4 is just a placeholder for a suitable piezo drive amplifier to translate from the 5-V logic of the H&H circuitry to piezo drive voltage and power levels.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.


NXP software bolsters edge AI development

Fri, 11/01/2024 - 15:55

NXP has expanded its eIQ AI and ML software development environment with two new tools to simplify AI deployment at the edge. The software supports low-latency, energy-efficient, and privacy-focused AI, enabling ML algorithms to run on a range of edge processors, from small MCUs to advanced application processors.

The eIQ Time Series Studio introduces an automated machine learning workflow for efficiently developing and deploying time-series ML models on MCU-class devices, including NXP’s MCX series of MCUs and i.MX RT crossover MCUs. It supports various input signals—voltage, current, temperature, vibration, pressure, sound, and time of flight—as well as multi-modal sensor fusion.

GenAI Flow provides the building blocks for creating Large Language Models (LLMs) that power generative AI applications. With Retrieval Augmented Generation (RAG), it securely fine-tunes models on domain-specific knowledge and private data without exposing sensitive information to the model or processor providers. By linking multiple modules in a single flow, users can customize LLMs for specific tasks and optimize them for edge deployment on MPUs like the i.MX 95 application processor.

To learn more and access the newest version of the eIQ machine learning development environment, click on the product page link below.

eIQ product page

NXP Semiconductors 


RS-485 transceivers operate in harsh environments

Fri, 11/01/2024 - 15:55

Half-duplex RS-485 transceivers from MaxLinear offer extended ESD and EFT protection to ensure reliable communication in industrial environments. The MxL8312x and MxL8321x families include nine product SKUs with three speed options—250 kbps, 500 kbps, and 50 Mbps—and three package variants, including small 3×3-mm types.

These families expand MaxLinear’s portfolio with mid- and high-tier products alongside the MxL8310x and MxL8311x lineup of RS-485 transceivers. Smaller form-factor packages, higher speeds, and enhanced system-level ESD and EFT protection make them well-suited for delivering high performance under harsh conditions. Key applications include factory automation, industrial motor drives, robotics, and building automation.

The transceivers’ bus pins tolerate up to ±4 kV of electrical fast transients (IEC 61000-4-4) and up to ±12 kV of electrostatic discharge (IEC 61000-4-2). A supply range of 3.3 V to 5 V supports reliable operation in systems with potential power drops, while an extended common-mode range of up to ±15 V ensures stable communication over long distances or in applications with significant ground plane shifts between devices. MxL83214 devices are capable of supporting 50-Mbps data rates with strong pulse symmetry and low propagation delays.

In addition to conventional 4.9×3.9-mm NSOIC-8 packages, the transceivers are offered in 3×3-mm MSOP-8 and VSON-8 packages. The MxL83121, MxL83122, MxL83211, MxL83212, and MxL83214 are available now.

MaxLinear


IGBT and MOSFET drivers manage high peak current

Fri, 11/01/2024 - 15:55

Vishay’s VOFD341A and VOFD343A IGBT and MOSFET drivers, available in stretched SO-6 packages, provide peak output current of up to 4 A. Their high peak output current allows for faster switching by eliminating the need for an additional driver stage. Each device contains an AlGaAs LED optically coupled to an integrated circuit with a power output stage, specifically designed for driving power IGBTs and MOSFETs in motor control inverters.

The drivers support an operating voltage range of 15 V to 30 V and feature an extended temperature range of -40°C to +125°C, ensuring a sufficient safety margin for more compact designs. They also have a maximum propagation delay of 200 ns, which minimizes switching losses and facilitates more precise PWM regulation.

Additionally, the drivers’ high noise immunity of 50 kV/µs helps prevent failures in fast-switching power stages. Their stretched SO-6 package provides a maximum rated withstanding isolation voltage of 5000 VRMS.

Samples and production quantities of the 3-A VOFD341A and 4-A VOFD343A are available now, with lead times of six weeks.

VOFD341A product page 

VOFD343A product page 

Vishay Intertechnology 


Infineon shrinks silicon wafer thickness

Fri, 11/01/2024 - 15:54

With a thickness of only 20 µm and a diameter of 300 mm, Infineon’s silicon power wafers are the thinnest in the industry. These ultra-thin wafers are a quarter the thickness of a human hair and half the thickness of typical wafers, which range from 40 µm to 60 µm. This achievement in semiconductor manufacturing technology will increase energy efficiency, power density, and reliability in power conversion for applications such as AI data centers, motor control, and computing.

Infineon reports that reducing the thickness of a wafer by half lowers the substrate resistance by 50%, resulting in over a 15% reduction in power loss in power systems compared to conventional silicon wafers. For high-end AI server applications, the ultra-thin wafer technology enhances vertical power delivery designs based on vertical trench MOSFETs, enabling a close connection to the AI chip processor. This minimizes power loss and improves overall efficiency.
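
As a rough sanity check of that ">15%" figure (a sketch only, since the actual resistance breakdown isn’t published here), assume the substrate contributes about 30% of a vertical power MOSFET’s total on-resistance. Halving the wafer thickness then halves that contribution, and conduction loss (proportional to resistance at a given current) drops by roughly 15%:

#include <stdio.h>

int main(void)
{
    double substrate_fraction = 0.30;   /* assumed share of total RDS(on) */
    double r_before = 1.0;              /* normalized total on-resistance */
    double r_after  = (1.0 - substrate_fraction)
                    + substrate_fraction * 0.5;   /* substrate R halved */
    double loss_reduction = 1.0 - r_after / r_before;

    printf("Conduction-loss reduction: %.0f%%\n", loss_reduction * 100.0);
    return 0;
}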

Infineon’s wafer technology has been qualified and integrated into its smart power stages, which are now being delivered to initial customers. As the ramp-up of ultra-thin wafer technology progresses, Infineon anticipates that it will replace existing conventional wafer technology for low-voltage power converters within the next three to four years.

Infineon will present the first ultra-thin silicon wafer publicly at electronica 2024.

Infineon Technologies 


Stretchable printed circuit enhances medical devices

Fri, 11/01/2024 - 15:54

Murata’s stretchable printed circuit (SPC) offers both flexibility and the ability to stretch or deform without losing functionality. It can be used in wearable therapeutic devices and vital monitoring tools, providing improved accuracy, durability, and patient comfort compared to current devices.

Many existing devices are too rigid for certain applications, leading to patient discomfort, poor contact with surfaces, and unstable data acquisition. The SPC’s flexibility, stretchability, and adaptability support multi-sensing capabilities and a wide range of user requirements. Its soft material is gentle on the skin, making it well-suited for disposable EEG, EMG, and ECG electrodes that meet ANSI/AAMI EC12 standards. The stretchable design allows a single device to fit various body areas, like elbows and knees, and accommodate patients of different sizes.

SPC technology ensures seamless integration and optimal performance through telescopic component mounting and hybrid bonding between substrates. Its shield layer effectively blocks electromagnetic noise, providing reliable signal-path protection. The substrate construction also enhances moisture resistance and supports sustained high-voltage operation.

Murata’s SPC is customizable based on application requirements.

SPC product page

Murata Manufacturing


PIC Microcontrollers with Integrated FPGA Features in TME

Fri, 11/01/2024 - 14:00

The new PIC16F131xx microcontrollers in TME’s offer from Microchip are ideal for the evolving and miniaturizing electronic equipment market, offering efficient power management and predictable response times for controllers. 

Key features include core independent peripherals (CIPs) like the configurable logic block (CLB), which allows for predictable circuit behavior without burdening the CPU, thereby saving energy. These microcontrollers, based on the classic 8-bit Harvard architecture, come in various packages (DIP, DFN, SSOP, TSSOP, SOIC, and VQFN) with 6 to 18 I/O pins, and support a wide voltage range (1.8 V to 5.5 V DC). They operate at a 32 MHz clock frequency, with instruction execution times as low as 125 ns, and offer 256 to 1024 bytes of SRAM and up to 14 kB of FLASH program memory.

The microcontrollers are equipped with an array of peripherals, including PWM generators, counters/timers, EUSART serial bus controllers, and MSSP modules for I2C or SPI operation. They also feature configurable comparators, an 8-bit DAC, and a 10-bit ADC with hardware processing capabilities (ADCC).

The core independent peripherals (CIPs) allow the microcontrollers to handle tasks like sensor communication without using the CPU, thus enhancing efficiency and simplifying programming. The CLB technology, a highlight of the PIC16F131xx series, uses basic logic gates configurable by the designer, facilitating functional safety and rapid response times.  

The Curiosity Nano development kit for the PIC16F131xx series offers a convenient platform for exploring the microcontrollers’ capabilities, featuring an integrated debugger, programming device, and access to microcontroller pins. The EV06M52A kit, equipped with the PIC16F13145 microcontroller, includes a USB-C port for power and programming, an LDO MIC5353 regulator, a green LED for power and programming status, a yellow LED, and a button for user interaction.  

Curiosity Nano development kit

Additionally, adapters like the AC164162 extend the functionality of the Curiosity Nano boards, offering compatibility with mikroBUS™ standard connectors and an integrated charging system for lithium-ion and lithium-polymer cells. 

AC164162

The new microcontroller series from Microchip offers efficient power management, predictable response times, and innovative features like core independent peripherals (CIPs) and configurable logic blocks (CLB). These microcontrollers, ideal for modern embedded systems, come in various packages and support a wide voltage range, enhancing their versatility and performance. The Curiosity Nano development kit and its adapters further facilitate easy development and prototyping. 

These products are available in TME’s offer, providing a comprehensive solution for designers and developers looking to leverage the latest advancements in microcontroller technology. 

Text prepared by Transfer Multisort Elektronik Sp. z o.o.

 


Understanding and combating silent data corruption

Fri, 11/01/2024 - 12:41

The surge in memory-hungry artificial intelligence (AI) and machine learning (ML) applications has ushered in a new wave of accelerated computing demand. As new design parameters ramp up processing needs, more resources are being packed into single units, resulting in complex processes, overburdened systems, and higher chances of anomalies. In addition, the demands of these complex chips present challenges in meeting reliability, availability, and serviceability (RAS) requirements.

One major, yet often overlooked, RAS concern and root cause of computing errors is silent data corruption (SDC). Unlike software-related issues, which typically trigger alerts and fail-safe mechanisms, SDC issues in hardware can go undetected. For instance, a compromised CPU may miscalculate data, leading to corrupt datasets that can take months to resolve and cost organizations significantly more to fix.

Figure 1 A compromised CPU may lead to corrupt datasets that can take months to resolve. Source: Synopsys

Meta Research highlights that these errors are systemic across generations of CPUs, stressing the importance of robust detection mechanisms and fault-tolerant hardware and software architectures to mitigate the impact of silent errors in large-scale data centers. Anything above zero errors is an issue given the size, speed, and reach of hyperscalers. Even a single error can result in a significant issue.

This article will explore the concept of SDC, why it continues to be a pervasive issue for designers, and what the industry can do to prevent it from impacting future chip designs.

The multifaceted hurdle

Industry leaders are often hesitant to invest in resources to address SDC because they don’t fully understand the problem. This reluctance can lead to higher costs in the long run, as organizations may face significant operational setbacks due to undetected SDC errors. Debugging these issues is costly and not scalable, often resulting in delayed product releases and disrupted production cycles.

To put this into perspective, today’s machine learning algorithms run on tens of thousands of chips, and if even one in 1,000 chips is defective, the resulting data corruption can obstruct entire datasets, leading to massive expenditures for repairs. While cost is a large factor, the hesitation to invest in SDC prevention and fixes is not the only challenge. The complexity and scale of the problem also make it difficult for decision makers to take proactive measures.

Figure 2 Defect screening rate is shown using DCDIAG test to assess a processor. Source: Intel

Chips have long production cycles, and addressing SDC can take several years before fixes are reflected in new hardware. Beyond the lengthy product lifecycles, it’s also difficult to measure the scale of SDC errors, presenting a big challenge for chipmakers. Communicating the magnitude and urgency of an issue to decision makers without solid evidence or data is a daunting task.

How to combat silent data corruption

When a customer receives a faulty chip, the chip is typically sent back to the manufacturer for replacement. However, this process is merely a remedy for the larger SDC issue. To shift from symptom mitigation to a problem-solving solution, here are some avenues the industry should consider:

  • Research investments: SDC is an area the industry is aware of but lacks comprehensive understanding. We need researchers and engineers to focus on SDC despite how costly the investment will be. This involves generating and sharing extensive data for analysis, identifying anomalies, and diagnosing potential issues like time delays or data leaks. All things considered, enhanced research will help clarify and manage SDC effectively.
  • Incentive models: Establishing stronger incentives with more data for manufacturers to address SDC will help tackle the growing problem. Like the cybersecurity industry, creating industry-wide standards for what constitutes a safe and secure product could help mitigate SDC risks.
  • Sensor implementation: Implementing sensors in chips that alert chip designers to a potential problem is another solution to consider, similar to automotive sensors that alert the owner when tire pressure is low. A faulty chip can go one to two years without being detected, but sensors will be able to detect a problem before it’s too late.
  • AI and ML toolbox: AI algorithms, an option that is still in the early stages, could flag conditions indicative of SDC, though this requires substantial data for training. Effective implementation would necessitate careful curation of datasets and intentional design of AI models to ensure accurate detection.
  • Silicon lifecycle management (SLM) strategy: SLM is a process that allows chip designers to monitor, analyze, and optimize their semiconductor devices throughout their lives. Executing this strategy makes it easier for designers to track and gain actionable insights into a device’s RAS in real time and, ultimately, to detect SDC before it’s too late.

Partly due to its stealthy nature, SDC has become a growing problem as the scale of computing has increased over time, and the first step to solving a problem is recognizing that a problem exists.

Now is the time for action, and we need stakeholders from all areas—academics, researchers, chip designers, manufacturers, software and hardware engineers, vendors, government and others—to collaborate and take a closer look at underlying processes. Together, we can develop solutions at every step of the chip lifecycle that effectively mitigate the lasting impacts of SDC.

Jyotika Athavale is the director for engineering architecture at Synopsys, leading quality, reliability and safety research, pathfinding, and architectures for data centers and automotive applications.

Randy Fish is the director of product line management for the Silicon Lifecycle Management (SLM) family at Synopsys.

Related Content


The post Understanding and combating silent data corruption appeared first on EDN.

Real examples of the IoT edge: A guide of NXP’s Innovation Lab

Thu, 10/31/2024 - 15:36

Most tradeshow experiences tend to be limited to the exhibition floor and a couple of breakout sessions, all housed within the spacious convention center floor plan. However, embedded world North America diverged from this, with a number of companies offering tours of their facilities; one of these companies was NXP. EDN was able to tour the company’s Austin campus with a guided walkthrough of its “Smart Home Innovation Lab”. The lab is a proving ground for IoT and edge computing applications, where systems and applications engineers can take NXP microcontrollers (MCUs) and microprocessors (MPUs), along with the company’s RF and sensor technology, and see how they might build a prototype. However, “Smart Home Innovation Lab” might be a bit of a misleading name, since many of the proof-of-concept designs fell into the medical and automotive realms, and many of the underlying technologies would naturally find use cases extending well beyond these fields.

The concept and implementation of the internet of things (IoT) has been a very well-discussed topic, especially within the smart home, where endless companies have found (and are continuing to find) innovative ways to automate home functions. However, using inference at the edge is relatively nascent, and the use cases where existing IoT applications can be augmented or improved by AI are growing rapidly. In all of these demos, NXP engineers integrated one of their i.MX crossover MCUs for local edge processing capabilities, so the tour was geared more toward the use cases of TinyML.

The tour spanned over an hour, with Austin-based systems engineers walking the group through each demonstration that took place in a “garage”, “kitchen”, “living room”, “media room/theater”, and a “gym”. Many of the demonstrations involved modified off-the-shelf appliances, while some prototypes were co-developed with customers.

Home mode automations

Many of the solutions focused on using unified application-level connectivity standards such as Thread and Matter to simplify integration, so that smart home devices from different vendors can be used in a single smart home “fabric”. The lab contained two Matter fabrics, including a commercially available Thread border router and an NXP open Thread border router that used the i.MX 93. An NXP open-source home automation system connects many of the IoT devices and acts as a backend to the “dashboard” that appears in Figure 1.

Figure 1 NXP Innovation Lab tour with the home dashboard appearing on the screen and door lock device to the left.

Their proprietary home control system has two main “home mode” automations: one for when the user is away from home and one for when they are present. The “away from home” demo included automated functions such as dimming the lab lights, lowering the blinds, pausing any audio streaming, and locking the door. When the user is present, all of these processes are essentially reversed, automating some very basic home functions.

A touch-free lighting experience

The ultra-wideband (UWB) technology found in the recent SR150 chip includes a ranging feature that can, for instance, track a person as they walk through their home. In another demonstration, a systems engineer held a UWB-enabled mobile device and the lights and speakers within the lab essentially followed them, turning on the lights and streaming the radio station through the speakers in the room they were physically occupying while turning off the lights and speakers in the rooms they had exited. Other use cases are in agriculture, for locating sprinklers covered in mud, or in medical applications, to kick off automations and check-ups when a nurse walks into a patient’s room. This could also be extended to the automotive space, automatically opening the door that the user is approaching.

Door lock

As with many smart home appliances, smart locks are nothing new. Typically, though, these door locks are remotely engaged with an app, requiring a more manual approach. The door lock prototype used five different technologies–keypad, fingerprint, face recognition, NFC, and UWB–as well as the i.MX RT1070 MCU/MPU to lock or unlock (Figure 2). The lock used a face recognition algorithm with depth perception, while the UWB tech used an angle of arrival (AoA) algorithm to ascertain whether the user is approaching the lock from outside the facility or from within it. This way, the door lock can be engaged only with multiple forms of identification for building security management; or, in smart home applications, it can automatically open upon approach from the outside.

Figure 2 Smart door lock using the SR150 and i.MX RT1070 with integrated keypad, fingerprint, face recognition, NFC, and UWB.

The garage: Automotive automations

The “garage” included a model EV where i.MX MCUs are used to run cluster and infotainment systems, demonstrating the graphics capability of the platform. There was also a system that displayed a bird’s eye view of the vehicle, where the MCU takes the warped images from the four cameras mounted at different angles, dewarps them, and stitches them together to recreate this all-around view of the vehicle’s surroundings.

Figure 3 Garage demos showing the EV instrument cluster and infotainment running on i.MX MCUs.

The demo in Figure 4 shows a potential solution to a current problem in EVs: a large, single human-machine interface (HMI) that both the driver and passenger are meant to use. While it offers a clean, sleek aesthetic, the single screen can be inconvenient when one user needs it as a dashboard while the other wants it for entertainment. The dual-view display simultaneously shows two entirely different images to users sitting on the right-hand or left-hand side of the screen. This is made possible by the large viewing angles of the display, so the driver and passenger are each able to view their specific application on the entire screen without impacting the other’s experience. The technology involves sending two outputs interleaved together; the screen then deinterleaves them and displays each one.

This comes with the additional ability to independently control the screen using the entire space available within the HMI without impacting the application of the driver or passenger. In other words, a passenger could essentially play Tetris on the screen without disturbing the driver’s map view. This is achieved through electrodes installed under the seats, where each electrode is connected to the driver’s or passenger’s respective touch controller. Another obvious application for this would be gaming, removing the need for two screens or a split-screen view.

Figure 4 A single dual-view display that simultaneously offers two different views for users sitting to the left or right of it. Electrodes installed under the seats allow one user to independently control the screens via touch without impacting the application of the other user.
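NXP didn’t spell out the interleave format the dual-view display uses, but the basic idea of merging two video sources into one stream that the panel later separates can be sketched conceptually, for example as row-by-row interleaving of two frame buffers. The snippet below is purely illustrative and is not NXP’s implementation:

// Conceptual sketch of "two outputs interleaved together": alternate rows from a
// driver-side and a passenger-side frame buffer into one output stream that the
// panel de-interleaves. Illustrative only; not NXP's actual dual-view format.
#include <cstddef>
#include <cstdint>
#include <vector>

using Frame = std::vector<std::vector<uint32_t>>;  // [row][column] pixel data

Frame interleaveRows(const Frame& driver, const Frame& passenger) {
    Frame out;
    out.reserve(driver.size() + passenger.size());
    for (std::size_t r = 0; r < driver.size() && r < passenger.size(); ++r) {
        out.push_back(driver[r]);      // even output rows carry the driver's view
        out.push_back(passenger[r]);   // odd output rows carry the passenger's view
    }
    return out;   // the panel routes even rows to one viewing angle, odd rows to the other
}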

Digital intrusion alarm 

The digital intrusion alarm prototype seen in Figure 5 can potentially be added to a consumer access point or router to protect it from malicious traffic, such as a faulty IoT device that might jam the network. The design uses the i.MX 8M+, where an ML model is trained on familiar network traffic over a period of time so that, when unfamiliar traffic is observed, it is flagged as malicious and blocked from the network. The demo showcased a denial of service (DoS) attack being blocked. If the system detects and blocks a known device, the user is able to fix the issue and unblock the device so that it can connect back to the network.

Figure 5 Digital intrusion alarm that is first trained to monitor the traffic specific to the network for a period of time before beginning the process of monitoring network traffic for any potential bad actors.
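NXP didn’t detail the model behind the intrusion alarm, but the “learn normal traffic, flag deviations” idea can be illustrated with a deliberately simple stand-in: record the mean and variance of a traffic feature during a training window, then flag samples that fall too many standard deviations away. The packets-per-second feature and 4-sigma threshold below are hypothetical choices for illustration only, not NXP’s actual model:

// Toy anomaly detector in the spirit of the demo: learn a baseline of "normal"
// traffic, then flag large deviations. Feature and threshold are hypothetical.
#include <cstdio>
#include <cmath>
#include <vector>

struct Baseline {
    double mean = 0.0, var = 0.0;
    void train(const std::vector<double>& samples) {
        for (double s : samples) mean += s;
        mean /= samples.size();
        for (double s : samples) var += (s - mean) * (s - mean);
        var /= samples.size();
    }
    bool isAnomalous(double sample, double nSigma = 4.0) const {
        return std::fabs(sample - mean) > nSigma * std::sqrt(var);
    }
};

int main() {
    std::vector<double> normal = {120, 135, 110, 128, 140, 125, 132};  // packets/s seen during training
    Baseline b;
    b.train(normal);
    std::printf("980 pkt/s anomalous? %s\n", b.isAnomalous(980) ? "yes" : "no");  // DoS-like burst -> yes
    std::printf("130 pkt/s anomalous? %s\n", b.isAnomalous(130) ? "yes" : "no");  // typical rate -> no
    return 0;
}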

Smart cooktop, coffee maker, and pantry 

A smart cooktop can be seen in Figure 6. The prototype uses face detection to determine whether or not a chef is present, with all of this information processed locally on the device itself. In the event of unsafe conditions (e.g., water boiling over, a burner left on without cookware present, excessive smoke, or burning food), the system could potentially detect the problem and shut off. Once shut off, the home dashboard will show that the cooktop is turned off. Naturally, the entire process can be done without AI; however, AI can massively speed up the time it takes for the cooktop to recognize that a cook is present. Other sensors can be integrated to either fine-tune the performance of the system or eliminate the potential intrusion of having a camera.

Figure 6 Smart cooktop demo with facial recognition to sense if a cook is present.

The guide continued to a “smart barista” that uses facial recognition on the i.MX’s eIQ neural processing unit (NPU) to customize the type of coffee delivered from the coffee maker. A pantry classification system also uses the i.MX RT1170, along with a classification and detection model, to take video streams of the pantry and perform inference to inform the user of the items that are taken out of the pantry. The system could potentially be used in the refrigerator as well, to offer the user recipe or grocery-list recommendations. However, as one member of the tour noted, pantries are generally packed with goods that would not necessarily be within view of this vision-based classification system.

Current state indicator

Another device was trained, at a very basic level, on car maintenance using a GM car manual and used a large language model (LLM) to respond to prompts such as “How do I use cruise control?” or “Why isn’t my car turning on?” The concept was presented as a potential candidate for the smart home, where smart speakers could be trained on the maintenance of various appliances, e.g., washing machines, dryers, dishwashers, and coffee makers, so that the user can ask questions about maintenance or use.

The natural question is how this concept differs from established smart speakers. Like many of the devices already described, everything is processed locally; there is no interaction with the cloud to process data and present an answer. The concept can also be expanded to preventive or predictive maintenance, where appliances are outfitted with sensors that transmit status information to, for instance, show a continuous record of the service life of motor bearings within a CNC machine, or the estimated life of a drive belt in a washing machine.

An automated media room

The Innovation Lab houses a living room space that experiments with automated media control using UWB, millimeter-wave, vision, and voice activation (Figure 7). In this setup, the multiple sensing mediums first detect the presence of individuals seated on the couch to trigger a sequence of events, including the lights turning on, the blinds going up, and the TV turning on to a preferred program. A system utilizing the i.MX 8M+ and an attached Basler camera, as well as another system with an overhead camera, uses vision to detect persons and perform personalizations, such as changing the channel from a show with more adult content to one catered to a younger audience if a child walks into the area. For those who would find that particular personalization vexing (myself included), the system is meant to be trained toward the preferences of the individual.

Another demo in this area included NXP’s “Audio Vista”, or sonic optimization. This solution uses UWB ranging to detect the precise location of the person or people sitting on the couch and communicates with the four speakers located throughout the space to let the user know where and how the speakers should be moved for an optimal audio setup. The same underlying UWB technology can be trained to detect heart arrhythmias, breathing, or falls for home health applications. Another media control experiment used echo cancellation to extract a voice from a noisy environment so that users do not have to speak over audio to, for instance, ask a smart speaker to pause a TV program.

Figure 7 The living room space that experiments with automated media control using UWB, millimeter-wave, vision, and voice activation. The UWB system can be seen up front, millimeter-wave transmitter and receiver are seated above the speakers, and Basler camera to the far right. 

The home theater: Downsizing the AV receiver

In the second-to-last stop, everyone sat in a theater to experience immersive Dolby Atmos surround sound, an experience provided by the i.MX 8M Mini (Figure 8). The traditional AV receiver design involves a dedicated audio codec IC as well as an MCU and MPU to handle functions such as the various connectivity options and the rendering of video. The multicore i.MX 8M Mini’s Arm Cortex-A53s have abundant processing capability, so the audio portion of a traditional AV receiver’s processing consumes only ~30% of the IC, all while the 8M Mini handles its own controls, processing, and many other renderings as well.

Dolby Atmos has previously been considered a premium sound function that was not easily provided by products such as soundbars or low- to mid-tier AV receivers. Powerful processors such as the 8M Mini can integrate these functions to lower the barrier to entry for companies, providing not only Dolby Atmos decoding, but MPEG and DTS:X as well. The i.MX also runs a Linux operating system in conjunction with a real-time operating system (RTOS), allowing users to easily integrate Matter, Thread, and other home automation connectivity protocols into the AV receiver or soundbar.

Figure 8 Theater portion of the Innovation Lab with the Dolby Atmos immersive surround sound experience processed on the i.MX 8M Mini.

The gym: Health and wellbeing demos

The gym showcased a number of medical solutions, starting with medical devices with embedded NTAGs so users can scan and commission the device using NFC to, for example, verify the authenticity of the medication they are injecting. Other medical devices included insulin pouches using NXP’s BLE MCUs, which allowed them to be scanned with a phone so that a user could learn the last time they took an insulin shot. Smart watches and fitness trackers based upon NXP’s RTD boards were also shown; these can go for up to a week without being charged.

Another embedded device demonstrated (Figure 9) measures ECG and has the ability to take ECG data, encrypt it, and send the information to the doctor of choice. There are three main secure aspects of this process:

  1. Authentication that establishes the OEM credentials
  2. Verification of insurance details through NFC
  3. Encryption of health data being sent 

The screen in the image shows what a doctor might view on a daily basis to track their patients. The system could, for instance, sense a heart attack and call an ambulance. This concept could also be extended to diabetic patients who must track insulin and blood sugar levels as they change throughout the day.

Figure 9 Tour of health and wellness devices with a monitor displaying patient information for a doctor that has authenticated themselves through an app. 

Aalyia Shaukat, associate editor at EDN, has worked in the engineering publishing industry for over 8 years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.

Related Content


The post Real examples of the IoT edge: A guide of NXP’s Innovation Lab appeared first on EDN.

PUF security IPs bolstered by test suite, PSA certification

Thu, 10/31/2024 - 11:20

Internet of Things (IoT) security, one of the biggest challenges for embedded developers, is paving the way for physical unclonable functions (PUFs) in microcontroller (MCU) and system-on-chip (SoC) designs. And a new design ecosystem is emerging to make PUF implementation simpler and more cost-effective.

PUF, which creates secure, unclonable identities based on manufacturing variations unique to each semiconductor chip, facilitates the essential hardware root-of-trust IP required in security implementations. A cryptographic root-of-trust forms the security foundation of modern hardware infrastructures.

Here, PUF creates random numbers on demand, so there is no need to store cryptographic keys in flash memory. That, in turn, eliminates the danger of side-channel memory attacks revealing the keys. But PUF’s technical merits aside, where does it stand as a cost-effective hardware security solution?

Below are two design case studies relating to PUF’s certification and testing. They provide anecdotal evidence of how this hardware security technology for IoT and embedded systems is gaining traction.

PUF certification

PUFsecurity, a supplier of PUF-based security solutions and a subsidiary of eMemory, has achieved Level 3 Certification from PSA for its PUF security IP, which it calls a crypto coprocessor. PSA Certified is a security framework that tests and verifies the reliability of secure boot, secure storage, firmware update, secure boundary, and crypto engines.

PUFsecurity has teamed up with Arm to test its crypto coprocessor IP, subsequently passing the PSA Certified Level 3 RoT Component. Its PUFcc crypto coprocessor IP, incorporated into the Arm Corstone-300 IoT reference design platform, was evaluated under the Security Evaluation Standard for IoT Platforms (SESIP) profile.

Figure 1 The PUF security IP has been certified on Arm’s reference platform. Source: PUFsecurity

The PSA Certified framework, a globally recognized security standard platform ensuring that the security features of IoT devices are addressed during the design phase, guarantees that all connected devices are built upon a root-of-trust. “PSA Certified has become the platform of choice for our partners to swiftly meet regional cybersecurity and regulatory requirements,” said Paul Williamson, senior VP and GM for IoT Line of Business at Arm.

The evaluation, carried out by an independent laboratory, used five mandatory and five optional security functional requirements (SFRs). The mandatory requirements verify platform identity, secure platform update, physical attacker resistance, secure communication support, and secure communication enforcement.

On the other hand, the optional requirements include verification of platform instance identity, attestation of platform genuineness, cryptographic operation, cryptographic random number generation, and cryptographic key generation.

PUF testing

PUFs used in semiconductors for secure, regenerable random number generation pose unique testing challenges. PUF-based random number generation provides a basis for unique device identities and cryptographic key generation, but unlike traditional random number generators (RNGs), PUFs produce a fixed-length output.

That makes existing tests inadequate for determining randomness, a fundamental requirement for a secure device root-of-trust. Crypto Quantique, a supplier of quantum-driven security solutions for IoT devices, has developed a randomness test suite tailored specifically for PUFs.

Figure 2 Test suite overcomes the limitations of NIST 800-22 in evaluating PUF randomness. Source: Crypto Quantique

The new test suite adapts existing tests from the NIST 800-22 suite and makes them suitable for unique PUF characteristics like spatial dependencies and limited output length. It also introduces a test to ensure the independence of PUF outputs, a vital consideration for maintaining cryptographic security by identifying correlated outputs.

In short, the test suite ensures that PUFs meet randomness requirements without excessive data demands. It does that by running tests in different data orderings to account for potential spatial correlations in PUF outputs. Therefore, by reducing the number of required bits for certain tests, the suite enables more efficient testing. It also minimizes the risk of misrepresenting PUF quality.
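The Crypto Quantique suite itself isn’t reproduced here, but the flavor of a NIST 800-22-style check is straightforward to show. Below is a minimal monobit (frequency) test sketch applied to a PUF response bit-string, using the standard erfc-based p-value and 0.01 significance level; it is illustrative only, not the Crypto Quantique suite, and NIST recommends far more bits than the 16-bit stand-in shown:

// Minimal monobit (frequency) test in the style of NIST SP 800-22, applied to a
// PUF response. Illustrative only; not Crypto Quantique's test suite.
#include <cstdio>
#include <cmath>
#include <vector>

bool monobitTest(const std::vector<int>& bits, double alpha = 0.01) {
    long sum = 0;
    for (int b : bits) sum += (b ? 1 : -1);            // map {0,1} to {-1,+1} and accumulate
    double sObs = std::fabs((double)sum) / std::sqrt((double)bits.size());
    double pValue = std::erfc(sObs / std::sqrt(2.0));  // two-sided tail probability
    std::printf("p-value = %.4f\n", pValue);
    return pValue >= alpha;                            // pass if no statistically significant bias
}

int main() {
    std::vector<int> pufResponse = {1,0,1,1,0,0,1,0, 1,1,0,1,0,0,1,0};  // stand-in response bits
    std::printf("monobit test %s\n", monobitTest(pufResponse) ? "passed" : "failed");
    return 0;
}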

The availability of PUF-centric test solutions shows that the design ecosystem around this security technology is steadily taking shape. The certification of PUF IPs further affirms its standing as a reliable root-of-trust subsystem.

Related Content


The post PUF security IPs bolstered by test suite, PSA certification appeared first on EDN.

Need an extra ADC? Add one for a few cents

Wed, 10/30/2024 - 15:53

When designing devices with microcontrollers (MCU), I like to use some of the analog-to-digital converter (ADC) inputs to measure onboard voltages along with all the required sensors inputs. This means I often run out of ADC inputs. So presented here is a way to add more ADCs without adding external chips, costing less than 5 cents, and taking up negligible space on a PCB!

This approach requires two things in the MCU you are using: a pulse width modulator (PWM) output and an onboard analog comparator. Some MCU lines that have these are the PIC, AVR, and ATmega MCUs from Microchip. TI’s Piccolo line and STMicroelectronics’ STM32L5 also have both a PWM and a comparator.

So, let’s look at how this is configured.

The basic concept

Figure 1 is a diagram showing the addition of a resistor and capacitor to your MCU project.

Figure 1 Basic concept of the circuit that uses an MCU with an onboard PWM and comparator, as well as an RC filter to create an ADC.

The resistor and capacitor form a single pole low-pass filter. So, the circuit concept takes the output of an onboard PWM, filters it to create a DC signal that is set by the PWM’s duty cycle. The DC level is then compared to the input signal using the on-board comparator. The circuit is very simple so let’s talk about the code used to create an ADC from this arrangement.

To get a sample reading of the input signal, we start by setting the PWM to a 50% duty cycle. This square-wave PWM signal will be filtered by the RC low-pass filter to create a voltage that is ½ of the MCU’s system voltage. The comparator output will go high (or output a digital 1) if the filtered DC level is greater than the instantaneous input signal voltage, otherwise the comparator output will go low (outputting a digital 0).

The code will now read the comparator output and execute a search to find a new level that forces the comparator to an opposite output. In other words, if the comparator is a 0 the code will adjust the PWM duty cycle up until the comparator outputs a 1. If the comparator is currently showing a 1 the PWM duty cycle will be reduced until the comparator outputs a 0. If the PWM is capable of something like 256 steps (or more) in duty cycle, this search could take some significant time. To mitigate this, we will do a binary search so if there are 256 steps available in the PWM, it will only take log2(256), or 8, steps to test the levels.

A quick description of the binary search is that after the first 50% level reading, the next test will be at a 25% or 75% level, depending on the state of the comparator output. The steps after this will again test the middle of the remaining levels.

An example of the circuit’s function

Let’s show a quick example and assume the following:

  • System voltage: 5 V
  • PWM available levels: 256
  • Instantaneous input signal: 1 V

The first test will be executed with the PWM at about 50% duty cycle (a setting of 128), creating a 2.50-V signal that is applied to the “+” input of the comparator. This means the comparator will output a high, which implies that the PWM duty cycle is too high. So, we cut the duty cycle in half, giving a setting of 64, which creates 1.25 V on the “+” input. The comparator again outputs a 1…too high, so we drop the PWM duty cycle by half again to 32. This gives a “+” level of 0.625 V. Now the comparator outputs a 0, so we know we went too low, and we increase the PWM duty cycle. We know 64 was too high and 32 was too low, so we go to the center, or (64+32)/2 = 48, giving 0.9375 V. We’re still too low, so we split the difference of 64 and 48, resulting in 56 or about 1.094 V…too high. This continues with (56+48)/2 = 52, giving 1.016 V…too high. Again, with a PWM setting of (52+48)/2 = 50, giving 0.9766 V…too low. One last step, (52+50)/2 = 51, giving 0.9961 V.

This was 8 steps and got us as close as we can to the answer. So, our ADC setup would return an answer that the instantaneous input signal was 0.9961 V.
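If you want to sanity-check that arithmetic, the search is easy to replay on a PC. Below is a minimal host-side C++ sketch (separate from the Arduino listing later in this article) that assumes the same 5-V supply, 1-V input, and setting/256 scaling used in the worked example; it reproduces the 128, 64, 32, 48, 56, 52, 50, 51 sequence and the final 0.9961-V answer:

// Host-side replay of the 8-step binary search described above.
// Assumes a 5-V supply, a 1-V input, and the worked example's setting/256 scaling
// (the Arduino sketch later in the article starts at 127 and scales by 255 instead).
#include <cstdio>

int main() {
    const double vcc = 5.0;   // assumed system voltage
    const double vin = 1.0;   // assumed instantaneous input voltage
    int minPWM = 0, maxPWM = 255;
    int pwm = 128;            // first test at ~50% duty cycle

    for (int step = 1; step <= 8; step++) {
        double vdac = vcc * pwm / 256.0;            // filtered PWM level at the "+" input
        if (vdac > vin) maxPWM = pwm;               // comparator high: duty cycle too high
        else            minPWM = pwm;               // comparator low: duty cycle too low
        std::printf("step %d: setting %3d -> %.4f V\n", step, pwm, vdac);
        pwm = minPWM + (maxPWM - minPWM) / 2;       // midpoint of the remaining range
    }
    std::printf("final: setting %d -> %.4f V\n", pwm, vcc * pwm / 256.0);
    return 0;
}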

Sample circuit with Arduino Nano

Let’s take a look at a real-world example. This example uses an Arduino Nano, which is based on an ATmega328P that has a number of PWM outputs and one analog comparator. The PWM we will use can be clocked at various rates, and we want to clock it fast, as this makes the filtering easier. It also speeds up the time for the filter output to settle to its final level. We will select a PWM clocking rate of about 31.4 kHz. Figure 2 shows the schematic with a one-pole RC low-pass filter.

Figure 2 Schematic of the sample circuit using an Arduino Nano and a one-pole RC low-pass filter.

In this schematic, D11 is the PWM output, D6 is the comparator’s “+” input, and D7 is the comparator’s “-” input. The filter is composed of a 20-kΩ resistor and a 0.1-µF capacitor. I arrived at these values by playing around in an LTspice simulation, trying to minimize the remnants of the PWM pulse (ripple) while also maintaining a fairly fast settling time. The target for the ripple was the resolution of a 1-bit change in the PWM, or less. Using the 5-V system voltage and the fact that the PWM has 8-bit (256-setting) adjustability, we get 5 V/256 = ~20 mV. In the LTspice simulation I got 18 mV of ripple, while the DC level settled to within a few millivolts of its final value at 15 ms. Therefore, when writing the code, I used 15 ms as the delay between samples (with a small twist you’ll see below). Since it takes 8 readings to get a final usable sample, it will take 8*15 ms = 120 ms, or 8.3 samples per second. As noted at the beginning, you won’t be sampling at audio rates, but you can certainly monitor DC voltages on the board or slow-moving analog signals.
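As a rough cross-check of those component choices, the single-pole filter numbers can be estimated by hand. The short C++ sketch below uses the stated 20-kΩ, 0.1-µF, and ~31.4-kHz values; the formulas and figures are my own back-of-the-envelope estimates rather than the author’s LTspice results, but they land close to the 18-mV ripple and 15-ms settling time quoted above:

// Back-of-the-envelope check of the RC filter choices (component values from the article;
// the estimates below are mine, not the author's LTspice results).
#include <cstdio>
#include <cmath>

int main() {
    const double pi   = 3.141592653589793;
    const double Vcc  = 5.0;        // system voltage
    const double R    = 20e3;       // 20 kohm
    const double C    = 0.1e-6;     // 0.1 uF
    const double fpwm = 31372.55;   // PWM frequency, Hz

    double tau = R * C;                          // time constant, ~2 ms
    double settle = tau * std::log(256.0);       // time to settle within 1 LSB of 8 bits, ~11 ms
    // Worst-case ripple: fundamental of a 50% duty 0-to-Vcc square wave (p-p = 4*Vcc/pi)
    // attenuated by a single pole operating well above its corner frequency.
    double ripple = (4.0 * Vcc / pi) / (2.0 * pi * fpwm * tau);

    std::printf("tau            = %.2f ms\n", tau * 1e3);     // ~2.00 ms
    std::printf("1-LSB settling = %.1f ms\n", settle * 1e3);  // ~11 ms, inside the 15-ms delay
    std::printf("p-p ripple     = %.1f mV\n", ripple * 1e3);  // ~16 mV, near the ~20-mV LSB target
    return 0;
}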

[This may be a good place to note that the analog input does not have a sample-and-hold as most ADCs do, so readings are a moving target. Also, there is no anti-aliasing filter on the input signal. If needed, an anti-alias filter can remove noise and also act as a rough sample and hold.]

Sample code

Below is the code listing for use in an Arduino development environment. You can also download it here. It will read the input signal, do the binary search, convert it to a voltage, and then display the final 8-bit DAC value, corresponding voltage reading, and a slower moving filtered value.

The following gives a deeper description of the code:

  • Lines 1-8 define the pin we are using for the PWM and declares our variables. Note that line 3 sets the system voltage. This value should be measured on your MCU’s Vcc pin.
  • Lines 11 and 12 set up the PWM at the required frequency.
  • Lines 15 and 16 set up the on-board comparator we are using.
  • Line 18 initializes the serial port we will print the results on.
  • Line 22 is where the main code starts. First, we initialize some variables each time to begin a binary search.
  • Line 29 we begin the 8-step binary search and line 30 sets the duty cycle for the PWM. A 15-millisecond delay is then introduced to allow for the low-pass filter to settle.
  • Line 34 is the “small twist” hinted at above. This introduces a second, random, delay between 0 and 31 microseconds. This is included because the PWM ripple that is present, after the filter, is correlated to the 16-MHz MCU’s clock so, to assist in filtering this out of our final reading, we inject this delay to break up the correlation.
  • Lines 37 and 38 will check the comparator after the delay is implemented. Depending on the comparison check, the range for the next PWM duty cycle is adjusted.
  • Line 40 calculates the new PWM duty cycle within this new range. The code then loops 8 times to complete the binary search.
  • Lines 43 and 44 calculate the voltage for the current instantaneous voltage reading as well as a filtered average voltage reading. This voltage averaging is accomplished using a very simple IIR filter.
  • Lines 46-51 send the information to the Arduino serial monitor for display.
1  #define PWMpin 11             // pin 11 is D11
2
3  float systemVoltage = 4.766;  // Actual voltage powering the MCU for calibrating printed output voltage
4  float ADCvoltage = 0;         // Final discovered voltage
5  float ADCvoltageAve = 0;      // Final discovered voltage averaged
6  uint8_t currentPWMnum = 0;    // Number sent to the PWM to generate the requested voltage
7  uint8_t minPWMnum = 0;
8  uint8_t maxPWMnum = 255;
9
10 void setup() {
11   pinMode(PWMpin, OUTPUT);                     // Set up PWM for output
12   TCCR2B = (TCCR2B & B11111000) | B00000001;   // Set timer 2 prescaler so the D11 PWM frequency is 31372.55 Hz
13
14   // Set up comparator
15   ADCSRB = 0b01000000;   // (Disable) ACME: Analog Comparator Multiplexer disabled
16   ACSR = 0b00000000;     // Enable AIN0 and AIN1 comparison with interrupts disabled
17
18   Serial.begin(9600);    // Open the serial port at 9600 bps
19 }
20
21
22 void loop() {
23
24   currentPWMnum = 127;   // Start binary search at the halfway point
25   minPWMnum = 0;
26   maxPWMnum = 255;
27
28   // Perform a binary search for matching comparator setting
29   for (int8_t i = 0; i < 8; i++) {        // Loop 8 times
30     analogWrite(PWMpin, currentPWMnum);   // Adjust PWM to new duty-cycle setting
31
32     // Now wait
33     delay(15);                            // Wait 15 ms to let the low-pass filter settle
34     delayMicroseconds(random(0,32));      // Delay a random number of microseconds (0 thru 31) to break possible correlation (dithering)
35
36     // Check to see if comparator shows AIN0 > AIN1 (if so, ACO in ACSR is set to 1)
37     if (ACSR & (1<<ACO)) maxPWMnum = currentPWMnum;   // (AIN0 > AIN1) Move max pointer
38     else minPWMnum = currentPWMnum;                   // Move min pointer
39
40     currentPWMnum = minPWMnum + ((maxPWMnum - minPWMnum) / 2);   // Set new test number to the middle of PWMmin and PWMmax
41   }
42
43   ADCvoltage = systemVoltage * ((float)currentPWMnum/255);        // Convert the binary search result to a voltage (assumes 0 to 5 V signal)
44   ADCvoltageAve = (ADCvoltageAve * 0.95) + (ADCvoltage * 0.05);   // Generate an average value to smooth reading
45
46   Serial.print("PWM Setting = ");
47   Serial.print(currentPWMnum);
48   Serial.print(" ADC Voltage = ");
49   Serial.print(ADCvoltage, 4);
50   Serial.print(" ADC Voltage Filtered = ");
51   Serial.println(ADCvoltageAve, 4);
52 }

Test results

The first step was to measure the system voltage on the +5-V pin of the Arduino Nano. This value (4.766 V) was entered on line 3 of the code. I then ran the code on an Arduino Nano V3 and monitored the output on the Arduino serial monitor. To test the code and system, I first connected a 2.5-V reference voltage to the signal input. This reference was first warmed up, and a voltage reading was taken on a calibrated 5½-digit DMM. The reference read 2.5001 V. The serial monitor showed an instantaneous voltage varying from 2.5232 to 2.4858 V, while the average voltage varied from 2.5061 to 2.5074 V. This is around 0.9% error in the instantaneous voltage reading and about 0.3% on the averaged voltage reading. This shows we are getting a reading with about ±1 LSB error in the instantaneous voltage reading and a filtered reading within about ±0.4 LSB. When inputting various other voltages I got similar accuracies.

I also tested with an input of Vcc (4.766 V) and saw results of 4.7473 V, which means the circuit works very close to the upper rail. With the input grounded, the instantaneous and filtered voltages both showed 0.000 V.

This seems to be a very good result for an ADC created by adding two inexpensive parts.

So next time you’re short of ADCs give this a try. The cost is negligible, PCB space is very minimal, and the code is small and easy to understand.

Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.

Phoenix Bonicatto is a freelance writer.

Related Content


The post Need an extra ADC? Add one for a few cents appeared first on EDN.

Dual resonance revisited

Tue, 10/29/2024 - 15:39

A method of measuring both the inductance and the paralleled capacitance of a transformer winding was presented some years ago at this URL. That essay should be read to best follow the thesis of this one.

That essay’s target devices were high voltage transformers whose secondary windings and secondary loadings from long Cockcroft-Walton voltage multipliers yielded so much paralleled capacitance across the primary windings that conventional LC test instruments were rendered useless for making transformer winding inductance measurements. Instead, dual resonance testing methods overcame that problem.

It should be noted that the above essay referred to measurements being made on ferrite core transformers, but the method is also applicable to low-frequency transformers with laminated steel cores, such as the device in Figure 1. Although iron core properties are known to change markedly versus excitation level, the measurement technique itself is not inherently limited to ferrite core transformers, a fact demonstrated by the following example.

Figure 1 A line frequency power transformer with laminated steel cores.

Test results for this iron core transformer were analyzed using the same GWBASIC code as before and can be seen in Figure 2.

Figure 2 The test results for the line frequency transformer in Figure 1.

Averaging the multiple readings, the primary winding apparently exhibits 338 mHy, the full secondary exhibits 70.7 mHy, one side of the secondary exhibits 17.1 mHy, while the other side of the secondary exhibits 17.3 mHy; thus revealing some imbalance between two otherwise identical windings. But, with leakage inductance and measurement tolerances, nothing is ever perfect, right?

The shunt capacitance calculation results, which are very small and even negative, tell us that the shunt capacitance of this transformer is essentially negligible.

Just as a quick check, 70.7 / 17.1 = 4.134 and 70.7 / 17.3 = 4.087, which are close to, but somewhat above, the 4.000 ratio that would nominally apply for the 2:1 turns ratio.
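Since winding inductance scales with the square of the turns count, those inductance ratios imply effective turns ratios of roughly 2.03 and 2.02 against the nominal 2:1. The short sketch below simply reruns that arithmetic from the averaged readings above:

// Quick check: L is proportional to N^2, so the effective turns ratio is the
// square root of the measured inductance ratio (readings from the text above).
#include <cstdio>
#include <cmath>

int main() {
    const double Lfull  = 70.7;   // full secondary, mHy
    const double Lhalf1 = 17.1;   // one half of the secondary, mHy
    const double Lhalf2 = 17.3;   // other half of the secondary, mHy

    std::printf("half 1: L ratio %.3f -> turns ratio %.3f\n",
                Lfull / Lhalf1, std::sqrt(Lfull / Lhalf1));   // 4.134 -> 2.033
    std::printf("half 2: L ratio %.3f -> turns ratio %.3f\n",
                Lfull / Lhalf2, std::sqrt(Lfull / Lhalf2));   // 4.087 -> 2.022
    return 0;
}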

My suspicion is that we were seeing the effects of leakage inductances.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related Content


The post Dual resonance revisited appeared first on EDN.

A PoE injector with a “virtual” usage precursor

Mon, 10/28/2024 - 16:40

Back in August 2021, I did a teardown of an Ubiquiti Networks power-over-Ethernet (PoE) injector, following up on the dissection of a TRENDnet PoE-supportive powerline networking adapter set from three years earlier. I did detailed PoE technology overviews in both of those, which I’m not going to substantially replicate here in the interest of brevity. Suffice it to say that:

  • There are multiple variants of PoE technology, some multiplexing DC voltage on the same wires that carry AC data transitions and others (specifically for 10/100 Mbps Ethernet) leveraging otherwise-unused wires-and-pins for DC transmission. i.e.:

The means by which power is carried within a 10/100 Mbps Ethernet cable also varies: with so-called “Mode A,” the power delivery takes place over the same 1-2 and 3-6 pairs used for data, whereas “Mode B” uses “spare” pairs 4-5 and 7-8. With Gigabit Ethernet, which employs all four pairs of wires for data, merging data and DC power over the same wires is the only option.

  • Even within a particular PoE implementation “flavor,” a diversity of input (at source) and output (at “sink”) voltage combinations exists in the marketplace, making multi-vendor and (even within a common supplier’s product line) multi-product interoperability difficult at best and more likely a non-starter.

If your PoE injector (or PoE support-inclusive router or switch source device, for that matter) implements one mode and your PoE splitter (or PoE support-inclusive remote device) implements another, your only recourse is a frustrating return of one or both devices to your retailer. Voltage and current incompatibilities between source and destination can also result in a product return (not to mention, potentially “fried” gear).

And regarding network topology node naming, excerpting from that premiere 2018 writeup:

If power is added to the network connection at a PoE-cognizant router, switch, or adapter source, that particular power sourcing equipment (PSE) variant is referred to as an endspan or endpoint. Otherwise, if power is added in-between the router/switch and remote client, such as via an appropriately named PoE injector, that device is known as a midspan. A PoE-cognizant remote client is called a powered device (PD); a PoE splitter can alternatively provide separate power and data connections to a non-PoE-supportive LAN client.

For more background details, please see the earlier powerline-plus-PoE adapter and PoE injector writeups.

What we’re looking at today is another PoE injector, the TP-DCDC-2USB-48 from Tycon Systems:

Here’s the sticker on the baggie my unit came in (absent the USB cables shown in the earlier stock photos):

Why the seeming teardown-device redundancy? Part of the answer comes from the respective product names. The Ubiquiti Networks injector was specifically the model 24-24W-G-US:

As I’ve found is commonly the case with PoE products, the output voltage (for an injector, or alternately the input voltage for a splitter) is embedded within the product name. Specifically, in the Ubiquiti Networks 24-24W-G-US case:

Here’s the “decoder ring” for the product name: The first “24” indicates that it outputs 24 V over the Ethernet connection; the following “24W” means what it says—24 W, alternatively indicating that the unit outputs 1 A max current; “G” means that it supports GbE connections; and “US” means that the power cord has a US-compatible NEMA 5-15 wall outlet connection.

Note, too, that since this device supports GbE (whether it actually delivers GbE over powerline is a different matter), “With Gigabit Ethernet, which employs all four pairs of wires for data, merging data and DC power over the same wires is the only option” applies.

If I apply a “decoder ring” to the Tycon Systems TP-DCDC-2USB-48, conversely, what do I get? Honestly, skimming through the company’s injector (which it also refers to in various places as “inserter”) product line web pages, I can’t discern an obvious pattern. And unlike with Ubiquiti Networks, there isn’t explicit documentation to assist me with the code decipher.

That said, the “DCDC” portion might mean that we have an injector that not only outputs DC voltage but also inputs it (versus, say, an injector with an integrated AC-to-DC power supply, which would conceivably be an “ACDC” variant). Specifically, as you may have already noticed from the stock photos, it takes its input voltage from the dual 5V 1.4A (each) USB-A connectors on one side, presumably explaining the “2USB” portion of the product name. And, unlike the 24V/24W (1A)-output Ubiquiti Networks device, this Tycon Systems one outputs 48V (at 12W, therefore 0.25A) DC, therefore—duh—the “48” in the name. But that all said, both the Ubiquiti Networks and Tycon Systems devices are passive, not active, meaning that they provide a fixed output voltage; there is no upfront negotiation as to what the powered device on the other end of the Ethernet strand needs.

As another key rationale for revisiting the “PoE injector teardown” theme, I’ll in-advance share with you one of the product photos, that of the underside (as usual accompanied by a 0.75″/19.1 mm diameter U.S. penny for size comparison purposes):

Again, recall that the Ubiquiti Networks device was GbE-cognizant, therefore using all eight Ethernet wires for data, so DC voltage multiplexing was a necessity. The even earlier analyzed TRENDnet PoE-supportive powerline networking setup wasn’t GbE-capable, even in theory, so it theoretically had four spare wires available for DC voltage purposes (presumably generated by an AC/DC converter inside the adapter). Nevertheless, it also was a multiplexed data-and-DC device, in this case the earlier discussed 10/100 Mbit Ethernet “Mode A”. In contrast, from the photo you’ll note that this device is “Mode B”, completing the implementation option variety.

The final notable revisit rationale was fundamentally curiosity-fueled. I’d found the Tycon Systems TP-DCDC-2USB-48 for sale used at Lensrentals, a photo (mostly video) gear retailer that I’ve mentioned (and bought stuff from multiple times) before. The device’s listed price was only $22; at the time (November 2023) an additional 15%-off promotion made it even more economical ($18.70 plus tax). But what on earth was a PoE injector doing for sale at a photo equipment retailer’s website? This final TP-DCDC-2USB-48 stock photo provides a clue:

Even more telling, after a bit of revealing research, is the seemingly cryptic “VR – Orah PoE Injector” notation on the Lensrental website product page for the Tycon Systems TP-DCDC-2USB-48. It refers to the Orah 4i, a four-fisheye-camera plus multi-mic setup for live-streaming 360° spherical virtual reality:

The A/V capture module tethers to an Intel- and NVIDIA-based mini-PC “box” that stitches together the various sources’ A/V data before network broadcasting the result. And speaking of networks—specifically, Ethernet cables—some of you have probably already guessed how the mini-PC and cameras-plus-mics module connected…over PoE-augmented Ethernet, via the injector on the “stitching box” side and directly powering the capture module on the other end.

1,000 (hopefully educational) words in, let’s get to tearing down, shall we? You’ve already seen my device’s underside; here are views from other perspectives. Top-side block diagram first; gotta love that “Ethent” spelling variant of “Ethernet”, eh?

The two bare sides:

The input voltage-and-data end:

And the unified voltage-plus-data output end:

The seam along all four sides wasn’t glued down, but something else was still holding the two halves together:

I found it when I peeled away the underside sticker:

That’s better:

Note the sizeable ground plane and other thick traces on the PCB underside, similar to those encountered with the Ubiquiti Networks PoE injector three years back:

The PCB pops right out of the top with no additional screws to be removed first:

At the far right is the power LED, with the Ethernet connector above it and the dual USB power inputs below. Two inductors along the bottom, one of them toroidal. The PoE connector is on the left edge. Two more inductors in the upper left corner (one again toroidal), with two capacitors in-between them. At the top is a Linear Technology (now Analog Devices) LT1619 low voltage current mode PWM controller for DC/DC conversion purposes.

And what of that heat sink to the right of the PoE connector? Glad you asked:

It’s normally held in place by glue (to the PoE connector) to one side and a thermal pad underneath. And below it are two ICs: an On Semiconductor FDS86140 small signal MOSFET and, to one side (and therefore only partially attached to the thermal pad) a chip marked:

CSP
10S100S
citc

and a “D1” mark on the PCB alongside, which I’m guessing is a 3-lead Schottky diode (readers?).

In conclusion, here are some PCB side-view shots for your perusal:

That’s what I’ve got for today. I’m now going to try to meaningfully reattach the heat sink and otherwise return the TP-DCDC-2USB-48 to full functionality. Why? The spec sheet says it best:

This USB powered PoE injector is a must for any technician’s toolbox because it allows powering …Passive PoE products from a laptop’s USB port. The 48VDC passive PoE model is perfect for powering IP Phones, Cameras and other devices that use 48VDC PoE. This allows the technician to quickly test the device by direct connection to his laptop, saving him a lot of time. The TP-DCDC-2USB-xx can also be used in customer premise equipment to power external wireless gear from a customer’s computer USB ports, reducing wiring clutter at the PC and allowing the wireless gear to be powered down when the computer is powered down to conserve energy.

As always, I welcome reader thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post A PoE injector with a “virtual” usage precursor appeared first on EDN.
