EDN Network

Subscribe to the EDN Network feed
Voice of the Engineer
URL: https://www.edn.com/
Updated: 1 hour 29 min ago

A pitch-linear VCO, part 2: taking it further

Tue, 03/11/2025 - 15:58

Editor’s Note: This DI is a two-part series.

Part 1 shows how to make an oscillator with a pitch that is proportional to a control voltage.

Part 2 shows how to modify the circuit for use with higher supply voltages, implement it using discrete parts, and modify it to closely approximate a sine wave.

In Part 1, we saw how to make an oscillator whose pitch, as opposed to frequency, can be made proportional to a control voltage. In this second part, we’ll look at some alternative ways of arranging things for other possible applications.

Wow the engineering world with your unique design: Design Ideas Submission Guide

To start with, Figure 1 shows a revised version of the basic circuit, built with B-series CMOS to allow rail voltages of up to 18 or 20 V rather than the nominal 5 V of the original.

Figure 1 A variant on Part 1’s Figure 2, allowing operation with a supply of up to 20 V.

Apart from U2’s change from a 74HC74 to a CD/HEF4013B, the main difference is in U1. With a 12 V rail, TL062/072/082s and even LM358s and MC1458s all worked well, as did an LM393 comparator with an output pull-up resistor. The control voltage’s span increases with supply voltage, but remains at ~±20% of Vs. Note that because we’re only sensing within that central portion, the restricted input ranges of those devices were not a problem.

Something that was a problem, even with the original 5-V MCP6002, was a frequent inability to begin oscillating. Unlike the 74HC74, a 4013 has active-high R and S inputs, so U1a’s polarity must be flipped. U1a tends to start up with its output high, which effectively locks U2a into an all-1s condition, forcing Q1 permanently on. That explains the need for R5/C5/Q2. If (when!) the sticky condition occurs, Q2 will turn on, shorting C2 so that Q1 can turn off and oscillation can commence. A reverse diode across R5 proved unnecessary at the low frequencies involved.

This could also be built using the extra constant-current sink, shown in Part 1’s Figure 4, but then U1 would need to have rail-to-rail inputs.

A version that lacks any logic

This is an extension of the first version that I tried, which was built without logic ICs. It’s neat and works, but U1a could only output pulses, which needed stretching to be useful. (Using a flip-flop guaranteed the duty cycle, while the spare section, used as a monostable, generated much better-defined reset pulses.) The circuit shown in Figure 2 works around this and can be built for pretty much any rail voltage you choose, as long as U1 and the MOSFETs are chosen appropriately.

Figure 2 This all-discrete version (apart from the op-amps) uses a second section to produce an output having a duty cycle close to 50%.

U1b’s circuitry is a duplicate of U1a’s but with half the time-constant. It’s reset in the same way and its control voltage is the same, so its output pulses have half the width of a full cycle, giving a square wave (or nearly so). Ideally, Q1 and Q3 should be matched, with C3 exactly half of C1 rather than the practical 47n shown. R7 is only necessary if the rail voltage exceeds the gate-source limits for Q1/3. (ZVP3306As are rated at 20 V max.)

Purity comes from overclocking a twisted ring

The final variation—see Figure 3—goes back to using logic and has a reasonably sinusoidal output, should you need that.

Figure 3 Here the oscillator runs 16 times faster than the output frequency. Dividing the pulse rate down using a twisted-ring counter with resistors on its 8 outputs gives a stepped approximation to a sine wave.

The oscillator itself runs at 16 times the output frequency. The pulse-generating monostable multivibrator (MSMV) now uses a pair of cross-coupled gates, and not only feeds Q1 but also clocks an 8-bit shift register (implemented here as two 4-bit ones), whose final output is inverted and fed back to its D input. That’s known as a twisted-ring or Johnson counter and is a sort of digital Möbius band. As the signal is shifted past each Q output, it has 8 high bits followed by 8 low ones, repeated indefinitely. U2c not only performs the inversion but also delivers a brief, solid high to U3a’s D input at start-up to initialize the register.
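To make the sequence concrete, here is a minimal Python sketch (mine, not the author’s) of an 8-bit Johnson counter; the inverted feedback from the last stage to the D input matches the description above.

  # 8-bit Johnson (twisted-ring) counter: shift one place, feeding the
  # inverted last bit back in. The 16-state cycle is 8 high bits
  # followed by 8 low ones, repeating indefinitely.
  reg = [0] * 8
  for step in range(16):
      print(step, reg)
      reg = [1 - reg[-1]] + reg[:-1]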

U2 and U3 are shown as high-voltage CMOS parts to allow for operation at much more than 5 V. Again, U1 would then need changing, perhaps to a rail-to-rail input (RRI) part if the extra current source is added. 74HC132s and 74HC4015s (or ’HC164s) work fine at ~5 V.

The Q outputs feed a common point through resistors selected to give an output which, though stepped, is close to a sine wave, as Figure 4 should make clear. R4 sets the output level and C4 provides some filtering. (Different sets of resistors can give different tone colors. For example, if they are all equal, the output, albeit stepped, will be a good triangle wave.)

Figure 4 Waveforms illustrating the operation of Figure 3’s circuit when it’s delivering ~500 Hz.
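One plausible way to derive such resistor values (an assumption on my part; the article gives only the schematic): output k of the Johnson counter toggles at phase k·π/8, so give it a conductance proportional to the sine’s slope there, and the summed steps then approximate a sine wave.

  import math

  # Hypothetical weighting for the 8 summing resistors; these are not
  # necessarily the author's values. Steeper slope -> lower resistance.
  N = 8
  slopes = [abs(math.cos((k + 0.5) * math.pi / N)) for k in range(N)]
  r_rel = [max(slopes) / s for s in slopes]  # relative resistances
  print([round(r, 2) for r in r_rel])
  # -> [1.0, 1.18, 1.77, 5.03, 5.03, 1.77, 1.18, 1.0]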

The steps correspond to the 15th and 17th harmonics, which, though somewhat filtered by C4/R4, are still at ~-45 dB. To reduce them, add a simple two-pole Sallen–Key filter, like that in Figure 5, which also shows the filtered spectrum for an output of around 500 Hz.

Figure 5 A suitable output filter for adding to Figure 3, and the resulting spectrum.
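The standard unity-gain Sallen–Key relations are easy to check numerically; the component values below are placeholders, not Figure 5’s actual R14/R15 and capacitors.

  import math

  # Unity-gain Sallen-Key low-pass: corner frequency and Q.
  R1, R2 = 10e3, 10e3      # ohms (hypothetical)
  C1, C2 = 22e-9, 10e-9    # farads (hypothetical)
  f_c = 1 / (2 * math.pi * math.sqrt(R1 * R2 * C1 * C2))
  Q = math.sqrt(R1 * R2 * C1 * C2) / ((R1 + R2) * C2)
  print(f"f_c = {f_c:.0f} Hz, Q = {Q:.2f}")  # ~1073 Hz, Q ~ 0.74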

The 2nd and 3rd harmonics are still at around -60 dB, but the others are now well below -70 dB, so we can claim around -57 dB or 0.16% THD, which will be worse at 250 Hz and better at 2 kHz. This approach won’t work too well if you want the full 4–5-octave span (extra current sink) unless the filter is made tunable: perhaps a couple of resistive opto-isolators combined with R14/15, driven by another voltage-controlled current source?
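As a sanity check on that THD figure (the harmonic levels are read approximately from the spectrum, so this is only indicative):

  import math

  # THD from per-harmonic levels: root-sum-square of amplitude ratios.
  levels_db = [-60, -60, -70, -70, -70]  # 2nd, 3rd, plus a few higher ones
  thd = math.sqrt(sum(10 ** (db / 10) for db in levels_db))
  print(f"THD = {thd * 100:.2f}% = {20 * math.log10(thd):.0f} dB")
  # -> THD = 0.15% = -56 dB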

All that is interesting, but rather pointless. After all, the main purpose of this design idea was to make useful audible tones, not precision sine waves, which sound boring anyway. But a secondary purpose should be to push things as far as possible, while having fun experimenting!

A musical coda

Given a pitch-linear tone source, it seemed silly not to try to make some kind of musical thingy using a tappable linear resistance. A couple of feet, or about 10 kΩ’s worth, of Teledeltos chart paper (which I always knew would come in handy, as the saying goes) wrapped round a length of plastic pipe with a smooth, shiny croc clip for the tap or slider (plus a 330k pull-down) worked quite well, allowing tunes to be picked out as on a Stylophone or an air guitar. Electro-punk lives! Though it’s not so much “Eat your heart out, Jimi Hendrix” as “Get those earplugs in”.

Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.


How controllers tackle storage challenges in AI security cameras

Tue, 03/11/2025 - 09:08

Visual security systems have evolved enormously since the days of infrared motion detectors and laser tripwires. Today, high-definition cameras stream video into local vision-processing systems. These AI-enabled surveillance cameras detect motion, isolate and identify objects, capture faces, expressions, and gestures, and may even infer the intent of people in their field of view. They record interesting videos and forward any significant events to a central security console.

Integrating AI capabilities transforms security cameras into intelligent tools to detect threats and enhance surveillance proactively. Intent inference, for example, allows security cameras to quickly predict suspicious behavior patterns in crowds, retail stores, and industrial facilities. Case in point: AI-enabled cameras can detect unattended packages, license plates, and people in real time and report them to security personnel.

According to a report from Grandview Research, due to the evolving use of AI technology and growing security concerns, the market for AI-enabled security cameras is projected to grow at a CAGR of over 18% between 2024 and 2032. This CAGR would propel the market from $7.55 billion in 2023 to $34.2 billion in 2032.

 

The need for compute power

Increasing sophistication demands growing computing power. While that antique motion sensor needed little more than a capacitor and a diode, real-time object and facial detection require a digital signal processor (DSP). Advanced inferences such as expression or gesture recognition need edge AI: compact, low-power neural-network accelerators.

Inferring intent may be a job for a small-language model with tens or hundreds of millions of parameters, demanding a significantly more powerful inference engine. Less obviously, this growth in functionality has profound implications for the security camera’s local non-volatile storage subsystem. Storage capacity, performance, reliability, and security have all become essential issues.

Storage’s new role

In most embedded systems, the storage subsystem’s role is simple. It provides a non-volatile place to keep code and parameters. When the embedded system is initialized, the information is transferred to DRAM. In this use model, reading happens only on initialization and is not particularly speed sensitive. Writing occurs only when parameters are changed or code is updated and is, again, not performance sensitive.

The use case for advanced security cameras is entirely different. The storage subsystem will hold voluminous code for various tasks, the massive parameter files for neural network models, and the continuously streaming compressed video from the camera.

To manage energy consumption, designers may shut down some processors and much of the DRAM until the camera detects motion. This means the system will load code and parameter files on demand—in a hurry—just as it begins to stream video into storage. So, both latency and transfer rate are essential.

In some vast neural network models, the storage subsystem may also have to hold working data, such as the intermediate values stored in the network’s layers or parameters for layers not currently being processed. This will result in data being paged in and out of storage and parameters being loaded during execution—a very different use model from static code storage.

Storage meeting new needs

Except in scale, the storage use model in these advanced security cameras resembles what goes on in an AI-tuned data center more than it does a typical embedded system. This difference will impose new demands on the camera’s storage subsystem hardware and firmware.

The primary needs are increased capacity and speed. This responsibility falls first upon the NAND flash chips themselves. Storage designers use the latest multi-level and quad-level, stacked-cell NAND technology to get the capacity for these applications. And, of course, they choose chips with the highest speeds and lowest latencies.

However, fast NAND flash chips with terabit capacity can only meet the needs of security-camera applications if the storage controller can exploit their speed and capacity and provide the complex management and error correction these advanced chips require.

Let’s look at the storage controller, then. The controller must support the read-and-write data rates the NAND chips can sustain. And it must handle the vast address spaces of these chips. But that is just the beginning.

Storage controller’s tasks

Error correction in NAND flash technology is vital. Soft error rates and the deterioration of the chips over time make it necessary to have powerful error correction code (ECC) algorithms to recover data reliably. Just how important, however, depends on the application. A wrong pixel or two in a recorded video may be inconsequential. Neural network models can be remarkably tolerant of minor errors.

But a bad bit in executable code can turn off a camera and force a reboot. A wrong most significant bit (MSB) in a parameter at a critical point in a neural network model, especially for small-language models, can result in an incorrect inference. So, a mission-critical security camera needs powerful, end-to-end error correction. The data arriving at the system DRAM must be precisely what was initially sent to the storage subsystem.
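A quick illustration of the point (my example, using generic weight formats rather than anything camera-specific):

  import struct

  # Flip the top bit of an int8 weight: the sign and magnitude change.
  w = 53                                    # example int8 weight
  flipped = ((w ^ 0x80) + 128) % 256 - 128  # flip bit 7, reinterpret as signed
  print(w, "->", flipped)                   # 53 -> -75

  # Flip an exponent bit of a float32 weight: the value explodes.
  bits = struct.unpack("I", struct.pack("f", 0.5))[0] ^ (1 << 30)
  print(0.5, "->", struct.unpack("f", struct.pack("I", bits))[0])  # ~1.7e38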

This requirement becomes particularly interesting for advanced NAND flash chips. Each type of chip—each vendor’s process, number of logic levels per cell, and number of cells in a stack—will have its own error syndromes. Ideally, the controller’s ECC algorithms will be designed for the specific NAND chips.

Aging is another issue—flash cells wear out with continued reading and writing. However, as we have seen, security cameras may almost continuously read and write storage during the camera’s lifetime. That is the worst use case for ultra-dense flash chips.

To make matters more complex, cameras are often mounted in inaccessible locations and frequently concealed, so frequent service is expensive and sometimes counterproductive (Figure 1). The video they record may be vital for safety or law-enforcement authorities long after it is recorded, so degradation over time would be a problem.

Figure 1 Managing flash cell endurance is an essential issue since cameras are often mounted in inaccessible locations. Source: Silicon Motion

The controller’s ability to distribute wear evenly across the chips, scrub the memory for errors, and apply redundant array of independent disks (RAID)-like techniques to correct errors translates into system reliability and lower total cost of ownership.
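The wear-leveling idea itself reduces to a simple rule, sketched below as a toy model (not actual controller firmware): direct each new write to the free block with the fewest erase cycles.

  # Toy wear-leveling loop: erase counts stay nearly uniform because
  # every write targets the least-worn block.
  erase_counts = {blk: 0 for blk in range(8)}
  for _ in range(100):
      blk = min(erase_counts, key=erase_counts.get)  # least-worn block
      erase_counts[blk] += 1                         # program/erase it
  print(erase_counts)  # all counts within 1 of each other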

Cameras also face deliberate threats, from physical tampering to cyberattacks, and the storage controller must be forearmed against them. Provisions should be made for fast checkpoint capture, read/write locking of the flash array, and a quick, secure erase facility in case of power loss or physical damage. To blunt cyberattacks, the storage subsystem must have a secure boot process, access control, and encryption.

A design example

To appreciate the level of detail involved in this storage application, we can focus on just one feature: the hybrid zone. Some cells in a multi-level or quad-level NAND array can be operated to store only a single bit of data instead of two or four bits. Regions of cells used in this single-level mode are called hybrid zones. They have significantly shorter read and write latencies than if they were being used to store multiple bits per cell.

The storage controller can use this feature in many ways. It can store code here for fast loading, such as boot code. It can store parameters for a neural network model that must be paged into DRAM on demand. For security, the controller can use a hybrid zone to isolate sensitive data from the access method used in the rest of the storage array. Or the controller can reserve a hybrid zone for a fast dump of DRAM contents in case of system failure.

Figure 2 Here is how the FerriSSD controller offers a hybrid zone, the unique capability of partitioning a single NAND die into separate single-level cell (SLC) and multi-level cell/3D triple-level cell (MLC/TLC) zones. Source: Silicon Motion
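The trade-off is capacity: every cell run as SLC gives up its extra bits. A back-of-envelope example (the die and zone sizes are assumed, not Silicon Motion figures):

  # Capacity cost of carving an SLC hybrid zone out of a TLC die:
  # each SLC byte occupies cells that could have stored 3 bytes.
  die_tlc_gb = 256       # hypothetical TLC die capacity
  slc_zone_gb = 4        # desired fast hybrid zone
  remaining_tlc_gb = die_tlc_gb - 3 * slc_zone_gb
  print(remaining_tlc_gb)  # 244 GB left in normal TLC mode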

The hybrid zone’s flexibility ultimately supports diverse storage needs in multi-functional security systems, from high-speed data access for real-time tasks such as authentication to secure storage for critical archived footage.

Selecting storage for security cameras

Advanced AI security cameras require a robust storage solution for mission-critical AI video surveillance applications. Below is an example of how a storage controller delivers enterprise-grade data integrity and reliability using ECC technology.

Figure 3 This is how a storage controller optimizes the choice of ECC algorithms. Source: Silicon Motion

The storage needs of advanced security cameras go far beyond the straightforward code and parameter storage of simpler embedded systems. They increasingly resemble the requirements in cloud storage systems and require SSD controllers with error correction, reliability, and security features.

This similarity also places great importance on the controller vendor’s experience in both power-conscious edge environments and high-end AI cloud environments, and on its intimate relationships with NAND flash vendors.

Lancelot Hu is director of product marketing for embedded and automotive storage at Silicon Motion.


Dead Lead-acid Batteries: Desulfation-resurrection opportunities?

Mon, 03/10/2025 - 18:04

Back in November 2023, I told you about how my 2006 Jeep Wrangler Unlimited Rubicon:

had failed (more accurately, not completed) its initial emissions testing the year before (October 2022) because it hadn’t been driven substantively in the prior two years and its onboard diagnostic system therefore hadn’t completed a self-evaluation prior to the emissions test attempt. Thankfully, after driving the vehicle around for a while, aided by mechanics’ insights, online info and data sourced from my OBD-II scanner, the last stubborn self-test (“oxygen sensor heater”) ran and completed successfully, as did my subsequent second emissions test attempt.

The battery, which I’d just replaced two years earlier in September 2020, had been disconnected for the in-between two-year period, not that keeping it connected would have notably affected the complications-rife outcome; even with the onboard diagnostic system powered up, the vehicle still needed to be driven in order for self-evaluation tests to run. This time, I vowed, I’d be better. I’d go down to the outdoor storage lot, where the Jeep was parked, every few weeks and start and drive it some. And purely for convenience reasons, I kept the battery connected this time, so I wouldn’t need to pop the hood both before and after each driving iteration.

I bet you know what happened next, don’t you? How’s that saying go…”the road to hell is paved with good intentions”? Weeks turned into months, months turned into years, and two years later (October 2024) to be exact, I ended up with not only a Jeep whose onboard diagnostics system tests had expired again, but one whose battery looked like this:

Here it is in the cart at Costco, after my removal of it from the engine compartment and right before I replaced it with another brand-new successor:

I immediately replaced it primarily for expediency reasons; it’s somewhat inconvenient to get to the storage lot (which is why my prior aspirations had come to naught), and given that I already knew I had some driving to do before it’d pass emissions (not to mention that my deadline for passing emissions was drawing near), I didn’t want to waste time messing around with trying to revive this one. But I was nagged afterwards by curiosity; could I have revived it? I decided to do some research, and although in my case the answer was likely still no (given just how drained it was, and for how long it’d been in this degraded condition), I learned a few things that I thought I’d pass along.

First off: what causes a (sealed, in my particular case) lead-acid (SLA) battery to fail in the first place? Numerous reasons exist, but for the purposes of this particular post topic, I’m going to focus on just one: sulfation. With as-usual upfront thanks to Wikipedia for the concise but comprehensive summary that follows:

Lead–acid batteries lose the ability to accept a charge when discharged for too long due to sulfation, the crystallization of lead sulfate. They generate electricity through a double sulfate chemical reaction. Lead and lead dioxide, the active materials on the battery’s plates, react with sulfuric acid in the electrolyte to form lead sulfate. The lead sulfate first forms in a finely divided, amorphous state and easily reverts to lead, lead dioxide, and sulfuric acid when the battery recharges. As batteries cycle through numerous discharges and charges, some lead sulfate does not recombine into electrolyte and slowly converts into a stable crystalline form that no longer dissolves on recharging. Thus, not all the lead is returned to the battery plates, and the amount of usable active material necessary for electricity generation declines over time.

And specific to my rarely used vehicle situation:

Sulfation occurs in lead–acid batteries when they are subjected to insufficient charging during normal operation. It also occurs when lead–acid batteries are left unused with incomplete charge for an extended time. It impedes recharging; sulfate deposits ultimately expand, cracking the plates and destroying the battery. Eventually, so much of the battery plate area is unable to supply current that the battery capacity is greatly reduced. In addition, the sulfate portion (of the lead sulfate) is not returned to the electrolyte as sulfuric acid. It is believed that large crystals physically block the electrolyte from entering the pores of the plates. A white coating on the plates may be visible in batteries with clear cases or after dismantling the battery. Batteries that are sulfated show a high internal resistance and can deliver only a small fraction of normal discharge current. Sulfation also affects the charging cycle, resulting in longer charging times, less-efficient and incomplete charging, and higher battery temperatures.

Okay, but what if I just kept the battery disconnected, as I’d been doing previously? That should be enough to prevent sulfation-related degradation, since there’d then be no resulting current flow through the battery, right? Nope:

Batteries also have a small amount of internal resistance that will discharge the battery even when it is disconnected. If a battery is left disconnected, any internal charge will drain away slowly and eventually reach the critical point. From then on the film will develop and thicken. This is the reason batteries will be found to charge poorly or not at all if left in storage for a long period of time.

I also found this bit, both on how battery chargers operate and how sulfation adversely affects this process, interesting:

Conventional battery chargers use a one-, two-, or three-stage process to recharge the battery, with a switched-mode power supply including more stages in order to fill the battery more rapidly and completely. Common to almost all chargers, including non-switched models, is the middle stage, normally known as “absorption”. In this mode the charger holds a steady voltage slightly above that of a full battery, in order to push current into the cells. As the battery fills, its internal voltage rises towards the fixed voltage being supplied to it, and the rate of current flow slows. Eventually the charger will turn off when the current drops below a pre-set threshold.

A sulfated battery has higher electrical resistance than an unsulfated battery of identical construction. As related by Ohm’s law, current is the ratio of voltage to resistance, so a sulfated battery will have lower current flow. As the charging process continues, such a battery will reach the charger’s preset cut-off more rapidly, long before it has had time to accept a complete charge. In this case the battery charger indicates the charge cycle is complete, but the battery actually holds very little energy. To the user, it appears that the battery is dying.
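To put rough numbers on that (the internal resistances here are illustrative guesses, not measurements):

  # Absorption-stage current by Ohm's law: I = (V_charger - V_battery) / R.
  # A sulfated battery's high internal resistance can put the current
  # below the charger's cutoff threshold almost immediately.
  v_chg, v_batt, cutoff_a = 14.4, 13.8, 0.5
  for r_ohm in (0.02, 1.5):  # healthy vs. badly sulfated (assumed values)
      i = (v_chg - v_batt) / r_ohm
      print(f"R = {r_ohm} ohm -> {i:.1f} A",
            "(below cutoff: charger quits)" if i < cutoff_a else "")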

My longstanding-use battery charger is a DieHard model 28.71222:

It’s fairly old-school in design, although “modern” enough that a front-panel switch lets the owner differentiate between conventional SLA and newer absorbed glass mat (AGM) battery technologies from a charging-process standpoint (speaking of which, in the process of researching this piece I also learned that old-school vehicles like mine are often, albeit not always, able to use both legacy SLA and newer AGM batteries). And it conveniently supports not only 10A charging but also 2A “trickle” (i.e., “maintain”) and 50A “engine start” modes.

That said, we’re storing the Volkswagen Eurovan Camper in the garage nowadays, with my Volvo perpetually parked in the driveway instead (and the Jeep still “down the hill” at the storage lot). I recently did some shopping for a more modern “trickle” charger for the van’s battery, and in the process discovered that newer chargers are not only much more compact than my ancient “beast” but also offer integrated desulfation support (claimed, at least). Before you get too excited, there’s this Wikipedia qualifier to start:

Sulfation can be avoided if the battery is fully recharged immediately after a discharge cycle. There are no known independently-verified ways to reverse sulfation. There are commercial products claiming to achieve desulfation through various techniques such as pulse charging, but there are no peer-reviewed publications verifying their claims. Sulfation prevention remains the best course of action, by periodically fully charging the lead–acid batteries.

With that said, there’s this excerpt from the linked-to “Battery regenerator” Wikipedia entry:

The lead sulfate layer can be dissolved back into solution by applying much higher voltages. Normally, running high voltage into a battery will cause it to rapidly heat and potentially cause thermal runaway, which may cause it to explode. Some battery conditioners use short pulses of high voltage, too short to cause significant heating, but long enough to reverse the crystallization process. 

Any metal structure, such as a battery, will have some parasitic inductance and some parasitic capacitance. These will resonate with each other, and something the size of a battery will usually resonate at a few megahertz. This process is sometimes called “ringing”. However, the electrochemical processes found in batteries have time constants on the order of seconds and will not be affected by megahertz frequencies. There are some websites which advertise “battery desulfators” running at megahertz frequencies.

Depending on the size of the battery, the desulfation process can take from 48 hours to weeks to complete. During this period the battery is also trickle charged to continue reducing the amount of lead sulfur in solution.

Courtesy of a recent Amazon Prime Big Deal Days promotion, I ended up picking up three different charger models at discounted prices, with the intention of tearing down at least one in the future in comparative contrast to my buzzing DieHard beast. For trickle-only charging purposes, I got two ~$20 1A 6V/12V GENIUS 1s from NOCO, a well-known brand:

Among its feature set bullet points are these:

  • Charge dead batteries – Charges batteries as low as 1-volt. Or use the all-new force mode that allows you to take control and manually begin charging dead batteries down to zero volts.
  • Restore your battery – An advanced battery repair mode uses slow pulse reconditioner technology to detect battery sulfation and acid stratification to restore lost battery performance for stronger engine starts and extended battery life.

Then there were two from NEXPEAK, a lesser known but still highly rated (on Amazon, at least) brand, the ~$21 6A 12V model NC101:

  • [HIGH-EFFICIENCY PULSE REPAIR] battery charger automotive detects battery sulfation and acid stratification, take newest pulse repair function to restore lost battery performance for stronger engine starts and extended battery life. NOTE: can not activate or charging totally dead batteries.

And the also-$21 10A 12V/24V NC201 PRO:

with similarly worded desulfation-support prose:

  • [HIGH-EFFICIENCY PULSE REPAIR]Automatically detects battery sulfation and acid stratification, take newest pulse repair function to restore lost battery performance for stronger engine starts and extended battery life. Note: can not activate or charging totally dead batteries.

In fact, with this model and as the front panel graphic shows, the default recharging sequence always begins with a desulfation step.

Do the desulfation claims bear out in real life? Read through the Amazon user comments for the NC101 and NC201 PRO and you’ll likely come away with a mixed conclusion. Cynically speaking, perhaps, the hype is reminiscent of the “peak” cranking amp claims of lithium battery-based battery jump starters. And I also wonder for what percentage of the positive reviewers the battery resurrection ended up being only partial and temporary. That said, I suppose it’s better than nothing, especially considering how cost-effective these chargers are nowadays.

And that said, my ultimate future aspiration is to not need to try to resurrect my Jeep’s battery at all. To wit, given that as previously noted, “I don’t have AC outlet access for [editor note: conventional] trickle chargers” at the outdoor storage facility, I’ve also picked up a portable solar panel with integrated trickle charger for ~$18 during that same promotion (two, actually, in case I end up moving the van back down there, too):

which, next time I’m down there, I intend to mate to a SAE extension cable I also bought:

bungee-strap the solar panel to the Jeep’s windshield (or maybe the hood, depending on vehicle and sun orientations), on top of the car cover intermediary, and route the charging cable from underneath the vehicle to the battery in the engine compartment above. I’ll report back my results in a future post. Until then, I welcome your comments on what I’ve written so far!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.



How to reinvent analog design in the age of AI

Mon, 03/10/2025 - 02:31

Where does analog design stand in the rapidly growing artificial intelligence (AI) world? While neuromorphic designs have been around since the 1980s, can they reinvent themselves with building blocks like field-programmable analog arrays (FPAAs)? Are there appropriate design tools for analog to make a foray into the AI space? Georgia Tech’s Dr. Jennifer Hasler, known for her work on FPAAs, joins other engineering experts to discuss ways of accelerating analog design in the age of AI.

Read the full transcript of this discussion or listen to the podcast at EDN’s sister publication, EE Times.



The downside of overdesign, Part 2: A Dunn history

Fri, 03/07/2025 - 16:56

Editor’s Note: This is a two-part series. Part 1 can be found here.

My father, John Edward Dunn, was a Foreman in the New York City Department of Bridges. His shop was in Brooklyn on Kent Avenue adjacent to the Brooklyn Navy Yard. His first assistant at that shop was a man named Connie Rank. Dad’s responsibilities were to oversee the maintenance and repairs of all of the smaller bridges in Brooklyn, Staten Island, and parts of Queens. The Mill Basin Bridge was one of his.

Dad was on call 24/7 in response to any bridge emergencies. At any time of day or night a phone call would come in and he would have to respond. When calls came in at 2 AM or 3 AM or whenever, the whole household would be awakened. Dad would answer the call and I would hear “Yeah. Okay, I’m on my way.” Then I’d hear Dad dialing a call where I’d hear “Connie? Yeah. See you there,” and that would be that. The routine was that familiar. Nothing further needed to be said. He wouldn’t get home again until at least 5:30 PM the following day for having responded to whatever emergency had occurred and then having worked a full day afterward without interruption.

Many of those emergencies were at the Mill Basin Bridge. One of them made the front page of a city newspaper. There was a full page photo of the bridge taken from ground level showing all kinds of emergency vehicles on the scene with all of their lights gleaming against the dark sky. Dad showed me that paper and asked “Do you see that little dot here?” I said “Yes,” and he said, “That little dot is me.” He knew where he had been standing.

Following one accident, perhaps it was the accident above, Dad apparently saved someone’s life. He was honored for that by Mayor Robert F. Wagner. Neither I at the age of twelve nor my sister at nine were ever told the details of the event, but it led to Dad shaking hands with the Mayor at New York City Hall.

John Dunn’s late father, John Edward Dunn, shaking hands with NYC mayor Robert F. Wagner circa 1956 to receive an award for his brave work saving a life as a foreman with the NYC Department of Bridges.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).



Wireless MCUs deliver richer functionality

Fri, 03/07/2025 - 00:21

STM32WBA6 2.4-GHz wireless MCUs from ST offer increased memory and digital system interfaces for high-end applications in smart home, health, factory, and agriculture. Based on an energy-efficient Arm Cortex-M33 core running up to 100 MHz, the devices provide up to twice the flash and RAM of the previous STM32WBA5 series for application code and data storage.


With up to 2 MB of flash and 512 KB of RAM on-chip, the STM32WBA6 MCUs are able to support more advanced applications. Digital peripherals include high-speed USB, three SPI ports, four I2C ports, three USARTs, and one LPUART. By integrating the processing core, peripherals, and wireless subsystems, the MCUs streamline designs and reduce assembly size.

The STM32WBA6 wireless subsystem supports Bluetooth LE, Zigbee, Thread, and Matter, enabling concurrent communication across multiple protocols. It also enhances performance, with sensitivity increased to -100 dBm for more reliable connectivity up to the maximum specified range.

The STM32WBA6 wireless MCUs are in production and available now, priced from $2.50 each in lots of 10,000 units.

STM32WBA6 product page 

STMicroelectronics



650-V GaN HEMT resides in TOLL package

Fri, 03/07/2025 - 00:21

Rohm Semiconductor introduced the GNP2070TD-Z, a 650-V enhancement-mode GaN HEMT in a TO-leadless (TOLL) package. With dimensions of 11.68×9.9×2.4 mm, this compact package enhances heat dissipation, supports high current, and enables strong switching performance.

The GNP2070TD-Z integrates second-generation GaN-on-Si technology, achieving an RDS(on) of 70 mΩ and a Qg of 5.2 nC. With a VDS of 650 V and an IDS of 27 A, the transistor is well-suited for power supplies, AC adapters, PV inverters, and energy storage systems.

For this launch, ROHM has outsourced package manufacturing to ATX Semiconductor, with TSMC handling front-end processes and ATX managing back-end processes. ROHM also plans to collaborate with ATX on automotive-grade GaN devices.

The EcoGaN HEMTs will be available starting in March from DigiKey, Mouser, and Farnell.

GNP2070TD-Z product page

Rohm Semiconductor



Marvell’s 2-nm silicon boosts AI infrastructure

Fri, 03/07/2025 - 00:21

Marvell Technology has demonstrated its first 2-nm silicon IP, enhancing the performance and efficiency of AI and cloud infrastructure. Built on TSMC’s 2-nm process, the working silicon is a key component of Marvell’s platform for developing next-generation custom AI accelerators, CPUs, and switches.

The company’s strategy focuses on developing a comprehensive semiconductor IP portfolio, including electrical and optical SerDes, die-to-die interconnects for 2D and 3D devices, advanced packaging technologies, silicon photonics, custom HBM compute architecture, on-chip SRAM, SoC fabrics, and compute fabric interfaces like PCIe Gen 7.

Additionally, the portfolio includes high-speed 3D I/O for vertically stacking die inside chiplets. This simultaneous bidirectional I/O operates at speeds up to 6.4 Gbps. By shifting from conventional unidirectional I/O to bidirectional I/O, designers can double the bandwidth and/or reduce the number of connections by 50%.

“Our longstanding collaboration with TSMC plays a pivotal role in helping Marvell develop complex silicon solutions with industry-leading performance, transistor density, and efficiency,” said Sandeep Bharathi, chief development officer at Marvell.

Marvell Technology



Armv9 platform advances AI at the edge

Fri, 03/07/2025 - 00:21

Arm’s edge AI platform features the Cortex-A320 CPU and Ethos-U85 NPU, enabling on-device execution of models exceeding 1 billion parameters. The Armv9 platform enhances efficiency, performance, and security for IoT, while unlocking new edge AI applications through support for both large and small language models.

Built on the Armv9 architecture, the Cortex-A320 delivers 10× higher ML performance and 30% better scalar performance than its predecessor, the Cortex-A35. It also achieves an 8× ML performance gain over the Cortex-M85-based platform launched last year. Additionally, Armv9.2 offers advanced security features like pointer authentication, branch target identification, and memory tagging extension.

The Cortex-A320 pairs with the Ethos-U85 AI accelerator, which supports transformer-based models at the edge and scales from 128 to 2048 MAC units. To streamline edge AI development, Arm’s Kleidi for IoT compute libraries enhance AI and ML performance on Arm-based CPUs with seamless ML framework integration. For example, Kleidi boosts Cortex-A320 performance by up to 70% when running Microsoft’s Tiny Stories dataset on Llama.cpp.

To learn more about the Armv9 edge AI platform, click on the product page links below.

Cortex-A320 product page

Ethos-U85 product page

Kleidi product page

Arm



32-bit MCUs offer multiple sensing capabilities

Fri, 03/07/2025 - 00:21

Infineon’s PSOC 4 series microcontrollers now integrate capacitive, inductive, and liquid level sensing in a single device. The PSOC 4000T, powered by a 32-bit, 48-MHz Arm Cortex-M0+ processor, combines CAPSENSE capacitive sensing with Multi-Sense inductive sensing and non-invasive, non-contact liquid sensing.

Infineon says Multi-Sense inductive sensing offers greater noise immunity and durability than existing methods. Its differential, ratio-metric architecture supports new HMI and sensing applications, including touch-over-metal, force touch, and proximity sensing.

The PSOC 4000T’s liquid sensing uses an AI/ML-based algorithm that Infineon says is more cost-effective and accurate than mechanical sensors and standard capacitive solutions. It resists environmental factors like temperature and humidity and detects liquid levels with up to 10-bit resolution. It also rejects foam and residue and operates across varying air gaps between the sensor and container.

The fifth-generation CAPSENSE technology enables hover touch sensing, allowing interaction without direct button contact. Its always-on capability reduces power consumption by 10× while delivering 10× higher signal-to-noise ratio than Infineon’s previous devices.

The PSOC 4000T with CAPSENSE and Multi-Sense is available now. A second device, the PSOC 4100T Plus, offering more memory and I/Os, will gain Multi-Sense support in 2Q 2025.

PSOC 4000T product page

Infineon Technologies



Apple’s spring 2025 part II: Computers, tablets, and a new chip, too

Thu, 03/06/2025 - 16:40

Another week, another suite of press release-only-announced products. With the exception of the yearly (and mid-year) in-person WWDC, will we ever see Apple do another live event?

I digress. Some of what Apple’s rolled out (so far…it’s only Wednesday night as I write these words) this week was accurately prognosticated at the end of my last-week coverage. Some of it was near-spot-on forecasted, albeit with an unexpected (and still baffling, a day later) twist. And two of the system unveilings were a complete surprise, at least from a timing standpoint. At all the new systems’ cores were processor updates (core…processor…get it?). And speaking of which, there’s a new one of those, as well. In chronological order, starting with Tuesday’s news…

The iPad Air(s)

Apple had migrated the iPad Air tablet from the M1 to the M2 SoC less than a year ago, at the same time expanding the product suite to include both 11” and 13” form factors. So when Tim Cook teased that “There’s something in the Air” on Monday, M3-based iPad Airs were not what I expected. But…whatevah…🤷‍♂️ By the way, careful perusers of the press release might have already noticed that all the performance-improvement claims mentioned there were versus the 2022 M1-based model, not last year’s M2. That selective emphasis wasn’t an accident, folks.

And of course, there’s a new accompanying keyboard; heaven forbid Apple forego any available opportunity for obsolescence-by-design forced updates to its devoted customer base, yes? Sigh.

The iPad

 

This one didn’t even justify a press release of its own; instead, Apple tacked a paragraph and photo onto the end of the iPad Air announcement. Predictably, there were performance-improvement claims in that paragraph, and once again Apple jumped two product generations in making them, comparing against the September 2021 9th-generation A13 Bionic-based iPad rather than the year-later (but still 2.5-year-old) 10th-generation offering running the A14 Bionic SoC. And the doubled-up internal storage is nice. But here’s the surprising-to-me (and pretty much everyone else whose coverage I read) twist: the new 11th-gen iPad is based on the A16 SoC.

“What’s the big deal, Dipert?” you might understandably be asking at this point. The big deal is that the A16 is not Apple Intelligence-compatible. On the one hand, I get it; the iPad is the lowest-priced offering in Apple’s tablet portfolio, so to maintain shareholder-friendly profit margins, the bill-of-materials cost must be similarly suppressed. But given how increasingly fiscal-reliant Apple is on the services segment of its business, I’m still shocked that Apple didn’t instead put the A17 Pro, already found in the latest iPad mini, into the new iPad too, along with enough RAM to enable AI capabilities. Maybe the company just wants to upsell everyone to the iPad Air and Pro instead? If so, I’ve got an intentionally terse response: “good luck with that”.

The MacBook Air(s)

This is what everyone thought Tim Cook was alluding to with Monday’s “There’s something in the Air” tease, in-advance suggested by dwindling inventory of existing M3-based products. And one day later than the iPad Air, they belatedly got their wish. That said, with the exception of a new sky blue scheme (No more Space Gray? You gotta be kidding me!), all the changes are on the inside. The M4 SoC (this time exclusively with a 10-core CPU, albeit in both 8-and-10-core GPU variants) is more energy-efficient than its M3 forebear; we’ve already discussed this. But Apple was even more comparison-silly this time, benchmarking against the three-generations-old (and more than four years old) M1 MacBook Air, as well as even more geriatric x86-based variants (Really, Apple? Isn’t it time to stop kicking Intel?). About the most notable thing I can say, aside from the price cut, is that akin to its M4 Mac mini sibling, the M4 MacBook Air now supports up to two external displays in addition to the integrated LCD, without any software-based (therefore CPU-burdening) DisplayLink hacks. Oh, and the front camera is improved. Yay.

The Mac Studio

Speaking of the Mac mini, let’s close by mentioning its bigger (albeit not biggest) brother, the Mac Studio. Until earlier today (again, as I write these words on Wednesday evening) the most powerful Mac Studios, introduced at the 2023 WWDC, were based on M2 SoC variants: the 12 CPU core and 30-or-38 GPU core M2 Max; and dual-die (interposer-connected) 24 CPU core and 60-or-76 GPU core M2 Ultra. They were follow-ups to 2022’s M1 Max (an example of which I own) and M1 Ultra premiere Mac Studio products. So, we were clearly (over)due for next-gen offerings. But, although the M1 versions were introduced in March, M2 successors arrived the following June. So, I’d placed my bets on the (likely June) 2025 WWDC for the next-gen launch timing.

Shows you how much (or accurately, little) I know…Apple instead decided on a 2022-era early-March re-do this time. And skipping past the M3 Max, the new “lower-end” (I chuckle to even type those words, and you’ll see why in a second) version of the Mac Studio is based on the 14-or-16 CPU core, 32-or-40 GPU core, and 16 neural processing core M4 Max SoC also found in the latest high-end MacBook Pros.

The M3 Ultra SoC

But, at least for now (and maybe never?) there’s no M4 Ultra processor. Instead, Apple revisited the M3 architecture to come up with the M3 Ultra, its latest high-end SoC for the Mac Studio family. It holds 28-or-32 CPU cores, 60-or-80 GPU cores, and 32 neural processing cores, all prior-gen. I’m guessing the target market will still be satisfied with the available “muscle”, in spite of the generational back-step. And it’s more than just an interposer-connected dual-die M3 Max pairing. It also upgrades Thunderbolt capabilities to v5, previously found only on higher-end M4 SoC variants, and the max RAM to 512 GBytes (the M3 Max only supports 128 GBytes max…see what I did there?).

Maybe we’ll see a next-gen Mac Pro at WWDC, then? And maybe it (or, if not, one of its other product-line siblings) will be the first system implementation of the next-gen M5 SoC? Stand by. Until then, let me know your thoughts on this week’s announcements in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.



This is how an electronic system design platform works

Thu, 03/06/2025 - 12:12

A new design platform streamlines electronics system development from component selection to software development by integrating hardware, software, and lifecycle data into a single digital environment. Renesas 365 is built around Altium 365, a design suite that provides seamless access to component sources and intelligence while connecting all stakeholders throughout the creation process.

Embedded system developers often struggle due to manual component searches, fragmented documentation, and siloed design teams. Renesas 365 addresses these challenges by connecting Altium’s cloud-connected system design platform with Renesas’ components for embedded compute, connectivity, analog, and power applications.

Renesas 365, built around Altium’s system design platform, streamlines development from component selection to lifecycle management. Source: Renesas

Renesas CEO Hidetoshi Shibata calls it a first-of-its-kind solution. “It’s the next step in the digital transformation of electronics, bridging the gap between silicon and system development.” Renesas has joined hands with the company it acquired last year to redefine how electronics systems are designed, developed, and sustained—from silicon selection to full system realization—in a connected world.

Here is how Renesas 365 works in five steps.

  1. Silicon: Renesas 365 will ensure that every silicon component is application-ready, optimized for software-defined products, and seamlessly integrated with the broader system.
  2. Discover: This part powered by Altium enables engineers to find components as well as complete solutions from Renesas’ portfolio for faster and more accurate system design.
  3. Develop: Altium powers this part to provide a cloud-based development environment to ensure real-time collaboration across hardware, software, and mechanical teams.
  4. Lifecycle: Also powered by Altium, this part establishes persistent digital traceability to facilitate over-the-air (OTA) updates and ensure compliance and security from concept to deployment.
  5. Software: This part provides developers with artificial intelligence (AI)-ready development tools to ensure that the software is optimized for their applications.

The final part of the Renesas 365 offering demonstrates how a unified software framework covering low- to high-compute performance can help developers create software-defined systems. For instance, these development tools enable real-time, low-power AI inference at the edge. They can also track compliance and automate OTA updates to ensure secure lifecycle management.

This cloud-connected system design platform can aid developers in everything from component selection to embedded software development to OTA updates. Meanwhile, it ensures that existing workflows remain uninterrupted and supports everything from custom AI models to advanced real-time operating system (RTOS) implementations.

Renesas will demonstrate this system design platform live at embedded world 2025, which will be held from 11 to 13 March in Nuremberg, Germany. The company’s booth 5-371 will be dedicated to presentations and interactive demonstrations of the Renesas 365 solution.



Three discretes suffice to interface PWM to switching regulators

Wed, 03/05/2025 - 16:20

It’s remarkable how many switching regulator chips use the same basic two-resistor network for output voltage programming. Figure 1 illustrates this feature in a typical (buck type) regulator. See R1 and R2 where:

Vout = Vsense(R1/R2 + 1) = 0.8 V × (11.5 + 1) = 10 V

Figure 1 A typical regulator output-voltage programming network, using the basic two resistors R1 and R2.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Quantitatively, the Vsense feedback node voltage varies from type to type and recommended values for R1 can vary too, but the topology doesn’t. Most conform faithfully to Figure 1. This de facto uniformity is useful if your application needs digital control of Vout via PWM.

Figure 2 shows the simplistic three-component solution it makes possible where:

Vout = Vsense(R1/(R2 + R3/DF) + 1) = 0.8 V to 10 V as DF = 0 to 1

All that’s required to add PWM control to Figure 1 is to split R2 into two equal halves, connect filter cap Cf to the middle of the pair, and add PWM switch Q1 in series with its ground end.

Figure 2 Simple circuit for regulator programming with PWM where Vout ranges from 0.8 V to 10 V as the duty factor (DF) goes from 0 to 1.

The Cf capacitance required for 1-lsb PWM ripple attenuation is 2^(N-2)Tpwm/R2, where N is the number of PWM bits and Tpwm is the PWM period. Since Cf will never see more than perhaps a volt, its voltage rating isn’t much of an issue.
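A quick worked example (the PWM frequency and R2 value are assumptions, not from the DI):

  # Cf = 2^(N-2) * Tpwm / R2 for 1-lsb ripple attenuation.
  N, f_pwm, R2 = 8, 10e3, 10e3      # bits, Hz, ohms (hypothetical)
  Cf = 2 ** (N - 2) * (1 / f_pwm) / R2
  print(f"Cf = {Cf * 1e6:.2f} uF")  # -> Cf = 0.64 uF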

A cool feature of this simple topology is that, unlike many other schemes for digital power supply control, only the regulator’s internal voltage reference matters to regulation accuracy. Precision is therefore independent of external voltage sources, e.g., logic rails. This is a good thing because, for example, the tempco of the TPS54332’s reference is only 15 ppm/°C.

Figure 3 graphs Vout versus the PWM DF for the Figure 2 circuit where the X-axis is DF, the Y-axis is Vout and,

Vout = Vsense(R1/(R2 + R3/DF) + 1)
Vout(min) = Vsense
Vout(max) = Vsense(R1/(R2 + R3) + 1)
R1/(R2 + R3) = Vout(max)/Vsense – 1

Figure 3 Graph showing Vout versus the Figure 2 PWM DF.

Figure 4 plots the inverse function with DF vs Vout where,

DF = R3/(R1/(Vout/Vsense – 1) – R2)

The nonlinearity of DF versus Vout does incur the cost of a bit of software complexity (two subtractions and three divisions) to do the conversion. But since it buys substantial circuitry simplification, it seems a reasonable (maybe zero) cost. Or, if the necessary memory is available, a lookup table is another (simple!) possibility.

Figure 4 DF versus Vout; the nonlinearity necessitates a bit of software complexity to perform the conversion.
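Here is a minimal sketch of that lookup-table option; the resistor values are assumptions chosen to reproduce the 0.8-to-10-V range, not necessarily the DI’s actual parts.

  # DF = R3/(R1/(Vout/Vsense - 1) - R2), tabulated across the output range.
  V_SENSE, R1, R2, R3 = 0.8, 92e3, 4e3, 4e3  # volts, ohms (hypothetical)

  def duty(v_out):
      return R3 / (R1 / (v_out / V_SENSE - 1) - R2)

  table = [round(duty(0.9 + 0.1 * k), 4) for k in range(92)]  # 0.9 to 10.0 V
  print(table[0], table[-1])  # ~0.0055 at 0.9 V, 1.0 at 10.0 V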

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.



How AI is changing the game for high-performance SoC designs

Срд, 03/05/2025 - 10:04

The need for intelligent interconnect solutions has become critical as the scale, complexity, and customizability of today’s systems-on-chip (SoC) continue to increase. Traditional network-on-chip (NoC) technologies have played a vital role in addressing connectivity and data movement challenges, but the growing intricacy of designs necessitates a more advanced approach. This is especially true now that high-end SoC designs are surpassing the human ability to create NoCs without smart assistance.

The key drivers for this demand can be summarized as follows:

  • Application-specific requirements: Many industries and applications, such as automotive, Internet of Things (IoT), consumer electronics, artificial intelligence (AI), and machine learning (ML), require highly specialized hardware tailored to unique workloads, such as real-time processing, low latency, or energy efficiency. Off-the-shelf chips often fall short of providing the precise blend of performance, power, and cost-efficiency these applications need.
  • Cost and performance optimization: Custom SoCs allow companies to integrate multiple functions into a single chip, reducing system complexity, power consumption, and overall costs. With advanced process nodes, custom SoCs can achieve higher levels of performance tailored to the application, offering a competitive edge.
  • Miniaturization and integration: Devices in areas like wearables, medical implants, and IoT sensors demand miniaturized solutions. Custom SoCs consolidate functionality onto a single chip, reducing size and weight.
  • Data-centric and AI workloads: AI and ML require processing architectures optimized for parallel computation and real-time inferencing. Custom SoCs can incorporate specialized processing units, like neural network accelerators or high-bandwidth memory interfaces, to handle these demanding tasks.

The market now demands a next-level approach, one that leverages AI and ML to optimize performance, reduce development time, and ensure efficient data movement across the entire system. Today’s high-end SoC designs are necessitating smarter, automated solutions to address evolving industry needs.

The solution is the introduction of a new type of smart NoC interconnect IP that can leverage smart heuristics using ML and AI technology to dramatically speed up the creation and increase the quality of efficient, high-performance SoC designs.

Today’s NoC technologies

Each IP in an SoC has one or more interfaces, each with its own width and frequency. A major challenge is the variety of standard interfaces and protocols, such as AXI, AHB, and APB, used across the industry. Adding to this complexity, SoCs often integrate IPs from multiple vendors, each with different interface requirements.

NoC technology helps manage this complexity by assigning a network interface unit (NIU) to each IP interface. For initiator IPs, the NIU packetizes and serializes data for the NoC. For target IPs, it de-packetizes and de-serializes incoming data.

Packets contain source and destination addresses, and NoC switches direct them to their targets. These switches have multiple ports, allowing several packets to move through the network at once. Buffers and pipeline stages further support data flow.
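To make the packetization idea concrete, the sketch below shows the kind of fields such a header might carry. It is purely hypothetical: commercial NoC IP defines its own proprietary packet layouts, so every field name and width here is an assumption for illustration only.

    #include <stdint.h>

    /* Hypothetical NoC packet header, for illustration only; real NoC IP
     * uses its own proprietary formats and widths. */
    typedef struct {
        uint16_t src_id;        /* initiator NIU that packetized the request */
        uint16_t dst_id;        /* target NIU; switches route on this field  */
        uint8_t  traffic_class; /* lets switches arbitrate latency-critical
                                   traffic ahead of bulk transfers           */
        uint8_t  flags;         /* e.g., read/write, last-flit marker        */
        uint32_t addr;          /* address within the target IP's space      */
    } noc_pkt_hdr_t;

Each switch inspects the destination field at every hop, which is one reason unneeded switches and pipeline stages, discussed next, translate directly into added latency.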

Without automation, designers often add extra switches, buffers, or pipeline stages as a precaution. However, too many switches waste area and power, excessive buffering increases latency and power use, and undersized buffers can cause congestion. Overusing pipeline stages also adds delay and consumes more power and silicon.

Existing NoC interconnect solutions provide tools for manual optimization, such as selecting topology and fine-tuning settings. However, they still struggle to keep pace with the growing complexity of modern SoCs.

Figure 1 SoC design complexity, which has surpassed manual human capabilities, calls for smart NoC automation. Source: Arteris

Smart NoC IP

The typical number of IPs in one of today's high-end SoCs ranges from 50 to 500+, the typical number of transistors in each of these IPs ranges from 1 million to 1+ billion, and the typical number of transistors on an SoC ranges from 1 billion to 100+ billion. Furthermore, modern SoCs may comprise anywhere from 5 to 50+ subsystems, all requiring seamless internal and subsystem-to-subsystem communication and data movement.

The result of all this is that today's high-end SoC designs are surpassing the human ability to create their NoCs without smart assistance. One solution is a new type of advanced NoC IP, such as FlexGen smart NoC IP from Arteris, which leverages smart heuristics using ML technology to dramatically speed up the creation, and increase the quality, of efficient, high-performance SoC designs. A high-level overview of the smart NoC IP flow is illustrated in Figure 2.

Figure 2 A high-level overview of the FlexGen smart NoC IP flow. Source: Arteris

Designers start by using an intuitive interface to capture the high-level specifications for the SoC (Figure 2a). These include the socket specifications, such as the widths and frequencies of each interface. They also cover connectivity requirements, defining which initiator IPs need to communicate with which target IPs and any available floorplan information.

The designers can also specify objectives at any point in the form of traffic classes and assign performance goals like bandwidths and latencies to different data pathways (Figure 2b).

FlexGen's ML heuristics determine optimal NoC topologies, employing different topologies for different areas of the SoC. The IP automatically generates the smart NoC architecture, including switches, buffers, and pipeline stages. The tool minimizes wire lengths and reduces latencies while adhering to user-defined constraints and performance goals (Figure 2c). Finally, the system IP can be used to export everything for use with physical synthesis (Figure 2d).

NoC with smart assistant

The rapid increase in SoC complexity has exceeded the capabilities of traditional NoC design methodologies, making it difficult for engineers to design these networks without smart assistance. This has driven the demand for more advanced solutions.

Take the case of FlexGen, a smart NoC IP from Arteris, which addresses these challenges by leveraging intelligent ML heuristics to automate and optimize the NoC generation process. The advanced IP delivers expert-level results 10x faster than traditional NoC flows. It reduces wire lengths by up to 30%, minimizes latencies typically by 10% or more, and improves PPA metrics.

Streamlining NoC development accelerates time to market and enhances engineering productivity.

Andy Nightingale, VP of product management and marketing at Arteris, has over 37 years of experience in the high-tech industry, including 23 years in various engineering and product management positions at Arm.

 


A pitch-linear VCO, part 1: Getting it going

Tue, 03/04/2025 - 16:33

Editor’s Note: This DI is a two-part series.

Part 1 shows how to make an oscillator with a pitch that is proportional to a control voltage.

Part 2 will show how to modify the circuit for use with higher supply voltages, implement it using discrete parts, and modify it to closely approximate a sine wave.

Typical circuit

An ongoing project (or gadget) called for a means of generating an audio output to represent a varying voltage level. Ho hum: that sounds like a voltage-controlled oscillator. But this signal was bipolar, with peaks ranging from -1 to +1 V. A linear-in-frequency response just sounded wrong, and anyway could never deliver the symmetrical ±1-octave output that I wanted.

Wow the engineering world with your unique design: Design Ideas Submission Guide

A typical, well-known type of oscillator—though, as drawn, it lacks voltage control—is shown in Figure 1. At the start of a cycle, C1 is fully charged. It then discharges through R1 until the reference voltage, shown as mid-rail, is reached; the monostable multivibrator is then triggered, delivering a pulse to turn on Q1, which shorts C1 to the positive rail and starts the next cycle. The output, an exponentially-decaying sawtooth of constant amplitude, is taken from the top of C1 via a buffer (not shown). (Strictly, the op-amp should be a comparator; it's used as one.) C1 would normally be switched for different ranges, with R1 varied for tuning.

Figure 1 A typical relaxation oscillator with an exponentially decaying sawtooth output is the starting point for this design.

Another method of tuning this is to keep R1 and C1 constant and vary the reference voltage. The output level now varies, the tuning law being exponential. If we want pitch-linearity, maybe that would be a good starting point?
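Writing the law down makes the idea clearer. Here's a minimal sketch, assuming C1 discharges through R1 from the positive rail toward ground, as the Figure 1 description implies:

    #include <math.h>

    /* Idealized period of the Figure 1 relaxation oscillator: C1 starts at
     * the rail voltage Vs and decays as v(t) = Vs*exp(-t/(R1*C1)); a cycle
     * ends when v(t) crosses Vref. Reset-pulse width and Q1's on-resistance
     * are ignored. */
    static double osc_period(double r1, double c1, double vs, double vref)
    {
        return r1 * c1 * log(vs / vref);   /* T = R1*C1*ln(Vs/Vref) */
    }

With Vref at mid-rail this reduces to the familiar T = R1·C1·ln 2, and rearranging gives Vref = Vs·e^(-T/(R1·C1)): sweep the reference instead of R1 and the tuning law is exponential, exactly as noted above.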

Tweaking

An exponential decay may not give the exact curve we need, but with a little tweaking, parts of it are close enough to be useful. Some experimenting produced the workable circuit shown in Figure 2.

Figure 2 Varying the reference voltage instead of the R-C time-constant gives a tuning law that is close enough to being linear in pitch over a couple of octaves, especially after adding R2.

Just as described above, the bipolar control voltage is compared with the falling exponential(-ish) ramp to tune the oscillator's frequency. When they coincide, U2a, used as a monostable multivibrator (MSMV), is triggered to produce a reset pulse that turns Q1 on momentarily, thus resetting C1's voltage to its maximum value. Figure 3 shows the key waveforms.

Figure 3 Waveforms from the circuit in Figure 2, at both extremes of its two-octave span.

Bending the law so we can do what we want

The single, simple, humble resistor R2 is the key to this design. By compressing and shifting the exponential decay curve, it allows a reasonably close approximation to a tuning law that is linear with pitch rather than frequency over a couple of octaves and more: an increment in the control voltage now changes the frequency by a fairly constant frequency ratio rather than a fixed amount. The match is worst at the low-frequency end, being around 5% off close to the low calibration point and way off even lower down. (A semitone is ~7%.) Using 51k for R2 gives the closest match for the bottom octave frequency itself, but 56k generally “sounds” better on average in that region.

With the values shown, the output frequency ranges from about 250 Hz to 1000 Hz for inputs from -1 to +1 V, which is close to the two octaves upwards from “C4” (middle C: ~262 Hz if we define A4 to be precisely 440 Hz) to “C6”. (The quotes are used here to distinguish pitch values from capacitors!) For different spans, just change C1 or both R1 and R2, whose ratio must be kept constant. If the control voltage falls below about -1.5 V, as determined by R1 and R2, oscillation will stop. Above +1 V, the match is still reasonable for another half octave and more.
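It helps to state the target law explicitly. The one-liner below is the ideal pitch-linear curve the circuit approximates, not its measured response; the 500-Hz constant follows from the 250-Hz-to-1-kHz span just quoted:

    #include <math.h>

    /* Ideal pitch-linear tuning law: one octave per volt, centered on
     * 500 Hz (the geometric mean of the 250 Hz-to-1 kHz span). */
    static double ideal_freq_hz(double vcon_volts)
    {
        return 500.0 * pow(2.0, vcon_volts);   /* f = 500 Hz * 2^Vcon */
    }

Comparing the oscillator's measured frequency against this curve is a quick way to quantify the roughly-5% worst-case error described above.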

U2b divides the oscillator’s pulse output by 2 to give a square wave, which the output network turns into a trapezoid of about 1.1 V pk-pk (~-6 dBu). While this has no pretensions to waveform purity, it does now have softer and “more analog” edges rather than sharp digital ones.

Other comments: MCP6002s are cheap and cheerful. The MCP6022 is better specified (much faster, and with <500 µV input offset) but more costly. The spare half of U1 could be used for further filtering of the output if desired. The spec for Q1 is not critical. A ZVP3306A has an RDS(ON) of up to 15 Ω, but the width of the pulse driving its gate ensures that C1 is fully charged under all conditions. The ~±1 V control range was just what I wanted, but that was a happy accident rather than being designed in.

It now does what’s needed and is ready for dropping into the project (or gadget). However, . . .

A few extra components give more octaves and accuracy

Contemplating the basic circuit threw up an interesting idea. Linear-in-frequency tuning can be done in two ways, one being to use a linear ramp and vary the control voltage much as we’re doing with the exponential one, while the other would be to replace R1 with a controllable current sink and delete R2. Use these together and the tuning law becomes “squared”, giving a power law that is inherently much closer to being linear-in-pitch. Figure 4 shows how to do that.

Figure 4 Adding a voltage-controlled current sink in place of tuning resistor R1 is the key to operation over more than 4 octaves with much better pitch accuracy.

Q2, U1b, and R1 form the current sink. Its control voltage is half of that at the input, ensuring that Q2 never saturates. C1 discharges linearly, the slope being governed by Vcon. The power rails are shown as 0 V / +5 V rather than ±2.5 V to reflect the wider tuning range, but the output frequency is still centered at around 520 Hz (close to a pitch of “C5”) with the component values shown.

The required control-voltage swing now measures ~840 mV/octave (or ~70 mV/semitone). The response is almost exactly linear-in-pitch over the middle two octaves, and still decent for the two octaves and more surrounding those. The errors are worst at the low-frequency end because the current sink is then running out of steam (or electrons). An MCP6022 is used because of its better performance but the rest of the circuit is almost unchanged.
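The improved law can be written the same way, using the ~840 mV/octave sensitivity and ~520 Hz center just quoted. Both constants are approximate measured values, so treat this strictly as a sketch:

    #include <math.h>

    /* Approximate target law for the Figure 4 variant: ~840 mV/octave,
     * centered on ~520 Hz. The argument is the control voltage measured
     * from the mid-span point. */
    static double ideal_freq_fig4_hz(double vcon_from_center_volts)
    {
        return 520.0 * pow(2.0, vcon_from_center_volts / 0.84);
    }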

While the range of 4-plus octaves is over the top for my target application, improved accuracy is always welcome, and this better performance opens the way to possible musical use.

Part 2 will get to that, but first we'll see how to modify the circuit for use with higher supply voltages, how to implement it using only discrete parts apart from the op-amp, and how to end up with a respectable sine wave at the output.

Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.


Multi-sense MCU enables new HMI and sensing use cases

Tue, 03/04/2025 - 13:03

A new microcontroller featuring multi-sense capabilities claims to enable new human-machine interface (HMI) and sensing solutions, ranging from sleek metallic product designs with touch-on metal buttons to waterproof touch buttons. Infineon’s PSOC 4 is an Arm Cortex-M0+-based MCU that offers capacitive, inductive, and liquid sensing in a single device.

Figure 1 PSOC 4 integrates capacitive, inductive, and liquid sensing to accommodate a variety of HMI use cases. Source: Infineon

This new MCU combines the company's fifth-generation capacitive sensing technology, CAPSENSE, with inductive and liquid sensing to optimize performance, enable new use cases, and realize cost savings. For a start, the fifth-generation CAPSENSE, featuring always-on capability, enables sensing at 10x lower power consumption and offers a 10x higher signal-to-noise ratio (SNR) than previous devices.

Figure 2 Capacitive sensing (left) and inductive sensing (right) complement each other to enable new HMI and sensing use cases. Source: Infineon

Inductive sensing is based on a proprietary methodology that is less sensitive to noise; it complements capacitive sensing to enable new HMI use cases like touch-over-metal, force touch surfaces, and proximity sensing. This allows developers to create modern, metal-based and waterproof designs with sleek form factors such as metal touch buttons on refrigerators or robust HMI for underwater devices such as cameras and wearables.

Then there is non-invasive and non-contact liquid sensing, which employs an AI/ML algorithm to facilitate more cost-effective and accurate sensing than mechanical sensors and typical capacitive solutions. Liquid sensing is resistant to environmental factors like temperature and humidity and can detect liquid levels with up to 10-bit resolution in various container shapes.

As a result, it offers capabilities—such as foam and residue rejection and reliably working with varying air gaps between sensor and container—that other liquid sensors don’t support. So, liquid sensing on PSOC 4 can efficiently manage liquids in robot vacuum cleaners, washing machines, coffee machines, and humidifiers.

Figure 3 PSOC 4 multi-sense eliminates the need for a sensor in a liquid and is insensitive to process variations like gaps between sensor and container. Source: Infineon

Finally, CAPSENSE hover touch sensors enable applications that benefit from having an air gap between the sensor and the touch surface. Hover touch leverages highly sensitive capacitive sensing to detect touch interactions from a significant distance, eliminating the need for the gap to be bridged with a conductive material, typically a spring or conductive foam.

Figure 4 Hover touch sensing comes into play when a direct touch of a button is not required. Source: Infineon

PSOC 4000T with fifth-generation CAPSENSE and multi-sense capability is available now. Another MCU in the PSOC 4 family, PSOC 4100T Plus, offering more memory and more I/Os, will be available in the second quarter of 2025.


Google’s Chromecast Ultra: More than just a Stadia Consorta

Mon, 03/03/2025 - 16:42

I didn’t originally plan to begin this teardown with a language lesson, but it turned out that way. Skip ahead a couple of paragraphs if you insist on bypassing it 😉

As regular readers may already realize, likely to their dismay, I've spent the bulk of my nearly-30-year to-date tech journalism career attempting, among other things, to inject rhymes into writeup titles (and content, for that matter) whenever possible and management-blessed (or at least tolerated). Occasionally, I succeeded modestly. Often, I failed miserably. The challenge was particularly acute this time. See for yourself: visit RhymeZone for a listing of how many (or, more accurately, few) options exist for rhyming pairings with the word "ultra". I could have cheated and stuck "streamer" after "ultra" to expand the rhyming options list, but where's the fun in that?

The Chromecast Ultra streaming receiver we'll be dissecting today is (or, more accurately, was) among other things the "kit" partner of Google's Stadia controller (also on my teardown pile), the usage nexus of the company's now-defunct online-streamed gaming service. So, what popped into my head next was the word "consort", specifically the noun defined as (among other things) an "associate". But I needed something ending in an "a" to even sorta-rhyme. Fortunately, at least for me (your opinions may differ, understandably), the similar-meaning "consorta" also exists, at least in the Swiss Romansh language.

Thus concludes the etymology. Thanks for indulging me (see, another rhyme)! Now for the “meat” of the writeup. As I recently mentioned in my third-generation Google Chromecast teardown, I ended up reordering the publication cadence from the originally planned chronological sequence; the 2018 Chromecast 3 came first, after the 2015 Chromecast 2, followed by today’s 2016 Chromecast Ultra. That said, the calendar-year proximity between the Chromecast 2 and Chromecast Ultra may explain why the latter retained the former’s magnet-augmented HDMI connector and metal-augmented back-of-body, dropped from the Chromecast 3 successor.

As with the Chromecast 3, I wasn’t able to find a “nonfunctional, for parts only” device to tear down; instead, I resigned myself to picking a functional (albeit well-used) alternative off eBay:

for only $19.46 ($12 plus sales tax and $6.35 for shipping), shown here as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

The backside printing is a bit less faint this time compared to that in the Chromecast 3, but it’s still dim. Here’s what it says around the circumference, if your eyes are as old and tired as mine:

Model NC2-6A5-D
FCC ID A4RNC2-6A5-D
IC 10395A-NC26A5D
CAN ICES-3 B
NMB-3 B
Made in China
HDMI
Designed by Google
1600 Amphitheater Parkway
Mountain View, CA 94043
UL US LISTED
ITE E258392
6B11CYFWMB

The earlier “magnetic” image’s hinted-at tint may have tipped you off that the HDMI connector has an atypical orange-ish color (for a possible reason I’ll explain shortly) with this device:

The power supply’s micro-USB connector’s equally uncommon color scheme is similar:

Zooming out, here’s what the latter connector is attached to:

And zooming back in:

Flip the wall wart 180° to check out its specs:

And now rotate it 90° and…wait, what's this (it's "only" 100 Mbps-supportive, BTW)?

The Chromecast Ultra differs from its conventional Chromecast siblings in that it, to quote the spec sheet, “supports all resolutions up to 4K Ultra HD and high dynamic range (HDR) for superior picture quality” (at up to 60 fps, too, content source- and display support-dependent). Here’s the twist: it apparently only delivers a 4K output if the original power supply is in use (thereby explaining, I suspect, albeit in an undocumented manner as far as I can tell, the usage-reminder color match between the micro-USB input power connector and the HDMI A/V output connector). Note that the Ethernet port doesn’t actually need to be in use, as this photo I just snapped of another Chromecast Ultra I own, connected to my master guest bedroom UHD TV (whose date and time settings beg for configuration) and to my LAN over Wi-Fi, reveals:

What I’m guessing is that, in actuality, the Chromecast Ultra is looking for a USB cable that supports both power and data transfer capabilities. Would a different supplier’s PSU with a functionally compatible integrated Ethernet port (as well as an adequate USB PD output, of course), thereby also satisfying the power-plus-data cable requirements, also work? Dunno.

Onward: let’s get inside. Specifically, there’s a seam along the edge, visible in this photo of the device’s micro-USB input:

and, rotating roughly 180°, this shot of its hardware reset button and (to the left) status LED:

I decided to try popping apart the two device halves absent preparatory heat application this time, which still proved successful:

That’s some seriously dry thermal paste in-between the top-half case insides and Faraday Cage:

which may at least in part explain the Chromecast Ultra’s reported propensity for overheating (especially, I’m suggesting, as the device ages and the paste dries out). This guy’s alternative “fix” involved sticking supplemental heatsinks to the outside top case (the video is worth a viewing if only to check out the measured temperature drop post-augmentation):

Next, let’s get that Faraday Cage off:

(No) surprise: more thermal paste!

Let’s apply some isopropyl alcohol to clean off that gray goo, so we can see what’s underneath:

Hold that “what’s underneath” thought until we get the PCB out of the remaining lower-case half. Two screws removed (I’ve already confirmed there are no more at the bottom of the PCB; read my Chromecast 3 teardown for the embarrassing-to-me details of why this was necessary):

followed by the bracket that holds the HDMI connector in place:

At this point, the PCB began to elevate itself out of the remaining case half, so I redirected my attention away from the HDMI cable:

and to the first-time revealed PCB bottom half:

Look, it’s another Faraday Cage!

And here’s (in the center) the metal plate that the HDMI connector magnetically mates with when not in use, along with (at upper right) the reset switch and LED light guide assemblies:

At this point, the HDMI cable disconnected itself (gravity-encouraged) from the other (upper) side of the PCB:

Next to go, Brian-encouraged this time, was the Faraday Cage:

And after one more thermal paste wipe-off session:

let’s get to identification. At the upper-left edge are the reset switch and status LED. Along both lower edges are the PCB-embedded antennae. The large rectangular IC at the right is a Samsung K4F8E304HB-MGCH 8 Gbit LPDDR4-3200 SDRAM (there’s nothing underneath the cage frame above it, trust me; I subsequently ripped it off to check. Also, there’s nothing below the frame at bottom). And in the lower left is another, smaller rectangular IC, labeled as follows:

MARVELL
W8997-A0
637BETP

which I think is now the NXP Semiconductors 88W8997 (NXP having acquired Marvell’s Wi-Fi and Bluetooth connectivity assets in late 2019) and implements the Chromecast Ultra’s dual-band 802.11ac Wi-Fi facilities.

Back to the now-case-free PCB topside, and more Marvell-branded chips come into view:

The one in the center is a real head-scratcher, labeled as follows:

MARVEL
DE3009-A0
633ARTE TJ

Do a Google search on “Marvell DE3009” (I’m assuming “A0” refers to the design stepping version) and you’ll find, unless you’re more adept than me…nothing, save for a Google suggestion that perhaps I meant “DE3005” instead. The DE3006, specifically the 88DE3006, was used in the Chromecast 2 and (in Synaptics-renamed form) the Chromecast 3, so on a hunch I did a search on “Marvell 88DE3009” instead. This was more fruitful, but only a bit; there was a short discussion on iFixit’s website concurring with my suspicion that it was a Google-only implemented device, along with a terse mention on WikiDevi indicating that post-Synaptics’ acquisition of Marvell’s Multimedia Solutions Business in mid-2017, the 88DE3009 was renamed the Synaptics BG4CDP (not that I can find much about it, either, save that it’s supposedly dual-core and runs at 1.25 GHz). More knowledgeable reader insights are as-always welcomed!

The markings on the small IC to the left of the DE3009 and peeking out from the frame edge are too faint for me to discern, other than that the first line is again “MRVL”. Below the DE3009 is a Toshiba TC58NVG1S3HBAI6 2 Gbit NAND flash memory. In the upper right corner of the PCB, again peeking out from under the frame, is a small IC with a Marvell logo mark in the upper left corner, along with the following:

52K
00B0G
624AK

And below it is another Marvell-sourced mystery IC:

MRVL
823AA0
634GAC

As I mentioned earlier specifically regarding the DE3009, reader insights on any of the chips I’ve been unable to identify (along with those I’ve sorta-kinda-maybe ID’d), along with any other thoughts on this teardown, are appreciated in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


Arm setting up a design shop in Malaysia

Mon, 03/03/2025 - 08:48

Malaysia is serious about its bid to move up the semiconductor industry ladder by establishing an IC design presence, and Arm’s setting up a design shop there is a testament to this ambition. Malaysia’s Prime Minister Anwar Ibrahim told reporters late last week that he has been on a call with Arm CEO Rene Haas and SoftBank’s head Masayoshi Son regarding this matter.

He added that talks are in the final stage and the agreement will be finalized and signed this month. Ibrahim also said that this demonstrates confidence in Malaysia’s policies and its ambition to become a regional hub for semiconductor design and manufacturing.

Malaysia is keen to penetrate the IC design market to bolster its standing as a regional tech hub. Source: CNA

This initiative is part of Malaysia’s National Semiconductor Strategy (NSS), which calls for $110 billion of direct investment in IC design, advanced packaging, and front-end semiconductor manufacturing processes, which includes wafer fabs and manufacturing equipment.

Details on what kind of design work Arm will carry out in Malaysia are yet to emerge. Ibrahim calls it a major test of the country's ambition to embrace IC design work. "Can we provide tens of thousands of young professionals?"

"This is a challenge for the youth," he added. "A professional workforce is essential when we attract significant investments." The remarks also conveyed a palpable sense of excitement about the deal.


AC-Line Safety Monitor Brings Technical, Privacy Issues

Fri, 02/28/2025 - 21:24

There’s a small AC-line device that has received a lot of favorable media coverage lately. It’s called Ting from Whisker Labs, Inc. and its purpose is to monitor the home AC line, Figure 1. It then alerts the homeowner via smartphone to surges, brownouts, and arcing (arc faults) which could lead to house fires. It’s even getting glowing click-bait testimonials such as “This Device Saved My House From an Electrical Fire. And You Might Be Able to Get It for Free.” Let’s face it, accolades don’t get much better than that.

Figure 1 The Ting voltage monitor is a small, plug-in box with no user buttons except a reset. Source: Whisker Labs

(“Arcing”—which can ignite nearby flammable substances—occurs when electrical energy jumps across a gap between conductors; it usually but not always occurs at a connector and is often accompanied by sparks, buzzing sounds, and overheating; if it’s in a wall or basement, you might not know about it.)

The $99 device plugs into any convenient outlet—more formally, a receptacle—and once set up with your smartphone, it continuously monitors the AC line for conditions which may be detrimental. It needs no additional sensors or special wiring and looks like any other plug-in device. The vendor claims over a million homes have been protected, aggregating over 980,000 "home years" of coverage, and that four out of five electrical fires have been prevented.

When the Ting unit identifies a problem it recognizes, the owner receives an alert through the Ting app that provides advice on what to do, Figure 2. Depending on the issue, a live member of the company’s Fire Safety Team may contact you to walk you through whatever remediation steps might be required. In addition, if Ting finds a problem, the company will coordinate service by a licensed electrician and cover costs to remedy the problem up to $1,000.

Figure 2 All interaction between the homeowner and the Ting unit for alerts and reporting is via a Wi-Fi to a smartphone. Source: Wirecutter/New York Times

It all seems so straightforward and beneficial. However, whenever you are dealing with the AC line, there's lots of room for oversimplification, misunderstanding, and confusion. Just look at the National Electrical Code (NEC) in the US (other countries have similar codes) and you'll see that there's more to safety in wiring than just using the appropriate gauge wire, making solid connections, and insulating obvious points. The code is complicated, and there are good reasons for its many requirements and mandates.

My first thought on seeing this was “this is a great idea.” Then my natural skepticism kicked in and I wondered: does it really do what they claim? Exactly what does it do, and is that actually meaningful? And then the extra credit question: what else does it do that might not be so good or desirable?

For example, some home-insurance companies are offering it for free and waiving the monthly fee for the first year. That's a tradeoff users might consider. Or is it a clever subscription-service hook?

There is lots of laudatory and flowery language associated with the marketing of this device, but solid technical details are scant, see “How Ting Works.” They state, “Ting pinpoints and identifies the unique signals generated by tiny electrical arcs, the precursors to imminent fire risks. These signals are incredibly small but are clearly visible thanks to Ting’s advanced detection technology.”

Other online postings say that Ting samples the AC line at 30 megasamples/second, looking for anomalies.

Let's face it: the real-world AC line looks nothing like the smoothly undulating textbook sine wave with a steady RMS value. Instead, Figure 3 shows some of the voltage-level variations which the vendor says Ting has captured.

Figure 3 The real-world AC line has voltage variation, spikes, surges, and dropouts. Source: F150 Lightning Forum
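Whisker Labs hasn't published its algorithms, so the following is only illustrative of the kind of first-pass screen such a device might apply: compute the RMS of a sliding window of line samples and flag sags or surges. Real arc detection is far more involved, as the RF-emission point below suggests.

    #include <math.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Illustrative sag/surge screen only; this is not Ting's algorithm.
     * Flags a window of line-voltage samples whose RMS deviates from
     * nominal by more than the given fractional tolerance. */
    static bool rms_anomaly(const double *samples, size_t n,
                            double v_nominal_rms, double tolerance)
    {
        if (n == 0)
            return false;
        double sumsq = 0.0;
        for (size_t i = 0; i < n; i++)
            sumsq += samples[i] * samples[i];
        double v_rms = sqrt(sumsq / (double)n);
        return fabs(v_rms - v_nominal_rms) > tolerance * v_nominal_rms;
    }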

As for arcing, that’s more complicated than just a low or high-voltage assessment, as it produces RF emissions which can be captured and analyzed.

I was about to sign up to try one out myself but realized the pointlessness of that. First, a sample of one doesn’t prove much. Also, how could I “inject” known faults into the system (my house wiring) to evaluate it? That would be difficult, risky, foolish, and almost meaningless.

Consider the split supply phases

Instead, I looked around the web to see what others said, knowing that you can’t believe everything you read there. One electrician noted that it is only monitoring one side of the two split phases feeding the house, so there’s a significant coverage gap. Another one responded by saying that it was true, but most issues come across on the neutral wire that is shared by both phases.

Even Ting addressed this “one side” concern with a semi-technical response: “The signals that Ting is looking for can be detected throughout the home’s electrical system even though it is installed on a single 120V phase. Fundamentally, Ting is designed to detect the tiny electro-magnetic emissions associated with micro-arcing characteristics of potential electrical faults and does so at very high frequencies. At high frequencies, your home wiring acts like a communications network.”

They continued: “Since each phase shares a common neutral back at your main breaker panel, arcing signals from one phase can be detected by Ting even if it is on the opposite phase. Thus, each outlet in the home will see the signal no matter its location of origin to some degree. With its sensitive detector and powerful post-processing algorithms, Ting can separate the signal from the noise and detect if there is unusual electrical activity. So, you only need one Ting for your home.”

This response brought yet another online response: “monitoring the voltage of both sides of the split phase would be far more ideal. One of the more common types of electrical fires is a damaged or open neutral coming from the transformer. This could send one side of your split phase low and the other high frying equipment and starting fires. But if you’re only monitoring one side of the split phase, you will only see a high or low voltage and have no way of knowing if that is from a neutral issue or voltage sagging on the street.”

As for arcing, every house built since 1999 in the US has been required by code to use AFCI (arc fault circuit interrupter) outlets; those can stop an electrical fire in nearly all cases, not just report it. However, using a single Ting is less costly and presumably has some value for an older home that isn’t going to be renovated or updated to code.

How big is the problem?

Data on house fires is collected and analyzed by various organizations, including the National Fire Protection Association (NFPA), individual insurance companies, and industry-insurance consortiums. Are house fires due to electrical faults a problem? The answer is that it depends on how you look at it.

Depending on who you ask and what you count, there are about 1.5 million fires each year—but many are outdoor barbeque or backyard wood-pile fires. The blog “Predict & Prevent: From Data to Practical Insight” from the Insurance Information Institute deals with electrical house fires and Ting in a generally favorable way (of course, you have to consider the blog’s source) with some interesting numbers: The 10 years from 2012 through 2021 saw reduced cooking, smoking, and heating fires; however, electrical fires saw an 11 percent increase over that same period, Figure 4. Fire ignitions with an undetermined cause also increased by 11 percent.  

Figure 4 The causes of house fires have changed in recent years; electrical fires have increased while others have decreased. Source: U.S. Fire Administration via the Insurance Information Institute

Specific hazards are also detailed, Figure 5:

Figure 5 For those fires whose source has been identified, connected devices and appliances are the source of about half while infrastructure wiring is at about one quarter. Source: Whisker Labs via Insurance Information Institute

The blog also points out that there are many misconceptions regarding electrical fires. It’s easy to assume that most fires are due to older home-wiring infrastructure. However, their data found that 50 percent of home electrical-fire hazards are due to failing or defective devices and appliances, with the other half attributed to home wiring and outlets.

Further, it seems obvious that older homes have higher risk. This may be true only if all other things are equal when considering the effects of age and use on existing wiring infrastructure, but they rarely are. The data shows that assumption is suspect when considering all other factors such as materials, build quality, and the standards and codes at that time.

Other implications

If you get this unit through an insurance company (free or semi-free), that means there's yet another player in the story, in addition to the homeowner and Whisker Labs. First, one poster claimed: "Digging through the web pages I found each device sends 160 megabytes back to Ting every month…So that means you have to have a stable WiFi router to do the upload. As far as I know, the homeowner does not get a copy of the report uploaded to Ting, but the insurance company does."

Further, there's a clause in the agreement between the insurance company that supplied the unit and the homeowner. It says they "may also use the data for purposes of insurance underwriting, pricing, claims handling, and other insurance uses." Will this information be used to increase your rates or, worse, cancel your home insurance for imperfect wiring?

It's not easy to say whether Ting is a good or bad idea, as that assessment depends on many technical factors and personal preferences. One thing is clear: it may be very useful for collecting and analyzing "big data" across the wiring of millions of homes, AC-line performance, and the relationships between house specifics and electrical risks (hello, AI). However, things get trickier when it comes to microdata related to a single residence, which can tell others more about your lifestyle than you would like them to know, or affect how the insurance company rates your house.

What’s your sense of this device and its technical validity?  What about larger-scale technical data-collection value? Finally, how do you feel about personal security and privacy implications?

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.


Arm’s AI pivot for the edge: Cortex A-320 CPU

Fri, 02/28/2025 - 13:55

With artificial intelligence (AI) at the edge moving from basic tasks like noise reduction and anomaly detection to more sophisticated use cases such as big models and AI agents, Arm has launched a new CPU core, the Cortex A-320, as part of the Arm v9 architecture. Combined with Arm's Ethos-U85 NPU, the Cortex A-320 enables generative and agentic AI use cases in Internet of Things (IoT) devices. EE Times' Sally Ward-Foxton provides details of this AI-centric CPU upgrade while also highlighting key features like better memory access, Kleidi AI, and software compatibility.

Read the full story at EDN’s sister publication, EE Times.

