EDN Network

Voice of the Engineer

How to prevent overvoltage conditions during prototyping

Tue, 07/22/2025 - 16:40

The good thing about being a field applications engineer is that you get to work on many different circuits, often all at the same time. While this is interesting, it also presents problems. Jumping from one circuit to another involves disconnecting a spaghetti of leads and probes, and the chance for something going wrong increases exponentially with the number of wires involved.

It’s often the most basic things that are overlooked. While the probes and leads are checked and double checked to ensure everything is in place, if the voltage on the bench power supply is not adjusted correctly, the damage can be catastrophic, causing hours of rework.

The circuit described in this article helps save the day. Being a field applications engineer also results in a myriad of evaluation boards being collected, each in a state of modification, some of which can be repurposed for personal use. This circuit is based on an overvoltage/reverse voltage protection component, designed to protect downstream electronics from incorrect voltages being applied in automotive circuits.

Such events are caused by the automotive battery being connected the wrong way or a load dump event where the alternator becomes disconnected from the battery, causing a rise in voltage applied to the electronics.

Circuit’s design details

As shown in Figure 1, MAX16126 is a load dump protection controller designed to protect downstream electronics from over-/reverse-voltage faults in automotive circuits. It has an internal charge pump that drives two back-to-back N-channel MOSFETs to provide a low loss forward path if the input voltage is within a certain range, configured using external resistors. If the input voltage goes too high or too low, the drive to the gates of the MOSFETs is removed and the path is blocked, collapsing the supply to the load.

Figure 1 This is how the over-/reverse-voltage protection circuit works. Source: Analog Devices Inc.

MAX16127 is similar to MAX16126, but in the case of an overvoltage, it oscillates the MOSFETs to maintain the voltage across the load. If a reverse voltage occurs on the input, an internal 1-MΩ resistor between the GATE and SRC pins of the MAX16126 ensures MOSFETs Q1 and Q2 are held off, so the negative voltage does not reach the output. The MOSFETs are connected in opposing orientations to ensure the body diodes don’t conduct current.

The undervoltage pin, UVSET, is used to configure the minimum trip threshold of the circuit while the overvoltage pin, OVSET, is used to configure the maximum trip threshold. There is also a TERM pin connected via an internal switch to the input pin and this switch is open circuited when the part is in shutdown, so the resistive divider networks on the UVSET and OVSET pins don’t load the input voltage.

In this design, the UVSET pin is tied to the TERM pin, so the MOSFETs are turned on when the device reaches its minimum operating voltage of 3 V. The OVSET pin is connected to a potentiometer, which is adjusted to change the overvoltage trip threshold of the circuit.

To set the trip threshold to the maximum voltage, the potentiometer needs to be adjusted to its minimum value and likewise for the minimum trip threshold the potentiometer is at its maximum value. The IC switches off the MOSFETs when the OVSET pin rises above 1.225 V.

The overvoltage clamping range should be limited to between 5 V and 30 V, so resistors are inserted above and below the potentiometer to set the upper and lower thresholds. There are Zener diodes connected across the UVSET and OVSET pins to limit the voltage of these pins to less than 5.1 V.

Assuming a 47-kΩ potentiometer is used, the upper and lower resistor values of Figure 1 can be calculated.

To achieve a trip threshold of 30 V, Equation 1 is used:

30 V = 1.225 V × (R2 + 47 kΩ + R3) / R3

To achieve a trip threshold of 5 V, Equation 2 is used:

5 V = 1.225 V × (R2 + 47 kΩ + R3) / (R3 + 47 kΩ)

Equating the previous equations gives Equation 3:

30 × R3 = 5 × (R3 + 47 kΩ)

So,

25 × R3 = 235 kΩ, which gives R3 ≈ 9.4 kΩ

From this,

R2 = (30 V / 1.225 V) × R3 – 47 kΩ – R3 ≈ 174 kΩ

Using preferred values, let R3 = 10 kΩ and R2 = 180 kΩ. This gives an upper limit of 29 V and a lower limit of 5.09 V. This is perfect for a 30 V bench power supply.
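As a quick cross-check of those preferred values, the short program below evaluates the divider at both ends of the potentiometer's travel. It assumes, per the description above, that the OVSET tap sees only R3 below it at one extreme and R3 plus the full 47-kΩ pot at the other.

```cpp
#include <cstdio>

int main() {
    // Component values from the article (preferred values)
    const double Vtrip = 1.225;   // OVSET trip threshold, volts
    const double R2    = 180e3;   // upper resistor, ohms
    const double Rpot  = 47e3;    // potentiometer, ohms
    const double R3    = 10e3;    // lower resistor, ohms
    const double Rtotal = R2 + Rpot + R3;

    // Pot at its minimum setting: only R3 below the OVSET tap -> highest trip voltage
    double upper = Vtrip * Rtotal / R3;
    // Pot at its maximum setting: R3 + Rpot below the tap -> lowest trip voltage
    double lower = Vtrip * Rtotal / (R3 + Rpot);

    printf("Upper trip threshold: %.2f V\n", upper);  // ~29.0 V
    printf("Lower trip threshold: %.2f V\n", lower);  // ~5.09 V
    return 0;
}
```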

Circuit testing

Figure 2 shows the prototype PCB. The trip threshold voltage was adjusted to 12 V and the circuit was tested.

Figure 2 A modified evaluation kit used for circuit testing. Source: Analog Devices Inc.

The lower threshold was measured at 5.06 V and the upper threshold at 28.5 V. With a 10-V input and a 1-A load, the drop from input to output measured 19 mV, which aligns with the two series MOSFETs each having a datasheet ON resistance of about 10 mΩ.

Figure 3 shows the response of the circuit when a 10-V step was applied. The yellow trace is the input voltage, and the blue trace shows the output voltage. The trip threshold was set to 12 V, so the input voltage is passed through to the output with very little voltage drop.

Figure 3 A 10-V step is applied to the input of MAX16126. Source: Analog Devices Inc.

The input voltage was increased to 15 V and retested. Figure 4 shows that the output voltage stays at 0 V.

Figure 4 A 15-V step is applied to the input of MAX16126. Source: Analog Devices Inc.

The input voltage was reversed, and a –7 V step was applied to the input, with the results shown in Figure 5.

Figure 5 A –7 V step is applied to the input of MAX16126. Source: Analog Devices Inc.

The negative input voltage was increased to –15 V and reapplied to the input of the circuit. The results are shown in Figure 6.

Figure 6 A –15 V step is applied to the input of MAX16126. Source: Analog Devices Inc.

Caution should be exercised when probing the gate pins of the MOSFETs when the input is taken to a negative voltage. Referring to Figure 1, the body diode of Q1 pulls the two source pins toward VIN, which is at a negative voltage. There is an internal 1 MΩ resistor between the GATE and SRC connections of MAX16126, so when a ground referenced 1 MΩ oscilloscope probe is attached to the gate pins of the MOSFETs, the oscilloscope probe acts like a 1 MΩ pull-up resistor to 0 V.

As the input is pulled negative, a resistive divider is formed between 0 V, the gate voltage, and the source of Q2, which is being pulled negative by the body diode of Q1. When the input voltage is pulled to lower than twice the turn-on voltage of Q2, this MOSFET turns on and the output starts to go negative. Using a higher impedance oscilloscope probe overcomes this problem.
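To see why, a rough calculation helps. The sketch below assumes a standard 1-MΩ probe impedance to ground, the internal 1-MΩ GATE-to-SRC resistance mentioned above, and a 2-V MOSFET threshold, and it ignores the body-diode drop; all of these are illustrative values.

```cpp
#include <cstdio>

int main() {
    // Illustrative assumptions (not datasheet figures)
    const double Rprobe = 1e6;  // 1-MOhm oscilloscope probe referenced to 0 V
    const double Rint   = 1e6;  // internal GATE-to-SRC resistance inside the MAX16126
    const double VgsOn  = 2.0;  // assumed MOSFET turn-on (threshold) voltage, volts

    for (double vin = 0.0; vin >= -10.0; vin -= 1.0) {
        // Q1's body diode pulls the common source node toward Vin (diode drop ignored)
        double vsource = vin;
        // The probe (to 0 V) and the internal resistor form a divider on the gate node
        double vgate = vsource * Rprobe / (Rint + Rprobe);  // roughly halfway between 0 V and Vsource
        double vgs   = vgate - vsource;                     // goes positive as Vin goes negative
        printf("Vin = %5.1f V  Vgs(Q2) = %4.1f V  %s\n",
               vin, vgs, (vgs > VgsOn) ? "Q2 conducts" : "Q2 off");
    }
    return 0;
}
```

With equal 1-MΩ resistances, the gate sits at roughly half the source voltage, so Q2 starts conducting once the input goes more negative than about twice its turn-on voltage, which is the behavior described above.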

A simple modification to the MAX16126 evaluation kit provides reassuring protection from user-generated load dump events caused by momentary lapses in concentration when testing circuits on the bench. If the components in the evaluation kit are used, the circuit presents a low loss protection circuit that is rated to 90 V with load currents up to 50 A.

Simon Bramble specializes in analog electronics and power. He has spent his career in analog electronics and worked at Maxim and Linear Technology, both now part of Analog Devices Inc.

Related Content

The post How to prevent overvoltage conditions during prototyping appeared first on EDN.

Firmware-upgrade functional defection and resurrection

Mon, 07/21/2025 - 17:37

My first job out of college was with Intel, in the company’s nonvolatile memory division. After an initial couple of years dabbling with specialty EPROMs, I was the first member from that group to move over to the then-embryonic flash memory team to launch the company’s first BootBlock storage device, the 28F001BX. Your part number decode is correct: it was a whopping 1 Mbit (not Gbit!) in capacity 😂. Its then-uniqueness derived from two primary factors:

  • Two separately erasable blocks, asymmetrical in size
  • One of which (the smaller block) was hardware-lockable to prevent unintentional alteration of its contents, perhaps obviously to allow for graceful recovery in case the main (larger) block’s contents, the bulk of system firmware, somehow got corrupted.

The 28F001BX single-handedly (in admitted coordination with Intel’s motherboard group, the first to adopt it) kickstarted the concept of upgradable BIOS for computers already in the field. Its larger-capacity successors did the same thing for digital cellular phones, although by then I was off working on even larger capacity devices with even more (symmetrical, this time) erase blocks for solid-state storage subsystems…which we now refer to as SSDs, USB flash sticks, and the like. This all may explain why in-system firmware updates (which involve much larger code payloads nowadays, of course)—both capabilities and pitfalls—have long been of interest to me.

The concept got personal not too long ago. Hopefully, at least some of you have by now read the previous post in my ongoing EcoFlow portable power station (and peripheral) series, which covered the supplemental Smart Extra Battery I’d gotten for my DELTA 2 main unit:

Here’s what they look like stacked, with the smart extra battery on top and the XT150 cable interconnecting them, admittedly unkempt:

The timeline

Although that earlier writeup was published on April 23, I’d actually submitted it on March 11. A bit more than a week post-submission, the DELTA 2 locked up. A week (and a day) after the earlier writeup appeared at EDN.com, I succeeded in bringing it back to life (also the day before my birthday, ironically). And in between those two points in time, a surrogate system also entered my life. The paragraphs that follow will delve into more detail on all these topics, including the role that firmware updates played at both the tale’s beginning and end points.

A locked-up DELTA 2

To start, let’s rewind to mid-March. For about a week, every time I went into the furnace room where the gear was stored, I’d hear the fan running on the DELTA 2. This wasn’t necessarily atypical; every time the device fired up its recharge circuits to top off the battery, the fan would briefly go on. And everything looked normal remotely, through the app:

But eventually, the fan-running repetition, seemingly more than mere coincidence, captured my attention, and I punched the DELTA 2’s front panel power button to see what was going on. What I found was deeply disturbing. For one thing, the smart extra battery was no longer showing as recognized by the main unit, even though it was still connected. And more troubling, in contrast to what the app was telling me, the display indicated the battery pack was drained. Not to mention the bright red indicator, suggestive that the battery pack was actually dead:

So, I tried turning the DELTA 2 off, which led to my next bout of woe. It wouldn’t shut down, no matter how long I held the power button. I tried unplugging it, no luck. It kept going. And going. I realized that I was going to need to leave it unplugged, fan whining away, until the battery drained, while in parallel reaching out to customer support (the zeroed-out integrated display info was obviously incorrect, but I had no idea whether the “full” report from the app was right, either). Three days later, it was still going. I eventually plugged an illuminated workbench light into one of its AC outlets, whose incremental current draw finally did the trick.

I tried plugging the DELTA 2 back in. It turned on but wouldn’t recharge. It also still ignored subsequent manual power-down attempts, requiring that I again drain the battery to force a shutoff. And although it now correctly reported a zeroed battery charge status, the dead-battery icon was now joined by another error message, this one indicating an overload of the device’s output(s) (?):

At this point, I paused and pondered what might have gone wrong. I’d owned the DELTA 2 for about six months at that point, and I’d periodically installed firmware updates to it via the app running on my phone (and in response to new-firmware-available notices displayed in that app) with no issues. But I’d only recently added the Smart Extra Battery to the mix. Something amiss about the most recent firmware rev apparently didn’t like the peripheral’s presence, I guessed:

So, while I was waiting for customer service to respond, I hit up Reddit. And lo and behold, I found that others had experienced the exact same issue:

Resuscitation

It turns out that V1.0.1.182 wasn’t the most recent firmware rev available, but for reasons that to this day escape me (but seem to be longstanding company practice), EcoFlow didn’t make the V1.0.1.183 successor generally available. Instead, I needed to file a ticket with technical support, providing my EcoFlow account info and my unit’s serial number, along with a description of the issue I was having, and requesting that they “push” the new version to me through the app. I did so, and with less than 24 hours of turnaround, they did so as well:

Fingers crossed, I initiated the update to the main unit:

Which succeeded:

Unfortunately, for unknown reasons, the subsequent firmware update attempt on the smart extra battery failed, rendering it inaccessible (only temporarily, thankfully, it turned out):

And even on the base unit, I still wasn’t done. Although it was now once again responding normally to front-panel power-off requests, its display was still wonky:

However, a subsequent reset and recalibration of the battery management system (BMS), which EcoFlow technical support hadn’t clued me in on but Reddit research had suggested might also be necessary, kicked off (and eventually completed) the necessary recharge cycle successfully:

(Longstanding readers may remember my earlier DJI drone-themed tutorial on what the BMS is and why periodic battery cycling to recalibrate it is necessary for lithium-based batteries):

And re-attempt of the smart extra battery firmware update later that day was successful as well:

Voila: everything was now back to normal. Hallelujah:

That said, I think I’ll wait for a critical mass of other brave souls to tackle the V1.0.1.200 firmware update more recently made publicly available, before following their footsteps:

The surrogate

And what of that “surrogate system” that “also entered my life”, which I mentioned earlier in this piece? This writeup’s already running long, so I won’t delve into too much detail on this part of the story here, saving it for a separate planned post to come. But the “customer service” folks I mentioned I’d initially reached out to, prior to my subsequent direct connection to technical support, were specific to EcoFlow’s eBay storefront, where I’d originally bought the DELTA 2.

They ended up sending me a DELTA 3 Plus and DELTA 3 Series Smart Extra Battery (both of which I’ve already introduced in prior coverage) as replacements, presumably operating under the assumption that my existing units were dead parrots, not just resting. They even indicated that I didn’t need to bother sending the DELTA 2-generation devices back to them; I should just responsibly dispose of them myself. “Teardown” immediately popped into my head; here’s an EcoFlow-published video I’d already found as prep prior to their subsequent happy restoration:

And here are the DELTA 3 successors, both standalone:

and alongside their predecessors. The much shorter height (and consequent overall decreased volume) of the DELTA 3 Series Smart Extra Battery versus its precursor is particularly striking:

As previously mentioned, I’ll have more on the DELTA 3 products in dedicated coverage to come shortly. Until then, I welcome your thoughts in the comments on what I’ve covered here, whether in general or related to firmware-update snafus you’ve personally experienced!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

The post Firmware-upgrade functional defection and resurrection appeared first on EDN.

Two new runtime tools to accelerate edge AI deployment

Mon, 07/21/2025 - 16:03

While traditional artificial intelligence (AI) frameworks often struggle in ultra-low-power scenarios, two new edge AI runtime solutions aim to accelerate the deployment of sophisticated AI models in battery-powered devices like wearables, hearables, Internet of Things (IoT) sensors, and industrial monitors.

Ambiq Micro, the company that develops low-power microcontrollers using sub-threshold transistors, has unveiled two new edge AI runtime solutions optimized for its Apollo system-on-chips (SoCs). These developer-centric tools—HeliosRT (runtime) and HeliosAOT (ahead-of-time)—offer deployment options for edge AI across a wide range of applications, spanning from digital health and smart homes to industrial automation.

Figure 1 The new runtime tools allow developers to deploy sophisticated AI models in battery-powered devices. Source: Ambiq

The industry has seen numerous failures in the edge AI space because users won’t tolerate a battery that runs out in an hour. It’s imperative that devices running AI can operate for days, even weeks or months, on battery power.

But what’s edge AI, and what’s causing failures in the edge AI space? Edge AI is anything that’s not running on a server or in the cloud; for instance, AI running on a smartwatch or home monitor. The problem is that AI is power-intensive, and sending data to the cloud over a wireless link is also power-intensive. Moreover, cloud computing is expensive.

“What we aim is to take the low-power compute and turn it into sophisticated AI,” said Carlos Morales, VP of AI at Ambiq. “Every model that we create must go through runtime, which is firmware that runs on a device to take the model and execute it.”

LiteRT and HeliosAOT tools

LiteRT, formerly known as TensorFlow Lite for microcontrollers, is a firmware version for TensorFlow platform. HeliosRT, a performance-enhanced implementation of LiteRT, is tailored for energy-constrained environments and is compatible with existing TensorFlow workflows.

HeliosRT optimizes custom AI kernels for the Apollo510 chip’s vector acceleration hardware. It also improves numeric support for audio and speech processing models. Finally, it delivers up to 3x gains in inference speed and power efficiency over standard LiteRT implementations.

Next, HeliosAOT introduces a ground-up, ahead-of-time compiler that transforms TensorFlow Lite models directly into embedded C code for edge AI deployment. “AOT interpretation, which developers can perform on their PC or laptop, produces C code, and developers can take that code and link it to the rest of the firmware,” Morales said. “So, developers can save a lot of memory on the code size.”

HeliosAOT provides a 15–50% reduction in memory footprint compared to traditional runtime-based deployments. Furthermore, with granular memory control, it enables per-layer weight distribution across the Apollo chip’s memory hierarchy. It also streamlines deployment with direct integration of generated C code into embedded applications.

Figure 2 HeliosRT and HeliosAOT tools are optimized for Apollo SoCs. Source: Ambiq

“HeliosRT and HeliosAOT are designed to integrate seamlessly with existing AI development pipelines while delivering the performance and efficiency gains that edge applications demand,” said Morales. He added that both solutions are built on Ambiq’s sub-threshold power optimized technology (SPOT).

HeliosRT is now available in beta via the neuralSPOT SDK, while a general release is expected in the third quarter of 2025. On the other hand, HeliosAOT is currently available as a technical preview for select partners, and general release is planned for the fourth quarter of 2025.

Related Content

The post Two new runtime tools to accelerate edge AI deployment appeared first on EDN.

Did connectivity sunsetting kill your embedded-system battery?

Fri, 07/18/2025 - 22:12

You’re likely familiar with the concept of “sunsetting,” where a connectivity standard or application is scheduled to be phased out, such that users who depend on it are often simply “out of luck.” It’s frustrating, as it can render an established working system that is doing its job properly either partially or totally useless. The industry generally rationalizes sunsetting as an inevitable consequence of the progress and new standards not only superseding old ones but making them obsolete.

Sunsetting can leave unintended or unknowing victims, but it goes far beyond just loss of connectivity, and I am speaking from recent experience. My 2019 ICE Subaru Outback wouldn’t start despite its fairly new battery; it was totally dead as if the battery was missing. I jumped the battery and recharged it by running the car for about 30 minutes, but it was dead again the next morning. I assumed it was either a defective charging system or a low- or medium-resistance short circuit somewhere.

(As an added punch to the gut, with the battery dead, there was no way to electronically unlock the doors or get to the internal hood release, so it seemed it would have to be towed. Fortunately, the electronic key fob has a tiny “secret” metal key that can be used in its old-fashioned, back-up mechanical door lock just for such situations.)

I jump-started it again and drove directly to the dealer, who verified the battery and charging system were good. Then the service technician pulled a technical rabbit out of his hat—apparently, this problem was no surprise to the service team.

The vampire (drain) did it—but not the usual way

The reason for the battery being drained is subtle but totally avoidable. It was an aggravated case of parasitic battery drain (often called “vampire drain” or “standby power”; I prefer the former) where the many small functions in the car still drain a few milliamps each as their keep-alive current. The aggregate vampire power drawn by the many functions in the car, even when the car is purportedly “off,” can kill the battery.

Subaru used 3G connectivity to link the car to their basic Starlink Safety and Security emergency system, a free feature even if you don’t pay for its many add-on subscription functions (I don’t). However, 3G cellular service is being phased out or “sunsetted” in industry parlance. Despite this sunsetting, the car’s 3G transponder, formally called a Telematics Data Communication Module (TDCM or DCM), just kept trying, thus killing the battery.

The dealer was apologetic and replaced the 3G unit at no cost with a 4G-compatible unit that they conveniently had in stock. I suspect they were prepared for this occurrence all along and were hoping to keep it quiet. There have been some class-action suits and settlements on this issue, but the filing deadline had passed, so I was out of luck on that.

An open-market replacement DCM unit is available for around $500. While the dealer pays less, it’s still not cheap, and swapping them is complicated and time-consuming. It takes at least an hour for physical access, setup, software initialization, and check-out—if you know what you are doing. There are many caveats in the 12-page instruction DCM section for removal and replacement of the module (Figure 1) as well as in the companion 14-page guide for the alternative Data Communication Module (DCM) Bypass Box (Figure 2), which details some tricky wire-harness “fixing.”
Figure 1 The offending unit is behind the console (dashboard) and takes some time to remove and then replace. Source: Subaru via NHTSA

Figure 2 There are also some cable and connector issues of which the service technician must be aware and use care. Source: Subaru via NHTSA

While automakers impose strict limits on the associated standby drain current for each function, it still adds up and can kill the battery of a car parked and unused for anywhere from a few days to a month. The period depends on the magnitude of the drain and the battery’s condition. I strongly suspect that the 3G link transponder uses far more power than any of the other functions, so it’s a more worrisome vampire.
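To put rough numbers on that, the back-of-the-envelope estimate below shows how quickly a modest aggregate drain empties a typical car battery. The 50-Ah capacity, 50% usable-charge cutoff, and drain currents are illustrative assumptions, not Subaru figures.

```cpp
#include <cstdio>

int main() {
    // Illustrative assumptions: a 50-Ah battery, usable down to ~50% before starting becomes doubtful
    const double capacityAh     = 50.0;
    const double usableFraction = 0.5;

    // A range of aggregate standby (vampire) drains, in milliamps
    const double drainsMa[] = {20.0, 50.0, 100.0, 250.0, 500.0};

    for (double mA : drainsMa) {
        double hours = (capacityAh * usableFraction) / (mA / 1000.0);
        printf("Drain %6.1f mA -> battery effectively flat in about %5.1f days\n",
               mA, hours / 24.0);
    }
    return 0;
}
```

Even a 50-mA aggregate drain flattens such a battery in roughly three weeks, and a few hundred milliamps does it in days, consistent with the "few days to a month" range above.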

Sunsetting + vampire drain = trouble

What’s the problem here? Although 3G was being sunsetted, that was not the real problem; discontinuing a standard is inevitable at some point. Further, there could also be many other reasons for not being able to connect, even if 3G was still available, such as being parked in a concrete garage. After all, both short- and long-term link problems should be expected.

No, the problem is a short-sighted design that allowed a secondary, non-core function over which you have little or no control (here, the viability of the link) to become a priority and single-handedly drain power and deplete the battery. Keep in mind that the car is perfectly safe to use without this connectivity feature being available.

There’s no message to the car’s owner that something is wrong; it just keeps chugging away, attempting to fulfill its mission, regardless of the fact that it depletes the car’s battery. It has a mission objective and nothing will stop it from trying to complete it, somewhat like the relentless title character in the classic 1984 film The Terminator.

A properly vetted design would include a path that says: if connectivity is lost for any reason, keep trying for a while, then drop to a much lower checking rate, and perhaps eventually stop altogether.
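A minimal sketch of such a retry policy is shown below; the intervals and cutoffs are illustrative assumptions, not anything a carmaker actually ships.

```cpp
#include <cstdint>
#include <cstdio>

// Returns the delay (in seconds) before the next connection attempt,
// or 0 to indicate the modem should stop trying and sleep indefinitely.
uint32_t nextRetryDelay(uint32_t failedAttempts) {
    const uint32_t initialDelay = 60;        // 1 minute between early retries
    const uint32_t backoffAfter = 10;        // after 10 failures, slow way down
    const uint32_t slowDelay    = 6 * 3600;  // then only try every 6 hours
    const uint32_t giveUpAfter  = 10 + 28;   // and stop entirely after roughly another week

    if (failedAttempts < backoffAfter) return initialDelay;
    if (failedAttempts < giveUpAfter)  return slowDelay;
    return 0;  // give up; let an ignition cycle (or a technician) reset the counter
}

int main() {
    for (uint32_t n = 0; n < 40; ++n)
        printf("after %2u failures: next retry in %u s\n", n, nextRetryDelay(n));
    return 0;
}
```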

This embedded design problem is not just an issue for cars. What if the 3G or other link was part of a hard-to-reach, long-term data-collection system that was periodically reporting, but also had internal memory to store the data? Or perhaps it was part of a closed-loop measurement and control that could function autonomously, regardless of reporting functionality?

Continuously trying to connect despite the cost in power is a case of the connectivity tail not only wagging the core-function dog but also beating it to death. It is not a case of an application going bad due to forced “upgrades” leading to incompatibilities (you probably have your own list of such stories). Instead, it’s a design oversight of allowing a secondary, non-core function to take over the power budget (in some cases, also the CPU), thus disabling all the functionality.

Have you ever been involved with a design where a non-critical function was inadvertently allowed to demand and get excessive system resources? Have you ever been involved with a debug challenge or product-design review where this unpleasant fact had initially been overlooked, but was caught in time?

Whatever happens, I will keep checking to see how long 4G is available in my area. The various industry “experts” say 10 to 15 years, but these experts are often wrong! Will 4G connectivity sunset before my car does? And if it does, will the car’s module keep trying to connect and, once again, kill the battery? That remains to be seen!

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.

Related Content

References

The post Did connectivity sunsetting kill your embedded-system battery? appeared first on EDN.

Evaluation board powers small robotics and drones

Fri, 07/18/2025 - 21:05

The EPC91118 reference design from EPC integrates power, sensing, and control on a compact circular PCB for humanoid robot joints and UAVs. Driven by the EPC23104 GaN-based power stage, the three-phase BLDC inverter delivers up to 10 A RMS steady-state output and 15 A RMS pulsed.

Complementing the GaN power stage are all the key functions for a complete motor drive inverter, including a microcontroller, rotor shaft magnetic encoder, regulated auxiliary rails, voltage and current sensing, and protection features. Housekeeping supplies are derived from the inverter’s main input, with a 5-V rail powering the GaN stage and a 3.3-V rail supplying the controller, sensors, and RS-485 interface. All these functions fit on a 32-mm diameter board, expanding to 55 mm including an external frame for mechanical integration.

The inverter’s small size allows integration directly into humanoid joint motors. GaN’s high switching frequency allows the use of compact MLCCs in place of bulkier electrolytic capacitors, helping reduce overall size while enhancing reliability. With a footprint reportedly 66% smaller than comparable silicon MOSFET designs, the EPC91118 enables a space-saving motor drive architecture.

EPC91118 reference design boards are priced at $394.02 each. The EPC23104 eGaN power stage IC costs $2.69 each in 3000-unit reels. Both are available for immediate delivery from Digi-Key.

EPC91118 product page

Efficient Power Conversion

The post Evaluation board powers small robotics and drones appeared first on EDN.

Real-time AI fuels faster, smarter defect detection

Fri, 07/18/2025 - 21:05

TDK SensEI’s edgeRX Vision system, powered by advanced AI, accurately detects defects in components as small as 1.0×0.5 mm in real time. Operating at speeds up to 2000 parts per minute, it reduces false positives and enhances efficiency in high-throughput manufacturing.

AI-driven vision systems now offer real-time processing, improved label efficiency, and multi-modal interaction through integration with language models. With transformer-based models like DINOv2 and SAM enabling versatile vision tasks without retraining, edge-based solutions are more scalable and cost-effective than ever—making this a timely entry point for edgeRX Vision in high-volume manufacturing.

edgeRX Vision integrates with the company’s edgeRX sensors and industrial machine health monitoring platform. By enhancing existing hardware infrastructure, it helps minimize unnecessary machine stoppages. Together, the system offers manufacturers a smart, integrated approach to demanding production challenges.

Request a demonstration of the edgeRX Vision defect detection system via the product page link below.

edgeRX Vision product page

TDK SensEI 

The post Real-time AI fuels faster, smarter defect detection appeared first on EDN.

Open-source plugin streamlines edge AI deployment

Fri, 07/18/2025 - 21:05

Analog Devices and Antmicro have released AutoML for Embedded, a tool that simplifies AI deployment on edge devices. Part of Antmicro’s hardware-agnostic, open-source Kenning framework, it automates model selection and optimization for resource-constrained systems. The tool helps users deploy models more easily without deep expertise in AI or embedded development.

AutoML for Embedded is a Visual Studio Code plugin designed to integrate seamlessly into existing development workflows. It works with CodeFusion Studio and supports direct deployment to ADI’s MAX78002 AI accelerator MCU and MAX32690 ultra-low power MCU. The tool also enables rapid prototyping and testing through Renode-based simulation and Zephyr RTOS workflows. Its support for general-purpose, open-source tools allows flexible model optimization without locking developers into a specific platform.

With step-by-step tutorials, reproducible pipelines, and example datasets, users can move from raw data to edge AI deployment quickly without needing data science expertise. AutoML for Embedded is available now on the Visual Studio Code Marketplace and GitHub. Additional resources are available on the ADI developer portal.

AutoML for Embedded product page 

Analog Devices

The post Open-source plugin streamlines edge AI deployment appeared first on EDN.

Foundry PDK drives reliable automotive chip design

Fri, 07/18/2025 - 21:05

SK keyfoundry, in collaboration with Siemens EDA Korea, has introduced a 130-nm automotive process design kit (PDK) compatible with Calibre PERC software. The process node supports both schematic and layout verification, including interconnect reliability checks. With this PDK, fabless companies in Korea and abroad can optimize automotive power semiconductor designs while performing detailed reliability verification.

According to Siemens, while the 130-nm process has been a reliable choice for analog and power semiconductor designs, growing design complexity has made it harder to meet performance targets. The new PDK from SK keyfoundry enables designers to use Siemens’ Calibre PERC with the foundry’s process technology, supporting layout-level verification that accounts for manufacturing constraints.

SK keyfoundry aims to deepen collaboration with Siemens through optimized design solutions, enhanced manufacturing reliability, and a stronger foundry market position.

To learn more about Siemens’ Calibre PERC reliability verification software, click here.

Siemens Digital Industries Software 

SK keyfoundry 

The post Foundry PDK drives reliable automotive chip design appeared first on EDN.

SiC diodes maintain stable, efficient switching

Fri, 07/18/2025 - 21:05

Nexperia’s 1200-V, 20-A SiC Schottky diodes contribute to high-efficiency power conversion in AI server infrastructure and solar inverters. The PSC20120J comes in a D2PAK Real-2-Pin (TO-263-2) surface-mount package, while the PSC20120L uses a TO-247 Real-2-Pin (TO-247-2) through-hole package. Both thermally stable plastic packages ensure reliable operation up to +175°C.

These Schottky diodes offer temperature-independent capacitive switching and virtually zero reverse recovery, resulting in a low figure of merit (QC×VF). Their switching performance remains consistent across varying current levels and switching speeds.

Built on a merged PiN Schottky (MPS) structure, the diodes also provide strong surge current handling, as shown by their high peak forward current (IFSM). This robustness reduces the need for external protection circuitry, helping engineers simplify designs, improve efficiency, and shrink system size in high-voltage, harsh-environment applications.

Use the product page links below to view datasheets and check availability for the PSC20120J and PSC20120L SiC Schottky diodes.

PSC20120J product page 

PSC20120L product page

Nexperia

The post SiC diodes maintain stable, efficient switching appeared first on EDN.

Electronic water softener design ideas to transform hard water

Thu, 07/17/2025 - 11:39

If you are tired of scale buildup, scratchy laundry, or cloudy glassware, it’s probably time to take hard water into your own hands, literally. This blog delves into inventive, affordable, and unexpectedly easy design concepts for building your own electronic water softener.

Whether you are an engineer armed with blueprints or a hands-on do-it-yourself enthusiast ready to roll up your sleeves, the pointers shared here will help you transform a persistent plumbing issue into a smooth-flowing success.

So, what’s an electronic water softener (descaler)? It’s a simple oscillator circuit tailored to create a magnetic field around a water pipe to reduce the chances of smaller deposits sticking to the inside of the pipes.

The concept is not new; water conditioning dates back to the 1930s. Hard water has a high concentration of dissolved minerals, the most abundant being calcium. This mineral content is what gives hard water its name, and it reduces the effectiveness of soaps and detergents. Over time, these tiny deposits can stick to the inside of pipes, clog filters, faucets, and shower heads, and leave residue on kettles.

The idea behind the electronic/electromagnetic water softener is that a magnetic field around the water pipe causes calcium particles to clump together. Such a system consists of two coils wound around the water pipe with a gap between them.

The circuit driving them is often a high frequency oscillator that generates pulses of 15 kHz or so. As a result, large particles are formed, which pass through the water pipe and do not cling to the inside.

Thus, the electronic water softener operates by wrapping coils of wire around the incoming water main to pass a magnetic field through the water. This encourages the calcium in the water to stay in solution, keeping it from clinging to taps and kettles. The electromagnetic flux also makes the water behave as if physically softened by breaking up the hard mineral clusters.

Below is a visual summary of the process.

Figure 1 The original image was sourced from Google Images and has been retouched by author for visual clarity.

Most electronic descalers operate with two coils to increase the time for which the water is exposed to the electromagnetic waveform, but a few use only one coil.

Figure 2 Here is how electronic descalers operate with two coils or one coil. Source: Author

A quick inspection of the most common water softener circuits found on the web shows that the drive frequency is about 2 to 20 kHz in the 5- to 15-V amplitude range. The coils wound outside the pipe are just 20- to 30-turn inductors made of 18 to 24 SWG insulated or enameled copper wire.

It has also been noted that neither the material of the water pipe (PVC or metal) nor its diameter has a significant effect on the effectiveness of the descaler.

When I stumbled upon a blogpost from 2013, it felt like the perfect moment to explore the idea more deeply. This marks the beginning of a hands-on learning journey—less of a formal project and more of a series of small, practical experiments and functional blueprints.

The focus is not on making a polished product, but on picking up new skills and exploring where the process leads. So, after learning from several sources about how electronic water softeners work, I decided to give it a try.

The first step in my process involved developing a universal (and exploratory) driver circuit for the pipe coil(s). The outcome is shown below.

Figure 3 The schematic shows a driver circuit for the pipe coil. Source: Author

Below is the list of parts.

  • C1 and C2: 470 uF/25 V
  • C3: 1,000 uF/25 V
  • D1: 1N4007
  • L1: 470 uH/1 A
  • IC1: MC34151

Note that the single-layer coil L2 on the 20-mm diameter PVC water pipe is made of around 60 turns of 18AWG insulated wire. The single-layer coil on pipe has an inductance of about 20 uH when measured with an LCR meter. The 470 uH drum core inductor L1 (empirically selected part) throttles the peak current through the pipe coil L2.

A single-channel MOSFET gate driver is adequate for IC1 in this setup; however, I opted for the MC34151 gate driver during prototyping as it was readily on hand. Next comes a slightly different blueprint for the pipe coil driver.

Figure 4 Arduino Uno was used to drive the pulse input of the pipe coil driver circuitry. Source: Author

To drive the pulse input of the pipe coil driver circuitry, an Arduino Uno was used (just for convenience) to generate a sweeping frequency between 500 Hz and 5 kHz (the adapted code is available upon request). This range was chosen empirically rather than from any specific technical justification, but it has shown better results in some targeted zones.
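Purely as an illustration of what such a sweep might look like on an Uno (the output pin, step size, and dwell time below are arbitrary assumptions, not the author's actual values), a minimal sketch is:

```cpp
// Hypothetical Arduino Uno sketch: sweep a square wave from 500 Hz to 5 kHz
// on pin 9 to drive the pulse input of the pipe-coil driver circuitry.
#include <Arduino.h>

const uint8_t  DRIVE_PIN  = 9;     // feeds the gate-driver (IC1) pulse input
const uint32_t F_START_HZ = 500;
const uint32_t F_END_HZ   = 5000;
const uint32_t F_STEP_HZ  = 50;    // frequency increment per step
const uint32_t DWELL_MS   = 20;    // time spent at each frequency

void setup() {
  pinMode(DRIVE_PIN, OUTPUT);
}

void loop() {
  // Sweep up, then start over; tone() generates a 50% duty-cycle square wave
  for (uint32_t f = F_START_HZ; f <= F_END_HZ; f += F_STEP_HZ) {
    tone(DRIVE_PIN, f);
    delay(DWELL_MS);
  }
  noTone(DRIVE_PIN);
}
```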

At this stage, opting for a microcontroller-based oscillator or pulse generator is advisable to ensure scalability and facilitate future enhancements. That said, a solution using discrete components continues to be a valid choice (an adaptable textbook pointer is provided below).

Figure 5 An adaptable textbook pointer for a discrete-component pulse generator. Source: Author

Nevertheless, the setup ought to be capable of delivering a pulsed current that generates time-varying magnetic fields within the water pipe, thereby inducing an internal electric field. For optimal induction efficiency, a square-wave pulsed current is always advocated.

The experiment is still ongoing, and I am drawing a tentative conclusion at this stage. But for now, it’s your chance to dive in, experiment, and truly make it your own.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Electronic water softener design ideas to transform hard water appeared first on EDN.

What makes today’s design debugging so complex

Wed, 07/16/2025 - 17:51

Why does the task of circuit debugging keep getting more complex year after year? It’s no longer just a matter of looking at the schematic diagram and tracing the signal path from input to output. Here is a sneak peek at the factors leading to a steady increase in challenges in debugging electronic circuits. It shows how the intermingled software/hardware approach has made prototyping electronic designs so complex.

Read the full blog on EDN’s sister publication, Planet Analog.

Related content

The post What makes today’s design debugging so complex appeared first on EDN.

Headlights In Massachusetts

Wed, 07/16/2025 - 16:25

From January 5, 2024, please see: “The dangers of light glare from high-brightness LEDs.”

I have just become aware that at least one state has wisely chosen to address the safety issue of automotive headlight glare. As to the remaining forty-nine states, I have not yet seen any indication(s) of similar statutes. Now please see the following screenshots and links:

One question at hand of course is how well the Massachusetts statute will be enforced. What may be on the books is one thing but what will happen on the road remains to be seen.

I am hopeful.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

 Related Content

The post Headlights In Massachusetts appeared first on EDN.

Another weird 555 ADC

Tue, 07/15/2025 - 20:11

Integrating ADCs that provide accurate results without requiring a precision integrator capacitor has been around for a long time. A venerable example is that multimeter favorite, the dual-slope ADC. That classic topology uses just one integrator to alternately accumulate both incoming signal and complementary voltage references with the same RC time constant. It thus automatically ratios out time constant tolerance. Slick. 

This Design Idea (DI) will describe a (possibly) new integrating converter that reaches a similar goal of accurate conversions without needing an accurate capacitor. But it gets there via a significantly different route. Along the route, it picks up some advantageous wrinkles.

Wow the engineering world with your unique design: Design Ideas Submission Guide

As Figure 1 shows, the design starts off with an old friend, the 555-analog timer.

Figure 1 Op-amp A1 continuously integrates the incoming Vin signal, thus minimizing noise. Conversion occurs in alternating phases, T- and T+. The T-/T+ phase duration ratio is independent of the RC time constant, is therefore insensitive to C1 tolerance, and contains both Vin magnitude and polarity information.

Incoming signal Vin is summed with the voltage at node X and accumulated by differential integrator A1. A conversion cycle begins when A1’s output (node Y) reaches 4.096 V and lifts timer U1’s threshold pin (Thr) through the R2/R3 divider to the 2.048-V reference supplied by voltage reference Z1. This switches on U1’s Dch pin, grounding A1’s noninverting input through the R4/R5 divider, outputs a zero to the GPIO bit (node Z), and begins the T- phase as A1’s output ramps down. The duration of this T- phase is given by:

T- = R1C1/(1 + Vin/Vfullscale)

Vfullscale = ±2.048 V × (R1/R6) = ±0.683 V

The T- phase ends when A1’s output reaches U1’s trigger (Trg) voltage set to 1.024 V by Z1 and U1’s internal 2:1 divider. See the LMC555 datasheet for the gritty details.

This starts the T+ conversion phase with an output of one on the GPIO bit, and the release of Dch by U1, which drives A1’s noninverting input to 1.024 V, set by Z1 and the R4/R5 divider. The T+ positive-going ramp continues until A1’s output reaches the 4.096 VThr threshold described above and initiates the next conversion cycle. 

T+ phase duration is:

T+ = R1C1/(1 – Vin/Vfullscale)

 This frenetic frenzy of activity is summarized in Figure 2.

Figure 2 Various conversion signals found at circuit nodes X, Y, and Z.

Meanwhile, the GPIO pin is assumed to be connected to a suitable microcontroller counter/timer peripheral that is accumulating the T- and T+ durations for a chosen resolution and conversion rate. A timing resolution somewhere between 100 ns and 1 µs should work for the subsequent Vin calculation. This brings up that claim of immunity to integrator capacitor tolerance you might be wondering about.

The durations of the T+ and T- ramps are proportional to C1, as shown in Figure 3.

Figure 3 Black = Vin, Red = T+ duration in ms, Blue = T- duration, C1 = 0.001 µF.

However, software arithmetic saves the day (and maybe even my reputation!) because recovery of Vin from the raw phase duration timeouts involves a bit of divide-and-conquer.

Vin = Vfullscale ((1 – (T-/T+))/(1 + (T-/T+)))

And, of course, when T- is divided by T+, the R1C1 terms conveniently disappear, taking sensitivity to C1 tolerance away with them!
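In firmware, that recovery step is only a few lines. The sketch below assumes the two phase durations have already been captured (in any consistent time unit) by the counter/timer peripheral and uses the ±0.683-V full-scale value derived above; the example timer counts are illustrative.

```cpp
#include <cstdio>

// Recover Vin from the measured T- and T+ phase durations.
// Because only the ratio T-/T+ is used, the R1*C1 term (and thus C1 tolerance) cancels.
double vinFromPhases(double tMinus, double tPlus, double vFullScale = 0.683) {
    double r = tMinus / tPlus;
    return vFullScale * (1.0 - r) / (1.0 + r);
}

int main() {
    // Example: durations captured at 1-us resolution (values are illustrative)
    double tMinus = 820.0, tPlus = 1250.0;
    printf("Vin = %.4f V\n", vinFromPhases(tMinus, tPlus));
    return 0;
}
```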

A final word about Vfullscale. The ±0.683 V figure derived above is a minimum value, but any larger span can be easily accommodated by adding one resistor (R8) and changing another (R1). Here’s the scale-changing arithmetic:

R1 = 1M * Vfullscale/0.683

R8 = 1/(1/1M – 1/R1)

 For example, ±10 V is illustrated in Figure 4.

Figure 4 A ±10-V Vin span is easily accommodated – if you can find a 15 MΩ precision resistor.

Note that R1 would probably need to be a series string to get to 15 MΩ using OTS resistors.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content

The post Another weird 555 ADC appeared first on EDN.

Circuits to help verify matched resistors

Tue, 07/15/2025 - 17:06

Analog designers often need matched resistors for their circuits [1]. The best solution is to buy integrated resistor networks [2], but what can you do if the parts vendors do not offer the desired values or matching grade?

Wow the engineering world with your unique design: Design Ideas Submission Guide

The circuit in Figure 1 can help. It is made of two voltage dividers (a Wheatstone bridge) followed by an instrumentation amplifier, IA, with a gain of 160. R3 is the reference resistor, and R4 is its match. The circuit subtracts the voltages coming out of the two dividers and amplifies the difference.

Figure 1 The intuitive solution is a circuit made of a Wheatstone bridge and an instrumentation amplifier.

Calculations show that the circuit provides a perfectly linear response between output voltage and resistor mismatch (see Figure 2). The slope of the line is 1 V per 1% of resistor mismatch; for example, a Vout of -1 V means -1% deviation between R3 and R4.

Figure 2 Circuit response is perfectly linear with a 1:1 ratio between output voltage and resistor mismatch.
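For intuition about where that 1-V-per-1% slope comes from, the short numerical sketch below models the bridge with two equal 10-kΩ top resistors and a 2.5-V excitation. Those two values are assumptions chosen so that, together with the stated IA gain of 160, they reproduce the 1 V per 1% figure; the actual excitation and topology in Figure 1 may differ.

```cpp
#include <cstdio>

int main() {
    // Assumed (not from the article): 2.5-V bridge excitation, equal 10-kOhm top resistors
    const double Vexc = 2.5;
    const double Rtop = 10e3;
    const double R3   = 10e3;    // reference resistor
    const double gain = 160.0;   // instrumentation-amplifier gain from the article

    for (double mismatchPct = -1.0; mismatchPct <= 1.0; mismatchPct += 0.5) {
        double R4    = R3 * (1.0 + mismatchPct / 100.0);  // resistor under test
        double vRef  = Vexc * R3 / (Rtop + R3);           // reference divider output
        double vMeas = Vexc * R4 / (Rtop + R4);           // divider with the resistor under test
        double vOut  = gain * (vMeas - vRef);             // IA output
        printf("mismatch %+5.2f %%  ->  Vout %+7.3f V\n", mismatchPct, vOut);
    }
    return 0;
}
```

Under these assumptions a ±1% mismatch produces roughly ±1 V at the output, matching the slope shown in Figure 2.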

A possible drawback is the price: instrumentation amplifiers with a power supply of ±5 V and more start at about 6.20 USD. Figure 3 shows another circuit using a dual op-amp, which is 2.6 times cheaper than the cheapest instrumentation amplifier.

Figure 3 This circuit also provides a perfect 1:1 response, but at a lower cost.

The transfer function is:

Assuming,

converts the transfer function into the form,

If the term within the brackets equals unity and R5 equals R6, the transfer function reduces to a simple one-to-one form: the output voltage equals the percentage deviation of R4 with respect to R3. This voltage can be positive, negative, or, in the case of a perfect match between R3 and R4, zero.

The circuit is tested for R3 = 10.001 kΩ and R4 = 10 kΩ ±1%. As Figure 4 shows, the transfer function is perfectly linear (the R² factor of the fit equals unity) and provides a one-to-one relation between output voltage and resistor mismatch. The slope of the line is adjusted to unity using potentiometer R2 and the two end values of R4. A minor offset is present due to the imperfect match between R5 and R6 and the offset voltage VIO of the op-amps.

Figure 4 The transfer function provides a convenient one-to-one reading.

A funny detail is that the circuit can be used to find a pair of matched resistors, R5 and R6, for itself. As mentioned before, it is better to buy a network of matched resistors. It may look expensive, but it is worth the money.

Equation 3 shows that circuit sensitivity can be increased by increasing R7 and/or VREF. For example, if R7 goes up to 402 kΩ, the slope of the response line will increase to 10 V per 1% of resistor mismatch. A mismatch of 0.01% will generate an output voltage of 100 mV, which can be measured with high confidence.

Watch the current capacity of VREF and op-amps when you deal with small resistors. A reference resistor of 100 Ω, for example, will draw 25 mA from VREF into the output of the first op-amp. Another 2.5 mA will flow through R5.

Jordan Dimitrov is an electrical engineer & PhD with 30 years of experience. Currently, he teaches electrical and electronics courses at a Toronto community college.

 Related Content

References

  1. Bill Schweber. The why and how of matched resistors (a two part series). https://www.powerelectronictips.com/the-why-and-how-of-matched-resistors-part-1/.
  2. Art Kay. Should you use discrete resistors or a resistor network? https://www.planetanalog.com/should-you-use-discrete-resistors-or-a-resistor-network/ .

The post Circuits to help verify matched resistors appeared first on EDN.

Real-time motor control for robotics with neuromorphic chips

Tue, 07/15/2025 - 10:06

Robotic controls started with simplistic direct-current motors. Those early machines had limited mobility because engineers had few feedback mechanisms to work with. Now, neuromorphic chips are entering the field, mimicking the way the human brain functions. Their relevance in future robotic endeavors is unprecedented, especially as electronic design engineers persist through and surpass Industry 4.0.

Here is how to explore real-time controllers and create better robots.

Robotics is a resource-intensive field, especially when depending on antiquated hardware. As corporations aim for greater sustainability, neuromorphic technologies promise better energy efficiency. Studies are proving the value of adjusting mapping algorithms to lower electrical needs.

Implementing these chips at scale could yield substantial power cuts, saving operations countless dollars in waste heat and energy. Some are so successful because of their lightweight models that they lower energy usage by 99% while requiring only 180 kilobytes of memory.

The real-time capabilities are also vital. The chips react to event-specific triggers; that’s crucial because facilities managing high demand with complex processes require responsive motor controls. Every interaction is a chance for the chip to learn and adapt to the next situation. This includes recognizing patterns, experiencing sensory stimuli, and altering range of motion.

How neuromorphic chips enable real-time motor control

Neuromorphic models change operations by encouraging greater trust from human operators. Because of their event-driven processing, they move from task to task with lower latency than conventional microcontrollers. Engineers could also potentially communicate with the technology using brain-computer interfaces to monitor activity or refine algorithms.

Parallelism is also an inherent aspect of these neural networks that allows robots to process several informational streams simultaneously. In production or testing settings, understanding spatial or sensory cues makes neuromorphic chips superior because their decision-making is more likely to produce human-like outcomes.

Case studies of the SpiNNaker neural hardware demonstrated how a multicore neuromorphic platform can delegate tasks, such as synaptic processing, to different units. They validated how well these models achieve load balancing to optimize computational power and output.

Chips with robust parallelism are less likely to produce faulty results because the computations are delegated to separate parts, collating into a more reasonable action. Compared to traditional robotics, this also lowers the risk of system failure because the spiking neurons will not overload the equipment.

Design considerations for engineers

Neuromorphic chips are advantageous, but interoperability concerns may arise with existing motor drivers and sensors. Engineers can also encounter problems as they program the models and toolchains. They may not conventionally operate with spiking neural networks, commonly found in machinery replicating neuron activity. The chips could render some software or coding obsolete or fail to communicate signals effectively.

Experts will need to tinker with signal timing to ensure information processes promptly in response to specific events. They will also need to use tools and data to predict trends to stay ahead of the competition. Companies will be exploring the scalability of neuromorphic equipment and new applications rapidly, so determining various industries’ needs can inform an organization about the features to prioritize.

Some early applications that could expand include:

  • Swarm robotics
  • Autonomous vehicles
  • Cobots
  • Brain-computer interfaces

Engineers must feel inspired and encouraged to continue developing real-time motor controls with neuromorphic solutions. Doing so will craft self-driven, capable machinery that will change everything from construction sites to production lines. The applications will be as endless as their versatility, which becomes nearly infinite, considering how robots function with a humanlike brain.

Ellie Gabel is a freelance writer as well as an associate editor at Revolutionized.


Related Content

The post Real-time motor control for robotics with neuromorphic chips appeared first on EDN.

Dissecting (and sibling-comparing) a scorched five-port Gigabit Ethernet switch

Mon, 07/14/2025 - 17:17

As the latest entry in my “electronics devices that died in the latest summer-2024 lightning storm” series, I present to you v3.22 (the company’s currently up to v8) of TP-Link’s TL-SG1005D five-port GbE switch, the diminutive alternative to the two eight-port switches I tore down last month. Here’s a box shot to start, taken from a cool hacking project on it that I came across (and will shortly further discuss) during my online research:

WikiDevi says that the TL-SG1005D v3.22 dates from 2009 (here’s the list of all TP-Link TL-SG series variants there), which sounds about right; my email archive indicates that I bought it from Newegg on December 14, 2010, on sale for $16.99 (along with two $19.99 Xbox Live 1600 point cards, then minus a $10 promo code, a discount which you can allocate among the three items as you wish). Nearly 15 years later, I feel comfortable in saying I got my money’s worth out of it!

Here’s what mine looks like, from various perspectives and as-usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes (the switch has approximate dimensions, per my tape measure, of 6.5”x4.25”x1.125”):

I didn’t need to bother taping over which specific port had gone bad this time, because the switch was completely dead!

along with a close-up of the underside label:

Speaking of the “Power Supply” notated on the label, here it is:

In contrast, before continuing, here’s what the latest-gen TP-Link TL-SG1005D v8 looks like:

Usually, from my experience, redesigns like this are prompted by IC-supplier phaseouts that compel PCB redesigns. Clearly, in this case, TP-Link has tinkered with the case cosmetics, too!

Before diving in, I confirmed that a dead wall wart wasn’t the root cause of the device’s demise (it’s happened before). Nope, still seems to be functional:

Granted, while its measured output voltage is as expected, its output current may be degraded (that’s also happened before). But I’m sticking with my theory that the switch itself is expired.

Time to get inside. Unlike other devices like this that I’ve dissected in the past, the screws aren’t under the four rubber “feet” shown in the earlier underside photo. Instead, you’ll find them within the holes that are in proximity to the upper two “feet”:

We have liftoff (snapping a couple of plastic retaining clips in the process, but this device is destined only for the landfill, so no huge loss):

Mission (so far, at least) accomplished:

And at this point, the PCB simply lifts away from the top-half remainder of the plastic shell:

No light guides in this design; the LEDs shine directly on the enclosure’s front panel:

Here’s a PCB backside closeup of the cluster of passives, presumably location-associated with a processor on the other side of the circuit board:

And turning the PCB around:

I’m guessing I’m right, and it’s hiding underneath that honkin’ big passive heatsink.

Let’s start with close-ups of the two labels stuck to this side of the PCB:

And here’s what I assume (due to plug proximity, if nothing else) is the power subsystem:

So, what caused this switch to irrevocably glitch? The brown blobs on the corners of both choke coils were the first thing that caught my eye:

but upon further reflection, I think they’re just adhesive, intended to hold the coils in place.

Next up for demise-source candidacy was the scorch mark atop the 25 MHz crystal oscillator:

Again, though, I bet this happened during initial assembly, not in reaction to the lightning EMP.

Nothing else obvious caught my eye. Last, but not least, then, was to pry off that heatsink:

It was glued stubbornly in place, but the combination of a hair dryer, a slotted screwdriver and some elbow grease (accompanied by colorful commentary) ultimately popped it off:

revealing the IC underneath, with plenty of marking-obscuring glue still stuck to the top of it:

You’re going to have to take my word (not to mention my belated realization that the info was also on WikiDevi, which concurred with my magnifying glass-augmented squinting) that it’s a Realtek RTL8366SB (here’s a datasheet). Note the long scorch mark on the right edge, toward the bottom. While it might result from extended exposure to my hair dryer’s heat, I’m instead betting that it’s smoking-gun (or is that smoking-glue?) evidence of the switch’s point of failure.

I’ll conclude the teardown analysis with a few PCB side views:

leaving me only a few related bits of editorial cleanup to tackle before I wrap up. First off, what’s with the “sibling-comparing” bit in this writeup’s title? While doing preparatory research, I came across a Reddit discussion thread that compared the TL-SG1005D to a notably less expensive TP-Link five-port GbE switch alternative, the TL-LS1005G. More generally, TP-Link’s five-port switch series for “home networking” currently encompasses five products, all supporting Gigabit Ethernet speeds. What’s the difference between them?

Two variations are obvious; four of the five ports in the TL-SG105MPE also support power-over-Ethernet (PoE), and both it and the TL-SG605 have metal cases, versus the plastic enclosures of the other three devices (reminiscent of last month’s metal-vs-plastic product differentiation).

But what about those other three? TP-Link’s website comparison facility fortunately came through…sorta. The low-end “LS” variant is, surprisingly, the only one that publicly documents its performance specs:

  • Switching Capacity: 10 Gbps
  • Packet Forwarding Rate: 7.4 Mpps
  • MAC Address Table: 2K
  • Packet Buffer Memory: 1.5 Mb
  • Jumbo Frame: 16KB
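
Those two headline figures are essentially wire-speed math for a five-port gigabit device. As a quick sanity check, assuming full-duplex ports and minimum-size 64-byte frames (each of which also occupies an 8-byte preamble and a 12-byte interframe gap on the wire), the numbers fall out directly:

```python
# Sanity-check the published switch specs against Gigabit Ethernet wire-speed math.
# Assumptions: 5 ports, 1 Gbps line rate, full duplex, minimum-size 64-byte frames.

PORTS = 5
LINE_RATE_BPS = 1_000_000_000                  # 1 Gbps per port

# Switching capacity counts every port in both directions (full duplex).
switching_capacity_gbps = PORTS * 2 * LINE_RATE_BPS / 1e9
print(f"Switching capacity: {switching_capacity_gbps:.0f} Gbps")          # -> 10 Gbps

# A minimum frame occupies 64 B + 8 B preamble + 12 B interframe gap = 84 B = 672 bits.
BITS_PER_MIN_FRAME = (64 + 8 + 12) * 8
frames_per_port_per_s = LINE_RATE_BPS / BITS_PER_MIN_FRAME                # ~1.488 Mpps
forwarding_rate_mpps = PORTS * frames_per_port_per_s / 1e6
print(f"Packet forwarding rate: {forwarding_rate_mpps:.1f} Mpps")         # -> ~7.4 Mpps
```

In other words, both specs simply say the switch is non-blocking at wire speed across all five ports.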

This data is missing for the others, although I trust that they also support jumbo frame sizes of some sort (the v3.22 TL-SG1005D jumbo frame size is apparently 4KB, by the way). That said, the LS1005G has nearly twice the power consumption of the TL-SF1005D: 3.7 W vs 1.9 W. And what about the latest v8 version of the TL-SG1005D? Its power draw—2.4 W—is in-between the other two. But it’s the only one of the three that supports (in a documented fashion, at least) 802.1p and DSCP QoS.

The “support” is a bit deceptive, though. Like its siblings, it’s an unmanaged switch, versus a higher-end “smart” switch, so you can’t actually configure any of its port-and-protocol prioritization settings. But it will honor and pass along any QoS packet parameters that are already in place. And now, returning to my other bit of cleanup, per the aforementioned hacking project, it can actually transform into a “smart” switch in its own right:

On a hunch, I decided to crack open the switch and look at the internals. Hmm, seemed there was a RTL8366SB GBit switch IC in there. I managed to download the datasheet of the RTL8366, and whaddayaknow, it actually contains all the logic a managed switch has too! Vlan, port mirroring, you name it, and chances are the little critter can do it. It didn’t have a user-interface though; you have to send the config to it over I2C, as cryptic hexadecimal register settings…but that’s nothing an AVR can’t fix.

How friggin’ cool is that?

There’s one more bit of cleanup left, actually. If you’ve already read either last month’s teardown or my initial post in this particular series, you might have noticed that I mentioned the demise of two five-port GbE switches. Where’s the other one? Well, when I plugged it (a TRENDnet TEG-S50g v4.0R, whose $17.99 acquisition dated back to August 2014) back in the other day, prior to taking it apart, it fired right up. I reconnected it to the LAN and it’s working fine. 🤷‍♂️

I guess not all glitches are irrevocable, eh? That’s all I’ve got for today. Let me know your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content

The post Dissecting (and sibling-comparing) a scorched five-port Gigabit Ethernet switch appeared first on EDN.

The design anatomy of a photodetector

Mon, 07/14/2025 - 15:18

A typical photodetector integrates a photodiode with a transimpedance amplifier (TIA). The photodiode converts light into an electrical current, which the transimpedance amplifier then converts into a voltage. So, while a photodiode alone produces a current output, a complete photodetector delivers a voltage output that downstream circuitry can use directly.
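
As a rough feel for that current-to-voltage conversion, here is a minimal sketch assuming an ideal TIA (sign and bandwidth effects ignored) whose output is simply the photocurrent multiplied by the feedback resistance; the responsivity and resistor values are illustrative placeholders, not taken from any particular device:

```python
# Minimal photodetector model: photodiode + ideal transimpedance amplifier (TIA).
# Illustrative values only: 0.5 A/W responsivity, 100 kilohm feedback resistor.

RESPONSIVITY_A_PER_W = 0.5      # photodiode responsivity at the wavelength of interest
R_FEEDBACK_OHM = 100e3          # TIA feedback resistor

def photodetector_vout(optical_power_w: float) -> float:
    """Return TIA output voltage magnitude for a given incident optical power."""
    i_photo = RESPONSIVITY_A_PER_W * optical_power_w    # photodiode current, in amps
    return i_photo * R_FEEDBACK_OHM                     # ideal TIA: |Vout| = Iphoto * Rf

# Example: 10 uW of light -> 5 uA of photocurrent -> 0.5 V at the output
print(photodetector_vout(10e-6))    # 0.5
```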

Read the full post at EDN’s sister publication, Planet Analog.

Related Content

The post The design anatomy of a photodetector appeared first on EDN.

Why the transceiver “arms race” is turning network engineers toward versatility

Fri, 07/11/2025 - 16:46
Skyrocketing data consumption

The last few years have seen a tremendous amount of change in the mobile data world. Both in the United States and around the globe, data consumption is growing faster than ever before.

The number of internet users continues to rise, from 5.35 billion users in 2024 to an estimated 7.9 billion users in 2029—a 47% increase in just five years, according to Forbes. This has created an explosion in global mobile data traffic set to exceed 403 exabytes per month by 2029, up from an estimated 130 exabytes monthly at the end of 2023, according to the Ericsson Mobility Report. For context, in 2014, that amount was a mere 2.5 exabytes per month (Figure 1).

Figure 1 While some of the skyrocketing demand for data is associated with video conferencing, the vast majority is related to the increased usage of large language models (LLMs) like ChatGPT. Source: Infinite Electronics

A variety of simultaneous technological changes are also helping to drive this rapid increase in data consumption. Video-chat technology that went into wider usage during the pandemic has become a mainstay of office life, while autonomous vehicles and IoT devices continue to grow in variety and prevalence. The biggest sea change, however, has been the rapid integration of generative AI into mainstream culture since the public launch of ChatGPT in late 2022. Combined with state-of-the-art technology like Nvidia’s recently announced AI chips, these new innovations are placing an enormous strain on networks to keep up and maintain efficient data transfer.

Transceivers in high-speed data transfer

In response, internet service providers and data centers are hurriedly seeking solutions that enable the most efficient data transfer possible. Transceivers, which essentially provide the bridge between compute/storage systems and the network infrastructure, serve a critical but sometimes overlooked role in enabling high-speed data transfer over fiber or copper cables.

Driven by the need for increased data transfer capabilities, the window between transceiver data-rate upgrades continues to shorten. The 2023 introduction of 800G came roughly six years after its predecessor, 400G, and barely two years later, the next iteration of optical transceivers, 1.6T, could arrive as soon as Q3 of this year. This so-called “arms race” of technology shifts and data growth creates various layers of concern for network engineers, including validating new technology, maintaining quality, ensuring interoperability, speeding up implementation to maximize ROI, and increasing network uptime. Network upgrades to boost speeds and bandwidth are crucial for staying ahead of competitors and driving new customer acquisition.

Data center power crunch

Unlike at telecom sites, power demand is a major consideration when evaluating data-center upgrades. According to Goldman Sachs, power demand from data centers is expected to grow 160% by 2030 due to the increased electricity needed for AI workloads. These demands are even motivating some operators to build their own electrical substation facilities.

This push for data center upgrades doesn’t just include transceivers and other components, but full rack-level equipment changes as well, and the cost of making these upgrades can be significant. Hyperscalers like Google, Amazon, Microsoft and Facebook are continuously investing in cutting-edge infrastructure to support cloud services, AI, advertising and digital platforms. Despite the high cost, these companies feel compelled to invest in cutting-edge technology to ensure strong user experiences and avoid falling behind competitors. Similarly, enterprise data centers like those run by Equifax or Bloomberg often run their own infrastructure to support specific business operations and invest heavily in technology upgrades.

But in smaller data centers not built by hyperscalers or large enterprises—such as colocation providers, regional service providers, universities, or mid-sized businesses—the cost of transceivers can account for a significant portion of total network hardware spending, sometimes in excess of 50%, according to Cisco. Because these organizations may not upgrade transceivers as frequently, often skipping a generation, each purchasing decision is made with the goal of balancing performance, longevity, and cost.

Additional factors like uptime, reliability, and time to market are also shifting network engineers’ priorities, with a heavy focus on quality products that offer operational flexibility. Some engineers are aligning with vendors that have a strong track record of quality, technical support teams that can be leveraged, and strong financials, assurance that the vendor can honor warranties and keep parts in inventory for urgent needs. Network engineers know that lowering the cost of network equipment is crucial for maintaining ROI, but they also understand that quality and reliability are vital to business operations because they prevent failures and the liabilities that come with outages.

Transceiver procurement

These considerations are leading engineers toward the choice of purchasing transceivers from original equipment manufacturers (OEMs) or from third-party vendors. While each option offers its own benefits, as shown in Table 1, there are meaningful differences between the two.

Table 1 The major differences between OEM transceivers and third-party transceivers in key categories. Source: Infinite Electronics

Transceivers from reputable third-party vendors are built to the same MSA (multi-source agreement) standards followed by optics from OEMs, ensuring they have the same electrical and optical capabilities. However, OEM transceivers often cost considerably more, frequently between 2x and 5x, than equivalent third-party optics. In a data center with thousands of ports, the difference can be significant, reaching hundreds of thousands of dollars.
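
To put that in concrete terms, a back-of-the-envelope sketch is shown below; the port count and unit price are hypothetical, chosen only to illustrate how the 2x-to-5x multiplier compounds across a facility:

```python
# Back-of-the-envelope optics cost comparison across a data center.
# All figures are hypothetical and for illustration only.

ports = 1_000                     # assumed number of optical ports to populate
third_party_unit_cost = 200       # assumed price ($) per third-party transceiver
oem_multipliers = (2, 5)          # OEM optics frequently cost 2x to 5x more

third_party_total = ports * third_party_unit_cost
for m in oem_multipliers:
    oem_total = third_party_total * m
    print(f"{m}x OEM pricing: ${oem_total:,} vs ${third_party_total:,} "
          f"third-party -> ${oem_total - third_party_total:,} difference")
```

Even at the low end of the multiplier, the delta lands squarely in the hundreds of thousands of dollars.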

Transceivers from OEMs come hard-coded to run on one specific platform: Cisco, Ciena, IBM, or any of hundreds of others on the market. It’s common for a fiber-optic network to include multiple installations of different OEM equipment, but additional complexity can be created through the acquisition of a company that used transceivers from an entirely different set of vendors. This often forces organizations to maintain separate inventories of backup transceivers coded to each platform in current use. In addition, using optics from one OEM can tie an organization to it indefinitely, reducing its flexibility for future upgrades.

Vendor agnostic functionality

Third-party vendors often offer a wider variety of form factors, connector types, and reach options than brand-name vendors. It’s also possible to get custom-programmed optics for multi-vendor environments where compatibility is an issue. Some vendors are able to code or recode transceivers out in the field in minutes, effectively allowing organizations to cover the same range of operations with less inventory.
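
That platform coding lives in the module’s management EEPROM (for SFP/SFP+ parts, the SFF-8472 serial-ID page), which is why a vendor with the right tooling can reprogram compatibility fields on site. Here is a minimal sketch of what reading those identity fields looks like, assuming a raw A0h page dump is already in hand; the dump below is fabricated, not from a real module:

```python
# Parse identity fields from an SFP/SFP+ module's A0h EEPROM page (SFF-8472 layout).
# Per SFF-8472: vendor name = bytes 20-35, part number = 40-55, serial number = 68-83,
# all stored as space-padded ASCII.

def parse_sfp_identity(a0_page: bytes) -> dict:
    def field(start: int, length: int) -> str:
        return a0_page[start:start + length].decode("ascii", errors="replace").strip()
    return {
        "vendor_name": field(20, 16),
        "part_number": field(40, 16),
        "serial_number": field(68, 16),
    }

# Fabricated 256-byte dump for illustration; a real dump would be read over the
# module's two-wire interface (7-bit I2C address 0x50) via the host switch or NIC.
dump = bytearray(256)
dump[20:36] = b"ACME OPTICS     "      # hypothetical vendor
dump[40:56] = b"SFP-10G-LR-X    "      # hypothetical part number
dump[68:84] = b"A1B2C3D4        "      # hypothetical serial number
print(parse_sfp_identity(bytes(dump)))
```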

Whereas OEM optics tend to have long procurement cycles due to internal processes, certifications, or global supply chain issues, third-party suppliers often offer the ability to ship same day or within days, which can be crucial given the time constraints on maintenance windows and rapid expansion plans.

With data demands forecasted to continue escalating to the end of the decade, data providers will have to make a substantial investment to manage the shifts in technology and keep up with customer needs. To maintain network uptime, it will be increasingly critical to partner with vendors that can provide technical support as well as competitive products that maintain high quality and reliable performance.

Third-party transceiver benefits

For hospitals, banks, retailers, and other businesses with employees working from home, connectivity will be essential for executing even the simplest daily tasks. Preserving a business’s reputation and customer loyalty depends on limiting liability, which makes a robust, high-uptime network critical.

By providing versatility through shorter lead times and broader compatibility, third-party transceiver solutions help ensure that infrastructure upgrades can keep up with the pace of business needs. In a landscape defined by rapid change, having access to reliable, standards-compliant alternatives can offer organizations a crucial strategic advantage.

For organizations navigating the challenges of scaling their networks while managing costs, third-party transceivers offer a practical path forward, helping ensure that networks remain both resilient and future-ready.

Jason Koshy is Infinite Electronics’ global VP of sales and business development, leading its outside sales team and installations. He brings to this position more than 28 years of experience covering all facets of the business. His previous roles include applications engineer, quality and manufacturing engineer, new acquisition evaluations, regional sales manager, director of sales for North America and, most recently, VP of sales for the Americas and ROW. Jason also participated in the integration of Integra, PolyPhaser and Transtector into the Infinite Electronics brand family. He holds a Bachelor of Science in electrical engineering from the University of South Florida.

Related Content

The post Why the transceiver “arms race” is turning network engineers toward versatility appeared first on EDN.

MCUs power single-motor systems

Fri, 07/11/2025 - 16:45

With features optimized for motor control, Renesas’ RA2T1 MCUs drive fans, power tools, home appliances, and other single-motor systems. The MCUs integrate a 32-bit Arm Cortex-M23 processor running at 64 MHz and a 12-bit ADC with a 3-channel sample-and-hold function that simultaneously captures the 3-phase currents of BLDC motors for precise control.
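
Simultaneous sampling matters because standard field-oriented-control math assumes all three phase currents were captured at the same instant. As a generic illustration of the first step in that math (not Renesas FSP code), the Clarke transform below collapses the three samples into the two-axis frame the control loop actually regulates:

```python
# Clarke transform: three simultaneously sampled phase currents -> alpha/beta frame.
# Generic field-oriented-control math; not tied to any particular MCU or SDK.
import math

def clarke(i_a: float, i_b: float, i_c: float):
    """Amplitude-invariant Clarke transform, assuming i_a + i_b + i_c == 0."""
    i_alpha = i_a
    i_beta = (i_a + 2.0 * i_b) / math.sqrt(3.0)
    return i_alpha, i_beta

# Example: balanced 1 A sinusoidal currents sampled at one instant, 120 degrees apart
theta = math.radians(30)
i_a = math.cos(theta)
i_b = math.cos(theta - 2 * math.pi / 3)
i_c = math.cos(theta + 2 * math.pi / 3)
print(clarke(i_a, i_b, i_c))    # approximately (0.866, 0.5), i.e. (cos 30, sin 30)
```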

A PWM timer supports automatic dead-time insertion and asymmetric PWM generation, features tailored for inverter drive and control algorithm implementation. Safety functions include PWM forced shutdown, SRAM parity check, ADC self-diagnosis, clock accuracy measurement, and unauthorized memory access detection.

Renesas’ Flexible Software Package (FSP) for the RA2T1 microcontroller streamlines development with middleware stacks for Azure RTOS and FreeRTOS, peripheral drivers, and connectivity, networking, and security components. It also provides reference software for AI, motor control, and cloud-based applications.

The RA2T1 series of MCUs is available now, along with the FSP software.

RA2T1 product page

Renesas Electronics 

The post MCUs power single-motor systems appeared first on EDN.

Cadence debuts LPDDR6 IP for high-bandwidth AI

Fri, 07/11/2025 - 16:45

Cadence taped out an LPDDR6/5X memory IP system running at 14.4 Gbps—up to 50% faster than previous-generation LPDDR DRAM. The complete PHY and controller system optimizes power, performance, and area, while supporting both LPDDR6 and LPDDR5X protocols. Cadence expects the IP to help AI infrastructure meet the memory bandwidth and capacity demands of large language models (LLMs), agentic AI, and other compute-heavy workloads.
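
To put the headline figure in perspective, here is a quick calculation under stated assumptions: the 50% uplift lines up with a 9.6 Gbps LPDDR5X baseline (the fastest standard LPDDR5X speed grade), and the bandwidth estimate assumes an illustrative 32-bit interface, since actual channel widths depend on the LPDDR6 configuration chosen:

```python
# Quick sizing math for the 14.4 Gbps LPDDR6/5X PHY claim.
# Assumptions (illustrative only): 9.6 Gbps LPDDR5X baseline, 32-bit interface width.

new_rate_gbps = 14.4
baseline_lpddr5x_gbps = 9.6            # assumed previous-generation per-pin rate
uplift = new_rate_gbps / baseline_lpddr5x_gbps - 1
print(f"Per-pin uplift: {uplift:.0%}")                                   # -> 50%

interface_width_bits = 32              # assumed width, for illustration
bandwidth_gbytes_per_s = new_rate_gbps * interface_width_bits / 8
print(f"Peak bandwidth at {interface_width_bits} bits wide: "
      f"{bandwidth_gbytes_per_s:.1f} GB/s")                              # -> 57.6 GB/s
```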

The memory system features a scalable, adaptable architecture that draws on Cadence’s DDR5 (12.8 Gbps), LPDDR5X (10.7 Gbps), and GDDR7 (36 Gbps) IP lines. As the first offering in the LPDDR6 IP portfolio, it supports native integration into monolithic SoCs and enables heterogeneous chiplet integration through the Cadence chiplet framework for multi-die system designs.

Customizable for various package and system topologies, the LPDDR6/5X PHY is offered as a drop-in hardened macro. The LPDDR6/5X controller, provided as a soft RTL macro, includes a full set of industry-standard and advanced memory interface features, such as support for the Arm AMBA AXI bus.

The LPDDR6/5X memory IP system is now available for customer engagements.

LPDDR product page

Cadence

The post Cadence debuts LPDDR6 IP for high-bandwidth AI appeared first on EDN.
