Relentless Innovation is Driving Software-Defined Vehicles into the Future
Amazon and Google: Can you AI-upgrade the smart home while being frugal?

The chronological proximity of Amazon and Google’s dueling new technology and product launch events on Tuesday and Wednesday of this week was highly unlikely to have been a coincidence. Which company, therefore, reacted to the other? Judging solely from when the events were first announced, which is the only data point I have as an outsider, it looks like Google was the one who initially put the stake in the ground on September 2nd with an X (the service formerly known as Twitter) post, with Amazon subsequently responding (not to mention scheduling its event one day earlier in the calendar) two weeks later, on September 15.
Then again, who can say for sure? Maybe Amazon started working on its event ahead of Google, and simply took longer to finalize the planning. We’ll probably never know for sure. That said, it also seems from the sidelines that Amazon might have also gotten its hands on a leaked Google-event script (to be clear, I’m being completely facetious with what I just said). That’s because, although the product specifics might have differed, the overall theme was the same: both companies are enhancing their existing consumer-residence ecosystems with AI (hoped-for) smarts, something that they’ve both already announced as an intention in the past:
- Amazon, with a generative AI evolution-for-Alexa allusion two years ago, subsequently assigned the “Alexa+” marketing moniker back in February, and
- Google, which foreshadowed the smart home migration to come within its announcement of the Google Assistant-to-Gemini transition for mobile devices back in March.
Quoting from one of Google’s multiple event-tied blog posts as a descriptive example of what both companies seemingly aspire to achieve:
The idea of a helpful home is one that truly takes care of the people inside it. While the smart home has shown flashes of that promise over the last decade, the underlying AI wasn’t anywhere as capable as it is today, so the experience felt transactional, not conversational. You could issue simple commands, but the home was never truly conversational and seldom understood your context.
Today, we’re taking a massive step toward making the helpful home a reality with a fundamentally new foundation for Google Home, powered by our most capable AI yet, Gemini. This new era is built on four pillars: a new AI for your home, a redesigned app, new hardware engineered for this moment and a new service to bring it all together.
Amazon’s hardware “Hail Mary”
Of the two companies, Amazon probably has the most to lose if it fumbles the AI-enhancement service handoff. That’s because, as Ars Technica’s coverage title aptly notes, “Alexa’s survival hinges on you buying more expensive Amazon devices”:
Amazon hasn’t had a problem getting people to buy cheap, Alexa-powered gadgets. However, the Alexa in millions of homes today doesn’t make Amazon money. It’s largely used for simple tasks unrelated to commerce, like setting timers and checking the weather. As a result, Amazon’s Devices business has reportedly been siphoning money, and the clock is ticking for Alexa to prove its worth.
I’m ironically a case study of Amazon’s conundrum. Back in early March, when the Alexa+ early-access program launched, I’d signed up. I finally got my “Your free Early Access to Alexa+ starts now” email on September 24, a week and a day ago, as I’m writing this on October 2. But I haven’t yet upgraded my service, which is admittedly atypical behavior for a tech enthusiast such as myself.
Why? Price isn’t the barrier in my particular case (though it likely would be for others less Amazon-invested than me); mine’s an Amazon Prime-subscribing household, so Alexa+ is bundled versus costing $19.99 per month for non-subscribers. Do the math, though, and why anyone wouldn’t go the bundle-with-Prime route is the question (which, I’d argue, is Amazon’s core motivation); Prime is $14.99 per month or $139/year right now.
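The "do the math" invitation above works out as follows; a quick sketch using only the prices quoted in the text (which Amazon may of course change at any time):

```python
# Annual cost comparison, using the article's quoted prices.
alexa_plus_alone = 19.99 * 12    # Alexa+ for non-Prime subscribers, per year
prime_monthly = 14.99 * 12       # Prime paid monthly, per year (Alexa+ bundled)
prime_annual = 139.00            # Prime paid yearly (Alexa+ bundled)

print(f"Alexa+ standalone: ${alexa_plus_alone:.2f}/yr")
print(f"Prime, monthly:    ${prime_monthly:.2f}/yr")
print(f"Prime, annual:     ${prime_annual:.2f}/yr")
```

Annual Prime comes in cheaper than standalone Alexa+ alone, with all the other Prime benefits thrown in, which is exactly why the bundle route is the obvious one.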
So, if it’s not the service price tag, then what alternatively explains my sloth? It’s the devices—more accurately, my dearth of relevant ones—with the exception of the rarely-used Alexa app on my smartphones and tablets (which, ironically, I generally fire up only when I’m activating a new standalone Alexa-cognizant device).
Alexa+ is only supported on newer-generation hardware, whereas more than half (and the dominant share in regular use) of the devices currently activated in my household are first-generation Echoes, early-generation Echo Dots, and a Tap. With the exception of the latter, which I sometimes need to power-cycle before it’ll start streaming Amazon Music-sourced music again, they’re all still working fine, at least for the “transactional” (per Google’s earlier lingo) functions I’ve historically tasked them with.
And therefore, as an example of “chicken and the egg” paralysis, in the absence of their functional failure, I’m not motivated to proactively spend money to replace them in order to gain access to additional Alexa+ services that might not end up rationalizing the upfront investment.
Speakers, displays, and stylus-augmented e-book readers
Amazon unsurprisingly announced a bevy of new devices this week, strangely none of which seemingly justified a press release or, come to think of it, even an event video, in stark contrast to Apple’s prerecorded-only approach (blog posts were published aplenty, however). Many of the new products are out-of-the-box Alexa+ capable and, generally speaking, they’re also more expensive than their generational precursors. First off is the curiously reshaped (compared to its predecessor) Echo Studio, in both graphite (shown) and “glacier” white color schemes:

There’s also a larger version of the now-globular Echo Dot (albeit still smaller than the also-now-globular Echo Studio), called the Echo Dot Max, with the same two color options:

And two also-redesigned-outside smart displays, the Echo Show 11 and latest-generation Echo Show 8, which basically (at least to me) look like varying-sized Echo Dots with LCDs stuck to their fronts. They both again come in both graphite and glacier white options:


and also have optional, added-price, more position-adjustable stands:

This new hardware raises the perhaps-predictable question: Why is my existing hardware not Alexa+ capable? Assuming all the deep learning inference heavy lifting is being done on the Amazon “cloud”, what resource limitations (if any) exist with the “edge” devices already residing in my (at least semi-) smart home?
Part of the answer might be with my assumption in the prior sentence; perhaps Amazon is intending for them to have limited (at least) ongoing standalone functionality if broadband goes down, which would require beefier processing and memory than that included with my archaic hardware. Perhaps, too, even if all the AI processing is done fully server-side, Amazon’s responsiveness expectations aren’t adequately served by my devices’ resources, in this case also including Wi-Fi connectivity. And yes, to at least some degree, it may just be another “obsolescence by design” case study. Sigh. More likely, my initial assumption was over-simplistic and at least a portion of the inference functions suite is running natively on the edge device using locally stored deep learning models, particularly for situations where rapid response time (vs edge-to-cloud-and-back round-trip extended latency) is necessary.
Other stuff announced this week included three new stylus-inclusive, therefore scribble-capable, Kindle Scribe 11” variants, one with a color screen, which this guy, who tends to buy—among other content—comics-themed e-books that are only full-spectrum appreciable on tablet and computer Kindle apps, found intriguing until he saw the $629.99-$679.99 price tag (in fairness, the company also sells stylus-less, but notably less expensive Colorsoft models):

and higher-resolution indoor and outdoor Blink security cameras, along with a panorama-stitching two-camera image combiner called the Blink Arc:

Speaking of security cameras, Ring founder Jamie Siminoff, who had previously left Amazon post-acquisition, has returned and was on hand this week to personally unveil also-resolution-bumped (this time branded as Retinal Vision) indoor- and outdoor-intended hardware, including an updated doorbell camera model:

Equally interesting to me are Ring’s community-themed added and enhanced services: Familiar Faces, Alexa+ Greetings, and (for finding lost dogs) Search Party. And then there’s this notable revision of past stance, passed along as a Wired coverage quote absent personal commentary:
It’s worth noting that Ring has brought back features that allow law enforcement to request footage from you in the event of an incident. Ring customers can choose to share video, and they can stay anonymous if they opt not to send the video. “There is no access that we’re giving police to anything other than the ability to, in a very privacy-centric way, request footage from someone who wants to do this because they want to live in a safe neighborhood,” Siminoff tells WIRED.
A new software chapter
Last, but not least (especially in the last case) are several upgraded Fire TVs, still Fire OS-based:

and a new 4K Fire TV Stick, the latter the first out-of-box implementation example of Amazon’s newfound Linux embrace (and Linux-derived Android about-face), Vega OS:

We’d already known for a while that Amazon was shutting down its Appstore, but its Fire OS-to-Vega OS transition is more recent. Notably, there’s no more local app sideloading allowed; all apps come down from the Amazon cloud.
Google’s more modest (but comprehensive) response
Google’s counterpunch was more muted, albeit notably (and thankfully, from a skip-the-landfill standpoint) more inclusive of upgrades for existing hardware versus the day-prior comparative fixation on migrating folks to new devices, and reflective of a company that’s fundamentally a software supplier (with a software-licensing business model). Again from Wired’s coverage:
This month, Gemini will launch on every Google Assistant smart home device from the last decade, from the original 2016 Google Home speaker to the Nest Cam Indoor 2016. It’s rolling out in Early Access, and you can sign up to take part in the Google Home app.
There’s more:
Google is bringing Gemini Live to select Google Home devices (the Nest Audio, Google Nest Hub Max, and Nest Hub 2nd Gen, plus the new Google Home Speaker). That’s because Gemini Live has a few hardware dependencies, like better microphones and background noise suppression. With Gemini Live, you’ll be able to have a back-and-forth conversation with the chatbot, even have it craft a story to tell kids, with characters and voices.
But note the fine print, which shouldn’t be a surprise to anyone who’s already seen my past coverage: “Support doesn’t include third-party devices like Lenovo’s smart displays, which Google stopped updating in 2023.”
One other announced device, an upgraded smart speaker visually reminiscent of Apple’s HomePod mini, won’t ship until early next year.

And, as the latest example of Google’s longstanding partnership with Walmart, the latter retailer has also launched a line of onn.-branded, Gemini-supportive security cameras and doorbells:

That’s what I’ve got for you today; we’ll have to see what, if anything else, Apple has for us before the end of the year, and whether it’ll take the form of an event or just a series of press releases. Until then, your fellow readers and I await your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Disassembling the Echo Studio, Amazon’s Apple HomePod foe
- Amazon’s Echo Auto Assistant: Legacy vehicle retrofit-relevant
- Lenovo’s Smart Clock 2: A “charged” device that met a premature demise
- The 2025 Google I/O conference: A deft AI pivot sustains the company’s relevance
- Google’s fall…err…summer launch: One-upping Apple with a sizeable product tranche
The post Amazon and Google: Can you AI-upgrade the smart home while being frugal? appeared first on EDN.
Infrared Communication Made Simple for Everyday Devices
As technology advances, many everyday devices depend on short-range communication to exchange or gather data. Although wireless technologies such as Wi-Fi and Bluetooth dominate the market, they are not always the ideal option, especially for low-power applications where efficiency, simplicity, and cost are most important. In these instances, infrared (IR) communication remains an efficient option, powering applications such as smart meters, wearable electronics, medical devices, and remote controls.
But implementing an infrared link is not always easy. An IR diode cannot simply be attached to a microcontroller pin and driven efficiently. To avoid saturating the diode and to provide a robust signal, a low-frequency carrier is often employed, which must then be modulated by the data stream. Historically, this has meant adding modem chips, timers, and mixers, increasing cost, complexity, and board space.
The Inefficient Signal Generation Challenge
Fundamentally, infrared communication relies on two key signals:
- Carrier Frequency – a square wave that paces the IR diode at a suitable frequency.
- Data Stream – the content of the communication, which must modulate the carrier.
In most implementations, these signals come from separate peripherals on a microcontroller and must be merged externally. This adds components and consumes multiple I/O pins, neither of which is conducive to small, battery-powered devices.
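The carrier-plus-data scheme described above can be sketched in a few lines: the data stream simply gates an on-off-keyed carrier before it reaches the IR diode, which is conceptually what the internal "mask" functions on newer MCUs do. The frequencies below are illustrative choices of mine, not tied to any particular part.

```python
# Minimal sketch of IR modulation: the data stream gates a square-wave
# carrier (on-off keying). Two samples per carrier cycle.
CARRIER_HZ = 38_000                       # common consumer-IR carrier
BIT_RATE = 2_400                          # slow UART-style data rate
CYCLES_PER_BIT = CARRIER_HZ // BIT_RATE   # carrier cycles per data bit

def modulate(bits):
    """Carrier passes to the output only while the data bit is 1."""
    out = []
    for bit in bits:
        for _ in range(CYCLES_PER_BIT):
            out.append(1 if bit else 0)   # carrier high half-cycle (gated)
            out.append(0)                 # carrier low half-cycle
    return out

burst = modulate([1, 0, 1])               # "101" as gated carrier bursts
```

A hardware mask function performs this same AND-style combination in silicon, which is why no external mixer is needed.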
A Smarter Way Forward
Recent microcontrollers have begun to address this challenge by providing simpler mechanisms for IR signal generation. Instead of needing a separate modem chip, some of these devices combine the timer output (carrier frequency) with the communication output (data) internally. The result is a ready-modulated signal that can directly drive an infrared diode.
One example is the Renesas RA4C1. An 80-MHz device with low-power operating modes down to 1.6 V, it offers an SCI/AGT mask function that combines a UART or IrDA interface output with a timer signal, making it possible to generate the required modulated IR output without any external hardware.
Design Flexibility
This method is efficient because it is flexible:
- Developers have the option of utilizing a basic UART output that is modulated by a timer-generated carrier.
- Or they can implement an integrated IrDA interface, with provisions for direct modulation or phase-inverted output based on the application requirement.
Both schemes present a clean, stable signal while minimizing the amount of external components and I/O pins needed.
For designers of small electronics like handheld meters, fitness monitors, or household appliances, space and power efficiency are key considerations. An IR communication solution with integrated modulation saves cost and enhances reliability by eliminating external circuitry. It also helps speed up product development, as engineers no longer need to spend extra time connecting separate modem chips or modulation hardware.
Conclusion:
Infrared communication continues to provide a reliable, low-cost solution for short-range connectivity, particularly in environments where the inclusion of a full radio system is not warranted. With newer microcontrollers embracing built-in modulation capabilities, establishing an IR connection has never been simpler. This change makes it possible for developers to deliver smarter, more power-efficient products while maintaining simplicity and low cost.
(This article has been adapted and modified from content on Renesas.)
The post Infrared Communication Made Simple for Everyday Devices appeared first on ELE Times.
PoE basics and beyond: What every engineer should know

Power over Ethernet (PoE) is not rocket science, but it’s not plug-and-play magic either. This short primer walks through the basics with a few practical nudges for those curious to try it out.
It’s a technology that delivers electrical power alongside data over standard twisted-pair Ethernet cables. It enables a single RJ45 cable to supply both network connectivity and power to powered devices (PDs) such as wireless access points, IP cameras, and VoIP phones, eliminating the need for separate power cables and simplifying installation.
PoE essentials: From devices to injectors
Any network device powered via PoE is known as a powered device or PD, with common examples including wireless access points, IP security cameras, and VoIP phones. These devices receive both data and electrical power through Ethernet cables from power sourcing equipment (PSE), which is classified as either “endspan” or “midspan.”
An endspan—also called an endpoint—is typically a PoE-enabled network switch that directly supplies power and data to connected PDs, eliminating the need for a separate power source. In contrast, when using a non-PoE network switch, an intermediary device is required to inject power into the connection. This midspan device, often referred to as a PoE injector, sits between the switch and the PD, enabling PoE functionality without replacing existing network infrastructure. A PoE injector sends data and power together through one Ethernet cable, simplifying network setups.

Figure 1 A PoE injector with auto negotiation, which manages power delivery safely and efficiently. Source: http://poe-world.com
The above figure shows a PoE injector with auto negotiation, a safety and compatibility feature that ensures power is delivered only when the connected device can accept it. Before supplying power, the injector initiates a handshake with the PD to detect its PoE capability and determine the appropriate power level. This prevents accidental damage to non-PoE devices and allows precise power delivery—whether it’s 15.4 W for Type 1, 25.5 W for Type 2, or up to 90 W for newer Type 4 devices.
Note at this point that the original IEEE 802.3af-2003 PoE standard provides up to 15.4 watts of DC power per port. This was later enhanced by the IEEE 802.3at-2009 standard—commonly referred to as PoE+ or PoE Plus—which supports up to 25.5 watts for Type 2 devices, making it suitable for powering VoIP phones, wireless access points, and security cameras.
To meet growing demands for higher power delivery, the IEEE introduced a new standard in 2018: IEEE 802.3bt. This advancement significantly increased capacity, enabling up to 60 watts (Type 3) and circa 100 watts (Type 4) of power at the source by utilizing all four pairs of wires in Ethernet cabling compared to earlier standards that used only two pairs.
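The wattage tiers above can be summarized in one table, along with the rough line current each implies. The single 48-V figure below is my simplification for illustration; the IEEE standards actually specify voltage ranges, and PSE-side versus PD-available budgets differ.

```python
# Per-port power figures as quoted in the article, with the approximate
# line current each implies at an assumed nominal 48-V supply.
NOMINAL_V = 48.0

poe_types = {
    "Type 1 (802.3af)": 15.4,
    "Type 2 (802.3at, PoE+)": 25.5,
    "Type 3 (802.3bt)": 60.0,
    "Type 4 (802.3bt)": 100.0,    # "circa 100 W" at the source
}

for name, watts in poe_types.items():
    ma = watts / NOMINAL_V * 1000
    print(f"{name}: {watts:.1f} W ≈ {ma:.0f} mA at {NOMINAL_V:.0f} V")
```

The jump from Type 2 to Types 3 and 4 is what required pressing all four wire pairs into power-carrying service.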
As indicated previously, VoIP phones were among the earliest applications of PoE. Wireless access points (WAPs) and IP cameras are also ideal use cases, as all these devices require both data connectivity and power.

Figure 2 This PoE system is powering a fixed wireless access (FWA) device.
As a sidenote, an injector delivers power over the network cable, while a splitter extracts both data and power—providing an Ethernet output and a DC plug.
A practical intro to PoE for engineers and DIYers
So, PoE simplifies device deployment by delivering both power and data over a single cable. For engineers and DIYers looking to streamline installations or reduce cable clutter, PoE offers a clean, scalable solution.
This brief section outlines foundational use cases and practical considerations for first-time PoE users. No deep dives: just clear, actionable insights to help you get started with smarter, more efficient connectivity.
Up next is the tried-and-true schematic of a passive PoE injector I put together some time ago for an older IP security camera (24 VDC/12 W).

Figure 3 Schematic demonstrates how a passive PoE injector powers an IP camera. Source: Author
In this setup, the LAN port links the camera to the network, and the PoE port delivers power while completing the data path. As a cautionary note, use a passive PoE injector only when you are certain of the device’s power requirements. If you are unsure, take time to review the device specifications. Then, either configure a passive injector to match your setup or choose an active PoE solution with integrated negotiation and protection.
Fundamentally, most passive PoE installations operate across a range of voltages, with 24 V often serving as a practical middle ground. Even lower voltages, such as 12 V, can be viable depending on cable length and power requirements. However, passive PoE should never be applied to devices not explicitly designed to accept it; doing so risks damaging the Ethernet port’s magnetics.
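To get a feel for why voltage and cable length matter, here is a first-order IR-drop estimate for a scenario like the 24-V/12-W camera mentioned earlier. The wire resistance is my assumption (roughly 24 AWG Cat5e copper), not a figure from the article.

```python
# First-order cable-drop estimate for a passive PoE run. Assumes 24 AWG
# Cat5e (~0.084 Ω/m per conductor), power out on one pair and back on
# another, with the two conductors of each pair in parallel.
OHMS_PER_M = 0.084            # single conductor; a paralleled pair halves this

def voltage_drop(v_in, p_load, length_m):
    i = p_load / v_in                     # current drawn at the far end
    loop_r = OHMS_PER_M * length_m        # (R/2 out) + (R/2 back)
    return i * loop_r

print(round(voltage_drop(24, 12, 30), 2))   # volts lost on a 30-m run at 24 V
print(round(voltage_drop(12, 12, 30), 2))   # same load at 12 V: double the loss
```

Halving the supply voltage doubles the current for the same load, and therefore doubles the cable drop, which is why 12-V passive PoE only works over short runs.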
Unlike active PoE standards, passive PoE delivers power continuously without any form of negotiation. In its earliest and simplest form, it leveraged unused pairs in Fast Ethernet to transmit DC voltage—typically using pins 4–5 for positive and 7–8 for negative, echoing the layout of 802.3af Mode B. As Gigabit Ethernet became common, passive PoE evolved to use transformers that enabled both power and data to coexist on the same pins, though implementations vary.
Seen from another angle, PoE technology typically utilizes the two unused twisted pairs in standard Ethernet cables—but this applies only to 10BASE-T and 100BASE-TX networks, which use two pairs for data transmission.
In contrast, 1000BASE-T (Gigabit Ethernet) employs all four twisted pairs for data, so PoE is delivered differently—by superimposing power onto the data lines using a method known as phantom power. This technique allows power to be transmitted without interfering with data, leveraging the center tap of Ethernet transformers to extract the common-mode voltage.
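The phantom-power idea described above can be modeled in a few lines: DC rides on each pair as a common-mode offset, the data remains differential, and the transformer center tap recovers the average while the data receiver sees only the difference. The voltages below are illustrative values of mine.

```python
# Toy model of phantom power on one twisted pair.
def pair_voltages(data_bit, v_common):
    """Return the two wire voltages: common-mode DC plus differential data."""
    swing = 2.5                                 # differential data amplitude
    d = swing if data_bit else -swing
    return v_common + d / 2, v_common - d / 2   # wire A, wire B

for bit in (0, 1):
    a, b = pair_voltages(bit, 48.0)
    center_tap = (a + b) / 2              # what the PoE power path extracts
    differential = a - b                  # what the data receiver sees
    assert center_tap == 48.0             # power path: pure DC, data-free
    assert abs(differential) == 2.5       # data path: no DC component
```

Because the center-tap average is constant regardless of the data bit, power and data coexist on the same pins without interference.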
PoE primer: Surface touched, more to come
Though we have only skimmed the surface, it’s time for a brief wrap-up.
Fortunately, even beginners exploring PoE projects can get started quickly, thanks to off-the-shelf controller chips and evaluation boards designed for immediate use. For instance, the EV8020-QV-00A evaluation board—shown below—demonstrates the capabilities of the MP8020, an IEEE 802.3af/at/bt-compliant PoE-powered device.

Figure 4 MPS showcases the EV8020-QV-00A evaluation board, configured to evaluate the MP8020’s IEEE 802.3af/at/bt-compliant PoE PD functionality. Source: MPS
Here are my quick picks for reliable, currently supported PoE PD interface ICs—the brains behind PoE:
- TI TPS23730 – IEEE 802.3bt Type 3 PD with integrated DC-DC controller
- TI TPS23731 – No-opto flyback controller; compact and efficient
- TI TPS23734 – Type 3 PD with robust thermal performance and DC-DC control
- onsemi NCP1081 – Integrated PoE-PD and DC-DC converter controller; 802.3at compliant
- onsemi NCP1083 – Similar to NCP1081, with auxiliary supply support for added flexibility
- TI TPS2372 – IEEE 802.3bt Type 4 high-power PD interface with automatic MPS (maintain power signature) and autoclass
Similarly, leading semiconductor manufacturers offer a broad spectrum of PSE controller ICs for PoE applications—ranging from basic single-port controllers to sophisticated multi-port managers that support the latest IEEE standards.
As a notable example, TI’s TPS23861 is a feature-rich, 4-channel IEEE 802.3at PSE controller that supports auto mode, external FET architecture, and four-point detection for enhanced reliability, with optional I²C control and efficient thermal design for compact, cost-effective PoE systems.
In short, fantastic ICs make today’s PoE designs smarter and more efficient, especially in dynamic or power-sensitive environments. Whether you are refining an existing layout or venturing into high-power applications, now is the time to explore, prototype, and push your PoE designs further. I will be here.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- More opportunities for PoE
- A PoE injector with a “virtual” usage precursor
- Simple circuit design tutorial for PoE applications
- Power over Ethernet (PoE) grows up: it’s now PoE+
- Power over Ethernet (PoE) to Power Home Security & Health Care Devices
The post PoE basics and beyond: What every engineer should know appeared first on EDN.
Blue Laser Fusion wins US DOE 2025 INFUSE project award
Quintauris and Everspin Technologies Partner to Advance Dependable RISC-V Solutions for Automotive
Quintauris and Everspin Technologies, Inc. announced a strategic collaboration to bring advanced memory solutions into the Quintauris ecosystem.
The collaboration aims to strengthen the reliability and safety of RISC-V–based platforms, particularly for automotive, industrial and edge applications where data persistence, integrity, low latency and security are critical.
By integrating Everspin’s proven MRAM technologies with Quintauris’ reference architectures and real-time platforms, the partnership works to ensure memory subsystems meet the highest standards for performance and functional safety – one of the most pressing challenges in safety-driven markets.
Everspin’s strong commitment to the automotive market extends beyond technology to include proper certifications, manufacturing excellence, long-term supply and continuous quality improvement, values that align closely with Quintauris’ mission to make RISC-V commercially ready for automotive programs.
“Everspin’s leadership in MRAM and their track record of over 200 million products deployed make them a strong addition to our ecosystem,” said Pedro Lopez, Market Strategy Officer at Quintauris. “Together, we are closing the gap between innovation and dependability, enabling RISC-V to be confidently adopted in next-generation automotive programs.”
“RISC-V is opening new doors in safety-critical computing, but it also demands memory that can match its performance and reliability,” said David Schrenk, VP Business Development at Everspin Technologies. “By integrating our MRAM into the Quintauris platform, we’re helping developers build systems that retain data integrity under power loss, radiation or extreme temperatures, without compromising speed or security. This partnership strengthens the foundation for scalable, dependable platforms that will shape the future of automotive electronics.”
The post Quintauris and Everspin Technologies Partner to Advance Dependable RISC-V Solutions for Automotive appeared first on ELE Times.
EEVblog 1712 - CSIRO Mobile Space Mission Control Centre
Siemens Unveils ‘Groundbreaking’ Software for Automated Analog IC Tests
DMD powers high-resolution lithography

With over 8.9 million micromirrors, TI’s DLP991UUV digital micromirror device (DMD) enables maskless digital lithography for advanced packaging. Its 4096×2176 micromirror array, 5.4-µm pitch, and 110-Gpixel/s data rate remove the need for costly mask technology while providing scalability and precision for increasingly complex designs.
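The headline figures can be cross-checked from the stated array size. Note that the per-pattern rate below is my inference (aggregate data rate divided by mirror count), not a figure from TI.

```python
# Sanity-checking the DMD's quoted numbers from its array dimensions.
mirrors = 4096 * 2176                 # micromirror array size
rate_pix_per_s = 110e9                # 110 Gpixel/s aggregate data rate
patterns_per_s = rate_pix_per_s / mirrors   # implied full-array binary updates

print(mirrors)                        # matches "over 8.9 million micromirrors"
print(round(patterns_per_s))          # rough full-frame pattern rate
```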

The DMD is a spatial light modulator that controls the amplitude, direction, and phase of incoming light. Paired with the DLPC964 controller, the DLP991UUV DMD supports high-speed continuous data streaming for laser direct imaging. Its resolution enables large 3D-print build sizes, fine feature detail, and scanning of larger objects in 3D machine vision applications.
Offering the highest resolution and smallest mirror pitch in TI’s Digital Light Processing (DLP) portfolio, the DLP991UUV provides precise light control for industrial, medical, and consumer applications. It steers UV wavelengths from 343 nm to 410 nm and delivers up to 22.5 W/cm² at 405 nm.
Preproduction quantities of the DLP991UUV are available now on TI.com.
The post DMD powers high-resolution lithography appeared first on EDN.
Co-packaged optics enables AI data center scale-up

AIchip Technologies and Ayar Labs unveiled a co-packaged optics (CPO) solution for multi-rack AI clusters, providing extended reach, low latency, and high radix. The joint development tackles AI infrastructure data-movement bottlenecks by replacing copper interconnects with CPO in large-scale accelerator deployments.

The offering integrates Ayar’s TeraPHY optical engines with AIchip’s advanced packaging on a common substrate, bringing optical I/O directly to the AI accelerator interface. This enables over 100 Tbps of scale-up bandwidth per accelerator and supports more than 256 optical scale-up ports per device. TeraPHY is also protocol agnostic, allowing flexible integration with customer-designed chiplets and fabrics.
The co-packaged solution scales multi-rack networks without the power and latency penalties of pluggable optics by shortening electrical traces and placing optical I/O close to the compute core. With UCIe support and flexible protocol endpoints at the package boundary, it integrates alongside compute tiles, memory, and accelerators while maintaining performance, signal integrity, and thermal requirements.
Both companies are working with select customers to integrate co-packaged optics into next-generation AI accelerators and scale-up switches. They will provide collateral, reference architectures, and build options to qualified design teams.
The post Co-packaged optics enables AI data center scale-up appeared first on EDN.
Platform speeds AI from prototype to production

Purpose-built for Lantronix Open-Q system-on-modules (SOMs), EdgeFabric.ai is a no-code development platform for designing and deploying edge AI applications. According to Lantronix, it helps customers move AI from prototype to production in minutes instead of months, without needing a team of AI experts.

The visual orchestration platform integrates with Open-Q hardware and leading AI model ecosystems, automatically configuring performance across Qualcomm GPUs, DSPs, and NPUs. It streamlines data pipelines with drag-and-drop workflows for AI, video, and sensors, while delivering real-time visualization. Prebuilt templates support common use cases such as surveillance, anomaly detection, and safety monitoring.
EdgeFabric.ai auto-generates production-ready code in Python and C++, making it easy to build and adjust pipelines, fine-tune parameters, and adapt workflows quickly.
Learn more about the EdgeFabric.ai platform here. For details on Open-Q SOMs, visit SOM solutions. Lantronix also offers engineering services for development support.
The post Platform speeds AI from prototype to production appeared first on EDN.
Dual-core MCUs drive motor-control efficiency

RA8T2 MCUs from Renesas integrate dual processors for real-time motor control in advanced factory automation and robotics. They pair a 1-GHz Arm Cortex-M85 core with an optional 250-MHz Cortex-M33 core, combining high-speed operation, large memory, timers, and analog functions on a single chip.

The Cortex-M85 with Helium technology accelerates DSP and machine-learning workloads, enabling AI functions that predict motor maintenance needs. In dual-core variants, the embedded Cortex-M33 separates real-time control from general-purpose tasks to further enhance system performance.
RA8T2 devices integrate up to 1 MB of MRAM and 2 MB of SRAM, including 256 KB of TCM for the Cortex-M85 and 128 KB of TCM for the Cortex-M33. For high-speed networking in factory automation, they offer multiple interfaces, such as two Gigabit Ethernet MACs with DMA and a two-port EtherCAT slave. A 32-bit, 14-channel timer delivers PWM functionality up to 300 MHz.
The RA8T2 series of MCUs is available now through Renesas and its distributors.
The post Dual-core MCUs drive motor-control efficiency appeared first on EDN.
Image sensor provides ultra-high dynamic range

Omnivision’s OV50R40 50-Mpixel CMOS image sensor delivers single-exposure HDR up to 110 dB with second-generation TheiaCel technology. It also reduces power consumption by ~20% compared with the previous-generation OV50K40, enabling longer HDR video capture.

Aimed at high-end smartphones and action cameras, the OV50R40 achieves ultra-high dynamic range in any lighting. Built on PureCel Plus‑S stacked die technology, the color sensor supports 100% coverage quad phase detection for improved autofocus. It features an active array of 8192×6144 with 1.2‑µm pixels in a 1/1.3‑in. format and supports premium 8K video with dual analog gain (DAG) HDR and on-sensor crop zoom.
The sensor also supports 4-cell binning, producing 12.5‑Mpixel resolution at 120 fps. For 4K video at 60 fps, it provides 3-channel HDR with 4× sensitivity, ensuring enhanced low-light performance.
The OV50R40 is now sampling, with mass production planned for Q1 2026.
The post Image sensor provides ultra-high dynamic range appeared first on EDN.
TI Unwraps Motor Control MCUs for Cost-Sensitive, Real-Time Designs
Thermally enhanced packages—hot or not?

The relentless pursuit of performance in sectors such as AI, cloud computing, and autonomous driving is creating a heat crisis. As the next generation of processors demands more power in smaller spaces, the switched-mode power supply (SMPS) is being pushed to its thermal limit. SMPS integrated circuit (IC) packages have traditionally used a large thermal pad on the bottom side of the package, known as a die attach paddle (DAP), to dissipate the majority of the heat through the printed circuit board (PCB). But as power density increases, relying on only one side of the package to dissipate heat quickly becomes a serious constraint.
A thermally enhanced package is a type of IC package designed to dissipate heat from both the top and bottom surfaces. In this article, we’ll explore the standard thermal metrics of IC packages, along with the composition, top-side cooling methods, and thermal benefits of a thermally enhanced package.
Thermal metrics of IC packages
In order to understand what a thermally enhanced package is and why it is beneficial, it’s important to first understand the terminology for describing the thermal performance of an IC package. Three foundational metrics of thermal resistance are the junction-to-ambient thermal resistance (RθJA), the junction-to-case (top) thermal resistance (RθJC(top)), and the junction-to-board thermal resistance (RθJB).
Thermal resistance measures the opposition to the flow of heat in a medium. In IC packages, thermal resistance is usually measured in Celsius rise per watt dissipated (°C/W), or how much the temperature rises when the IC dissipates a certain amount of power.
RθJA measures the thermal resistance between the junction (J) (the silicon die itself), and the ambient air (A) around the IC. RθJC(top) measures the thermal resistance specifically between (J) and the top (t) of the case (C) or package mold. RθJB measures the thermal resistance specifically between (J) and the PCB on which the package is mounted.
RθJA significantly depends on its subcomponents—both RθJC(top) and RθJB. The lower the RθJA, the better, because it clearly indicates that there will be a lower temperature rise per unit of power dissipated. Power IC designers spend a lot of time and resources to come up with new ways to lower RθJA. A thermally enhanced package is one such way.
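Because these metrics are expressed in °C/W, their use reduces to a simple multiply-and-add. A minimal sketch, assuming an illustrative 2 W of dissipation and 25°C ambient (neither value comes from any specific data sheet):

```python
# Back-of-the-envelope use of RθJA: temperature rise is thermal resistance
# times power. The 2 W and 25 °C inputs are illustrative assumptions.

def junction_temp(t_ambient_c: float, power_w: float, r_theta_ja: float) -> float:
    """Tj = Ta + RθJA * P (°C/W times watts gives the rise in °C)."""
    return t_ambient_c + r_theta_ja * power_w

print(junction_temp(25.0, 2.0, 21.6))  # 25 + 43.2 = 68.2 °C
```

The same arithmetic underlies every thermal comparison in this article: lower RθJA means a smaller rise for the same power.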
Thermally enhanced package composition
A thermally enhanced package is a quad flat no-lead (QFN) package that has both a bottom-side DAP and a top-side cutout of the molding to directly expose the back of the silicon die to the environment. Figure 1 shows the gray backside of the die for the Texas Instruments (TI) LM61495T-Q1 buck converter.
Figure 1 The LM61495T-Q1 buck converter in a thermally enhanced package. Source: Texas Instruments
Exposing the die on the top side of the package does two things: it lowers the RθJC(top) compared to an IC package that completely molds over the die, and enables a direct connection between the die and an external heat sink, which can significantly reduce RθJA.
RθJC(top) in a thermally enhanced package
A low RθJC(top) allows heat to escape more effectively from the top of the device. In a standard package, heat must pass through the package mold before reaching the air; in a thermally enhanced package, it escapes from the exposed die directly to the air. This helps reduce the device temperature and lowers the risk of thermal shutdown and long-term heat-stress issues. The thermally enhanced package also has a lower RθJA, which makes it possible for a converter to handle more current and operate in hotter environments.
Figure 2 shows a series of IC junction temperature measurements taken across output current for both the LM61495T-Q1 in the thermally enhanced package and TI’s LM61495-Q1 buck converter in the standard QFN package under two common operating conditions.

Test conditions: VOUT = 5 V, FSW = 400 kHz, TA = 25°C.
Figure 2 Output current vs. junction temperature for the LM61495-Q1 and LM61495T-Q1 with no heat sink. Source: Texas Instruments
Clearly, even with no heat sink attached, the thermally enhanced package runs slightly cooler, simply because more heat dissipates out of the top of the package and into the air. Its RθJA is slightly lower, so even without any additional thermal management techniques, this package type provides marginally better thermals than the standard QFN with top-side molding. Table 1 lists the official thermal metrics found in both devices’ data sheets.
| Part number | Package type | RθJA (evaluation module) (°C/W) | RθJC(top) (°C/W) | RθJB (°C/W) |
|---|---|---|---|---|
| LM61495-Q1 | Standard QFN | 21.6 | 19.2 | 12.2 |
| LM61495T-Q1 | Thermally enhanced package QFN | 21 | 0.64 | 11.5 |
Table 1 Comparing data sheet-derived thermal metrics for the LM61495-Q1 and LM61495T-Q1. Source: Texas Instruments
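As a rough illustration of what Table 1’s RθJA values imply, this sketch estimates the power each package could dissipate before an assumed 150°C junction limit at an assumed 85°C ambient; the limit and ambient are illustrative assumptions, and only the RθJA figures come from Table 1:

```python
# Allowed dissipation: P = (Tj_max - Ta) / RθJA.
# 150 °C junction limit and 85 °C ambient are assumed, not data-sheet values.

def p_max_w(tj_max_c: float, ta_c: float, r_theta_ja: float) -> float:
    return (tj_max_c - ta_c) / r_theta_ja

for name, r_ja in [("LM61495-Q1 (standard QFN)", 21.6),
                   ("LM61495T-Q1 (thermally enhanced)", 21.0)]:
    print(f"{name}: {p_max_w(150.0, 85.0, r_ja):.2f} W")
```

With no heat sink the two RθJA values are close, so the headroom difference is small; the large gap in RθJC(top) only pays off once a top-side heat sink is attached.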
Top-side cooling vs QFN
Combining its near-zero RθJC(top) top side with an effective heat sink significantly reduces the RθJA of an IC in a thermally enhanced package. There are three significant improvements when compared to the same IC in a standard QFN package under otherwise similar operating conditions:
- Higher switching-frequency operation.
- Higher output-current capability.
- Operation at higher ambient temperatures.
For any SMPS under a given input voltage (VIN), output voltage (VOUT) condition and supplying a given output current, the maximum switching frequency will be thermally limited. Within every switching period, there are switching losses and conduction losses that dissipate as heat. Switching more frequently dissipates more power in the IC, leading to an increased IC junction temperature. This can be frustrating for engineers because switching at higher frequencies enables the use of a smaller buck inductor, and therefore a smaller overall solution size and lower cost.
Under the same operating conditions, using the thermally enhanced package and a heat sink, the heat dissipated in each switching period is now more easily channeled out of the IC, leading to a lower junction temperature and enabling a higher switching frequency without hitting the IC’s junction temperature limit. Just don’t exceed the maximum switching frequency recommendation of the device as outlined in the data sheet.
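The frequency limit described above can be sketched with a simplified loss model, P_total = P_cond + E_sw · f_sw, capped by the junction-temperature budget. Every number below (conduction loss, energy per switching cycle, and the heat-sinked RθJA) is an illustrative assumption, not a measured value:

```python
# Simplified model of why max f_sw is thermally limited:
# P_total = P_cond + E_sw * f_sw must fit the junction budget.

def max_fsw_hz(tj_max_c: float, ta_c: float, r_theta_ja: float,
               p_cond_w: float, e_sw_j: float) -> float:
    p_budget = (tj_max_c - ta_c) / r_theta_ja  # total allowed dissipation, W
    return (p_budget - p_cond_w) / e_sw_j      # frequency at which budget is spent

# Lowering the effective RθJA (thermally enhanced package plus heat sink,
# assumed 12 °C/W here) frees thermal budget for a higher f_sw:
print(f"{max_fsw_hz(150, 85, 21.6, 1.0, 1e-6) / 1e6:.2f} MHz (standard QFN)")
print(f"{max_fsw_hz(150, 85, 12.0, 1.0, 1e-6) / 1e6:.2f} MHz (with heat sink)")
```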
The benefits of using a smaller inductor are especially pronounced in higher-current multiphase designs that require an inductor for every phase. Figure 3 shows a simplified four-phase design capable of supplying 24 A at 3.3 VOUT at 2.2 MHz using the TI LM644A2-Q1 step-down converter. If the design were to overheat and the switching frequency had to be reduced to 400 kHz, you would have to replace all four inductors with larger inductors (in terms of both inductance and size), inflating the overall solution cost and size substantially.

Figure 3 Simplified schematic of a single-output, four-phase step-down converter design using the LM644A2-Q1 step-down converter in the thermally enhanced package. Source: Texas Instruments
Conversely, for any SMPS under a given VIN, VOUT condition, and operating at a specific switching frequency, the maximum output current will be thermally limited. When discussing the current limit of an IC, it’s important to clarify that for all high-side FET integrated SMPSs, there is a data sheet-specified high-side current limit that bounds the possible output current.
Upon reaching the current-limit setpoint, the high-side FET turns off, and the IC may enter a hiccup interval to reduce the operating temperature until the overcurrent condition goes away. But even before reaching the current limit, it is very possible for an IC to overheat from a high output-current requirement. This is especially true, again, at higher frequencies. As long as you don’t exceed the high-side current limit, using an IC in the thermally enhanced package with a heat sink can extend the maximum possible output current to a level at which the standard QFN IC alone would overheat.
One more variable must be held constant to make the thermally enhanced package versus standard QFN package comparison valid: the ambient temperature (TA). TA is a significant factor when considering how much power an SMPS can deliver before it starts to overheat.
For example, a buck converter may be able to easily do a 12VIN-to-5VOUT conversion and support a continuous 6 A of current while switching at 2.2 MHz when the TA is 25°C, but not at 105°C. So, there is yet a third way to look at the benefit that a thermally enhanced package can provide. Assuming the VIN, VOUT, output current, and maximum switching frequency are constant, a thermally enhanced package used with a heat sink can enable an SMPS to operate at a meaningfully higher TA compared to a standard QFN package with no heat sink.
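This third trade-off follows from the same relationship rearranged: with dissipation P fixed, the maximum workable ambient is TA(max) = TJ(max) - RθJA · P. The junction limit, power level, and heat-sinked RθJA below are illustrative assumptions:

```python
# Maximum-ambient sketch: with fixed dissipation P,
# TA(max) = TJ(max) - RθJA * P. All numeric values are illustrative.

def ta_max_c(tj_max_c: float, r_theta_ja: float, power_w: float) -> float:
    return tj_max_c - r_theta_ja * power_w

print(ta_max_c(150.0, 21.6, 2.5))  # standard QFN, no heat sink
print(ta_max_c(150.0, 12.0, 2.5))  # enhanced package + heat sink (assumed RθJA)
```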
Figure 4 uses a current derating curve to demonstrate both the higher output current capability and operation at a higher TA. In an experiment using the LM61495-Q1 and LM61495T-Q1 buck converters, we measured the output current against the TA in a standard QFN package without a heat sink and in a thermally enhanced package QFN connected to an off-the-shelf 45 x 45 x 15 mm stand-alone fin-type heat sink. Other than the package and the heat sink, all other conditions are constant: operating conditions, PCB, and measurement instrumentation.

Test conditions: VIN = 12 V, VOUT = 3.3 V, FSW = 2.2 MHz.
Figure 4 Output current vs. ambient temperature of the LM61495-Q1 with no heat sink and the LM61495T-Q1 with an off-the-shelf 45 x 45 x 15 mm stand-alone fin-type heat sink. Source: Texas Instruments
When TA reaches about 83°C, the standard QFN package hits its thermal shutdown threshold, and the output current begins to collapse. As TA increases further, the device cycles into and out of thermal shutdown, and the maximum output current it can deliver keeps falling until TA reaches a steady 125°C. At this point, the converter may not be able to sustain even 5 A without overheating.
Compare this to the thermally enhanced package QFN connected to a heat sink. The first instance of thermal shutdown now doesn’t occur until about 117°C, a 34°C (40%) increase in the TA reached before thermal shutdown. The LM61495-Q1 is a 10-A buck converter, meaning that its recommended maximum output current is 10 A. But in this case, with a thermally enhanced package and effective heat sinking, a continuous 11-A output was achievable up to 117°C, a 10% increase in maximum continuous output current even at a high TA.
Methods of top-side cooling
Figure 5, Figure 6, and Figure 7 show some of the most common methods of top-side cooling. Stand-alone heat sinks are simple and readily available in many different forms, materials, and sizes, but are sometimes impractical in small-form-factor designs.
Figure 5 A stand-alone fin-type heat sink; these are simple and readily available but sometimes impractical in small-form-factor designs. Source: Texas Instruments
Cold plates are very effective in dissipating heat but are more complex and costlier to implement (Figure 6).

Figure 6 A cold plate-type heat sink; these are very effective at dissipating heat but are more complex and costlier to implement. Source: Texas Instruments
Using the metal housing containing the power supply and the surrounding electronics as a heat sink is compact, effective, and relatively inexpensive if the housing already exists. As shown in Figure 7, this is done by creating a pillar or dimple that connects the IC to the housing to enable efficient heat transfer. For power supplies powering processors, it’s likely that this method is already helping dissipate heat on the processor. Adding an additional dimple or pillar that now gives heat-sink access to the power supply is often a simple change, making it a very popular method, especially for processor power.

Figure 7 Contact-with-housing heat sink where a pillar or dimple connects the IC to the housing to enable efficient heat transfer. Source: Texas Instruments
There are many ways to implement heat sinking, but that doesn’t mean that they are all equally effective. The size, material, and form of the heat sink matter. The type and amount of thermal interface material used between the IC and the heat sink matter, as does its placement. It is important to optimize all of these factors for the design at hand.
Comparing heat sinks
Figure 8 shows another current derating curve. It compares two different types of heat sinks, each mounted on the LM61495T-Q1. For reference, the figure includes the performance of the standard QFN package with no heat sink.

Test conditions: VIN = 24 V, VOUT = 3.3 V, FSW = 2.2 MHz.
Figure 8 Output current versus the ambient temperature of the LM61495-Q1 with no heat sink, the LM61495T-Q1 with an off-the-shelf 45 x 45 x 15 mm stand-alone fin-type heat sink, and with an aluminum plate heat sink. Source: Texas Instruments
For a visualization of these heat sinks, see Figure 9 and Figure 10, which show a top-down view of the PCB and a clear view of how the heat sinks are mounted to the IC and PCB. The heat sink shown in Figure 9 is a commercially available, off-the-shelf product. To reiterate, it is a 45 mm by 45 mm aluminum alloy heat sink with a base that is 3 mm thick and pin-type fins that extend the surface area and allow omnidirectional airflow.
Figure 9 The LM61495T-Q1 evaluation board with the off-the-shelf 45 x 45 x 15 mm stand-alone fin-type heat sink. Source: Texas Instruments
Figure 10 shows a custom heat sink that is essentially just a 50 mm by 50 mm aluminum plate with a 2 mm thickness and a small pillar that directly touches the IC. This heat sink was designed to mimic the contact-with-housing method, as it is very similar in size and material to the types of housing seen in real applications.

Figure 10 The LM61495T-Q1 evaluation board with a custom aluminum plate heat sink to mimic the contact-with-housing method. Source: Texas Instruments
Under the same conditions, the stand-alone heat sink provides a major benefit compared to the standard QFN package with no heat sink. The standard QFN package hits thermal shutdown around 67°C TA. For the stand-alone heat-sink setup, thermal shutdown isn’t triggered until the TA reaches about 111°C, which is a major improvement. However, the aluminum plate heat-sink setup doesn’t hit thermal shutdown at all. With the aluminum plate setup, the converter is still able to supply a continuous 10-A current at the highest TA tested (125˚C), demonstrating both the importance of choosing the correct heat sink for the system requirements as well as the popularity of the contact-with-housing method.
Addressing modern thermal challenges
Power supply designers increasingly deal with thermal challenges as modern applications demand more power and smaller form factors in hotter spaces. Standard QFN packaging has long relied on dissipating the majority of generated heat through the bottom side of the package to the PCB. A thermally enhanced package QFN uses both the top and bottom sides of the package to improve heat flow out of the IC, essentially paralleling the thermal impedance paths and reducing the effective thermal impedance.
Combining a thermally enhanced package with effective heat sinking results in significant thermal benefits and enables higher-power-density designs. Because these benefits all derive from reducing the effective RθJA, designers can realize one or all of them in varying degrees: increasing the maximum switching frequency to reduce solution size and cost, enabling a higher maximum output current for higher-power conversion, and enabling operation at a higher TA.

Jonathan Riley is a Senior Product Marketing Engineer for Texas Instruments’ Switching Regulators organization. He holds a BS in Electrical Engineering from the University of California Santa Cruz. At TI, Jonathan works in the crossroads of marketing and engineering to ensure TI’s Switching Regulator product line continues to evolve ahead of the market and enable customers to power the technologies of tomorrow.
Related Content
- Power Tips #101: Use a thermal camera to assess temperatures in automotive environments
- IC packages and thermal design
- Keeping space chips cool and reliable
- QFN? QFP? QFWHAT?
Additional resources
- Read the TI application note, “Semiconductor and IC Package Thermal Metrics.”
- Check out these TI application reports:
- See the TI application brief, “PowerPAD Made Easy.”
- Watch the TI video resource, “Improve thermal performance using thermally enhanced packaging (TEP).”
The post Thermally enhanced packages—hot or not? appeared first on EDN.
Past, present, and future of hard disk drives (HDDs)

Where do HDDs stand after the advent of SSDs? Are they a thing of the past now, or do they still have a life? While HDDs store digital data, what’s their relation to analog technology? Here is a fascinating look at the past, present, and future of HDDs, accompanied by data from the industry. The author also raises a very valid point: while their trajectory is very similar to that of the semiconductor world, why don’t HDDs have their own version of Moore’s Law?
Read the full article at EDN’s sister publication, Planet Analog.
Related Content
- When big data crashes against small problems
- The Hottest Data-Storage Medium Is…Magnetic Tape?
- Audio cassette tapes are coming back, this time for mass storage
The post Past, present, and future of hard disk drives (HDDs) appeared first on EDN.
Axcelis and Veeco to merge, forming fourth largest US wafer fabrication equipment supplier
NUBURU implements dual-CEO structure to drive transformation plan
Injection Molding: The Backbone of Modern Mass Production
Manufacturing today depends on processes that balance speed, precision, and scalability. Among them, injection molding has become indispensable for industries ranging from healthcare to consumer goods. Its ability to deliver identical, high-quality parts in massive volumes makes it one of the most reliable and cost-effective production methods. But what makes this process so vital, and how exactly does it work?
Understanding Injection Molding
Fundamentally, injection molding is about forcing molten material into a precisely crafted mold, where it solidifies and takes on its final shape. Plastics are the mainstay of the process, but producers also apply it to metals and are testing it in new industries. The greatest strength of injection molding is consistency and efficiency: once a mold has been made, it can be used to churn out hundreds of thousands of duplicate parts with little deviation.
Unlike subtractive methods such as CNC machining, injection molding is less wasteful of material and can be more flexible in terms of design, with the ability to create everything from small medical devices to large automotive panels.
Industries that Depend on Injection Molding
- Food and Beverage
From yogurt cups to condiment containers, the packaging business relies heavily on injection molding for its light, disposable products. Moving beyond packaging, university researchers are testing whether this process can be used to mass-produce plant-based meat substitutes, demonstrating how versatile the method can be. In contrast to 3D printing, injection molding offers cost savings and is better able to maintain taste and texture in food applications.
- Healthcare and Medical Devices
The medical sector applies injection molding in the production of syringes, implants, and wearables. Due to the stringent regulatory conditions, manufacturers tend to include sensors within the mold to monitor temperature and pressure, helping ensure consistent outcomes. Robotic equipment is also utilized, which removes faulty components automatically to ensure high levels of safety in patient-care products.
- Sporting Goods and Consumer Products
Leisure goods used daily, such as picnic tableware, coolers, and even high-precision golf clubs, are produced with this process as well. Metal injection molding enables golf club manufacturers to create products that improve performance and feedback. Molding single-piece coolers with thinner but stronger walls speaks to the process’s efficiency and resilience.
The Injection Molding Process
Across industries and at any production scale, the injection molding process adheres to a systematic approach:
- Material Selection – Companies select metals or polymers according to strength, flexibility, durability, or resistance characteristics. Polypropylene is suitable for packaging food, while polycarbonate resists UV exposure for use outside.
- Design of Mold – Designers make precise steel or aluminum molds with orientation, core, cavity, and mold base in mind. CNC machining is usually employed to cut the mold exactly.
- Clamping – A clamping mechanism provides pressure to keep the mold halves tightly closed, preventing any leak during the process of injection.
- Injection – Pellets are melted, blended by a reciprocating screw, and injected into the mold at regulated velocities and pressures.
- Dwelling – Pressure is held for a temporary period to guarantee the molten material fills all the cavities of the mold.
- Cooling – The part solidifies within the mold, a phase often constituting the bulk of cycle time.
- Opening and Removal – After cooling, the mold is opened and ejector pins force the part out. Any remaining flash material is removed and sometimes recycled.
- Inspection – Finished parts are visually inspected and tested to detect defects, maintaining consistent quality control.
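The per-shot stages above (clamping through removal) can be sketched as an ordered sequence. The durations below are invented purely to illustrate the point made in the cooling step, that cooling often constitutes the bulk of cycle time:

```python
# Illustrative molding cycle: stage names mirror the list above,
# durations are made-up values showing cooling's dominance.

CYCLE = [
    ("clamping",            2.0),
    ("injection",           3.0),
    ("dwelling",            2.5),
    ("cooling",            15.0),  # typically the longest phase
    ("opening and removal", 2.0),
]

total = sum(t for _, t in CYCLE)
for stage, t in CYCLE:
    print(f"{stage:<20} {t:5.1f} s  ({100 * t / total:.0f}% of cycle)")
print(f"{'total':<20} {total:5.1f} s")
```

Under these assumed numbers, cooling alone accounts for well over half of the cycle, which is why mold-cooling design gets so much engineering attention.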
Why Injection Molding Remains Essential
The process’s scalability, accuracy, and versatility across industries make injection molding a mainstay of contemporary manufacturing. From life-saving medical technologies to common consumer products, the process continues to evolve with automation, robotics, and intelligent sensors, which deliver ever-greater levels of quality and efficiency.
As industries seek faster, more sustainable, and more innovative ways to produce goods, injection molding remains a cornerstone technology that bridges traditional manufacturing with future possibilities.
(This article has been adapted and modified from content on Revolutionized.)
The post Injection Molding: The Backbone of Modern Mass Production appeared first on ELE Times.
Improve PWM controller-induced ripple in voltage regulators

Simple linear and switching voltage regulators with feedback networks of the type shown in Figure 1 are legion. Their output voltages are the reference voltage at the feedback (FB) pin multiplied by 1 + Rf / Rg. Recommended values of Cf from 100 pF to 10 nF increase the amount of feedback at higher frequencies, or at least ensure it is not reduced by stray capacitances at the feedback pin.
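The divider relationship above can be sanity-checked in a few lines; the 0.8-V reference and resistor values are illustrative, not tied to any particular regulator:

```python
# Sanity check of Vout = Vfb * (1 + Rf / Rg).
# Reference voltage and resistor values are illustrative assumptions.

def vout(v_fb: float, r_f: float, r_g: float) -> float:
    return v_fb * (1.0 + r_f / r_g)

print(vout(0.8, 52.5e3, 10e3))  # 0.8 * 6.25 = 5.0 V
```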
Figure 1 The configurations of common regulators and their feedback networks. A linear regulator is shown on the left and a switcher on the right.
Modifying this structure to incorporate PWM control of the output voltage requires some thought, and both Stephen Woodward and I have presented several Design Ideas (DIs) that address this.
Wow the engineering world with your unique design: Design Ideas Submission Guide
I’ve suggested disconnecting Rg from ground and driving it from a heavily filtered (op-amp-based) PWM signal supplied by a 74xx04-type logic inverter. Although this can result in excellent ripple suppression, it has a disadvantage: the need for an inverter power supply whose accuracy must not degrade the regulator’s 1% or better reference voltage.
Stephen has proposed switching the disconnected Rg leg between ground and open with a MOSFET. The beauty of this is that no new reference is needed. Although the output voltage is no longer a linear function of the PWM duty cycle, a simple software-based lookup table renders this a mere inconvenience. (Yup, “we can fix it in software!”)
A general scheme to mitigate PWM controller-induced ripple should be flexible enough to accommodate different regulators, regulator reference voltages, output voltage ranges, and PWM frequencies. In selecting one, here are some possible traps to be aware of:
- Nulling by adding an out-of-phase version of the ripple signal is at the mercy of component tolerances.
- Cheap ceramics, such as the ubiquitous X7R, have DC voltage and temperature-sensitive capacitances. If used, the circuit must tolerate these undesirable traits.
- Schemes which connect capacitors between ground and the feedback pin will reduce loop feedback at higher frequencies. The result could be degradation of line and load transient responses.
With this in mind, consider the circuit of Figure 2, capable of operation from 0.8 V to a little more than 5 V.

Figure 2 A specific instance of a PWM-controlled regulator with ripple suppression. Only a linear regulator is shown, but the adaptation for switcher operation entails only the addition of an inductor and a filter capacitor.
The low capacitance MOSFET has a maximum on-resistance of under 2 Ω at a VGS of 2.5 V or more. Cg1 and Cg2 see maximum DC voltages of 0.8 V (up to 1.25 V in some regulators). Their capacitive accuracies are not critical, and at these low voltages, they barely budge when 10-V or higher-rated X7R capacitors are employed.
Cf can see a significant DC voltage, however. Here, you might get away with an X7R, but a 10-nF (voltage-insensitive) C0G is cheap. The value of Cf was chosen to aid in ripple management. If it were not present, the ripple would be larger and proportional to the value of Rf. With a 10-nF Cf, larger values of Rf for higher output voltages would have no effect on the PWM-induced ripple; smaller ones could only reduce it. The largest peak-to-peak ripple occurs at duty cycles from 30 to 40%.
The filtering supplied by the three capacitors produces a sinusoidal ripple waveform of amplitude 5.7 µV peak-to-peak. For a 16-bit ADC with a full scale of 5 V, the peak-to-peak amplitude is less than 1 LSbit.
Flexibility
You might have a requirement for a wider or narrower range of output voltages. Feel free to modify Rf accordingly without a penalty in ripple amplitude.
Ripple amplitude will scale in proportion to the regulator’s reference voltage. The design assumes a regulator whose optimum FB-to-ground resistance is 10 kΩ. If it’s necessary to change this for the regulator of your choice, scale the three Rg resistors by the same factor Z. Because the resistors and three capacitors implement a 3rd-order filter, the ripple will scale in accordance with Z⁻³. To keep the same ripple amplitude, scale the three capacitors by 1/Z. You might want to scale the capacitors’ values for some other reason, even if the resistors are unchanged.
Changing the PWM frequency by a factor F will change the ripple amplitude by a factor of F⁻³. But too high a frequency could encounter accuracy problems due to the parasitic capacitances and unequal turn-on/turn-off times of the MOSFET.
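The two cube-law scaling rules (resistor factor Z and frequency factor F) can be sketched numerically. This uses the 5.7-µV baseline from the Figure 2 design and assumes the rules hold exactly:

```python
# Cube-law ripple scaling: resistor factor z and PWM-frequency factor f
# each scale the ripple by the inverse cube. Baseline from the text.

BASE_RIPPLE_UV = 5.7  # µV peak-to-peak, Figure 2 design

def scaled_ripple_uv(z: float = 1.0, f: float = 1.0) -> float:
    """Ripple after scaling the three Rg resistors by z and fPWM by f."""
    return BASE_RIPPLE_UV * z**-3 * f**-3

print(scaled_ripple_uv(z=2.0))  # doubling the resistors: 8x less ripple
print(scaled_ripple_uv(f=0.5))  # halving fPWM: 8x more ripple
```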
Some regulators might not tolerate a Cf of a value large enough to aid in ripple suppression. Usually, these will tolerate a resistor Rcf in series with Cf. In such cases, ripple will be increased by a factor K equal to √(1 + (2π · fPWM · Rcf · Cf)²), and the waveform might no longer be sinusoidal. But increasing Cg1 and Cg2 by the square root of K will compensate to yield approximately the same suppression as offered by the design with Rcf equal to 0. If all else fails, there is always the possibility of adding an Rg4 and a Cg3 to provide another stage of filtering.
Tying it all together
A flexible approach has been introduced for the suppression of PWM control-induced ripple in linear and switching regulators. Simple rules have been presented for the use and modification of the Figure 2 circuit for operation over different output voltage ranges, PWM frequencies, preferred resistances between ground and the regulator’s feedback pin, and tolerances for moderately large capacitances between the FB pins and the output.
The limitations of capacitors with sensitivities to DC voltages are recognized. These components are used appropriately and judiciously. Dependency on component matching is avoided. Standard feedback network structures are maintained or, at worst, subjected to minor modifications only; specifically, feedback at higher frequencies is not reduced from that recommended by the regulator manufacturer. This maintains the specified line and load transient responses.
Addendum
Once again, the Comments section of DIs has shown its worth. And it’s déjà vu all over again; value was provided by the redoubtable Stephen Woodward. In an earlier DI, he pointed out that regulators generally do not tolerate negative voltages at their feedback pins. But if there is a capacitor Cf of more than a few hundred picofarads connected from the output to this pin, as I have recommended in this DI, and the output is shorted or rapidly discharged, this capacitor could couple a negative voltage to that pin and damage the part. To protect against this, add the components shown in the following figure.

Figure 3 Add these components to protect the FB pin from output rapid negative voltage changes.
In normal operation and during startup, the CUS10S30 Schottky diode looks like an open circuit and it, Cc, and the 1 MΩ resistor have a negligible effect on circuit operation. Cc prevents the flow of diode reverse current, which could otherwise produce output voltage errors. If Vout transitions to ground rapidly, Cc and the diode prevent any negative voltage from appearing at the junction of the capacitors. Rc provides a cheap “just in case” limit of the current into the FB pin from that voltage transient if it somehow saw a negative voltage. (Check the maximum FB pin current to ensure that no significant error-inducing voltages develop across Rc.) When the circuit has settled, the voltage across Cc is discharged, and the circuit is ready to restart normally.
Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.
Related Content
- A nice, simple, and reasonably accurate PWM-driven 16-bit DAC
- Brute force mitigation of PWM Vdd and ground “saturation” errors
- A transistor thermostat for DAC voltage references
- Parsing PWM (DAC) performance: Part 1—Mitigating errors
- PWM buck regulator interface generalized design equations
The post Improve PWM controller-induced ripple in voltage regulators appeared first on EDN.



