EDN Network
AC-Line Safety Monitor Brings Technical, Privacy Issues

There’s a small AC-line device that has received a lot of favorable media coverage lately. It’s called Ting from Whisker Labs, Inc. and its purpose is to monitor the home AC line, Figure 1. It then alerts the homeowner via smartphone to surges, brownouts, and arcing (arc faults) which could lead to house fires. It’s even getting glowing click-bait testimonials such as “This Device Saved My House From an Electrical Fire. And You Might Be Able to Get It for Free.” Let’s face it, accolades don’t get much better than that.
Figure 1 The Ting voltage monitor is a small, plug-in box with no user buttons except a reset. Source: Whisker Labs
(“Arcing”—which can ignite nearby flammable substances—occurs when electrical energy jumps across a gap between conductors; it usually but not always occurs at a connector and is often accompanied by sparks, buzzing sounds, and overheating; if it’s in a wall or basement, you might not know about it.)
The $99 device plugs into any convenient outlet—more formally, a receptacle—and once set up with your smartphone, it continuously monitors the AC line for conditions which may be detrimental. It needs no additional sensors or special wiring and looks like any other plug-in device. The vendor claims over a million homes have been protected, aggregating over 980,000 “home years” of coverage and that four of five electrical fires have been prevented.
When the Ting unit identifies a problem it recognizes, the owner receives an alert through the Ting app that provides advice on what to do, Figure 2. Depending on the issue, a live member of the company’s Fire Safety Team may contact you to walk you through whatever remediation steps might be required. In addition, if Ting finds a problem, the company will coordinate service by a licensed electrician and cover costs to remedy the problem up to $1,000.
Figure 2 All interaction between the homeowner and the Ting unit for alerts and reporting is via Wi-Fi to a smartphone. Source: Wirecutter/New York Times
It all seems so straightforward and beneficial. However, whenever you are dealing with the AC line, there’s lots of room for oversimplification, misunderstanding, and confusion. Just look at the National Electrical Code (NEC) in the US (other countries have similar codes) and you’ll see that there’s more to safety in wiring than just using the appropriate gauge wire, making solid connections, and insulating obvious points. The code is complicated and there are good reasons for its many requirements and mandates.
My first thought on seeing this was “this is a great idea.” Then my natural skepticism kicked in and I wondered: does it really do what they claim? Exactly what does it do, and is that actually meaningful? And then the extra credit question: what else does it do that might not be so good or desirable?
For example, some home-insurance companies are offering it for free and waiving the monthly fee for the first year. Is that a tradeoff users should consider, or a clever subscription-service hook?
There is lots of laudatory and flowery language associated with the marketing of this device, but solid technical details are scant, see “How Ting Works.” They state, “Ting pinpoints and identifies the unique signals generated by tiny electrical arcs, the precursors to imminent fire risks. These signals are incredibly small but are clearly visible thanks to Ting’s advanced detection technology.”
Other online postings say that Ting samples the AC line at 30 megasamples/second, looking for anomalies.
Let’s face it: the real-world AC line looks nothing like the smoothly undulating textbook sine wave with a steady RMS value. Instead, Figure 3 shows some voltage-level variations that the vendor says Ting captured.
Figure 3 The real-world AC line has voltage variation, spikes, surges, and dropouts. Source: F150 Lightning Forum
As for arcing, that’s more complicated than just a low or high-voltage assessment, as it produces RF emissions which can be captured and analyzed.
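Whisker Labs has not published its detection algorithm, but as a purely illustrative sketch, arc screening at a high sample rate might flag bursts of broadband energy riding on the 60-Hz fundamental. Everything below, including the signal model, the 100-kHz cutoff, and the threshold, is invented for the example:

```python
import numpy as np

FS = 30e6                      # 30 Msamples/s, the vendor-cited rate
N = 500_000                    # exactly one 60-Hz cycle at this rate

def hf_energy_ratio(v):
    """Fraction of signal energy above 100 kHz (arcing is broadband)."""
    spectrum = np.abs(np.fft.rfft(v)) ** 2
    freqs = np.fft.rfftfreq(len(v), d=1 / FS)
    return spectrum[freqs > 100e3].sum() / spectrum.sum()

t = np.arange(N) / FS
clean = 170 * np.sin(2 * np.pi * 60 * t)        # ideal 120-V-RMS line
rng = np.random.default_rng(0)
arcing = clean + 5 * rng.standard_normal(N)      # plus broadband arc noise

ARC_THRESHOLD = 1e-4                             # invented threshold
print(hf_energy_ratio(clean) < ARC_THRESHOLD)    # quiet line stays quiet
print(hf_energy_ratio(arcing) > ARC_THRESHOLD)   # broadband burst flagged
```

A real detector would have to separate arc signatures from the legitimate high-frequency hash of dimmers, switch-mode supplies, and motor brushes, which is presumably where the vendor’s “powerful post-processing algorithms” come in.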
I was about to sign up to try one out myself but realized the pointlessness of that. First, a sample of one doesn’t prove much. Also, how could I “inject” known faults into the system (my house wiring) to evaluate it? That would be difficult, risky, foolish, and almost meaningless.
Consider the split supply phases
Instead, I looked around the web to see what others said, knowing that you can’t believe everything you read there. One electrician noted that it is only monitoring one side of the two split phases feeding the house, so there’s a significant coverage gap. Another responded that this was true, but that most issues appear on the neutral wire shared by both phases.
Even Ting addressed this “one side” concern with a semi-technical response: “The signals that Ting is looking for can be detected throughout the home’s electrical system even though it is installed on a single 120V phase. Fundamentally, Ting is designed to detect the tiny electro-magnetic emissions associated with micro-arcing characteristics of potential electrical faults and does so at very high frequencies. At high frequencies, your home wiring acts like a communications network.”
They continued: “Since each phase shares a common neutral back at your main breaker panel, arcing signals from one phase can be detected by Ting even if it is on the opposite phase. Thus, each outlet in the home will see the signal no matter its location of origin to some degree. With its sensitive detector and powerful post-processing algorithms, Ting can separate the signal from the noise and detect if there is unusual electrical activity. So, you only need one Ting for your home.”
This response brought yet another online response: “monitoring the voltage of both sides of the split phase would be far more ideal. One of the more common types of electrical fires is a damaged or open neutral coming from the transformer. This could send one side of your split phase low and the other high frying equipment and starting fires. But if you’re only monitoring one side of the split phase, you will only see a high or low voltage and have no way of knowing if that is from a neutral issue or voltage sagging on the street.”
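That open-neutral failure mode is easy to quantify: with the neutral lost, the two 120-V legs become a series voltage divider across the full 240 V, splitting in proportion to the load impedances. A quick sketch (the load values are illustrative, not from the article):

```python
# Split-phase service with a lost (open) neutral: the two legs form a
# series divider across the full 240 V, so an unbalanced load drives
# one leg high and the other low. Load resistances are illustrative.
V_SERVICE = 240.0  # volts across both legs in series

def leg_voltages(z1, z2):
    """Voltages across the leg-1 and leg-2 loads with an open neutral."""
    i = V_SERVICE / (z1 + z2)   # same current flows through both legs
    return i * z1, i * z2

# Balanced loads: each leg still sees ~120 V, so nothing looks wrong.
print(leg_voltages(10.0, 10.0))   # (120.0, 120.0)

# Unbalanced loads: the lightly loaded leg soars, the other sags.
print(leg_voltages(30.0, 10.0))   # (180.0, 60.0)
```

The 180-V/60-V split in the second case is exactly the equipment-frying scenario the commenter describes, and a monitor watching only one leg sees just “high” or “low” with no way to attribute the cause.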
As for arcing, every house built since 1999 in the US has been required by code to use AFCI (arc fault circuit interrupter) outlets; those can stop an electrical fire in nearly all cases, not just report it. However, using a single Ting is less costly and presumably has some value for an older home that isn’t going to be renovated or updated to code.
How big is the problem?
Data on house fires is collected and analyzed by various organizations including the National Fire Protection Association (NFPA), individual insurance companies, and industry-insurance consortiums. Are house fires due to electrical faults a problem? The answer is that it depends on how you look at it.
Depending on who you ask and what you count, there are about 1.5 million fires each year—but many are outdoor barbeque or backyard wood-pile fires. The blog “Predict & Prevent: From Data to Practical Insight” from the Insurance Information Institute deals with electrical house fires and Ting in a generally favorable way (of course, you have to consider the blog’s source) with some interesting numbers: The 10 years from 2012 through 2021 saw reduced cooking, smoking, and heating fires; however, electrical fires saw an 11 percent increase over that same period, Figure 4. Fire ignitions with an undetermined cause also increased by 11 percent.
Figure 4 The causes of house fires have changed in recent years; electrical fires have increased while others have decreased. Source: U.S. Fire Administration via the Insurance Information Institute
Specific hazards are also detailed, Figure 5:
Figure 5 For those fires whose source has been identified, connected devices and appliances are the source of about half while infrastructure wiring is at about one quarter. Source: Whisker Labs via Insurance Information Institute
The blog also points out that there are many misconceptions regarding electrical fires. It’s easy to assume that most fires are due to older home-wiring infrastructure. However, their data found that 50 percent of home electrical-fire hazards are due to failing or defective devices and appliances, with the other half attributed to home wiring and outlets.
Further, it seems obvious that older homes have higher risk. This may be true only if all other things are equal when considering the effects of age and use on existing wiring infrastructure, but they rarely are. The data shows that this assumption is suspect once other factors, such as materials, build quality, and the standards and codes in force at the time, are considered.
Other implications
If you get this unit through an insurance company (free or semi-free), that means there’s yet another player in the story in addition to the homeowner and Whisker Labs. First, one poster claimed “Digging through the web pages I found each device sends 160 megabytes back to Ting every month…So that means you have to have a stable WiFi router to do the upload. As far as I know, the homeowner does not get a copy of the report uploaded to Ting, but the insurance company does.”
Further, there’s a clause in the agreement between the insurance company that supplied the unit and the homeowner. It says they “may also use the data for purposes of insurance underwriting, pricing, claims handling, and other insurance uses.” Will this information be used to increase your rates or, worse, cancel your home insurance for imperfect wiring?
It’s not easy to say whether the Ting project is a good or bad idea, as that assessment depends on many technical factors and personal preferences. One thing is clear: it may be very useful for collecting and analyzing “big data” across the wiring of millions of homes, AC-line performance, and the relationships between house specifics and electrical risks (hello, AI). However, it can be very tricky when it starts looking at microdata related to a single residence, as it can tell others more about your lifestyle than you would like known, or affect how the insurance company rates your house.
What’s your sense of this device and its technical validity? What about larger-scale technical data-collection value? Finally, how do you feel about personal security and privacy implications?
Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.
Related content
- Ground-fault interruption protection—without a ground?
- AC-DC adapters get their GaN shrink
- Cable Self-Heating: The Other Side of IR Drop
- “Mistakes Were Made,” Even in a Simple 3-Wire AC Hookup
References
- The Wall Street Journal, “This (Possibly) Free Smart Device Listens to Your Home’s Wiring — and Could Prevent a Fire”
- Electrician Talk, “Ting Power Quality device”
- F150 Lightning Forum, “Ting Electrical Fire Safety Device”
- Insurance Information Institute, “Predict & Prevent: From Data to Practical Insight”
- Wirecutter, “This Device Saved My House From an Electrical Fire. And You Might Be Able to Get It for Free.”
- Reddit, “Does Ting actually work and if so, how?”
- Reddit, “Do you recommend Ting electrical monitors?”
- Wikipedia, “Arc-fault circuit interrupter”
- Rainbow Restoration Blog, “28 House Fire Statistics: How Common Are House Fires?”
- Whisker Labs, “2023 Data Analysis Update: Internet of Things (IoT) System Preventing 4 of 5 Home Electrical Fires”
The post AC-Line Safety Monitor Brings Technical, Privacy Issues appeared first on EDN.
Arm’s AI pivot for the edge: Cortex A-320 CPU

As artificial intelligence (AI) at the edge moves from basic tasks like noise reduction and anomaly detection to more sophisticated use cases such as large models and AI agents, Arm has launched a new CPU core, the Cortex A-320, as part of the Arm v9 architecture. Combined with Arm’s Ethos-U85 NPU, the Cortex A-320 enables generative and agentic AI use cases in Internet of Things (IoT) devices. EE Times’ Sally Ward-Foxton provides details of this AI-centric CPU upgrade while also highlighting key features like better memory access, Kleidi AI, and software compatibility.
Read the full story at EDN’s sister publication, EE Times.
Related Content
- Edge Artificial Intelligence (AI) Game Changer
- Edge AI chip features “at-memory” architecture
- SiMa.ai’s Second-Gen Edge AI Chip Goes Multi-Modal
- New Arm architecture brings enhanced security and AI to IoT
- Arm adds new Cortex-M processor for AI on small IoT devices
VNA enables fast, accurate RF measurements

With high measurement speed and stability, the R&S ZNB3000 vector network analyzer (VNA) supports large-scale RF component production. Its PCB-based frontend minimizes thermal drift, enabling reliable measurements for days without recalibration. The analyzer is also useful in RF labs.
The ZNB3000 is available with two or four ports and covers frequency ranges of 9 kHz to 4.5 GHz, 9 GHz, 20 GHz, and 26.5 GHz. R&S states that it offers the highest dynamic range and output power in its class, achieving a dynamic range of up to 150 dB with trace noise below 0.0015 dB RMS and providing +11 dBm output power at 26.5 GHz. Further, the VNA completes a 1-MHz to 26.5-GHz frequency sweep with 1601 points, 500-kHz IF bandwidth, and two-port error correction in 21.2 ms.
Understanding measurement uncertainty under test conditions is essential. Previously, calculating uncertainty for DUT S-parameters was only possible in a metrology lab. With the R&S ZNB3-K50(P) option, developed with METAS, the R&S ZNB3000 now calculates and displays uncertainty bands alongside measured S-parameters.
The ZNB3000 VNA is available now. To request pricing information, use the link to the product page below.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
Multiprotocol SoCs ease IoT integration

Silicon Labs’ MG26 family of wireless SoCs enables mesh IoT connectivity through Matter, OpenThread, and Zigbee protocols. By supporting concurrent multiprotocol capabilities, the MG26 chips simplify the integration of smart home and building devices—such as LED lighting, switches, sensors, and locks—into both Matter and Zigbee networks simultaneously.
The MG26 SoCs offer up to 3 MB of flash and 512 KB of RAM, doubling the memory of other Silicon Labs multiprotocol devices. Powered by an Arm Cortex-M33 CPU with dedicated cores for radio and security subsystems, these devices offload tasks from the main core, optimizing performance for customer applications. Embedded AI/ML hardware acceleration enables up to 8x faster processing of machine learning algorithms, consuming just 1/6th the power compared to running them on the CPU.
Silicon Labs’ Secure Vault and Arm TrustZone meet all Matter security requirements. Secure OTA firmware updates and secure boot protect against malicious software installation and enable vulnerability patching. Through Silicon Labs’ Custom Part Manufacturing Service, MG26 devices can be programmed with customer-specific Matter device attestation certificates, security keys, and other features during fabrication.
The MG26 family of wireless SoCs is now generally available through Silicon Labs and its distribution partners.
MPUs enhance HMI application performance

Microchip’s SAMA7D65 MPUs, based on an Arm Cortex-A7 core running up to 1 GHz, integrate a 2D GPU, LVDS, and MIPI DSI. These features enhance data transmission and processing for improved graphics performance, optimizing HMI applications in industrial, medical, and transportation markets.
The SAMA7D65 microprocessors feature dual Gigabit Ethernet MACs with Time Sensitive Networking (TSN) support, ensuring precise synchronization and low-latency communication for industrial and building automation HMI systems. This enables seamless data exchange and deterministic networking, essential for responsive user interfaces.
Microchip also offers a system-in-package (SiP) variant of the SAMA7D65 MPU, the SAMA7D65D2G, which integrates a 2-Gb DDR3L DRAM for high-speed synchronization. Its low-voltage design reduces power consumption and optimizes energy efficiency. SiPs streamline development by addressing high-speed memory interface challenges and simplifying memory supply, accelerating time to market. Additionally, a system-on-module (SOM) variant is available for early access.
SAMA7D65 MPUs are available now in production quantities.
GNSS receivers achieve precise positioning

TeseoVI GNSS receivers from Microchip integrate multi-constellation and quad-band signal processing on a single die. This series of ICs and modules provides centimeter-level accuracy for high-volume automotive and industrial applications, such as ADAS, autonomous driving, asset trackers, and mobile robots for home deliveries.
Three standalone chips—the STA8600A, STA8610A, and STA9200MA—include dual independent Arm Cortex-M7 processing cores for local control of IC functions, along with ST’s phase-change memory to remove external memory needs. The STA9200MA runs dual cores in lockstep, providing hardware redundancy that meets ISO26262 ASIL-B functional safety requirements.
The TeseoVI family also includes two GNSS automotive modules, the VIC6A (16×12 mm) and ELE6A (17×22 mm), which integrate the chipset along with key external components—TCXO, RTC, SAW filter, and RF frontend—into a larger package with fewer pins and an EMI shield. These modules simplify development by eliminating the need for RF path design.
Samples of the TeseoVI GNSS receivers are available on request.
GaN, DPD tech improve 5G RU energy efficiency

MaxLinear and RFHIC have collaborated on a power amp solution for high-power macro cell radio units (RUs) that lowers power consumption, weight, and volume. The setup combines RFHIC’s GaN power amplifiers with MaxLinear’s digital predistortion (DPD) technology running on the Sierra radio SoC. The companies will showcase the solution at next week’s Mobile World Congress 2025.
MaxLinear’s DPD technology (MaxLIN) and Sierra radio chip, combined with RFHIC’s ID19801D GaN power amplifier and SDM19007-30H drive amplifier, achieve 55.2% line-up power efficiency with ACLR < -61 dBc and EVM < 3% at 49.6 dBm (91 W). The setup operates in the PCS band (1930–1995 MHz) with 2×NR10MHz carriers.
The Sierra radio SoC supports all major RU applications, including conventional macro, massive MIMO, pico, and all-in-one small cells. It integrates an RF transceiver supporting up to 8 transmitters, digital frontend with MaxLIN, low-PHY baseband processor, O-RAN split 7.2x fronthaul interface, and Arm Cortex-A53 quad-core CPU subsystem.
RFHIC’s ID series GaN power transistors operate from 1.8 GHz to 4.2 GHz, delivering saturated power levels of 410 W, 460 W, 700 W, and 800 W. The SDM series two-stage GaN hybrid drive amplifiers, internally matched to 50 Ω, cover 1.8 GHz to 4.1 GHz with output power options of 40 W, 60 W, and 80 W.
1 A, 20V PWM DAC current source with tracking preregulator

This design idea reprises another “1-A, 20-V, PWM-controlled current source.” Like the earlier circuit, this design integrates an LM3x7 adjustable regulator with a PWM DAC to make a programmable 20-V, 1-A current source. It profits from the accurate internal voltage reference and the overload and thermal protection features of this time-proven Bob Pease masterpiece!
However, unlike the earlier design idea that requires a floating, fixed-output 24-VDC power adapter, this sequel incorporates a ground-referred boost preregulator that can run from a 5-V regulated or unregulated supply rail. The previous linear design has limited power efficiency that can drop to single-digit percentages when driving low-voltage loads. The preregulator in this version fixes that by tracking the input-output voltage differential across the LM3x7, maintaining it at a constant 3 V. This provides adequate dropout-suppressing headroom for the LM3x7 while minimizing wasted power and unnecessary heat.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Here’s how it works. LM317 fans will recognize Figure 1 as the traditional LM317 constant current source topology that maintains Iout = Vadj/Rs by forcing the ADJ pin to be 1.25 V more negative (a.k.a. less positive) than the OUT pin. It has worked great for 50 years, but of course the only way you can vary Iout is by changing Rs.
Figure 1 A classic LM317 constant current source where: Iout = Vadj/Rs = 1.25 V/Rs.
Figure 2 shows another (easier) way to make Iout programmable. The circuit enables control of ampere-scale Iout with only milliamps of Ic control current.
Figure 2 A modification that makes the current source variable where: Iout = (Vadj – IcRc)/Rs – Ic.
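Plugging illustrative numbers into the Figure 2 relation shows how milliamp-scale Ic sweeps an ampere-scale Iout. The Rs and Rc values below are assumptions chosen for a convenient 1-A full scale, not the actual R4/R5 values from Figure 3:

```python
VADJ = 1.25            # LM317 internal reference, volts

def iout(ic, rs=1.25, rc=1000.0):
    """Figure 2 relation: Iout = (Vadj - Ic*Rc)/Rs - Ic.
    rs and rc (ohms) are illustrative, not the article's R4/R5 values."""
    return (VADJ - ic * rc) / rs - ic

print(iout(0.0))       # 1.0 A full scale with zero control current
print(iout(1.0e-3))    # ~0.199 A: 1 mA of Ic throttles Iout by ~0.8 A
```

The leverage comes from Rc: each milliamp of Ic drops a volt across Rc, directly eating into the 1.25-V reference that sets the current through Rs.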
Figure 3 shows this idea fleshed out and put to practical use. Note that Rs = R4 and Rc = R5.
Figure 3 U2 current source programmed by U1 PWM DAC and powered by U3 tracking preregulator.
Figure 2’s Ic control current is provided by the Q2 Q3 complementary pair. Since Q3 provides tempco compensation for Q2, it should be closely thermally coupled with its partner. Q4 does some nonlinearity compensation by providing curvature correction to Q2’s Ic control current generation. The daisy chain of three 1N4001 diodes provides bias for Q2 and Q4.
The PWM input frequency is assumed to be 10 kHz or thereabouts. Ripple filtering is the purpose of C1 and C2 and gets some help from an analog subtraction cancellation trick first described in “Cancel PWM DAC ripple with analog subtraction.”
About that tracking preregulator thing: Control of U3 to maintain the 3 V of headroom required to hold U2 safe from dropout relies on Q1 acting as a simple differential amplifier. Q1 drives U3’s Vfb voltage feedback pin to maintain Vfb = 1.245 V. Therefore (if Vbe = Q1’s base-emitter bias, typically ~0.6 V for Ie = ~500 µA)
Vfb/R7 = ((U2in – U2out) – Vbe)/R6
1.245 V = (U2in – U2out – 0.6 V)/(5100/2700)
U2in – U2out = 1.89 × 1.245 V + 0.6 V ≈ 3 V
Note, if you want to use this circuit with a different preregulator with a different Vfb, just adjust:
R7 = R6 × Vfb/2.4 V
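A quick numerical check of the headroom arithmetic above, plus the R7 scaling rule applied to a different regulator (the 0.8-V alternate Vfb below is a hypothetical value, not from the article):

```python
R6, R7 = 5100.0, 2700.0   # ohms, per the article's divider
VBE = 0.6                 # Q1 base-emitter drop, volts (typ. at ~500 uA)

def headroom(vfb):
    """U2in - U2out maintained by the preregulator loop:
    Vfb/R7 = (headroom - Vbe)/R6  =>  headroom = Vfb*R6/R7 + Vbe."""
    return vfb * R6 / R7 + VBE

def r7_for(vfb, r6=R6):
    """Rescale R7 to keep ~3 V of headroom with a different Vfb."""
    return r6 * vfb / 2.4

print(round(headroom(1.245), 2))   # ~2.95 V, i.e., the ~3-V target
print(round(r7_for(0.8)))          # 1700 ohms for a hypothetical 0.8-V Vfb
```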
Finally, a note about overvoltage. Current sources have the potential (no pun!) for output voltage to soar to damaging levels (destructive of U3’s internal switch and downstream circuitry too) if deprived of a proper load. R11 and R12 protect against this by utilizing U3’s built-in OVP feature to limit the maximum open-circuit voltage to about 30 V if the load is lost.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- 1-A, 20-V, PWM-controlled current source
- A precision digital rheostat
- Tracking preregulator boosts efficiency of PWM power DAC
- Cancel PWM DAC ripple with analog subtraction
The Apple iPhone 16e: No more fiscally friendly “SE” for thee (or me)

Truthfully, I originally didn’t plan on covering the new iPhone 16e, Apple’s latest “entry-level” phone, preceded by three generations’ worth of iPhone SE devices.
I knew a fourth-generation offering was coming (and sooner vs later), since European regulations had compelled Apple to phase out the SEs’ proprietary Lightning connector in favor of industry-standard USB-C. The iPhone SE 3, announced in 2022, had already been discontinued at the end of last year in Europe, in fact, along with the similarly Lightning-equipped iPhone 14, both subsequently also pulled from Apple’s product line across the rest of the world coincident with the iPhone 16e’s unveiling on February 19, 2025. Considering the heavy company-wide Apple Intelligence push, the iPhone SE 3 was also encumbered by its sub-par processor (A15 Bionic) and system memory allocation (4 GBytes), both factors suggesting the sooner-vs-later appearance of a replacement.
But how exciting could a new “entry-level” (translation: cost-optimized trailing-edge feature set) smartphone be, after all? Instead, I was planning on covering Amazon’s unveiling of its new AI-enhanced (and Anthropic-leveraging, among others) Alexa+ service, which happened earlier today as I write these words on the evening of February 26. That said, as Amazon’s launch event date drew near, I began hearing that no new hardware would be unveiled, just the upgraded service (despite the fact that Amazon referred to it as a “Devices & Services event”), and that what we now know of as Alexa+ would only be beta-demoed, not actually released until weeks or months later. Those rumors unfortunately all panned out; initial user upgrades won’t start until sometime in March, rolling out more broadly over a period of unspecified duration.
What those in attendance in New York (no virtual livestream was offered) saw were only tightly scripted, albeit reportedly impressive (when they worked, that is, which wasn’t always the case), demos. As we engineers know well, translating from curated demos to real-world diverse usage experiences rarely goes without a hitch or few. Then there were the indications of longstanding (and ongoing) struggles with AI “hallucinations”, another big-time knock against the technology. Add in the fact that Alexa+ won’t run on any of the numerous, albeit all geriatric, Amazon devices in my abode, and I suspect that, at least for a while, I’ll be putting my coverage plans on hold.
Pricing deviations from prior generations
Back to the iPhone 16e then, which I’m happy to report ended up being far more interesting than I’d anticipated, both for Apple’s entry-level and broader smartphone product line and more generally for the company’s fuller hardware portfolio. Let’s begin with the name. “SE” most typically in the industry refers to “Special Edition”, I’ve found, but Apple has generally avoided clarifying the meaning here, aside from a brief explanation that Phil Schiller, Apple’s then-head of Worldwide Product Marketing (and now Apple Fellow), gave a reporter back in 2016 at the first-generation iPhone SE unveiling.
And in contrast to the typical Special Edition reputation, which comes with an associated price tag uptick, Apple’s various iPhone SE generations were historically priced lower than the mainstream and high-end iPhone offerings that accompanied them in the product line at any point in time. To accomplish this, they were derivations of prior-generation mainstream iPhones, for which development costs had already been amortized. The iPhone SE 3, for example, was form factor-reminiscent of the 2017-era, LCD-based iPhone 8, albeit with upgraded internals akin to those in the 2021-era iPhone 13.
The iPhone 16e marks the end of the SE generational cadence, at least for now. So, what does “e” stand for? Once again, Apple isn’t saying. I’m going with “economy” for now, although reality doesn’t exactly line up with that definitional expectation. The starting price for the iPhone SE 3 at introduction was $429. In contrast, the iPhone 16e begins at $599 and goes up from there, depending on how much internal storage you need. Not only did Apple ratchet up the price tag, as it’s more broadly done in recent years, it also broke through the perception-significant $499 barrier, which frankly shocked me. In contrast, if you’ll indulge a bit of snark, I chuckled when I noticed Google’s response to Apple’s news: a Pixel 8a price cut to $399.
Upgrades
That said, RAM jumps from 4 GBytes on the iPhone SE 3 to (reportedly: as usual, Apple didn’t reveal the amount) 8 GBytes. The iPhone SE 3’s storage started at 64 GBytes; now it’s 128 GBytes minimum. The 4.7” diagonal LCD has been superseded by a 6.1” OLED; more generally, Apple no longer sells a single sub-6” smartphone. And the front and rear cameras are both notably resolution-upgraded from those in the iPhone SE 3. The front sensor array also now supports TrueDepth for (among other things) FaceID unlock, replacing the legacy Touch ID fingerprint sensor built into the no-longer-present Home button, and the rear one, although still only one, includes 2x optical zoom support.
Turning now to the internals, there are three particularly notable (IMHO) evolutions that I’ll focus on. Unsurprisingly, the application processor was upgraded for the Apple Intelligence era, from the aforementioned A15 Bionic to the A18. But this version of the newer SoC is different from that in the iPhone 16, enabling only 4 GPU cores versus 5 on the mainstream iPhone 16 (and 6 on the iPhone 16 Pro). Again, as I mentioned before, I suspect that all three A18 variants are sourced from the same sliver of silicon, with the iPhone 16e’s version detuned to maximize usable wafer yield. Similarly, there may also be clock speed variations, another spec that Apple unfortunately doesn’t make public, between the three A18 versions.
In-house 5G chip
More significant to me is that this smartphone marks the initial unveil of Apple’s first internally developed LTE-plus-5G cellular subsystem. A quick history lesson: as regular readers already know, the company generally prefers to be vertically integrated versus external supplier-dependent, when doing so makes sense. One notable example was the transition from Intel x86 to Apple Silicon Arm-based computer chipsets that began in 2020. Notable exceptions (at least to date) to this rule, conversely, include volatile (DRAM) and nonvolatile (flash) memory, and image sensors. As with Intel in CPUs, Apple has long had a “complicated” (among other words) relationship with Qualcomm for cellular chipsets. Specifically, back in April 2019, the two companies agreed to drop all pending litigation between them, shortly after Qualcomm had won a patent-infringement lawsuit that had begun two years earlier. Three months later, Apple dropped $1B to buy the bulk of Intel’s (small world, eh?) cellular modem business.
Six years later, the premiere of the C1 cellular modem marks the fruits (Apple? Fruit? Get it?) of the company’s longstanding labors. Initial testing results on pre-release devices are encouraging from performance and network-compatibility standpoints, and Apple’s expertise in power consumption, along with the potential for tight integration with other internally developed silicon subsystems, operating systems, and applications, is also promising. That said, this initial offering is absent support for ultra-high-speed—albeit range-restrictive, interference-prone and coverage-limited—mmWave, i.e., ultrawideband (UWB) 5G. For that matter, speaking of wireless technologies, there’s no short-range UWB support for AirTags and the like in the iPhone 16e, either.
Whose modem—Apple’s own, Qualcomm’s, or a combination—will the company be using in its next-generation mainstream and high-end iPhone 17 offerings due out later this year? Longer term, will Apple integrate the cellular modem—at minimum, the digital-centric portions of it—into its application processors, minimally at the common-package or perhaps even the common-die level? And what else does the company have planned for its newfound internally developed technology; cellular service-augmented laptops, perhaps? Only time will tell. Apple is rumored to also be developing its own Wi-Fi transceiver silicon, with the aspiration of supplanting today’s Broadcom-supplied devices in the future.
Wireless charging support
Speaking of wireless—and cellular modems—let’s close out with a mention of wireless charging support. The iPhone 16e still has it. But for the first time since the company rolled out its MagSafe-branded wireless charging capabilities with the iPhone 12 series in October 2020, there are no embedded magnets this time around (nor, perhaps, in future devices).
Initial speculation suggested that perhaps the magnets were dropped because they might functionally conflict with the C1 cellular modem, a rumor that Apple promptly quashed. My guess, instead, is that this was a straightforward bill-of-materials cost-reduction move on the company’s part, perhaps coupled with aspirations toward weight and thickness reductions, and maybe even a desire to devote the freed-up internal cavity volume to expanded battery capacity and the like. After all, as I’ve mentioned before, anyone using a case on their phone needs to pick a magnet-inclusive case option anyway, regardless of whether magnets are already embedded in the device. That all said, I’m still struck by the atypical-for-Apple step backward that the omission of magnets represents, not to mention its Android-reminiscent aspect.
Future announcements?
The iPhone 16e isn’t the only announcement that Apple has made so far in 2025. Preceding it were, for example:
- Apple TV+ service support for Android (Messages next? I jest), and
- The second-generation Beats Powerbeats Pro (again, unlike Apple-branded earbuds, with Android support)
And what might be coming down the pike? Well, with today’s heavy discounts on current offerings as one possible indication of the looming next-generation queue’s contents, there’s likely to be:
- An M4 upgrade to the 13” and 15” MacBook Air, and
- An Apple Intelligence-supportive hardware update to the baseline iPad
Further down the road, I’m guessing we’ll also see:
- Next-generation AirTags
- The third generation of the AirPods Pro
- The aforementioned iPhone 17 family, reportedly including an “Air” variant, accompanied by next-generation Apple Watches
- M5 SoC-based devices (leading with the iPad Pro again, or back to the traditional computer-first cadence? The latter would be my guess), and
- Maybe even that HomePod Hub, aka HomePad we keep hearing rumors about
You’ll note that I mindfully omitted a Vision Pro upgrade from the 2025 wishlist. Stay tuned for more press release-based unveilings to come later this spring, the yearly announcement-rich WWDC this summer, and the company’s traditional smartphone and computer family generational-upgrade events this fall. I’ll of course cover the particularly notable stuff here in the blog. And for now, I welcome your thoughts on today’s coverage in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- It’s September in Apple land, so guess what it’s time for?
- Apple on Arm: How did we get here?
- Apple’s MagSafe technology: Sticky, both literally and metaphorically
- The 2025 CES: Safety, Longevity and Interoperability Remain a Mess
- Apple’s Spring 2024: In-person announcements no more?
- The 2024 WWDC: AI stands for Apple Intelligence, you see…
The post The Apple iPhone 16e: No more fiscally friendly “SE” for thee (or me) appeared first on EDN.
The QLDPC code breakthrough in quantum error correction

Quantum low-density parity-check (QLDPC) codes, a “holy grail” of quantum error correction research and development for 30 years, have seen a breakthrough, according to Vancouver-based Photonic Inc. These codes use fewer quantum bits (qubits) than traditional surface-code approaches. The company’s chief quantum officer, Stephanie Simmons, sat down with EE Times to explain how this low-overhead error-correction technology works to realize the promised exponential speedups in quantum computing.
Read the full story at EDN’s sister publication EE Times.
Related Content
- Quantum Computers Explained
- The Basics Of Quantum Computing
- Error correction may stall quantum computing
- Error Correction Key to Achieving Quantum Advantage
- New chip reveals Microsoft’s quantum computing playbook
The post The QLDPC code breakthrough in quantum error correction appeared first on EDN.
Overdesign

How much needless stuff is designed into modern products? How much do we suffer when we’re trying to use software products sometimes described as “bloatware”? What was the origin of the term “bells and whistles” as an attribute of products that are overly complex?
Back in 1940, a then-new drawbridge was opened for service along the Belt Parkway in Brooklyn, NY. It was called the Mill Basin Bridge.
That structure has since been replaced by a higher bridge, one with enough vertical clearance above the underlying waterway that boat traffic can pass in and out unimpeded. Back then, however, the roadway of the 1940 bridge had to be repeatedly raised and lowered as boat traffic came and went underneath. Figure 1 shows a roadway section in its up position.
Figure 1 A roadway section of the Mill Basin Bridge in its up position. Before raising the roadway, the attendant had to deploy two large steel barriers to stop traffic.
There was an observation tower at that bridge in which a bridge attendant would be stationed. It was his job to get that roadway raised and lowered when needed and to halt automobile traffic when the roadway was up. Part of his task was to operate two huge steel barriers that would cross the roadway in both directions when the roadway was impassable. Those barriers were multi-ton behemoths designed to ensure that no car was ever going to traverse a raised roadbed and fall into the water below.
One day, as a water vessel needed to get by, the attendant activated those huge barriers to move into position, but as he did so, he spotted a speeding motorist who was not going to be able to stop in time before crashing into the barrier ahead. The attendant reversed the barrier control motors, but because it looked like the barrier would not be out of the way in time, he left his post in the tower and tried to physically push the barrier out of the car’s path by hand. He did not succeed; the oncoming car crashed into the barrier, and the attendant was killed.
My father was one of the emergency crew who responded to that disaster. He told me all about it the following day. Dad was the foreman of the part of the NYC Department of Bridges that serviced that bridge. There had been many other incidents of this ilk involving those same barriers, and when they occurred, our house telephone would ring at any time of day or night and my father would have to go off to work as a result.
Years later, those steel barriers were removed and replaced with slender wooden crossing gates painted with red and white stripes. Those stripes could be seen by oncoming motorists from quite far away, so no car ever went up a raised roadbed. Yes, the painted gates may have been now and then smithereened, but no repeat of the above tragedy ever took place, at least so far as I was ever told.
The overdesign aspect of all this is that those immense steel barriers were not only unnecessary, they were a danger in their own right. Putting them in was a major cost item with negative impact (no pun intended) on the drawbridge’s operating history.
Those barriers were a tragic example of overdesign.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Loud noise makers and worker safety
- RF power amplifier safety
- Fractured wires and automotive safety
- The headlights and turn signal design blunder
The post Overdesign appeared first on EDN.
Power Tips #138: 3 ways to close the control loop for totem-pole bridgeless PFC

Among all power factor correction (PFC) topologies, totem-pole bridgeless PFC provides the best efficiency; therefore, it is widely used in servers and data centers. However, closing the current control loop of a continuous conduction mode (CCM) totem-pole bridgeless PFC is not as straightforward as it is for a traditional PFC. A traditional PFC operating in CCM employs an average current-mode controller [1], as shown in Figure 1, where VREF is the voltage-loop reference, VOUT is the sensed PFC output voltage, Gv is the voltage loop, VIN is the sensed PFC input voltage, IREF is the current-loop reference, IIN is the sensed PFC inductor current, GI is the current loop, and d is the pulse-width modulation (PWM) duty ratio. Since a bridge rectifier is used in a traditional PFC, all these values are positive, and the current feedback signal IIN is the rectified input current signal.
Figure 1 Average current-mode controller for PFC where all the parameters listed have positive values and IIN is the rectified input current signal. Source: Texas Instruments
Since the inductor current in the totem-pole bridgeless PFC is bidirectional, the current-sense method used in a traditional PFC will not work. Instead, you will need a bidirectional current sensor, such as a Hall-effect sensor, to sense the bidirectional inductor current and provide a feedback signal to the control loop.
The output of the Hall-effect sensor will not 100% match the sensed current, though. For example, if the sensed current is a sine wave, then the output of the Hall-effect sensor is a sine wave with a DC offset, as shown in Figure 2. Thus, you can’t use it as the feedback signal in the current-mode controller shown in Figure 1, and you will have to modify the controller to accommodate this new feedback signal. In this power tip, I’ll describe three ways to close the current control loop with this new feedback signal.
Figure 2 Totem-pole bridgeless PFC and its current-sense signal showing that the Hall-effect sensor output will not 100% match the sensed current. Source: Texas Instruments
Method 1: Controllers without a negative loop reference
Some digital controllers, such as the UCD3138 from Texas Instruments (TI), use a hardware state machine to implement the control loop; therefore, all of the input signals to the state machine must be greater than or equal to zero. In such cases, follow these steps to close the current control loop:
- Sense the AC line and AC neutral voltage through two analog-to-digital converters (ADCs) separately.
- Use firmware to rectify the sensed VAC signal, as shown in Equation 1 and Figure 3.
Figure 3 Using the firmware shown in Equation 1 to rectify the sensed input voltage VAC. Source: Texas Instruments
- Calculate the sinusoidal reference, VSINE, using the same method as when calculating IREF in traditional PFC, as shown in Equation 2 and Figure 4.
Figure 4 Calculating a sinusoidal reference (VSINE) using the same method as when calculating IREF in traditional PFC. Source: Texas Instruments
- Use a Hall-effect sensor output as the current feedback signal IIN directly (Equation 3).
- During the positive AC cycle, if you compare the shape of VSINE and the Hall-effect sensor output, they have the same shape. The only difference is the DC offset. Use Equation 4 to calculate the current-loop reference, IREF.
- The control loop has standard negative feedback control. Use Equation 5 to calculate the error that goes to the control loop:
- During the negative AC cycle, if you compare the shape of VSINE and the Hall-effect sensor output, the difference is not only the DC offset; their shapes are opposite as well. Use Equation 6 to calculate the current-loop reference, IREF.
- During the negative AC cycle, the higher the inductor current, the lower the value of the Hall-effect sensor output. The control loop needs to change from negative feedback to positive feedback. Use Equation 7 to calculate the error going to the control loop.
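The logic of Method 1 can be sketched per sample as follows. This is a minimal illustration of the steps above, not TI UCD3138 firmware: the variable names and the mid-scale offset value are assumptions, and the article’s numbered equations are not reproduced verbatim here.

```python
# Illustrative sketch of Method 1 (all-positive state-machine controllers).
# HALL_OFFSET is an assumed mid-scale ADC code for zero inductor current.

HALL_OFFSET = 2048

def current_loop_error(v_sine, i_hall, positive_half_cycle):
    """Error fed to the current-loop compensator GI.

    v_sine: non-negative sinusoidal reference from the voltage loop
    i_hall: raw Hall-effect sensor ADC code, DC offset included
    """
    if positive_half_cycle:
        # Reference sits above the DC offset; standard negative feedback.
        i_ref = HALL_OFFSET + v_sine
        return i_ref - i_hall
    else:
        # Sensor output falls as current magnitude rises, so both the
        # reference and the feedback sense are flipped.
        i_ref = HALL_OFFSET - v_sine
        return i_hall - i_ref
```

Because the state machine cannot represent negative quantities, the offset stays embedded in both the reference and the feedback; only the error itself carries sign information into the compensator.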
Method 2: Pure firmware-based controllers
For a pure firmware-based digital controller such as the TI C2000 microcontroller, the control loop is implemented in firmware, which means that the internal calculation parameters can be positive or negative. In such cases, follow these steps to close the current control loop:
- Sense the AC line and AC neutral voltage through two ADCs. Then subtract the neutral voltage from the line voltage to obtain VIN, as shown in Equation 8 and Figure 5.
Figure 5 Calculating VIN by subtracting the neutral voltage from the line voltage. Source: Texas Instruments
- Calculate the sinusoidal current-loop reference, IREF, using the same method as in traditional PFC, as shown in Equation 9 and Figure 6.
Figure 6 Calculating IREF using the same method as the traditional PFC. Source: Texas Instruments
- If you compare the shape of IREF and the Hall-effect sensor output, they have the same shape; the only difference is the DC offset. Use Equation 10 to calculate the input current feedback signal, IIN. Figure 7 shows the waveform.
Figure 7 The waveform of the Hall sensor output and DC offset to calculate IIN. Source: Texas Instruments
- During the positive AC cycle, the control loop has standard negative feedback control. Use Equation 11 to calculate the error going to the control loop:
- During the negative AC cycle, the higher the inductor current, the lower the value of the Hall-effect sensor output; thus, the control loop needs to change from negative feedback to positive feedback. Use Equation 12 to calculate the error going to the control loop.
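With signed arithmetic available, Method 2 collapses to removing the sensor’s DC offset and flipping the error sign per half cycle. The sketch below is a plausible reading of the steps above with illustrative names and an assumed mid-scale offset; it is not TI’s actual C2000 firmware.

```python
# Illustrative sketch of Method 2 (signed-arithmetic firmware controllers).
# HALL_OFFSET is an assumed mid-scale ADC code, not a TI-defined constant.

HALL_OFFSET = 2048

def current_loop_error(i_ref, hall_code, positive_half_cycle):
    # Remove the sensor's DC offset so i_in is a signed current that
    # matches the signed sinusoidal reference i_ref.
    i_in = hall_code - HALL_OFFSET
    if positive_half_cycle:
        return i_ref - i_in   # standard negative feedback
    else:
        return i_in - i_ref   # feedback sense flipped for the negative half
```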
Method 3: Duty-ratio feedforward control
Total harmonic distortion (THD) requirements are becoming stricter, especially in server and data-center applications. Reducing THD necessitates pushing the control-loop bandwidth higher and higher, but high bandwidths reduce phase margins, risking loop instability, and the limited PFC switching frequency also prevents bandwidths from going very high. To solve this problem, you can add a precalculated duty cycle to the control loop to generate the PWM; this is called duty-ratio feedforward control (dFF) [2], [3].
For a boost topology operating in CCM, Equation 13 calculates dFF as:
This duty-ratio pattern effectively produces a voltage across the switch whose average over a switching cycle is equal to the rectified input voltage. A regular current-loop compensator changes the duty ratio around this calculated duty-ratio pattern. Since the impedance of the boost inductor at the line frequency is very low, a small variation in the duty ratio produces enough voltage across the inductor to generate the required sinusoidal current waveform so that the current-loop compensator does not need to have a high bandwidth.
Figure 8 depicts the resulting control scheme. Adding the calculated dFF to the traditional average current-mode control output, dI, results in the final duty ratio, d, used to generate the PWM waveform to control PFC.
Figure 8 Duty-ratio feedforward control for PFC where adding the calculated dFF to the traditional average current-mode control output, dI, results in the final duty ratio, d, used to generate the PWM waveform to control PFC. Source: Texas Instruments
To leverage the advantages of dFF in a totem-pole bridgeless PFC, follow these steps to close the current loop:
- Follow steps 1, 2, 3, 4 and 5 from Method 2.
- Calculate dFF, as shown in Equation 14. Since VIN is a sine wave and its value is negative in a negative AC cycle, use its absolute value for the calculation.
- Use Equation 15 to add dFF to the GI output, dI, and obtain the final d.
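Under the standard boost CCM steady-state relation VOUT = VIN/(1 − d), the feedforward term works out to dFF = (VOUT − |VIN|)/VOUT, consistent with the absolute-value note above. The sketch below assumes that form; the names and the clamp limits are illustrative assumptions.

```python
def duty_feedforward(v_in, v_out):
    # Boost CCM steady state: v_out = v_in / (1 - d)  =>  d = 1 - v_in/v_out.
    # The absolute value keeps the pattern valid in both AC half cycles.
    return (v_out - abs(v_in)) / v_out

def total_duty(v_in, v_out, d_i):
    # Final duty ratio: feedforward term plus the compensator output dI,
    # clamped to the valid PWM range (clamp limits are an assumption).
    d = duty_feedforward(v_in, v_out) + d_i
    return min(max(d, 0.0), 1.0)
```

Because dFF already places the switch-node average at the rectified input voltage, the compensator output dI only needs to make small corrections, which is why the current loop’s bandwidth can stay modest.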
You can also use dFF control for a hardware state machine-based controller; for details, see reference [2].
Closing the current loop
Closing the current loop of a totem-pole bridgeless PFC is not as straightforward as in a traditional PFC, and it may also vary from controller to controller. This power tip can help you eliminate the confusion around control-loop implementations in a totem-pole bridgeless PFC and choose the appropriate method for your design.
Bosheng Sun is a systems engineer at Texas Instruments, focusing on developing digitally controlled high-performance AC/DC solutions for server and industrial applications. Bosheng received an M.S. degree from Cleveland State University in 2003 and a B.S. degree from Tsinghua University in Beijing in 1995, both in electrical engineering. He holds five US patents.
Related Content
- Power Tips #108: Current sensing considerations in a bridgeless totem pole PFC
- A comparison of interleaved boost and totem-pole PFC topologies
- Power Tips #116: How to reduce THD of a PFC
- Power Tips #132: A low-cost and high-accuracy e-meter solution
References
- Dixon, Lloyd. “High Power Factor Preregulator for Off-Line Power Supplies.” Texas Instruments Power Supply Design Seminar SEM600, literature No. SLUP087, 1988.
- Sun, Bosheng. “Duty Ratio Feedforward Control of Digitally Controlled PFC.” Power Systems Design, Dec. 3, 2014.
- Van de Sype, David M., Koen De Gussemé, Alex P.M. Van den Bossche, and Jan A. Melkebeek. “Duty-Ratio Feedforward for Digitally Controlled Boost PFC Converters.” Published in IEEE Transactions on Industrial Electronics 52, no. 1 (February 2005): pp. 108-115.
The post Power Tips #138: 3 ways to close the control loop for totem-pole bridgeless PFC appeared first on EDN.
RRAM: Non-volatile memory for high-performance embedded applications

Non-volatile memory is an important component in a wide range of high-performance embedded applications. In particular, many consumer, industrial, and medical applications need increased re-writability to support both more frequent code updates and increased data logging.
These applications require greater memory density to store either a substantially larger code footprint and/or more extensive data logs. Moreover, developers need to be able to improve power efficiency while lowering system cost.
Today, there are numerous non-volatile memory technologies available to developers, including EEPROM, NOR flash, NAND flash, MRAM, and FRAM. Each has its own distinct advantages for specific applications. However, the combination of 1) manufacturing process technologies continuing to scale smaller, 2) the need for higher densities at lower power, and 3) re-writability becoming increasingly important has led to growing interest in RRAM for these applications.
This article will explore RRAM technology and how it provides developers with a new approach to meeting the changing memory requirements of high-performance embedded systems.
Memory in high-performance embedded systems
Emerging connected systems face a number of tough design challenges. For instance, medical devices—such as hearing aids, continuous glucose monitors (CGMs), and patches—must fit into a smaller form factor despite increasing data and event logging requirements necessary to enable remote monitoring and compliance with industry standards.
Next, smart equipment in Industry 4.0 systems require significantly greater code storage to facilitate functionality like remote sensing, edge processing, and firmware over-the-air (FOTA) updates for remote maintenance. Furthermore, the addition of artificial intelligence (AI) at the edge in wearables and Internet of Things (IoT) devices is driving the need for high-performance, energy-efficient non-volatile memory in smaller form factors.
The increased code size and data logging requirements of such systems exceed the embedded non-volatile memory capabilities of microcontrollers. External memory is needed to match increasing density and performance requirements. However, code and data often need different capabilities depending upon performance, density, endurance, and data-write size.
Thus, multiple non-volatile memories may have to be used, such as NOR flash for data logging and high-density EEPROM for code storage. This can lead to systems that use several types of external memory, increasing system cost, complexity, and energy consumption.
Ideally, systems can use a single memory type that supports both external code and data storage without compromising performance or functionality for either. An emerging non-volatile technology to fill this gap as a standalone external memory is RRAM.
Resistive RAM
Resistive RAM (RRAM) is a non-volatile random-access memory that became commercially available in the early 2000s. It operates by changing the resistance of a switching material sandwiched between two electrodes, shown on the left in Figure 1.
Figure 1 Typical RRAM memory cell consists of one transistor and one resistor (left), and the memory state is altered by applying an external bias across the metal electrodes (right). Source: Infineon
The switching material can be a metal oxide or a conductive-bridging switching medium. A typical RRAM memory cell consists of a one-transistor, one-resistor (1T1R) pair, where the resistance of the RRAM element can be altered with an external bias applied across the metal electrodes, as shown on the right side of Figure 1.
Initially, RRAM was developed as a potential replacement for flash memory. At the time, the cost and performance benefits of RRAM weren’t enough to supersede the advantages of other non-volatile memory technologies, especially as an external memory. However, in recent years, several factors have changed to make RRAM a compelling non-volatile alternative.
Specifically, as embedded systems become more integrated and implemented in smaller manufacturing process nodes with substantially larger code and data storage requirements, the following advantages of RRAM for external memory overtake traditional non-volatile options:
- Scalability
Some non-volatile memory technologies are limited in their ability to scale, translating to limitations in overall memory density due to footprint, power, and cost. A major advantage of RRAM is that it can be manufactured in a compatible CMOS process, enabling it to scale to process nodes below 45 nm and even down as low as 10 nm.
For example, the memory industry has had difficulty cost-effectively scaling NOR flash memory as the technology seems to be physically limited to between 35 and 40 nm. Scalability has a direct impact on performance, density, footprint, and energy efficiency.
- Direct write
Data storage for a NOR flash memory requires two operations: an erase operation to clear the target address followed by a write operation. The “direct write” functionality of RRAM eliminates the need to first erase memory. Thus, only a write operation is required to store data. Figure 2 shows the operations required for writing to both NOR flash and RRAM.
Figure 2 NOR flash requires an erase operation before every write operation, increasing write time, energy consumption, and wear on memory cells. RRAM’s ability to direct write speeds write operations, conserves energy, and extends cell endurance. Source: Infineon
This leads to much faster large-scale write operations for RRAM, such as during FOTA updates.
- Byte re-writeable
Some non-volatile memories perform writes based on page size. For example, NOR flash page size is typically either 256 or 512 bytes. This means every write impacts the entire page. To change one byte, the page must be read and stored in a temporary buffer; the change is made to the temporary duplicate.
The flash must then erase the page and write the entire page back in from the buffer. This process is time-consuming and wears the flash (typically 100k+ writes). In addition, data cells that are not changed are worn unnecessarily. Consequently, data logging with NOR flash requires that data is cached and then written in page-sized chunks, adding complexity and potential data loss during a power event.
In contrast, RRAM write size is much smaller (few bytes) with higher endurance than NOR flash. This is more manageable and accommodates data logging requirements well since cells are worn only when written to. Thus, RRAM is robust and efficient for both code storage and data logging in the same memory device.
- Energy efficiency
Through optimizations such as byte re-writability and the elimination of erase operations during data writes, RRAM achieves better energy efficiency: up to 5x lower write energy and up to 8x lower read energy compared to traditional NOR flash.
- Radiation tolerance and electromagnetic immunity
RRAM technology is inherently tolerant to radiation and electromagnetic interference (EMI). This makes RRAM an excellent choice for those applications where environmental robustness is essential.
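The direct-write and byte-rewritable contrasts above can be summarized as a rough operation count for updating a few bytes. The page size is the typical NOR figure cited earlier; the counts are illustrative of the described read-modify-erase-write sequence, not vendor measurements.

```python
# Rough, illustrative operation counts for updating a few bytes in
# page-based NOR flash versus byte-writable RRAM.

NOR_PAGE_SIZE = 256  # typical NOR flash page size cited in the article

def nor_flash_update_ops(n_bytes):
    """Update n_bytes that all fall within one NOR flash page."""
    assert 0 < n_bytes <= NOR_PAGE_SIZE
    return {
        "bytes_read": NOR_PAGE_SIZE,     # copy the whole page to a RAM buffer
        "page_erases": 1,                # erase before the page can be rewritten
        "bytes_written": NOR_PAGE_SIZE,  # write the full page back
    }

def rram_update_ops(n_bytes):
    """Direct write: no erase, and only the changed bytes are written."""
    return {"bytes_read": 0, "page_erases": 0, "bytes_written": n_bytes}
```

The gap also maps directly to wear: the NOR path cycles every cell in the page, while the RRAM path wears only the cells actually changed.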
Consolidate code storage and data logging
RRAM is a proven technology whose time has come. It has shipped in embedded form inside chips for over a decade as an internal non-volatile memory. With its ability to scale to smaller process nodes, provide higher endurance and re-writability at low power, and minimize write time and power consumption through direct-write functionality, RRAM delivers high performance without compromising robustness or efficiency (Table 1).
Table 1 The above data shows a comparison between RRAM and other non-volatile memory technologies. Source: Infineon
RRAM is an ideal memory for consolidating both code storage and data logging in a single external memory to simplify design and reduce system complexity, making RRAM a compelling alternative to traditional non-volatile memories for many consumer, industrial, and medical applications.
Bobby John is senior product marketing manager for memory solutions at Infineon Technologies.
Related Content
- Resistive RAM Memory is Finally Here
- RRAM set to follow 3-D flash, says IMEC
- RRAM: A New Approach to Embedded Memory
- RRAM Startup Raises £7M to Support Data-Hungry Applications
- Monolithic embedded RRAM presents challenges, opportunities
The post RRAM: Non-volatile memory for high-performance embedded applications appeared first on EDN.
Easing the development burden for embedded systems

Embedded developers face an ongoing struggle to keep pace, upgrading compilers and debugging tools while ensuring the processing hardware going into their systems is right-sized for the application. Conventional embedded development tends to force engineers to stick with an established, often vendor-specific ecosystem, where shifting to a different IDE comes with its own costly learning curve. MCU companies and other manufacturers with a foothold in embedded processing are beginning to release solutions that embrace third-party development environments to appeal to the larger market while also maintaining their place with established customers.
Microchip’s moves to ease hardware-software codesign
Microchip’s recent release of its AI coding assistant and unified compiler signals that the company is set to follow this theme of lowering development barriers. The AI coding assistant enables its established MPLAB platform to function within the eminently popular Visual Studio (VS) framework, while the unified compiler license combines the MPLAB C compilers: XC8, XC16, XC-DSC, and XC32.
In a conversation with Rodger Richey, the VP of development systems and academic programs, EDN was able to see how a developer would use the AI coding assistant within the VS IDE.
AI coding assistant in VS
There are essentially three main pieces to this tool:
- A Microchip chatbot for product and coding-specific questions
- An autocomplete function that offers predictive inline code suggestions
- Direct access to technical data in vectorized datasheets allowing users to search for information such as block diagrams without leaving the coding environment
“We have actually been using this internally within Microchip for the last nine months and, depending upon the user, we’ve seen anywhere between a 20 to 40% productivity enhancement,” said Richey, offering a live demo of the VS tool.
Creating a project and searching the datasheet
As shown in Figure 1, he began by asking the assistant about the features of the new PIC32CM JH family of MCUs, and the assistant replied with details such as core, memory, and peripheral features as well as part numbers. At this point, the tool downloads the vectorized datasheet so that it is searchable within the AI coding assistant: “I’m going to put the part number in and select the compiler; now what’s happened in the background is the coding assistant has downloaded the vector datasheet.”
Figure 1 The VS AI coding assistant tool showing the details of the PIC32CM JH family of MCUs.
As shown in Figure 2, Richey begins by asking the assistant to generate code to initialize the UART and ADC, sample once per second, and write to the UART an array of 32. At this point, the tool returns the code and interrupt routines that Richey arbitrarily places within the source code file.
Figure 2 Code generation to initialize the UART and ADC within the PIC32CM JH MCU.
Checking code for errors
The source code is then copied and pasted into the chatbot to check for errors, and a number of issues are found (Figure 3). “I inserted the code in the middle of an existing function, and what it does is create a corrected version of the code.” Richey then places the corrected code into the existing file.
Figure 3 Code is copied and pasted into the chatbot with a request to check for errors, whereby the coding assistant returns a number of errors found within the code.
Autocomplete
At this point, the autocomplete function is highlighted. First, code is created that adds two numbers together (Figure 4). As Richey accepts autocomplete inline suggestions, the tool begins predicting the next steps in the code: creating a function to average the numbers within the array. Figure 4 shows some of these inline code suggestions. Comments can also be added with relative ease by simply asking the assistant to comment the code, where the user’s job is limited to rejecting or accepting the comment suggestions. “We want to keep the developers in the flow of writing code. Anything you do is a distraction, even if you have to leave the development environment or even write comments.”
Figure 4 Autocomplete functions create inline comment/code suggestions that the user can accept or reject.
Exploring the datasheet within the IDE
Richey brings up a block diagram of the analog comparator within the development environment by prompting the tool with “show me the block diagram of the analog comparator,” as shown in Figure 5. “If I want to look at what the registers are for, it’ll show me all of the registers and the bits within them.”
Figure 5 Block diagrams for the analog comparators can be viewed within the IDE with a hyperlink that takes the user to the datasheet.
At this point, Richey has the tool write code to initialize the comparator, which he can then, once more, place arbitrarily within the source code and check for errors. “Now I’ve got a project open that’s got a lot of directories, and each one of these directories has a lot of files,” says Richey, who goes on to show the “@Codebase” prompt checking for a function to initialize the SERCOM0 module, a UART-to-serial port that is on the PIC32CM JH (Figure 6). At this point, the tool pulls out the relevant piece of code so that the user can drop it into any other file. “I can do an @file where I can go search for a particular file, or @folder to bring up all the folders within the project.”
Figure 6 Checking the codebase for a function to initialize the SERCOM0 module, a UART-to-serial port that is on the PIC32CM JH.
Unified compiler
The MPLAB XC unified compiler licenses, just released today, are another initiative by the company to simplify the process of working with Microchip’s hardware and software. “Typically, you’ll have a free compiler and an optimizing compiler that you have to pay for. The difference with the optimizing compilers is that they have implemented core- or product-specific optimizations, where you’ll typically see ~35% smaller code for a vendor-supplied compiler relative to, say, a GCC [GNU Compiler Collection],” says Richey. The marginal gains can be the factor that pushes engineers into a higher-memory, more expensive device, where the cost of a compiler would likely pale in comparison to the per-unit savings on a mass-manufactured product.
“A vendor today will typically supply core-specific compilers, so there might be an Arm core, a MIPS core, or a supplier-specific core. As a client, this can be difficult to manage. You have to ask yourself how many compilers you need.” And as a project’s needs change, so does the mix of compilers used, potentially creating more overhead for the client, since compiler payments are generally issued on an annual basis. In more stringent industries like industrial and automotive, clients might stay on a fixed version of a tool for decades, yet must pay for that same version every year.
“Microchip supports a lot of automotive, medical, industrial, and aerospace and defense clients, and they’re typically fixed on one version, because to upgrade the compiler version forces them to go back through an approval loop and it’s very costly for them, creating a purchasing nightmare.” Microchip is attempting to sidestep these issues with the unified compiler, where a single, perpetual license grants access to the MPLAB XC8, XC16, XC-DSC, and XC32 C compilers. There is, however, a voluntary 12-month maintenance fee that is 20% of the cost of the compiler.
Richey finished by highlighting Microchip’s initiatives for easing the development process for embedded engineers using Microchip hardware and software: “Whether it’s IAR, Keil, SEGGER, VS, or the Eclipse IDE, we realize that not everyone who’s developing for Microchip is using Microchip tools. So we want to meet the developer in the ecosystem of their choice. We don’t want to force them into our ecosystem.”
Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for six years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.
Related Content
- Analog Devices’ approach to heterogeneous debug
- Elevating embedded systems with I3C
- 8 pillars of embedded software
- Optimizing motor control for energy efficiency
The post Easing the development burden for embedded systems appeared first on EDN.
EcoFlow’s RIVER 2: Svelte portable power with lithium iron phosphate fuel

As I discussed in early December, my first purchase attempt of a lithium battery-based portable power generator, Energizer’s PowerSource Pro, didn’t pan out. But I didn’t give up. In fact, as I’d alluded to in a writeup published two weeks earlier, I bought two successors, both from the same company, EcoFlow. The smaller RIVER 2 (and no, I don’t know why its marketing-anointed name is all-caps):
is what I’ll be discussing here, with coverage of its more sizeable, flexible DELTA 2 sibling saved for another day (that said, currently scheduled to arrive in your web browser shortly):
With the stock photo out of the way, here’s a shot of my particular unit fresh out of the box and on the workbench, first in overview with its SLA battery-based Phase2 Energy PowerSource 660Wh 1800-Watt Power Station precursor in the background:
And then in closeup, specifically of its front panel display during initial AC-fed charging:
I picked up the RIVER 2 from EcoFlow’s eBay store, supposedly refurbished, for $109.68 after a 20%-off promo coupon, inclusive of sales tax and with free shipping, at the beginning of September. I say “supposedly refurbished” because, in stark contrast to the Energizer PowerSource Pro Battery Generator I’d bought earlier, which was supposedly brand new but arrived in clearly pre-used condition, this one seemingly came fresh and pristine straight from the factory manufacturing line. Why? I suspect it had something to do with the fact that shortly after I bought mine, EcoFlow introduced its gen-3 devices. The RIVER 3’s incremental benefits turned out to be modest, at least in my typical use case:
- GaN-based circuitry, resulting in even smaller dimensions than before, along with higher-efficiency (therefore cooler and quieter) operation
- Higher USB-C PD peak output power (100 W vs 60 W), and
- Faster output switching speed from direct to inverter-generated AC, thereby enabling the RIVER 3 to more robustly act as a UPS (<20 ms vs 30 ms)
Although curiously, the RIVER 3’s peak stored charge capacity decreased to 245 Wh versus the RIVER 2’s 256 Wh. My guess is that my good fiscal fortunes were due to a preparatory stealth warehouse-purging move on EcoFlow’s part.
What was my rationale for getting the RIVER 2 in addition to the more sizeable Energizer PowerSource Pro-like DELTA 2? The price tag was certainly an effective temptation. More generally, here’s what I wrote back in December:
[It] is passable for overnight camping trips in the van, for example. Or a day’s worth of drone flying. Or for powering my CPAP machine and oxygen concentrator overnight. And I can use the aforementioned 100-W portable solar panel to also recharge it during the day (albeit not at the same time as the Phase2), in conjunction with an Anderson-to-XT60i connector adapter cable.
That “aforementioned 100-W portable solar panel” is this:
which I’d covered back in September. And in the spirit of “the proof of the pudding is in the eating” (which for today I’m rephrasing as “the proof of the concept is in the pictures”), here are some shots of it hooked up on my back deck. First, the solar panel itself:
An illuminated light means “working”:
As does this output-voltage reading:
This particular solar panel was originally intended for use with (and still works fine with) the Phase2 Energy PowerSource, whose backside includes an Anderson Powerpole PP15-45 solar charging connector:
To adapt the panel to that generator, as mentioned in more recent coverage, “required both the female-to-female DC5521 that came with the Foursun F-SP100 solar panel and a separate male DC5521-to-Anderson adapter that I bought off Amazon:”
And what about the RIVER 2? Its solar (as well as car, via an included “cigarette lighter”-source adapter cable) charging connector of choice, as mentioned in that same more recent coverage, is an orange-color XT60i:
the higher current-capable, backwards-compatible successor to the original yellow-tint XT60 used in prior-generation EcoFlow models:
So, what did I do to bridge the connection-discrepancy gap? I added yet another cable to the chain, of course:
a third-party Anderson to XT60i adapter I’d found on Amazon:
The resultant setup does indeed passably bump up the RIVER 2’s battery charge, assuming there’s adequate available sunshine, although the fastest charging results are unsurprisingly achieved with the RIVER 2 tethered to AC as shown earlier. To wit, my stopwatch happily confirms EcoFlow’s website claim that you can “charge 0-100% in only 60 mins”.
To the “svelte” adjective in this writeup’s title, I’ll offer the following representative specs:
- Dimensions: 9.6 x 8.5 x 5.7 inches
- Weight: approximately 7.7 lbs.
The RIVER 2’s standard maximum AC-inverter power output (pure sine wave, not simulated) is 300W. It also offers a feature branded as X-Boost Mode, which doubles the output AC power to 600W, at the tradeoff of a reduced voltage that not all powered devices are guaranteed to accept, albeit counterbalanced by higher current. And speaking of powered devices, what are its AC and DC power output options? I thought you’d never ask:
- Two AC (one two-prong, one three-prong with ground): 120V, 50Hz/60Hz, 300W (along with surge 600W at sub-120V, as just described), pure sine wave, not simulated
- Two USB-A DC: 5V, 2.4A, 12W max
- “Cigarette lighter” car DC: 12.6V, 8A, 100W max
- And USB-C (which does double duty as both an output and another battery-charge input option, along with the aforementioned AC and XT60i) DC: 5/9/12/15/20V, 3A, 60W max
One other generational comment (adding on to my earlier gen-2 vs -3 comparisons), specifically related to the “lithium iron phosphate fuel” bit in this post’s title. First-generation EcoFlow devices were based on NMC (lithium nickel manganese cobalt) battery technology, which as I mentioned back in late November and again a few weeks later is only capable of a few hundred recharge cycles before its maximum storage capacity degrades to unusable levels in realistic usage scenarios. For gen-2 and beyond, EcoFlow switched to LiFePO4 (lithium iron phosphate), also known as LFP (lithium ferrophosphate), battery formulations. The comparative specs bear out the technology-transition improvements; the first-generation RIVER was guaranteed for only 500 recharge cycles, whereas with the RIVER 2 it’s 3,000. The RIVER 2 product page further elaborates on the claimed benefits, which also include a 5-year warranty:
Safe, for up to 10 years of use.
LiFePO4 Battery Chemistry
With upgraded long-lasting LFP battery chemistry at its core, charge and empty RIVER 2 Series over 3000 times. That’s pretty much 10 years of everyday use and 6x longer than the industry average. With LFP cells, RIVER 2 Series is safe, durable, and highly efficient, even in warm temperatures.
One final note: the RIVER 2 integrates both Bluetooth and (believe it or not) Wi-Fi connectivity:
You can, for example, both monitor the status of the RIVER 2:
and over-the-air update its embedded firmware:
via a mobile-device intermediary using the company’s Android and iOS app versions.
And with that, I’ll wrap up for today. Let me know your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Energizer’s PowerSource Pro Battery Generator: Not bad, but you can do better
- Experimenting with a modern solar cell
- The Energizer 200W portable solar panel: A solid offering, save for a connector too fragile
- Multi-solar panel interconnections: Mind the electrons’ directions
The post EcoFlow’s RIVER 2: Svelte portable power with lithium iron phosphate fuel appeared first on EDN.
Designing better listening experiences with multi-driver IEMs

Once primarily used by stage performers, modern in-ear monitors (IEMs) have expanded into the personal audio space over the past decade, transforming how we experience music, games, and digital content through unprecedented levels of sonic detail and spatial accuracy. This interest from audio enthusiasts, gamers, and a growing segment of consumers requires manufacturers to meet higher demands for premium sound.
At the heart of these immersive listening experiences lies a sophisticated engineering approach: multi-driver design. While traditional earphones rely on a single driver to reproduce all sounds, today’s premium IEMs employ arrays of specialized drivers, each precisely tuned to handle specific frequencies.
Leading manufacturers are pushing these boundaries further—from FiiO’s FA19 with its intricate 10-driver architecture to Earsonics’ EM96 featuring a refined three-way crossover system. As consumer demand for premium audio experiences grows, the multi-driver IEM market has evolved from its roots in the professional stage to become a pillar of high-fidelity personal audio.
This evolution brings both opportunities and challenges, requiring manufacturers to master complex acoustic engineering while delivering comfortable, practical designs for daily use. Success in this competitive landscape demands more than just adding drivers—it requires a deep understanding of acoustics, crossover integration, and component synergy to integrate these components harmoniously into a superior listening experience.
To succeed in this evolving market, brands must navigate technical complexities in multi-driver design—ensuring seamless integration, optimizing crossovers, and balancing comfort with performance—without compromising on sound quality.
Why multi-driver designs?
Unlike single-driver IEMs, which are tasked with reproducing the entire frequency range within one transducer, multi-driver designs distribute the workload across specialized drivers. This approach mirrors full-sized speaker systems, where woofers, midrange drivers, and tweeters work together to create an immersive soundstage.
By assigning dedicated drivers to specific frequency ranges—such as bass, midrange, and treble—manufacturers can achieve exceptional clarity and depth. This targeted separation ensures each frequency range is handled by specialized drivers, reducing distortion and delivering a cohesive, high-fidelity listening experience.
A key component of modern multi-driver IEMs is the balanced armature (BA) driver. Originally developed for hearing aids, BAs have since become a cornerstone of in-ear audio due to their compact size and precision tuning capabilities. BAs use a stationary coil and a pivoting armature, a design that enables them to reproduce detailed frequencies with remarkable efficiency.
Figure 1 BAs are becoming critical for in-ear audio designs. Source: Knowles
Because of their small form factor and specialization, BAs are ideal for multi-way driver configurations, where multiple units work together to optimize frequency response, enhance clarity, and improve overall sound separation.
In fact, multiple BAs are the industry standard, though some manufacturers introduce alternative technologies—such as planar drivers, dynamic drivers, or even microphones—for novelty. Highly versatile and available in several variations for specialized applications, BAs can function in multiples or in tandem with other technologies, as often seen in hybrid-driver true wireless stereo (TWS) earphones.
Addressing design challenges
Integrating multiple drivers into an IEM presents both opportunities and engineering challenges. While multi-driver designs enable more refined tuning and enhanced performance, they require precise crossover implementation, seamless driver integration, and compact form factor solutions to deliver the best user experience. Manufacturers must balance sound quality, consistency, and ergonomic constraints while also delivering a competitive and signature sound experience.
- Crossover design
Multi-driver IEMs rely on crossover circuits to distribute frequencies across different drivers. Poorly executed crossovers can cause phase issues (cancellation of energy rather than addition and vice versa), frequency dips, and distortion, particularly in the midrange, where driver overlaps are most sensitive.
By strategically distributing the audio signal across multiple drivers, each driver operates within its optimal range, reducing the likelihood of distortion. This ensures that no single driver is overburdened, leading to cleaner, more accurate sound reproduction with better clarity and separation compared to single-driver IEMs.
For additional ease of integration, choosing multi-way drivers with pre-configured crossover implementations can reduce the complexity of designing systems from scratch and ensure clean performance upfront.
Figure 2 BAs allow finer control over the interaction between drivers. Source: Knowles
BAs are helpful for crossover design, as they enable finer impedance control, ensuring seamless transitions between drivers. Due to their stationary coil design, which can be wound with different impedances, BAs enable finer control over the interaction between drivers. Specialty BAs with closed-back designs can further reduce acoustic irregularities, producing more natural sound even in complex setups.
- Signature sounds
Modern IEM manufacturers distinguish themselves through unique sound signatures that define their brand identity. The precision and flexibility of multi-driver configurations enable manufacturers to create these distinctive audio profiles with unprecedented control. BAs play a pivotal role in signature sound development.
Specialty BAs engineered for specific acoustic tasks—such as extended treble or enhanced midrange—allow manufacturers to tailor sound signatures precisely. Each BA configuration can be customized to achieve different target sound signatures in a multi-driver layout, making it easier for manufacturers to create distinctive audio profiles without extensive R&D time.
Figure 3 BAs can be engineered for specific acoustic tasks. Source: Knowles
Multi-way drivers can deliver pre-tuned frequency ranges, alleviating the work of internal teams and enabling faster progression in product development. Selecting multi-way BA models with dedicated drivers optimized for bass, midrange, and treble reduces the need for extensive manual tuning. Properly tuned BAs ensure each driver operates within its ideal range, avoiding common issues like frequency dips or distortion in the midrange.
The multi-driver designs also offer manufacturers greater flexibility in tuning their unique signature sound. By integrating newer technologies and adjusting the crossover points of different driver types, engineers can define specific characteristics—such as enhanced bass, detailed midrange, or sparkling highs—to make their output one-of-a-kind.
- Form-factor flexibility
Despite advancements in miniaturization, integrating multiple drivers into a compact, ergonomic earpiece remains a challenge. Comfort and versatility are essential for an optimal user experience, and sound quality must be balanced with design and functionality.
The compact size of BAs offers greater flexibility in placement within an earpiece, freeing up valuable space for designing with multiple drivers. This enables the incorporation of specialty BAs—engineered for high impact in exceptionally small sizes—maximizing room for additional drivers and advanced crossover designs.
Unlike other driver types, which require larger enclosures for optimal functionality, BA drivers are significantly smaller. They also have adjustable port placements. This allows multiple units to be arranged within the same IEM shell.
Pre-configured multi-way BA configurations help manufacturers create ergonomic, form-fitting IEMs without needing large nozzles or vents. These factors allow for a more minimalist design, making it easier to achieve a comfortable form factor.
- Scalability across multiple markets
As demand for high-performance in-ear monitors continues to grow across various listener segments, manufacturers must develop scalable solutions that cater to a wide range of users. Achieving this requires flexible driver configurations that maintain high sound quality standards while accommodating different price points.
One of the most effective ways to achieve this scalability is through multi-way driver configurations and hybrid technology. By combining drivers, manufacturers can fine-tune crossover points to create sound profiles suited for different applications. This versatility allows brands to produce IEMs that offer precise, high-fidelity audio at multiple tiers—whether for entry-level consumer models or high-end audiophile monitors.
Reliability and consistency in production also play a crucial role in meeting market demand. Automated manufacturing processes ensure tight tolerances, reducing batch-to-batch inconsistencies in multi-driver designs. Additionally, the availability of pre-configured multi-way BA systems simplifies product development, allowing manufacturers to expand their product lines efficiently without extensive redesigns.
By leveraging these scalable design strategies, companies can provide high-quality IEMs across various market segments, ensuring a balance between performance, affordability, and accessibility without compromising sound integrity.
IEMs shaping personal premium sound
Multi-driver designs have redefined what’s possible in IEM performance, enabling richer, more detailed soundscapes than ever before. Through advancements in BA technology and thoughtful integration of multiple drivers, manufacturers are overcoming traditional limitations to meet the rising demand for premium audio experiences.
For research and development teams in the personal audio space, mastering the complexities of multi-driver design is crucial for maintaining competitiveness in today’s rapidly evolving market. If done well, new IEM designs could shape the future of personal premium sound.
Cristina Downey is senior electroacoustic engineer for R&D at Knowles Corp.
Related Content
- Audio design and technology
- TWS reference design features hybrid dual speaker
- Hear This: ‘Ear-Worn Computing’ Around the Corner
- Exploring Trends and Opportunities for True Wireless Audio
- Understanding superior professional audio design: A block-by-block approach
The post Designing better listening experiences with multi-driver IEMs appeared first on EDN.
New chip reveals Microsoft’s quantum computing playbook

“We took a step back and said, ‘OK, let’s invent the transistor for the quantum age,’” said Chetan Nayak, corporate VP of Quantum Hardware at Microsoft. He was talking about the company’s Majorana 1 chip, which marks a notable development in quantum computing. EDN’s sister publication EE Times takes a closer look at this chip’s topological qubit architecture while providing a technical glimpse of competing products: Google’s Willow processor and the University of Science and Technology of China’s Zuchongzhi 3.0 chip.
Read the full story at EE Times.
Related Content
- The Basics Of Quantum Computing
- Exploring the Frontiers of Quantum Computing
- Power supply management in quantum computers
- Hardware security entering quantum computing era
- How Intel Quantum Chips Could Retransform Silicon-Based Computing
The post New chip reveals Microsoft’s quantum computing playbook appeared first on EDN.
GaN transistors fit standard Si packages

Infineon is advancing industry-wide standardization by offering its CoolGaN Generation 3 (G3) transistors in silicon MOSFET packages. The IGD015S10S1 100-V transistor will be housed in a 5×6-mm routable QFN (RQFN) package, while the IGE033S08S1 80-V variant will come in a 3.3×3.3-mm RQFN package.
These two CoolGaN G3 transistors, compatible with industry-standard silicon MOSFET packages, enable easy multi-sourcing and complementary layouts for silicon-based designs. The 100-V IGD015S10S1 provides a typical on-resistance of 1.1 mΩ. The 80-V IGE033S08S1 has a typical on-resistance of 2.3 mΩ. Their new packages, combined with GaN technology, ensure low-resistance connections and minimal parasitics.
Infineon’s chip and package combination enhances robustness in thermal cycling and improves thermal conductivity. The larger exposed surface area and higher copper density aid in better heat distribution and dissipation.
Samples of the IGE033S08S1 and IGD015S10S1 GaN transistors in RQFN packages will be available in April 2025. For more information, click here.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post GaN transistors fit standard Si packages appeared first on EDN.
Secure MCUs provide segment LCD drive

Low-power 32-bit MCUs in the Renesas RA4L1 group integrate a segment LCD controller, capacitive touch sensing unit, and robust security. Based on an 80-MHz Arm Cortex-M33 processor with TrustZone support, the MCUs can be used for metering, IoT sensing, smart locks, and home appliances.
RA4L1 microcontrollers operate down to 1.6 V, consuming 168 µA/MHz when active and just 1.70 µA in standby mode with all SRAM retained. The series, which comprises 14 devices, offers 256 KB or 512 KB of dual-bank code flash, 64 KB of SRAM, and 8 KB of data flash. They provide a variety of peripherals and a wide range of communication interfaces.
In addition to Arm Trust Zone, the MCUs feature Renesas Secure IP (RSIP-E11A) supporting AES, ECC, hash value generation, and a 128-bit unique ID. They offer up to three tamper pins and secure pin multiplexing. The devices come in a variety of small packages, including a 3.64×4.28-mm WLCSP.
The RA4L1 MCUs, along with an evaluation board and capacitive touch starter kit, are available now. Samples and kits can be ordered from the Renesas website or distributors.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Secure MCUs provide segment LCD drive appeared first on EDN.
Wideband DF antenna hones radio location

Compact and lightweight, the R&S ADD507 direction finding (DF) antenna covers 9 MHz to 8 GHz, reducing the need for multiple antennas. Expanded VHF coverage improves weak signal detection, making the antenna well-suited for mobile interference hunting, emitter tracking, and close-range monitoring.
The ADD507 features active and passive antenna elements with an active/passive switch that adjusts to the signal environment with a mouse click. Passive mode bypasses all active components, boosting resistance to strong unwanted signals.
Antenna polarization is vertical, and system DF accuracy is typically 2° RMS in a reflection-free environment. The ADD507 measures approximately 0.33×0.27 m (13×10.63 in.) and weighs about 4.5 kg (9.9 lb). An optional vehicle adapter with a magnetic mount simplifies roof mounting.
To request pricing information for the ADD507 DF antenna, use the product page link below.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Wideband DF antenna hones radio location appeared first on EDN.