Apple’s spring 2026 soirée: The rest of the story

With smartphone and tablet news already discussed, what else did Apple unveil this week? Read on for all the goodies and their details.
As I teased at the end of my prior piece, computers and displays were also on the plate for Apple’s “big week of news” announcements suite. With today’s (as I write this on Wednesday in the late afternoon) New York, London, and Shanghai “Experience” in-person events now concluded:

(No, alas, I wasn’t invited)
I’m guessing that Apple’s wrapped up its rollouts for now, thereby compelling me to return to my keyboard to conclude part 2. That said, I realized in retrospect that there was one additional earlier hardware announcement that, had I remembered at the time (and in time), I would have also included in part 1, since it also covered mobile devices. So, let’s start there.
AirTag 2
In late April 2021, Apple introduced its first-generation AirTag trackers, which leverage Bluetooth LE connectivity to pair with their owners’ smartphones and tablets and, when a tagged item is lost, tap into the broader Find My crowdsourced network ecosystem to help identify its whereabouts and monitor its movements. Integrated ultrawideband (UWB) support, when also comprehended by the paired mobile device, affords even more precise location discernment (i.e., not just somewhere in the living room, but fallen between the sofa cushions). And built-in NFC support lets anyone who might find a tag (and whatever it’s attached to) notify the person it belongs to. Here’s my first-gen teardown.
Nearly five years later, and quoting Wikipedia:
An updated model with the U2 chip, upgraded Bluetooth, and a louder speaker was released in January 2026 [editor note: Monday the 26th, to be precise]. It has enhanced range for precision detection with iPhones equipped with a U2 chip such as the iPhone 15/Pro or later (excluding iPhone 16e), and also allows an Apple Watch with a U2 chip such as the Apple Watch Series 9 or later, or Apple Watch Ultra 2 or later (excluding Apple Watch SE), to precisely locate items.
Now fast-forwarding a month-plus to this week’s announcements…
The M5 Pro and Max SoCs
2.5 years back, within my coverage of Intel’s then leading-edge and first-time chiplet-implemented Meteor Lake CPU architecture:

I noted that the company was, to at least some degree, following in the footsteps of AMD and Apple, both having already productized chiplet-based designs. In AMD’s case, I was on solid footing with my stance, as the company had already been embedding and interconnecting discrete processors, graphics, and other logic circuits for several years. In Apple’s case, conversely, my definition of a chiplet implementation was a bit more loosey-goosey, at least at the time:

Above is a de-lidded photo of Apple’s M1 SoC. At left is the single-die implementation of the entirety of the logic circuitry, plus cache. And on the right are two DRAM memory chips. Admittedly, the “Ultra” variant of the eventual M1 product family, at far right:

upped the ante a bit more, “stitching together two distinct M1 Max die via a silicon interposer”. But I’ve long wondered when Apple would go “full monty” on disaggregation, mixing-and-matching various slivers of logic silicon attached to and interconnected via a shared packaging substrate, to keep each die’s dimensions to a reasonable manufacturing-yield size and to afford fuller implementation flexibility. To wit, the points I made back in September 2023 remain valid:
- Leading-edge processes have become incredibly difficult and costly to develop and ramp into high-volume production,
- That struggle and expense, coupled with the exponentially growing transistor counts on modern ICs, have negatively (and significantly so) impacted large-die manufacturing yields not only during initial semiconductor process ramps but also long-term, and
- Desirable variability in process technology (DRAM versus logic, for example), process optimization (low power consumption versus high performance), and IC sourcing (internal fab versus foundry), not to mention the attractiveness of being able to rapidly mix-and-match various feature set combinations to address different (and evolving) market needs, also enhances the appeal of a multi- vs monolithic-die IC implementation.
That time is now, branded as the “Fusion Architecture” and ironically foreshadowed by a then-subtle Apple online store tweak a month ago. Quoting from the press release subhead:
M5 Pro and M5 Max are built using the new Apple-designed Fusion Architecture that connects two dies with advanced IP blocks into a single SoC, delivering significant performance increases that push the limits of what’s possible…
In an interesting twist from the past, this time the two product proliferations seemingly share a common processor die, although the variety and number of guaranteed-functional cores vary both between the two devices and within a given device’s binning variants. Conversely, the graphics core counts diverge more substantially between the two devices. To some degree this is reflective of the high-end “Max” device’s professional content creator target demographic, although I’d wager that it more broadly affords more robust on-device deep learning inference capabilities in conjunction with the chips’ presumed-still-existent neural processing cores. And what of an “Ultra” variant of the M5…is it on the way? Maybe…
Tomato, tomahto
Speaking of cores, by the way…sigh. Look back at my M5 SoC (and initial devices based on it) coverage from last October, and you’ll see that, just as with prior generations of both A- and M-based Apple-developed silicon, it contains a mix of both performance (speed-optimized) and efficiency (power consumption-tuned) cores. Here’s the specific press release quote again:
M5 features the world’s fastest performance core, with up to a 10-core CPU made up of six efficiency cores and up to four performance cores. Together, they deliver up to 15 percent faster multithreaded performance over M4.
All well and good; the Arm-developed architecture analogy is big.LITTLE. Revisiting that page on Arm’s website just now, however, I curiously noticed that whereas it historically called out two different types of cores, now there are apparently three. Check out the subhead:
Arm big.LITTLE technology is a heterogeneous processing architecture that uses up to three types of processors. LITTLE processors are designed for maximum power efficiency, while big processors are designed to provide efficient, sustained compute performance.
Keep in mind that Apple is an Arm architecture licensee, so it develops its own (still instruction set-compatible, of course) cores. That said, beginning with the M5 Pro/Max processing chiplet, Apple has also developed a third core, this one an intermediate half-step between the performance and efficiency endpoints. You might think that Apple would call this new one the “balanced” core, say. But alas, you’d be wrong. Here’s long-time Apple observer Jason Snell, quoted in a post from another Apple prognosticator, “graybeard,” John Gruber:
With every new generation of Apple’s Mac-series processors, I’ve gotten the impression from Apple execs that they’ve been a little frustrated with the perception that their “lesser” efficiency cores were weak sauce. I’ve lost count of the number of briefings and conversations I’ve had where they’ve had to go out of their way to point out that, actually, the lesser cores on an M-series chip are quite fast on their own, in addition to being very good at saving power! Clearly they’ve had enough of that, so they’re changing how those cores are marketed to emphasize their performance, rather than their efficiency.
What did Apple decide to do instead, including a retrofit of published M5 documentation?
- The prior-named “Performance” core is now instead called, believe it or not, “Super.”
- The “Efficiency” core retains its original name, for a brief moment of sanity.
- And the new in-between “balanced” core? It’s the recycled “Performance” moniker.
The following summary table originated with another recent John Gruber post; I’ve simplified the SoC options, reordered the CPU core columns, and added a column for GPU core counts:
| | CPU (Super) | CPU (Performance) | CPU (Efficiency) | GPU |
|---|---|---|---|---|
| M5 | 3-4 | N/A | 6 | 8-10 |
| M5 Pro | 5-6 | 10-12 | N/A | 16-20 |
| M5 Max | 6 | 12 | N/A | 32-40 |
That’s just…super. Sigh.
(More) M5 MacBook Pros
(nifty video animation, eh?)
“Super” SoCs inside aside, the new 14” and 16” MacBook Pros are effectively identical to their M4-based forebears (note that the sole M5 version initially announced last fall was the 14” model). The only other items of particular note both involve memory. Baseline and upgraded DRAM capacity option prices remain the same as last time, despite current industry memory supply constraints; an upper-end 64 GByte option for the M5 Pro has even been added. And regarding flash memory, Apple has obsoleted last November’s entry-level 512 GByte SSD option for the baseline 14” M5 MacBook Pro, making the new capacity starting point for that product (1 TByte) more expensive than before. That said, it’s now $100 lower than the 1 TByte variant price at intro just a few months ago, and capacity-upgrade prices have also decreased.
The M5 MacBook Air(s)
Here’s another example of not being able to tell, based solely on external appearances, which generation of devices you’re looking at. Coming, as with its M3- and M4-based forebears, in both 13” and 15” versions, the M5 MacBook Air also upgrades to Apple’s N1 network connectivity chip. But, speaking once again of (flash, specifically) memory, and akin to the product line option slimming for the 14” M5 MacBook Pro mentioned in the prior section, the lowest-available capacity for the new devices is 512 GBytes, versus 256 GBytes in the previous generation. I’m guessing that the reasoning is two-fold this time: as with the 14” M5 MacBook Pro’s option-culling, the company’s “hiding” its higher flash memory costs by only offering more profitable capacity choices to customers. Plus, by doing so, Apple can more clearly differentiate the MacBook Air from its other products. Speaking of which…
The MacBook Neo
I’ll kick off this section with a few history lessons. Back in 2015, Apple introduced the “new MacBook” (also commonly referred to as the 12” MacBook), with a Retina-resolution display and based on Intel m-series (and later, i-series) CPUs. It slotted between the then-non-Retina MacBook Air and the high-end MacBook Pro in Apple’s product portfolio from a pricing standpoint, even though its processing performance undershot that of the notably less expensive MacBook Air. Plus, it was hampered by the unreliable “butterfly” keyboard. It was discontinued after only three hardware iterations and four years of production.
In addition to its unfavorable price comparison to the MacBook Air, the “new MacBook” was also still competing to a degree against then-popular Windows-based “netbooks”, which were even lower priced. Back in late 2008, former CEO Steve Jobs had (in)famously quipped regarding netbooks, “We don’t know how to make a $500 computer that’s not a piece of junk.” Hold that thought.
My last history lesson is, conversely, a Steve Jobs success story. Back in mid-1999, two years (and change) after Jobs’ return to Apple and less than a year after launching the consumer-tailored iMac desktop, Apple unveiled the iBook laptop:

which came in multiple eye-catching, intentionally non-“business” color options:

Quoting Wikipedia:
The line targeted entry-level, consumer and education markets, with lower specifications and prices than the PowerBook, Apple’s higher-end line of laptop computers. It was the first mass consumer product to offer Wi-Fi network connectivity, which was then branded by Apple as AirPort.
Look again at the image of the iBook’s color options. Now look at the photo at the beginning of this section. See where I’m going?
The newly unveiled MacBook Neo comes in two price tiers: $599 (with a further $100 discount for education customers; take that, Chromebooks) and $699. The higher-end variant gets you twice the SSD capacity—512 GBytes versus 256 GBytes—along with a Touch ID fingerprint reader built into the keyboard. That’s it. 8 GBytes of DRAM, with no upgrade option. No Thunderbolt, only two USB-C ports, one of them supporting only USB 2 speeds. The first-time use of an A-series processor, the (Apple Intelligence-capable) A18 Pro (albeit with one fewer graphics core enabled than the initial version in the iPhone 16 Pro series); that said, it seems to benchmark (at least) roughly on par with the M1 that until recently was still being sold by Walmart in the MacBook Air. And a networking subsystem rumored to come from MediaTek, versus developed internally.
In closing, at least for this section: what’s with the name? Some folks had forecasted that it’d just be called the “MacBook”, but as I’ve already noted, that particular name is now “damaged goods”. Others thought that an “iBook” resurrection was in the cards, but Apple stopped referring to devices via “i” monikers a while ago. That said, “Neo” was definitely not on my bingo card. Maybe someone in Cupertino is a fan of The Matrix, but thought that “MacBook Mr. Anderson” would be too ponderous?
Displays
Having already passed through 2,000 words, I’m going to keep this section short. Apple announced two new Studio Display models, its first updates to this particular product category in many years. They’re both 27” in size, with 5K Retina resolutions, although their refresh rates, dynamic ranges, and other image quality measures vary. The “inexpensive” one starts at $1,599, with its pricier sibling beginning at $3,299; both are available in standard or (upgrade) nano-texture glass options, and mounting and other accessories are also available. And interestingly, at least to me, they don’t work with legacy Intel-based Macs, even the scant few models (one of which I’m currently typing on) that are still supported by MacOS 26. For more details, check out the press release.
And what about…the M5 Mac mini, whose possibility I alluded to yesterday? Didn’t happen, even though the current M4-based models are popular with the agentic AI enthusiast community (and others). That said, in revisiting my prognostication yesterday afternoon, I remembered that Apple had also skipped the M3 Mac mini generation, and that the time-consuming form factor redesign from the M2 to the M4 might at least partly explain that delay.
And what of the upgrade to the “vanilla” iPad that lots of folks were forecasting would happen this week? Another nope. The primary rationale here was that it was the only remaining member of Apple’s current product line whose CPU (the A16) doesn’t support Apple Intelligence. But there was no evidence of the telltale indicator of a new product’s arrival: depleted retail inventories of the current model. My guess: Apple will be happily talking about AI again at this year’s WWDC, now that Google’s on board as the company’s development partner, and that’d be a perfect time to announce the “iPad 12”…or maybe “iPad Neo”? I jest (I hope).
Time to put down my cyber-pen and turn it over to you for your thoughts in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- MWC 2026: Apple, Google, Samsung and Other Contending Contestants
- Intel’s next-generation CPUs hide chiplets inside*
- Apple’s obvious misfires
- Evaluating value-oriented x86 CPUs: The price of falling prices
The post Apple’s spring 2026 soirée: The rest of the story appeared first on EDN.
EEVblog 1737 - Alex Lidow: Inventor of the Power MOSFET
Wolfspeed launches first commercially available 10kV SiC power MOSFET
MCU enables ASIL D safety and control

Built on a 28-nm process, the Renesas RH850/U2C automotive microcontroller delivers robust connectivity and security for modern E/E architectures. This 32-bit MCU expands the RH850 lineup with a cost-optimized option for chassis and safety systems, battery management, body control, and other ASIL D–rated applications.

The device integrates four RH850 CPU cores running at up to 320 MHz, including two lockstep cores, and up to 8 MB of on-chip flash memory. It combines 10BASE-T1S and TSN Ethernet (1 Gbps/100 Mbps), CAN XL, and I3C with widely used interfaces such as CAN FD, LIN, UART, CXPI, I2C, I2S, and PSI5.
In addition to functional safety support up to ASIL D under ISO 26262, the RH850/U2C meets current cybersecurity requirements in accordance with ISO/SAE 21434. The MCU integrates hardware acceleration for cryptographic algorithms, ranging from post-quantum cryptography (PQC) to those mandated by current Chinese and other international regulations.
The RH850/U2C is available in BGA292 and HLQFP144 packages.
The post MCU enables ASIL D safety and control appeared first on EDN.
VNAs perform production test up to 9 GHz

With typical measurement speeds of 25 µs/point, Copper Mountain’s three SC series VNAs enable efficient testing in both R&D and manufacturing environments. The SC0402, SC0602, and SC0902 two-port analyzers cover a common frequency start of 9 kHz, with upper ranges of 4.5 GHz, 6.5 GHz, and 9 GHz, respectively.

These instruments offer a typical dynamic range of 130 dB (10 Hz IF BW) for precise characterization of RF components and complex systems. Output power can be adjusted from -50 dBm to +5 dBm, with up to 500,001 measurement points/sweep. Measured parameters include S11, S21, S12, and S22.
Standard software capabilities, available without a paid license, include linear and logarithmic sweeps, power sweeps, and time-domain conversion with gating. Additional functions include S-parameter embedding and de-embedding, limit testing, frequency offset, and vector mixer calibration.
Automation is supported through LabVIEW, Python, MATLAB, .NET, and other programming environments, allowing up to 16 independent channels with 16 traces/channel. A manufacturing test plug-in is available as an add-on to integrate the VNA software into existing automated manufacturing and QA processes.
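For teams scripting these instruments, remote control generally reduces to opening a connection to the VNA software, configuring the sweep, triggering it, and reading back trace data. Below is a minimal Python sketch of that flow over a raw SCPI-style socket; the port number and command strings are illustrative assumptions rather than the vendor’s documented command set, so consult the programming manual before reusing them. The LabVIEW, MATLAB, and .NET routes mentioned above typically expose the same configure-sweep-read loop through their own APIs.

```python
# Minimal sketch of a scripted S21 sweep over a SCPI-style socket.
# The port number and command strings below are assumptions for
# illustration; check the vendor's programming manual for the actual
# remote-control interface and command set.
import socket

HOST, PORT = "127.0.0.1", 5025  # assumed: VNA software running locally

def scpi(sock: socket.socket, cmd: str) -> str | None:
    """Send one SCPI command; return the reply if it is a query."""
    sock.sendall((cmd + "\n").encode())
    if cmd.rstrip().endswith("?"):
        return sock.recv(65536).decode().strip()
    return None

with socket.create_connection((HOST, PORT), timeout=5) as vna:
    print(scpi(vna, "*IDN?"))                    # instrument identity
    scpi(vna, "SENS1:FREQ:STAR 9E3")             # 9 kHz start
    scpi(vna, "SENS1:FREQ:STOP 6.5E9")           # 6.5 GHz stop (SC0602)
    scpi(vna, "SENS1:SWE:POIN 1001")             # sweep points
    scpi(vna, "CALC1:PAR1:DEF S21")              # measure S21
    scpi(vna, "TRIG:SEQ:SING")                   # single sweep
    scpi(vna, "*OPC?")                           # wait for completion
    data = scpi(vna, "CALC1:TRAC1:DATA:FDAT?")   # formatted trace data
    print(data.split(",")[:4])
```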
The SC series VNAs carry MSRPs of $13,995 (SC0402), $15,995 (SC0602), and $17,995 (SC0902).
The post VNAs perform production test up to 9 GHz appeared first on EDN.
MCU brings USB-C power to embedded devices

Infineon’s EZ-PD PMG1-B2 MCU integrates a single-port USB Type-C PD controller with a 55-V buck-boost controller for charging 2- to 12-cell Li-ion battery packs. Compliant with the latest USB Type-C and PD specifications, the device accepts an input voltage range of 4.5 V to 55 V with switching frequencies programmable from 200 kHz to 700 kHz.

The MCU targets USB-C-powered embedded devices in consumer, industrial, and communications markets, where devices make use of its integrated functions. Typical applications include cordless power and gardening tools, vacuum cleaners, kitchen appliances, e-bikes, drones, and robots.
The EZ-PD PMG1-B2 features a 32-bit Arm Cortex-M0 processor with 128 KB of flash and 8 KB of SRAM for customizable embedded applications. It integrates analog and digital peripherals—including ADCs, PWMs, UART/I2C/SPI interfaces, and timers—reducing PCB space and BOM. A comprehensive SDK and software suite simplify development and system design.
Production of the EZ-PD PMG1-B2 is expected to begin in the second quarter of 2026. Samples, technical documentation, and evaluation boards are available upon request.
The post MCU brings USB-C power to embedded devices appeared first on EDN.
Passive limiter shields electronics from RF threats

Teledyne Microwave UK’s B3LT98026 is a passive wideband limiter designed to protect sensitive receiver front ends in defense and military communication systems. It operates from 0.1 GHz to 20 GHz and withstands up to 10 W peak input power under defined pulse width and duty cycle conditions.

The device enhances the survivability of Radar Electronic Support Measures (R-ESM) and Electronic Warfare (EW) systems operating in complex threat environments. It provides continuous, always-on protection against high-power RF and emerging Directed Energy Weapons (DEWs).
Across the operating band, the limiter maintains a maximum insertion loss/noise figure of 2.0 dB and a maximum input/output VSWR of 1.5:1. A fast 40-ns recovery time enables rapid return to nominal sensitivity following high-power events. The device operates over a temperature range of −20°C to +85°C, supporting deployment in demanding environments.
The compact SMA-based housing supports straightforward integration into existing architectures without requiring system redesign. The B3LT98026 is also compatible with Teledyne’s Phobos mast top unit and can accommodate additional RF elements, such as filters, when required.
The B3LT98026 is now available for evaluation in defense and EW systems.
The post Passive limiter shields electronics from RF threats appeared first on EDN.
Nordic debuts multiple cellular IoT products

Nordic Semiconductor expands its ultra-low-power cellular IoT portfolio with Cat 1 bis, satellite NTN, and advanced LTE-M/NB-IoT with edge AI. Leveraging the proven nRF91 series, the nRF92 and nRF93 deliver a scalable, secure platform for global connectivity.

The nRF92 LTE-M/NB-IoT and satellite NTN series introduces the company’s smallest, most highly integrated, and power-efficient cellular solution. It combines a high-performance application MCU with Axon neural processing units, a multi-constellation GNSS receiver, Wi-Fi positioning, and sensor coprocessing. Lead customer sampling is underway, with general availability expected in early 2027.
The nRF93M1 is an LTE Cat 1 bis cellular IoT module with integrated MCU, LTE modem, GNSS receiver, and Wi-Fi positioning. It supports up to 10 Mbps downlink and 5 Mbps uplink, offers global LTE coverage, and is designed for low-power, compact applications. The module is compatible with nRF Cloud for device management, firmware updates, and location services. Lead customers are currently developing products with the nRF93M1, with general availability starting mid-2026.
Additionally, Nordic has enhanced the nRF91 LTE-M/NB-IoT series with 3GPP-compliant GEO and LEO satellite NTN connectivity and sub-GHz fallback to maintain connectivity when public networks are unavailable. The company also introduced the nRF91M1 module, a compact Smart Modem that simplifies adding cellular connectivity to host–modem designs.
The post Nordic debuts multiple cellular IoT products appeared first on EDN.
📌 Registration for NMT-2026 has begun
The first stage of preparation for the National Multi-subject Test (NMT) is getting underway: registration, which will run through April 2 inclusive.
What do Ukrainian applicants need to do?
Smartphone shipments to fall 7% in 2026 amid memory constraints and geopolitical pressures
Circuits Integrated Hellas and Reach Power sign multi-year strategic MOU
EV system design from components to modules to software

Electric vehicle (EV) design at the system level is a rapidly evolving landscape encompassing components, hardware modules, and software platforms. So, on the first day of Automotive Tech Forum 2026, which was dedicated to EV designs, a panel titled “Powering the Electric Vehicle: From Semiconductors to Systems” took a deep dive into the system-level intricacies of EV designs.
Carsten Himmele, marketing manager for automotive at Allegro MicroSystems, highlighted the growing presence of silicon carbide (SiC) in traction inverters due to its ability to deliver higher bandwidth and efficiency. However, while talking about motor control for EV traction, he also mentioned challenges in operating in harsher electrical environments.
“SiC brings in higher bandwidth for motor control, but it also makes the electrical environment somewhat harsher,” he said. Himmele added that advanced phase-current sensing and inductive rotor-position sensing are essential for overcoming these challenges. “Moreover, system-grade building blocks reduce the number of external components and improve design efficiency,” he concluded.
That’s where gallium nitride (GaN) offers key advantages, said Alex Lidow, CEO and co-founder of Efficient Power Conversion (EPC). “GaN is smaller, more efficient, and more rugged compared to silicon and SiC,” he said. “It’s particularly effective in 48-V systems, which complement the emerging 800-V architectures.”
Lidow added that, with 48-V systems now leading the way in EVs, GaN devices are 5 to 7 times more efficient than their MOSFET predecessors. “GaN is powering onboard chargers, DC/DC converters, battery cooling pumps, steering systems, and infotainment.”

Rohan Samsi, VP of the GaN Business Division at Renesas, also talked about the paradigm shift GaN brings to power converters, enabling simplified single-stage designs. “The bidirectional switch allows you to take out something that was a multi-stage converter and replace it with a single stage.” Samsi also emphasized that GaN’s strengths in current sensing, temperature sensing, and gate drive enable holistic, tightly integrated EV solutions.
Finally, Kerry Grand, marketing manager for Simulink Automotive at MathWorks, turned the discussion toward the software aspects of design. He was asked to brief the panel on the latest developments in EV traction from a system-integration standpoint, and on what hardware testing reveals about the present and future of EV drivetrains.
Grand began with an insight into EV system-level design through simulation and model-based design. Then he identified enduring challenges in EV system design, including high-voltage isolation, battery life optimization, and thermal management. “Simulating detailed thermal systems offers automotive OEMs the ability to trade off temperature limits without compromising system performance.”
At a time when EV design building blocks like traction inverters and battery management systems (BMS) are continually adding functionality, system-level challenges are a critical area to watch. The panel discussion in Automotive Tech Forum 2026 provides a glimpse of design challenges and viable solutions in this design realm.
You can watch this session along with all sessions from the Automotive Tech Forum 2026 virtual event on demand at www.automotiveforum.eetimes.com.
Related Content
- Stop EMI from spreading in an EV design
- Are EVs Peaking? Exploring the Next Step in EV Design
- This is how an electronic system design platform works
- Shifting the EV Bus to 800 V: Benefits and Design Challenges
- How specialized MCUs meet on-board charger design needs
The post EV system design from components to modules to software appeared first on EDN.
Cardiac monitors: Inconspicuous, robust data collectors

As a follow-up to last month’s narrative of a cardiac abnormality thankfully detected by wearable devices, this engineer details the monitoring system he subsequently donned for a month.
Two-plus years ago, my contributor-colleague John Dunn described his most recent experience with a wearable cardiac monitor. And, as any of you who read one of my blog posts last month already know, I more recently followed in his footsteps. I don’t yet know the outcome of my heart health study; my follow-up appointment with the cardiologist is a week away as I type these words. Regardless, I thought you might still find it interesting to learn about the gear I toted around, stuck to my chest (and in my pocket) for 30 days, and my experiences using it.
The system I used was Philips’ MCOT (Mobile Cardiac Telemetry), specifically its “patch” variant:

Here’s an overview video; others, plus documentation, are at the product support page:
I took several “selfies” of the sensor in place on my chest but ultimately decided to save you all the abject horror of seeing any of them. Instead, I’ll stick with these stock images:


My initial scheduled meeting with the cardiologist took place on December 12, 3+ weeks after our “introduction” at the emergency room. I’d been on both beta blockers (to regulate my heartbeat) and blood thinners (in case my prior irregular rhythm had resulted in the formation of a clot) since my initial visit to the hospital in mid-November. The cardiologist ordered the monitor, which arrived a bit more than a week later; I began wearing it the day after Christmas.
Here’s the box that the system comes in:







The first thing I saw was the initial sensor patch, along with the return shipping packaging bag. Below it was the template I used for proper placement each time I stuck a patch on my chest:

The bulk of the contents were contained in two inner boxes, the first labeled “Getting Started” and the second referred to as “Monitoring”. Inside the first:

were several primary items:

along with installation and operation overview instructions:

The monitoring device, both here and in subsequent photos accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

whose dimensions and Android operating system foundation, along with the legacy presence of an analog headphone jack alongside the USB-C port:

and a multi-camera rear array in a specific arrangement:


suggest it to be a custom-software derivative of Samsung’s Galaxy A52 smartphone, introduced in March 2021:

It came with the translucent green case pre-installed, by the way. Here are some other overview images of the smartphone…err…monitoring device (its left side was unmemorable so I didn’t bother):


Next up was a small scrub pad used to further prepare my chest for patch application, after initial hair shaving. And, of course, there was the sensor itself:


Its edge arrived already abraded; I’m guessing that it had already been popped open, with its rechargeable battery subsequently replaced, at least once prior to its arrival at my residence:

Now for box #2:


More instructions, of course:

along with more patches, a more detailed instruction booklet, and the dual-charging unit:

The AC/DC adapter has two USB-A outputs:

which can be used in parallel:

One, connected to a red USB-A to USB-C cable, is used for daily recharge of the “monitoring device” (smartphone). The other (black, this time) cable terminates in a charging dock for the sensor, which I used every five days in conjunction with (and in-between) the patch removal and replacement steps:




Here’s how the initial “monitoring device” bootup went (since this was a custom Android-plus-app build, I wasn’t able to grab screenshots directly from the smartphone, perhaps obviously):





After initial charging of both the monitoring device and sensor, I continued the setup process:


Here’s what a patch looks like when you first take it out of the package; top:

and bottom:

Pressing down on the sensor while aligned with the patch base snaps it into place:

A briefly illuminated LED subsequently indicates that the sensor is correctly installed, at which point the monitoring device is able to “see” it (broadcasting over Bluetooth, presumably Low Energy):

At this point, you can peel away the protective clear plastic cover over the back side adhesive:



All that’s left is to press it into place on your chest…and then peel off the existing patch, pop out and recharge the sensor and redo the installation process five days later:

Lather, rinse, and repeat until the total 30-day cycle is over, which the system thoughtfully tracks on your behalf. Then ship it all back to the manufacturer.

The monitoring device, which regularly receives data transmissions from the sensor, then periodically uploads the data to the “cloud” server over an LTE or EV-DO cellular data connection.


If you forget to keep the monitoring device close by, data won’t be lost, at least for a while. There’s an unknown amount of memory onboard the sensor (yes, I searched for a teardown, alas unsuccessfully), albeit presumably not the full 2 GBytes allocated to this alternative device designed solely for local data logging. But the monitoring device will still alert you (both visually and audibly) to the lost wireless (again, presumably Bluetooth’s LE variant) connection:

You’ll also be alerted if the sensor’s integrated battery drops to a low level and recharge is necessary (I proactively did this every five days, as previously noted, since I’d received six total patches):

If you feel like something’s amiss with your “ticker” (heart pounding, fatigue, etc.), you can tap on the icon at the center of the display and the monitoring device will send an alert “flag” for subsequent correlation with the potential cardiac arrhythmia data collected at that same time:

And in closing, here are some shots of other monitoring device display screens that I captured:



By the time you see this, assuming I don’t need to reschedule for some reason, I will have met with my cardiologist and gotten the (hopefully positive) results. I’ll follow up in the comments. And please also share your thoughts there! Thanks as always for reading.
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- Adventures with a remote heart monitor
- Wearables for health analysis: A gratefulness-inducing personal experience
- Avoiding blood pressure measurement errors
- Avoiding blood pressure measurement errors – Part 2
- How to design an optical heart rate sensor into a wearable device’s wristband
The post Cardiac monitors: Inconspicuous, robust data collectors appeared first on EDN.
Volta initiates bioleaching gallium recovery study with Laurentian University
Semtech expands data-center portfolio by acquiring HieFo for $34m
Navitas and EPFL demo 250kW solid-state transformer
Kyiv Polytechnic receives additional grant support from Amazon Web Services
Amazon Web Services (AWS) has provided KPI with its second grant since the start of the full-scale war. In 2022, the university received its first emergency grant, which enabled a rapid migration of its infrastructure to the cloud; at that time, its digital services ran in a partner environment provided by EPAM. The university later moved entirely to its own AWS account.
Arrow Electronics and Infineon introduce 240W USB-C PD 3.2 reference design for battery-powered motor control applications
Arrow Electronics and Infineon Technologies AG have announced REF_ARIF240GaN, a 240W USB Power Delivery (PD) 3.2 reference design for battery-powered motor control applications that require high performance and power efficiency in a compact form factor. This design complements the existing portfolio of joint reference design solutions from Arrow and Infineon, supporting the ongoing migration of customer designs to USB-C technology.
REF_ARIF240GaN is specifically designed to support the launch of EZ-PD PMG1-B2, Infineon’s newest USB PD 3.2 controller, featuring up to 240W USB sink capability and integrated buck-boost functionality in a compact single package. It provides developers with a ready-to-use platform for implementing high-power USB-C charging alongside efficient motor drive control features. It brings fast charging capabilities for 2- to 12-cell Li-ion battery packs, simplifying the overall design and reducing component count.
Motor control functionality is delivered using Infineon’s PSOC C3, a 180MHz Arm Cortex-M33 microcontroller, and highly efficient 100V CoolGaN G5 transistors. By combining a fully interoperable USB-C PD stack with high-performance sensored and sensorless GaN motor control on a single platform, the reference design enables compact, high-efficiency battery-powered systems while shortening development time, reducing bill-of-materials cost, and saving board space.
Target applications include light electric vehicles (e-bikes, e-scooters and personal mobility devices), along with power tools, vacuum cleaners, kitchen appliances, garden equipment and robotics.
The reference design can be obtained upon request. Advanced technical support and customisation services are available from Arrow’s engineering solutions centre (ESC).
Visitors to embedded world 2026 can see the joint Arrow and Infineon solutions for motor control and battery-powered applications at Arrow’s stand 4A-342.
About Arrow Electronics
Arrow Electronics (NYSE:ARW) sources and engineers technology solutions for thousands of leading manufacturers and service providers. With 2025 sales of $31 billion, Arrow’s portfolio enables technology across major industries and markets. Learn more at arrow.com.
The post Arrow Electronics and Infineon introduce 240W USB-C PD 3.2 reference design for battery-powered motor control applications appeared first on ELE Times.
Robotics Engineering: The Architectural Evolution Behind IT–OT Convergence
Factories today operate as dense mechanical ecosystems, whether in automotive assembly lines or semiconductor fabrication units. Traditionally, each robotic and mechanical element performed predefined, deterministic functions within isolated automation cells. However, as shop floors become increasingly machine-intensive and interconnected, operational complexity rises proportionally. Managing these environments now requires more than mechanical precision—it demands architectural coordination across layers of control and intelligence.
In this context, the convergence of Information Technology (IT) and Operational Technology (OT) is fundamentally reshaping robotics engineering. Data processing layers—analytics engines, business logic systems, and enterprise platforms—are no longer separated from operational control systems. At the same time, the physical layer, comprising sensors, actuators, servo drives, and Programmable Logic Controllers (PLCs), is becoming increasingly tightly integrated with edge compute and network infrastructure. Robotics systems are no longer designed as standalone motion units; they are engineered as nodes within a larger, connected control ecosystem.
“Traditional automation tools were built for a high-volume, low-variability environment. But today’s market demands agility,” says Ujjwal Kumar, Former Group President of Teradyne Robotics.
This architectural integration is shifting robotics engineering from a purely mechanical discipline toward system-level design—where communication protocols, deterministic networking, cybersecurity, and software orchestration are as critical as torque curves, kinematics, and payload specifications.
Adaptive Systems
At the core of this transformation lies the emergence of adaptive robotic systems. In practical terms, adaptability on the shop floor means the ability to reconfigure, scale, and modify operational behavior through software-defined control and network orchestration, rather than through mechanical redesign. Modern robots are no longer confined to fixed, pre-programmed routines. Equipped with AI models, IIoT connectivity, and high-resolution sensor feedback, they can interpret environmental inputs, process real-time data streams, and dynamically adjust execution parameters.
“The big difference is that traditional automation was a custom-made, perfect solution for one application. The new age of AI-integrated robotics has standard products serving multiple applications. You go into multiple applications through software and some end-of-arm tooling differences,” says Ujjwal Kumar, Former Group President of Teradyne Robotics.
As manufacturers pursue higher efficiency alongside greater product diversity, such adaptability becomes essential. Integrated control and data layers allow robots to transition between production tasks or product variants with minimal downtime, supporting high-mix manufacturing environments. Simultaneously, context-aware operations enable robotic systems to respond to signals from enterprise platforms such as ERP and MES, aligning execution with demand fluctuations, material availability, and downstream constraints.
The Build Architecture: Sensors, Control, and Communication Layers
To understand the engineering behind IT–OT convergence, it is useful to examine the architectural layers that define modern shop-floor robotics. Traditionally, industrial systems followed hierarchical models such as ISA-95, where field devices, control systems, and enterprise platforms operated in structured tiers with limited cross-layer interaction. Today’s robotic systems, however, are increasingly designed around a more unified Industrial Internet of Things (IIoT) architecture—where sensing, control, computation, and enterprise integration operate within a tightly interconnected framework.
“The groundbreaking automation innovations of the future won’t come from one single company but from close cross-technology ecosystem collaborations,” says Ujjwal Kumar, Former Group President of Teradyne Robotics.
At the foundation lies the physical and sensing layer. Modern robots are embedded with dense networks of encoders, force–torque sensors, high-resolution vision systems, vibration monitors, and environmental sensors—particularly critical in semiconductor manufacturing. Unlike earlier generations, where sensors primarily supported local closed-loop motion control, today’s sensing infrastructure generates continuous, time-synchronised data streams. These data flows serve a dual purpose: ensuring precision motion control while simultaneously feeding analytics and optimisation engines upstream.
Above this sits the control and communication layer, where deterministic execution remains paramount. PLCs, motion controllers, industrial PCs, and real-time operating systems govern microsecond-level synchronisation of servo drives and actuators. However, this layer has evolved from rigid, ladder-logic-driven hierarchies to hybrid architectures that combine deterministic control with networked intelligence. Industrial Ethernet, fieldbus systems, and increasingly Time-Sensitive Networking (TSN) ensure that motion commands and data packets coexist without compromising latency or jitter requirements. Control systems are no longer isolated—they are communicative nodes within a broader industrial network.
The next shift occurs at the edge. Edge computing nodes now preprocess high-frequency sensor data, execute AI inference models, and filter operational information before it propagates upward. Event-driven architectures and publish–subscribe communication patterns allow machines to update a shared operational state across the plant continuously. Rather than relying solely on hierarchical polling mechanisms, modern factories operate through near real-time data dissemination, enabling contextual awareness across production assets.
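To make that pattern concrete, here is a minimal, broker-less Python sketch of an edge node that subscribes to a high-frequency sensor topic, maintains a rolling baseline, and republishes only significant deviations upstream. The topic names, window size, and deviation threshold are illustrative assumptions; a real deployment would use industrial middleware such as MQTT, OPC UA Pub/Sub, or DDS rather than this in-process dispatcher.

```python
# Broker-less sketch of the edge publish-subscribe pattern described above:
# an edge node pre-filters high-frequency sensor readings and forwards only
# notable deviations upstream. Topic names and threshold are illustrative.
from collections import defaultdict
from statistics import mean
from typing import Callable

class Bus:
    """In-process publish-subscribe dispatcher."""
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[str, float], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str, float], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, value: float) -> None:
        for handler in self._subs[topic]:
            handler(topic, value)

class EdgeNode:
    """Keeps a rolling baseline and republishes only notable deviations."""
    def __init__(self, bus: Bus, window: int = 20, threshold: float = 0.15) -> None:
        self.bus, self.window, self.threshold = bus, window, threshold
        self.history: list[float] = []
        bus.subscribe("cell3/joint2/vibration", self.on_raw)

    def on_raw(self, topic: str, value: float) -> None:
        self.history = (self.history + [value])[-self.window:]
        baseline = mean(self.history)
        if baseline and abs(value - baseline) / baseline > self.threshold:
            self.bus.publish("plant/events/vibration_anomaly", value)

bus = Bus()
EdgeNode(bus)
bus.subscribe("plant/events/vibration_anomaly",
              lambda t, v: print(f"upstream alert on {t}: {v:.3f}"))
for reading in [1.0, 1.01, 0.99, 1.02, 1.0, 1.35]:  # last sample deviates
    bus.publish("cell3/joint2/vibration", reading)
```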
James Davidson, Chief Artificial Intelligence Officer, Teradyne Robotics, says, “AI is transforming robots from tools into intelligent collaborators that can perceive, learn, and adapt.”
At the enterprise integration level, robotics systems increasingly interact with MES and ERP platforms, digital twin environments, and predictive maintenance engines. Data flow is no longer unidirectional. Demand signals, material constraints, and quality metrics can influence robotic execution parameters in near real time. This bidirectional exchange is the practical manifestation of IT–OT convergence—where business logic and machine logic intersect.
Underpinning all these layers is a security and infrastructure framework that ensures resilience. As robots become connected assets, cybersecurity, network segmentation, device authentication, and secure firmware management become integral engineering considerations rather than afterthoughts. Connectivity without security would undermine determinism and operational continuity.
Redefining the Core of Robotics Engineering
For decades, robotics engineering on shop floors was largely centred on mechanical excellence. Engineers focused on motion accuracy, payload capacity, repeatability, structural rigidity, and cycle-time optimisation. The primary goal was to design a robot that could execute a defined task with precision and reliability within a controlled cell.
That foundation still matters—but it is no longer enough. As IT–OT convergence reshapes shop floors, robotics engineering now extends far beyond mechanical design. Engineers must integrate advanced sensors, real-time communication networks, edge computing systems, AI-driven analytics, and enterprise software interfaces into the robot’s architecture. A robot is no longer just a mechanical arm with a controller; it is a connected, data-producing, and data-consuming system embedded within a larger digital ecosystem.
This means engineering decisions are no longer confined to gears, motors, and control loops. Network latency can influence motion stability. Data accuracy affects predictive maintenance outcomes. Software updates can modify operational behaviour. Cybersecurity vulnerabilities can interrupt production. Mechanical performance is now intertwined with software reliability and network integrity.
“Physical AI equips robots with the capacity to perceive and respond to the real world, providing the versatility and problem-solving capabilities that are often required by complex use cases that have been out of scope until now,” says James Davidson, Chief AI Officer, Teradyne Robotics.
In practical terms, robotics engineers are moving from designing machines to designing intelligent systems. They must think about interoperability, data structures, communication protocols, and secure integration—alongside torque curves and kinematics. The robot is no longer an isolated automation asset; it is part of a coordinated production architecture that responds to real-time information from across the enterprise.
The shift is clear: robotics engineering is evolving from a purely mechanical discipline into a multidisciplinary field where mechanics, electronics, networking, and software operate as a unified whole.
Conclusion
As factories continue to evolve into connected, data-driven environments, robotics can no longer be engineered as standalone mechanical systems. The convergence of IT and OT is embedding intelligence, connectivity, and responsiveness directly into the core of robotic architecture. What was once a discipline defined by mechanical precision is now defined by system integration.
“Taking a modern Industry 5.0 approach requires prioritisation of adaptability, empowering line workers with robots that can be reprogrammed and redeployed as demand shifts, which is the biggest benefit of having these very flexible systems coming online quickly,” says Ujjwal Kumar, Former Group President of Teradyne Robotics.
The competitive edge will not belong merely to the fastest or strongest robots, but to those designed as intelligent, interoperable components of a unified production ecosystem. In this new industrial reality, robotics engineering is no longer just about motion—it is about orchestration.
The post Robotics Engineering: The Architectural Evolution Behind IT–OT Convergence appeared first on ELE Times.
How AI Is Transforming Network Protocol Testing in Software-Defined Networks?
As enterprises accelerate toward cloud-native infrastructure, edge computing, and virtualised network functions, data volumes and traffic patterns have become increasingly dynamic and unpredictable. This shift has significantly complicated network management, making traditional monitoring and testing approaches insufficient for modern workloads.
Software-Defined Networking (SDN) emerged as a response to this complexity. By decoupling the control plane from the data plane and centralising network intelligence in software-based controllers, SDN introduced programmability, agility, and fine-grained policy enforcement into network architecture. Networks were no longer static hardware constructs — they became programmable systems capable of real-time configuration and orchestration.
However, this programmability has introduced a new challenge: protocol behaviour is no longer deterministic. Dynamic flow rules, frequent controller updates, real-time policy changes, and multi-controller orchestration have made protocol validation exponentially more complex. Traditional pre-defined test scripts and static regression libraries struggle to keep pace with continuously evolving network states.
“AI applications are driving an entirely new set of requirements in our customers’ network equipment and in their network architectures,” says Joel Conover, senior director at Keysight Technologies.
In programmable environments, protocols must be validated not just for correctness, but for adaptive behaviour across changing topologies and traffic conditions. This is precisely where Artificial Intelligence is beginning to redefine network protocol testing — shifting it from rule-based verification to intelligent, adaptive validation.
Traditional Protocol Testing Failing with SDNs
In legacy networks, protocol behaviour remained largely uniform and predictable. Routing tables were static, firmware updates were infrequent, and network state changes followed predictable patterns. Testing technologies evolved accordingly, with pre-defined test cases, fixed traffic simulations, and rule-based regression suites. But with Software-Defined Networking, that is no longer the case.
SDN disrupts this very uniformity and predictability. With SDN, the control plane is abstracted into centralised controllers, and network behaviour remains largely flexible, not hardcoded into individual devices. Flow rules are dynamically installed, modified, or withdrawn based on application demands, policy engines, and real-time telemetry. As a result, network state becomes fluid rather than fixed. This also poses tremendous testing challenges, including:
- Dynamic Flow Table Updates: In SDN environments, flow entries can change in milliseconds. Traditional test scripts, designed for static configurations, cannot continuously validate transient states or short-lived rule conflicts.
- Controller-Driven Logic Complexity: Unlike legacy networks, where protocols like Open Shortest Path First (OSPF) or Border Gateway Protocol (BGP) operate autonomously within devices, SDN controllers introduce centralized decision-making logic. Testing must now validate not only protocol compliance, but also controller algorithms, northbound applications, and southbound API interactions.
- Multi-Controller and Multi-Domain Orchestration: Large deployments often rely on distributed controller clusters for scalability and redundancy. Synchronisation delays, inconsistent state propagation, or split-brain scenarios introduce validation complexity beyond conventional test frameworks.
- CI/CD-Driven Network Updates: Modern SDN deployments increasingly follow DevOps models, where network policies and configurations are updated frequently. Regression cycles that once ran quarterly may now need to be executed daily or continuously.
- Emergent Behavior in Programmable Networks: When multiple applications interact through a controller — security policies, load balancers, traffic optimizers — unintended rule interactions can produce emergent protocol behavior. Static test matrices cannot anticipate such combinations.
In this evolving environment, traditional test automation tools operate reactively. They verify what has been explicitly defined, but struggle to discover what has not been anticipated. As SDN architectures scale in complexity, protocol testing must evolve beyond deterministic validation toward approaches capable of learning network behaviour rather than merely executing predefined scenarios.
The Limits of Automation in Modern SDN Testing
As SDN environments grew in complexity, testing frameworks also adopted automation. Continuous integration pipelines began validating controller updates, traffic replay tools simulated workloads, and orchestration layers executed regression suites at scale. Traditional automated testing systems, however, operate on predefined logic. They execute scripted scenarios, compare outputs against expected results, and flag deviations. While this approach accelerates validation cycles, it remains fundamentally reactive: it can only test what engineers anticipate. In programmable networks, not all behaviours are foreseeable.
With SDNs, flow rules interact dynamically, policies overlap, and controllers adapt in real time to telemetry inputs. Under such conditions, failure modes are often emergent rather than explicit. They arise from complex interactions between components rather than from isolated configuration errors.
This is where the limitations of deterministic automation become evident:
- Static rule engines cannot adapt to evolving topology states.
- Regression libraries cannot scale combinatorially with policy variations.
- Manual definition of edge cases becomes impractical in large-scale SDN fabrics.
As networks increasingly resemble distributed software systems, testing must adopt characteristics of software intelligence — the ability to learn patterns, detect deviations autonomously, and anticipate risk scenarios. It is within this context that Artificial Intelligence begins to move from experimental concept to architectural necessity.
How is AI replacing the Automation Debate in Testing?
As Software-Defined Networks evolve into highly dynamic, programmable infrastructures, testing frameworks must move beyond deterministic execution models. AI-driven protocol testing becomes the obvious and most promising strategy as it is enhanced with contextual learning, predictive analysis, and adaptive decision-making. An effective AI-enabled SDN testing architecture operates across multiple functional layers.
“AI is being infused into many aspects of communications technology – it shows particular promise in predicting channel conditions, essentially creating new forms of ‘smart radios’ that can achieve higher throughput and/or longer distances by incorporating machine learning in the radio itself,” says Mr Conover.
At the foundation lies a telemetry intelligence layer. SDN environments generate vast volumes of real-time data — including flow table updates, controller logs, latency metrics, packet drops, topology transitions, and API interactions across northbound and southbound interfaces. Rather than relying solely on post-event log analysis, AI models ingest and process this telemetry continuously. By establishing behavioural baselines, the system distinguishes between acceptable adaptive changes and genuine protocol anomalies.
Built upon this is the Behavioral Modeling Layer. In programmable networks, protocol validation must account for interactions between controllers, applications, and dynamic policies. Machine learning models analyse how control-plane decisions influence data-plane outcomes under varying traffic loads, topology shifts, and failover scenarios. Through supervised and unsupervised learning techniques, the system identifies normal operational patterns and detects deviations that static scripts might overlook — such as cascading latency effects, unstable rule propagation, or intermittent synchronization gaps.
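As a deliberately simplified illustration of such baselining, the Python sketch below buckets control-plane-to-data-plane latency samples by topology state and load band, then flags new observations that fall well outside the learned distribution. The bucket keys, sample values, and 3-sigma rule are assumptions for illustration, not a description of any particular product.

```python
# Behavioural-baseline sketch: latency samples are grouped by topology
# state and load band, and new observations are flagged when they drift
# far outside the learned baseline. Values and thresholds are illustrative.
from collections import defaultdict
from statistics import mean, pstdev

class LatencyBaseline:
    def __init__(self) -> None:
        self.samples: dict[tuple[str, str], list[float]] = defaultdict(list)

    def observe(self, topology: str, load_band: str, latency_ms: float) -> None:
        self.samples[(topology, load_band)].append(latency_ms)

    def is_anomalous(self, topology: str, load_band: str, latency_ms: float) -> bool:
        history = self.samples[(topology, load_band)]
        if len(history) < 10:          # not enough data to judge yet
            return False
        mu, sigma = mean(history), pstdev(history)
        return sigma > 0 and abs(latency_ms - mu) > 3 * sigma

baseline = LatencyBaseline()
for value in [2.1, 2.0, 2.2, 1.9, 2.1, 2.0, 2.3, 2.1, 2.0, 2.2]:
    baseline.observe("ring-4sw", "low", value)

print(baseline.is_anomalous("ring-4sw", "low", 2.15))  # False: within baseline
print(baseline.is_anomalous("ring-4sw", "low", 9.8))   # True: propagation stall
```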
The next layer introduces Intelligent Test Case Generation and Prioritisation. Traditional regression testing treats all scenarios uniformly, often leading to inefficiencies. AI-enhanced systems instead evaluate historical defect data, configuration change patterns, and policy dependency graphs to calculate risk scores. Testing resources are then dynamically allocated to high-risk areas. Reinforcement learning techniques can further simulate targeted disruptions, enabling adversarial-style validation that exposes weaknesses before deployment.
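A minimal sketch of that prioritisation idea, under assumed inputs and hand-picked weights (a real system would learn the weighting from defect history rather than hard-code it), might look like this:

```python
# Risk-scored regression prioritisation sketch. The weighting of historical
# failure rate, recent configuration churn, and policy-dependency fan-out is
# an illustrative assumption, not a documented algorithm.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    failure_rate: float      # fraction of past runs that failed (0..1)
    touched_configs: int     # config objects changed since last run
    dependency_fanout: int   # policies/flows this scenario exercises

def risk_score(t: TestCase) -> float:
    # Normalise churn and fan-out to rough 0..1 ranges before weighting.
    churn = min(t.touched_configs / 10.0, 1.0)
    fanout = min(t.dependency_fanout / 50.0, 1.0)
    return 0.5 * t.failure_rate + 0.3 * churn + 0.2 * fanout

suite = [
    TestCase("bgp_failover_convergence", 0.20, 8, 40),
    TestCase("flow_table_churn_storm",   0.05, 2, 15),
    TestCase("controller_split_brain",   0.35, 1, 30),
]

budgeted = sorted(suite, key=risk_score, reverse=True)[:2]  # run top-2 first
for t in budgeted:
    print(f"{t.name}: risk={risk_score(t):.2f}")
```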
Finally, Predictive Validation capabilities elevate protocol testing from reactive detection to proactive assurance. By analysing patterns across multiple test cycles, AI systems can forecast potential congestion points, controller overload risks, and policy conflicts at scale. This predictive insight is particularly valuable in CI/CD-driven SDN environments, where frequent updates demand continuous and reliable validation.
Together, these layers transform protocol testing from a script-driven verification exercise into an adaptive, intelligence-led framework. As networks become software-defined, testing infrastructures are becoming learning-defined — capable not only of validating correctness, but of anticipating instability before it manifests in production environments.
Conclusion
Software-Defined Networking transformed networks into programmable, software-driven systems — but in doing so, it also made protocol validation far more complex. Static test scripts and deterministic regression cycles are no longer sufficient for environments defined by dynamic flows, controller logic, and continuous updates.
“The use case for network testing is emulating the unique properties of that environment, and delivering it at a scale we’ve never seen before,” says Mr Conover.
Artificial Intelligence is emerging as the natural evolution of network testing. By learning behavioural patterns, detecting anomalies in real time, and prioritising risk intelligently, AI shifts protocol validation from reactive verification to predictive assurance.
The future of SDN will not depend solely on how programmable networks become, but on how intelligently they are tested. As infrastructure grows more dynamic, validation must become equally adaptive — combining automation, intelligence, and human oversight to ensure resilient, scalable network operations.
The post How AI Is Transforming Network Protocol Testing in Software-Defined Networks? appeared first on ELE Times.



