Feed aggregator

Differentiating Between LPDDR6, LPDDR5, and LPDDR5X

ELE Times - 1 hour 49 min ago

Courtesy: Synopsys

Advances in memory standards are driving faster and more power-efficient mobile and connected devices, from smartphones and tablets to ultra-thin laptops and wearables.

One such standard is Low Power Double Data Rate (LPDDR), which plays a crucial role in balancing high performance with energy efficiency. The latest iteration of the standard, LPDDR6, represents a big step forward in memory management. Comparing LPDDR6 to its predecessors, LPDDR5 and LPDDR5X, reveals just how quickly mobile memory technology is evolving — and what these advances mean for next-generation devices.

The role of LPDDR memory

LPDDR acts as the main system memory inside electronic devices. Working hand-in-hand with device processors and other components to store and access frequently used data, it helps keep applications, media, and multitasking features running smoothly. LPDDR is optimised for low power usage, compact footprint, and fast data transfer, making it ideal for portable, battery-powered devices.

LPDDR can integrate with Inline Memory Encryption (IME) modules to ensure data confidentiality — both in use and when stored in off-chip memory. This is achieved through standards-compliant independent cryptographic support for read and write operations, providing robust protection against unauthorised access.

LPDDR memory is also available as automotive-grade Synchronous Dynamic Random-Access Memory (SDRAM), making it the preferred DRAM solution for automotive applications that require strict compliance with automotive standards.

LPDDR5 and LPDDR5X: the previous benchmarks

LPDDR5 marked a big step up in mobile memory when it was introduced in 2019. It delivered data rates up to 6.4 Gbps with improved energy efficiency (through features such as Dynamic Voltage Scaling) and smarter data handling. These upgrades led to longer battery life and better support for demanding applications like 5G connectivity, high-resolution media, and the initial wave of artificial intelligence (AI).

LPDDR5 also added new reliability features and smarter error handling, helping stabilise performance under complex workloads. As a result, devices using LPDDR5 delivered noticeable gains in both speed and overall user experience compared to devices using previous generations of LPDDR SDRAMs.

Introduced in 2021, LPDDR5X offered increased performance (up to 10.67 Gbps) and minor enhancements to LPDDR5’s features. LPDDR5X SDRAMs represent the vast majority of LPDDR SDRAMs shipping today.

LPDDR6: the next generation

Published in July 2025, the new LPDDR6 specification and compliant SDRAMs deliver even more performance, efficiency, and features — all designed to meet the growing demands of next-generation mobile and connected devices. LPDDR6 offers:

  • Faster data rates. LPDDR6 is expected to reach up to 14.4 Gbps, a significant increase from LPDDR5X. This extra speed is essential for bandwidth-hungry applications like augmented reality, ultra-high-definition video streaming, advanced AI, and automotive electronics, all of which depend on rapid data processing.
  • Wider bandwidth. Using 24-bit channels (up to 96 bits per package with 4 channels total), LPDDR6 effectively doubles LPDDR5X’s bandwidth per package. In addition, two 12-bit sub-channels in each channel help reduce latency and improve access efficiency.
  • Enhanced power management. LPDDR6 introduces more precise control over voltage and power states. This upgrade helps devices run more efficiently and extends their battery life as a result.
  • Improved reliability and error correction. As the speed and footprint of LPDDR rise, so too does the risk of data errors — especially in data centres. LPDDR6 addresses this challenge with enhanced RAS (Reliability, Availability, and Serviceability) capabilities, providing robust error correction via Metadata, Advanced ECC, and Link ECC features. These improvements help minimise system glitches and stabilise device performance.
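
The bandwidth claim above can be sanity-checked with a quick back-of-the-envelope calculation. This is a sketch using the per-pin rates cited in this article; the 64-bit LPDDR5X package width is an assumed typical configuration, not a figure from the text:

```python
# Rough peak per-package bandwidth: per-pin rate (Gb/s) x bus width (bits) / 8.
# Data rates are the maximums cited in this article; the 64-bit LPDDR5X
# package width is an assumed common configuration.
def peak_bandwidth_gb_s(rate_gbps_per_pin: float, width_bits: int) -> float:
    return rate_gbps_per_pin * width_bits / 8

lpddr5x = peak_bandwidth_gb_s(10.67, 64)  # ~85.4 GB/s
lpddr6 = peak_bandwidth_gb_s(14.4, 96)    # ~172.8 GB/s

print(f"LPDDR5X: {lpddr5x:.1f} GB/s, LPDDR6: {lpddr6:.1f} GB/s")
print(f"ratio: {lpddr6 / lpddr5x:.2f}x")
```

The ratio works out to roughly 2.0x, consistent with the "effectively doubles" claim.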

While LPDDR6 builds on LPDDR5 and LPDDR5X’s foundations, some legacy mechanisms were streamlined or replaced to support higher speeds and tighter power control. For example, earlier voltage scaling and command encoding schemes have been reworked to enable more granular power states and improved signal integrity. These changes mean LPDDR6 prioritises advanced efficiency and reliability features over older approaches that were optimised for lower data rates.

The implications for memory design and mobile devices

The improved performance, efficiency, and features of LPDDR6 will have wide-ranging impacts. From a technical perspective, LPDDR6 introduces a variety of upgrades to memory architecture:

  • Signal integrity and bank management. Smarter signalling and improved memory bank management reduce latency and maximise data throughput.
  • Ultra-low power modes. New power-saving states allow devices to conserve energy when idle, a big advantage for wearables and Internet of Things (IoT) products that run on small batteries.
  • Seamless integration. The new specification is engineered for seamless integration with the latest processors and chipsets, making it easier for manufacturers to adopt LPDDR6 in their next-generation devices.

These upgrades will enable the creation of mobile devices that offer:

  • Faster, smoother performance. Higher data rates mean apps open quicker, multitasking is more efficient, and device operation is smoother.
  • Better battery life. Improved power management reduces energy consumption, allowing devices to run longer between charges.
  • Greater system stability. Stronger error correction improves reliability and reduces the risk of crashes and data loss.
  • Future-proofing. LPDDR6 enables devices to support future advances in mobile computing, connectivity, and multimedia.

The Impact of LPDDR6 on smartphones, laptops, and wearables

LPDDR6 represents a significant step forward in mobile memory technology, delivering faster speeds, increased capacity, improved reliability, and better energy efficiency.

Leveraging silicon-proven interface IP and verification IP solutions — which have also been successfully validated at 10.667 Gb/s for SDRAM — device manufacturers are already upgrading their flagship smartphones, high-end laptops, and innovative wearables with LPDDR6-based memory.

But the transition from LPDDR5X/5 to LPDDR6 is more than just a technical upgrade — it enables new possibilities in mobile computing. As manufacturers adopt the new standard, users can expect devices that are faster, more reliable, and ready to support the next wave of on-device and cloud-connected experiences.

The post Differentiating Between LPDDR6, LPDDR5, and LPDDR5X appeared first on ELE Times.

Apple’s spring 2026 soirée: The rest of the story

EDN Network - 8 hours 27 min ago

With smartphone and tablet news already discussed, what else did Apple unveil this week? Read on for all the goodies and their details.

As I teased at the end of my prior piece, computers and displays were also on the plate for Apple’s “big week of news” announcements suite. With today’s (as I write this on Wednesday in the late afternoon) New York, London, and Shanghai “Experience” in-person events now concluded:

(No, alas, I wasn’t invited)

I’m guessing that Apple’s wrapped up its rollouts for now, therefore compelling me to revisit my keyboard for concluding part 2. That said, I realized in retrospect that there was one additional earlier hardware announcement that, had I remembered at the time (and in time), I would have also included in part 1, since it also covered mobile devices. So, let’s start there.

AirTag 2

In late April 2021, Apple introduced its first-generation AirTag trackers, leveraging Bluetooth LE connectivity to mate them with owner-paired smartphones and tablets and, when a tagged item is lost, with the broader Find My crowdsourced network ecosystem to help identify its whereabouts and monitor its movements. Integrated ultrawideband (UWB) support, when also comprehended by the paired mobile device, affords even more precise location discernment (i.e., not just somewhere in the living room, but fallen between the sofa cushions). And built-in NFC support lets anyone who finds a tag (and whatever it’s attached to) notify the person it belongs to. Here’s my first-gen teardown.

Nearly five years later, and quoting Wikipedia:

An updated model with the U2 chip, upgraded Bluetooth, and a louder speaker was released in January 2026 [editor note: Monday the 26th, to be precise]. It has enhanced range for precision detection with iPhones equipped with a U2 chip such as the iPhone 15/Pro or later (excluding iPhone 16e), and also allows an Apple Watch with a U2 chip such as the Apple Watch Series 9 or later, or Apple Watch Ultra 2 or later (excluding Apple Watch SE), to precisely locate items.

Now fast-forwarding a month-plus to this week’s announcements…

The M5 Pro and Max SoCs

2.5 years back, within my coverage of Intel’s then leading-edge and first-time chiplet-implemented Meteor Lake CPU architecture:

I noted that the company was, to at least some degree, following in the footsteps of AMD and Apple, both having already productized chiplet-based designs. In AMD’s case, I was on solid footing with my stance, as the company had already been embedding and interconnecting discrete processors, graphics, and other logic circuits for several years. In Apple’s case, conversely, my definition of a chiplet implementation was a bit more loosey-goosey, at least at the time:

Above is a de-lidded photo of Apple’s M1 SoC. At left is the single-die implementation of the entirety of the logic circuitry, plus cache. And on the right are two DRAM memory chips. Admittedly, the “Ultra” variant of the eventual M1 product family, at far right:

upped the ante a bit more, “stitching together two distinct M1 Max die via a silicon interposer”. But I’ve long wondered when Apple would go “full monty” on disaggregation, mixing-and-matching various slivers of logic silicon attached to and interconnected via a shared packaging substrate, to keep each die’s dimensions to a reasonable manufacturing-yield size and to afford fuller implementation flexibility. To wit, the points I made back in September 2023 remain valid:

  • Leading-edge processes have become incredibly difficult and costly to develop and ramp into high-volume production,
  • That struggle and expense, coupled with the exponentially growing transistor counts on modern ICs, have negatively (and significantly so) impacted large-die manufacturing yields not only during initial semiconductor process ramps but also long-term, and
  • Desirable variability both in process technology (DRAM versus logic, for example), process optimization (low power consumption versus high performance) and IC sourcing (internal fab versus foundry), not to mention the attractiveness of being able to rapidly mix-and-match various feature set combinations to address different (and evolving) market needs, also enhance the appeal of a multi- vs monolithic-die IC implementation.

That time is now, branded as the “Fusion Architecture” and ironically foreshadowed by a then-subtle Apple online store tweak a month ago. Quoting from the press release subhead:

M5 Pro and M5 Max are built using the new Apple-designed Fusion Architecture that connects two dies with advanced IP blocks into a single SoC, delivering significant performance increases that push the limits of what’s possible…

In an interesting twist from the past, this time the two product proliferations seemingly share a common processor die, although the variety and number of guaranteed-functional cores varies both between the two devices and within a given device’s binning variants. Conversely, the graphics core counts diverge more substantially between the two devices. To some degree this is reflective of the high-end “Max” device’s professional content creator target demographic, although I’d wager that it more broadly affords more robust on-device deep learning inference capabilities in conjunction with the chips’ presumed-still-existent neural processing cores. And what of an “Ultra” variant of the M5…is it on the way? Maybe.

Tomato, tomahto

Speaking of cores, by the way…sigh. Look back at my M5 SoC (and initial devices based on it) coverage from last October, and you’ll see that, just as with prior generations of both A- and M-based Apple-developed silicon, it contains a mix of both performance (speed-optimized) and efficiency (power consumption-tuned) cores. Here’s the specific press release quote again:

M5 features the world’s fastest performance core, with up to a 10-core CPU made up of six efficiency cores and up to four performance cores. Together, they deliver up to 15 percent faster multithreaded performance over M4.

All well and good; the analogous Arm-developed architecture is big.LITTLE. Revisiting that page on Arm’s website just now, however, I curiously noticed that whereas it historically called out two different types of cores, there are now apparently three. Check out the subhead:

Arm big.LITTLE technology is a heterogeneous processing architecture that uses up to three types of processors. LITTLE processors are designed for maximum power efficiency, while big processors are designed to provide efficient, sustained compute performance.

Keep in mind that Apple is an Arm architecture licensee, so it develops its own (still instruction set-compatible, of course) cores. That said, beginning with the M5 Pro/Max processing chiplet, Apple has also developed a third core, this one an intermediate half-step between the performance and efficiency endpoints. You might think that Apple would call this new one the “balanced” core, say. But alas, you’d be wrong. Here’s long-time Apple observer Jason Snell, quoted in a post from another Apple prognosticator, “graybeard,” John Gruber:

With every new generation of Apple’s Mac-series processors, I’ve gotten the impression from Apple execs that they’ve been a little frustrated with the perception that their “lesser” efficiency cores were weak sauce. I’ve lost count of the number of briefings and conversations I’ve had where they’ve had to go out of their way to point out that, actually, the lesser cores on an M-series chip are quite fast on their own, in addition to being very good at saving power! Clearly they’ve had enough of that, so they’re changing how those cores are marketed to emphasize their performance, rather than their efficiency.

What did Apple decide to do instead, including a retrofit of published M5 documentation?

  • The prior-named “Performance” core is now instead called, believe it or not, “Super.”
  • The “Efficiency” core retains its original name, for a brief moment of sanity.
  • And the new in-between “balanced” core? It gets the recycled “Performance” moniker.

The following summary table originated with another recent John Gruber post; I’ve simplified the SoC options, reordered the CPU core columns, and added a column for GPU core counts:

 

            CPU (Super)   CPU (Performance)   CPU (Efficiency)   GPU
M5          3-4           N/A                 6                  8-10
M5 Pro      5-6           10-12               N/A                16-20
M5 Max      6             12                  N/A                32-40
That’s just…super. Sigh.

(More) M5 MacBook Pros

(nifty video animation, eh?)

“Super” SoCs inside aside, the new 14” and 16” MacBook Pros are effectively identical to their M4-based forebears (note that the sole M5 version initially announced last fall was the 14” model). The only other items of particular note both involve memory. Baseline and upgraded DRAM capacity option prices remain the same as last time, despite current industry memory supply constraints; an upper-end 64 GByte option for the M5 Pro has even been added. And regarding flash memory, Apple has obsoleted last November’s entry-level 512 GByte SSD option for the baseline 14” M5 MacBook Pro, making the new capacity starting point for that product (1 TByte) more expensive than before. That said, it’s now $100 lower than the 1 TByte variant price at intro just a few months ago, and capacity-upgrade prices have also decreased.

The M5 MacBook Air(s)

Here’s another example of not being able to tell, based solely on external appearances, which generation of devices you’re looking at. Coming, as with its M3- and M4-based forebears, in both 13” and 15” versions, the M5 MacBook Air also upgrades to Apple’s N1 network connectivity chip. But, speaking once again of (flash, specifically) memory, and akin to the product line option slimming for the 14” M5 MacBook Pro mentioned in the prior section, the lowest-available capacity for the new devices is 512 GBytes, versus 256 GBytes in the previous generation. I’m guessing that the reasoning is two-fold this time: as with the 14” M5 MacBook Pro’s option-culling, the company’s “hiding” its higher flash memory costs by only offering more profitable capacity choices to customers. Plus, by doing so, Apple can more clearly differentiate the MacBook Air from its other products. Speaking of which…

The MacBook Neo

I’ll kick off this section with a few history lessons. Back in 2015, Apple introduced the “new MacBook” (also commonly referred to as the 12-inch MacBook), with a Retina-resolution display and based on Intel m-series (and later, i-series) CPUs. It slotted between the then-non-Retina MacBook Air and the high-end MacBook Pro in Apple’s product portfolio from a pricing standpoint, even though its processing performance undershot that of the notably less expensive MacBook Air. Plus, it was hampered by the unreliable “butterfly” keyboard. It was discontinued after only three hardware iterations and four years of production.

In addition to its unfavorable price comparison to the MacBook Air, the “new MacBook” was also still competing to a degree against then-popular Windows-based “netbooks”, which were even lower priced. Back in late 2008, then-CEO Steve Jobs had (in)famously quipped re netbooks, “We don’t know how to make a $500 computer that’s not a piece of junk.” Hold that thought.

My last history lesson is, conversely, a Steve Jobs success story. Back in mid-1999, two years (and change) after Jobs’ return to Apple and less than a year after launching the consumer-tailored iMac desktop, Apple unveiled the iBook laptop:

which came in multiple eye-catching, intentionally non-“business” color options:

Quoting Wikipedia:

The line targeted entry-level, consumer and education markets, with lower specifications and prices than the PowerBook, Apple’s higher-end line of laptop computers. It was the first mass consumer product to offer Wi-Fi network connectivity, which was then branded by Apple as AirPort.

Look again at the image of the iBook’s color options. Now look at the photo at the beginning of this section. See where I’m going?

The newly unveiled MacBook Neo comes in two price tiers: $599 (with a further $100 discount for education customers; take that, Chromebooks) and $699. The higher-end variant gets you twice the SSD capacity—512 GBytes versus 256 GBytes—along with a Touch ID fingerprint reader built into the keyboard. That’s it. 8 GBytes of DRAM, with no upgrade option. No Thunderbolt, only two USB-C ports, one of them supporting only USB 2 speeds. The first-time use of an A-series processor, the (Apple Intelligence-capable) A18 Pro (albeit with one fewer graphics core enabled than the initial version in the iPhone 16 Pro series); that said, it seems to benchmark (at least) roughly on par with the M1 that until recently was still being sold by Walmart in the MacBook Air. And a networking subsystem rumored to come from MediaTek, versus developed internally.

In closing, at least for this section: what’s with the name? Some folks had forecasted that it’d just be called the “MacBook”, but as I’ve already noted, that particular name is now “damaged goods”. Others thought that an “iBook” resurrection was in the cards, but Apple stopped referring to devices via “i” monikers a while ago. That said, “Neo” was definitely not on my bingo card. Maybe someone in Cupertino is a fan of The Matrix, but thought that “MacBook Mr. Anderson” would be too ponderous?

Displays

Having already passed through 2,000 words, I’m going to keep this section short. Apple announced two new Studio Display models, its first updates to this particular product category in many years. They’re both 27” in size, with 5K Retina resolutions, although their refresh rates, dynamic ranges, and other image quality measures vary. The “inexpensive” one starts at $1,599, with its pricier sibling beginning at $3,299; both are available in standard or (upgrade) nano-texture glass options, and mounting and other accessories are also available. And interestingly, at least to me, they don’t work with legacy Intel-based Macs, even the scant few models (one of which I’m currently typing on) that are still supported by MacOS 26. For more details, check out the press release.

And what about…

The M5 Mac mini, whose possibility I alluded to yesterday? Didn’t happen, even though the current M4-based models are popular with the agentic AI enthusiast community (and others). That said, in revisiting my prognostication yesterday afternoon, I remembered that Apple had also skipped the M3 Mac mini generation, and that the time-consuming form-factor redesign from the M2 to the M4 might at least partly explain that delay.

And what of the upgrade to the “vanilla” iPad that lots of folks were forecasting would happen this week? Another nope. The primary rationale here was that it was the only remaining member of Apple’s current product line whose CPU (the A16) doesn’t support Apple Intelligence. But there was no evidence of the telltale indicator of a new product’s arrival: depleted retail inventories of the current model. My guess: Apple will be happily talking about AI again at this year’s WWDC, now that Google’s on board as the company’s development partner, and that’d be a perfect time to announce the “iPad 12”…or maybe “iPad Neo”? I jest (I hope).

Time to put down my cyber-pen and turn it over to you for your thoughts in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post Apple’s spring 2026 soirée: The rest of the story appeared first on EDN.

Wolfspeed launches first commercially available 10kV SiC power MOSFET

Semiconductor today - Thu, 03/05/2026 - 19:54
Wolfspeed Inc of Durham, NC, USA — which makes silicon carbide (SiC) materials and power semiconductor devices — has announced what it claims is the industry’s first commercially available 10kV SiC power MOSFET. The firm says this unlocks architectural freedom, delivers unprecedented system durability, and advances access to reliable and sustainable power for the most demanding applications. The advance challenges conventions in power conversion technology, delivering a solution to modernize the grid and critical power infrastructure, to accelerate industrial electrification, and to unleash the potential for AI data-center growth, it adds...

MCU enables ASIL D safety and control

EDN Network - Thu, 03/05/2026 - 19:06

Built on a 28-nm process, the Renesas RH850/U2C automotive microcontroller delivers robust connectivity and security for modern E/E architectures. This 32-bit MCU expands the RH850 lineup with a cost-optimized option for chassis and safety systems, battery management, body control, and other ASIL D–rated applications.

The device integrates four RH850 CPU cores running at up to 320 MHz, including two lockstep cores, and up to 8 MB of on-chip flash memory. It combines 10BASE-T1S and TSN Ethernet (1 Gbps/100 Mbps), CAN XL, and I3C with widely used interfaces such as CAN FD, LIN, UART, CXPI, I2C, I2S, and PSI5.

In addition to functional safety support up to ASIL D under ISO 26262, the RH850/U2C meets current cybersecurity requirements in accordance with ISO/SAE 21434. The MCU integrates hardware acceleration for cryptographic algorithms, ranging from post-quantum cryptography (PQC) to those mandated by current Chinese and other international regulations.

The RH850/U2C is available in BGA292 and HLQFP144 packages.

RH850/U2C product page 

Renesas Electronics 

The post MCU enables ASIL D safety and control appeared first on EDN.

VNAs perform production test up to 9 GHz

EDN Network - Thu, 03/05/2026 - 19:06

With typical measurement speeds of 25 µs/point, Copper Mountain’s three SC series VNAs enable efficient testing in both R&D and manufacturing environments. The SC0402, SC0602, and SC0902 two-port analyzers cover a common frequency start of 9 kHz, with upper ranges of 4.5 GHz, 6.5 GHz, and 9 GHz, respectively.

These instruments offer a typical dynamic range of 130 dB (10 Hz IF BW) for precise characterization of RF components and complex systems. Output power can be adjusted from -50 dBm to +5 dBm, with up to 500,001 measurement points/sweep. Measured parameters include S11, S21, S12, and S22.
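
To put the per-point speed and point-count figures in context, here is a rough sweep-time estimate. It is a sketch using only the numbers above; real sweep duration also depends on IF bandwidth, retrace, and band-crossing overhead, so treat it as a lower bound:

```python
# Approximate sweep duration = points x per-point measurement time.
# Ignores retrace, band switching, and IF-bandwidth effects, so this is
# a lower bound derived from the article's figures, not an instrument spec.
POINT_TIME_S = 25e-6  # 25 us/point (typical, per the article)

for points in (201, 10_001, 500_001):
    print(f"{points:>7} points -> {points * POINT_TIME_S * 1e3:.1f} ms")
```

Even a maximum-length 500,001-point sweep comes in around 12.5 s at this rate, while a routine 201-point production sweep is on the order of 5 ms.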

Standard software capabilities, available without a paid license, include linear and logarithmic sweeps, power sweeps, and time-domain conversion with gating. Additional functions include S-parameter embedding and de-embedding, limit testing, frequency offset, and vector mixer calibration.

Automation is supported through LabVIEW, Python, MATLAB, .NET, and other programming environments, allowing up to 16 independent channels with 16 traces/channel. A manufacturing test plug-in is available as an add-on to integrate the VNA software into existing automated manufacturing and QA processes.

The SC series VNAs carry MSRPs of $13,995 (SC0402), $15,995 (SC0602), and $17,995 (SC0902).

Copper Mountain Technologies 

The post VNAs perform production test up to 9 GHz appeared first on EDN.

MCU brings USB-C power to embedded devices

EDN Network - Thu, 03/05/2026 - 19:05

Infineon’s EZ-PD PMG1-B2 MCU integrates a single-port USB Type-C PD controller with a 55-V buck-boost controller for charging 2- to 12-cell Li-ion battery packs. Compliant with the latest USB Type-C and PD specifications, the device accepts an input voltage range of 4.5 V to 55 V with switching frequencies programmable from 200 kHz to 700 kHz.
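
A quick check shows why the 55-V rating and buck-boost topology fit the stated 2- to 12-cell range. The per-cell limits below (2.5 V empty, 4.2 V full) are common Li-ion values assumed for illustration, not figures from Infineon:

```python
# Does a 2- to 12-cell Li-ion pack fit under the device's 55 V rating?
# Per-cell limits are typical Li-ion values assumed for illustration.
CELL_V_MIN, CELL_V_MAX = 2.5, 4.2
VIN_MIN, VIN_MAX = 4.5, 55.0  # input range from the article

for cells in (2, 12):
    lo, hi = cells * CELL_V_MIN, cells * CELL_V_MAX
    print(f"{cells:2d}S pack: {lo:.1f} V to {hi:.1f} V")
```

A 2S pack spans roughly 5.0 V to 8.4 V and a 12S pack roughly 30 V to 50.4 V; since a USB-C PD source can sit above or below the pack voltage at any given time, a buck-boost stage (rather than a plain buck or boost) is required to charge across the whole range.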

The MCU targets USB-C-powered embedded devices in consumer, industrial, and communications markets, where devices make use of its integrated functions. Typical applications include cordless power and gardening tools, vacuum cleaners, kitchen appliances, e-bikes, drones, and robots.

The EZ-PD PMG1-B2 features a 32-bit Arm Cortex-M0 processor with 128 KB of flash and 8 KB of SRAM for customizable embedded applications. It integrates analog and digital peripherals—including ADCs, PWMs, UART/I2C/SPI interfaces, and timers—reducing PCB space and BOM. A comprehensive SDK and software suite simplify development and system design.

Production of the EZ-PD PMG1-B2 is expected to begin in the second quarter of 2026. Samples, technical documentation, and evaluation boards are available upon request.

EZ-PD PMG1-B2 product page 

Infineon Technologies 

The post MCU brings USB-C power to embedded devices appeared first on EDN.

Passive limiter shields electronics from RF threats

EDN Network - Thu, 03/05/2026 - 19:05

Teledyne Microwave UK’s B3LT98026 is a passive wideband limiter designed to protect sensitive receiver front ends in defense and military communication systems. It operates from 0.1 GHz to 20 GHz and withstands up to 10 W peak input power under defined pulse width and duty cycle conditions.

The device enhances the survivability of Radar Electronic Support Measures (R-ESM) and Electronic Warfare (EW) systems operating in complex threat environments. It provides continuous, always-on protection against high-power RF and emerging Directed Energy Weapons (DEWs).

Across the operating band, the limiter maintains a maximum insertion loss/noise figure of 2.0 dB and a maximum input/output VSWR of 1.5:1. A fast 40-ns recovery time enables rapid return to nominal sensitivity following high-power events. The device operates over a temperature range of −20°C to +85°C, supporting deployment in demanding environments.
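
The 1.5:1 VSWR spec can be restated in more familiar terms using the standard VSWR-to-reflection-coefficient conversion (textbook formulas, not figures from the datasheet):

```python
import math

# Standard conversions from VSWR to reflection coefficient and return loss.
def vswr_to_gamma(vswr: float) -> float:
    return (vswr - 1) / (vswr + 1)

def return_loss_db(vswr: float) -> float:
    return -20 * math.log10(vswr_to_gamma(vswr))

gamma = vswr_to_gamma(1.5)                       # |Gamma| = 0.2
rl = return_loss_db(1.5)                         # ~14 dB return loss
mismatch_loss = -10 * math.log10(1 - gamma**2)   # ~0.18 dB

print(f"|Gamma| = {gamma:.2f}, return loss = {rl:.1f} dB, "
      f"mismatch loss = {mismatch_loss:.2f} dB")
```

In other words, a 1.5:1 VSWR corresponds to roughly 14 dB of return loss, so mismatch contributes well under 0.2 dB on top of the 2.0 dB insertion-loss ceiling.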

The compact SMA-based housing supports straightforward integration into existing architectures without requiring system redesign. The B3LT98026 is also compatible with Teledyne’s Phobos mast top unit and can accommodate additional RF elements, such as filters, when required.

The B3LT98026 is now available for evaluation in defense and EW systems.

B3LT98026 product page 

Teledyne Microwave UK 

The post Passive limiter shields electronics from RF threats appeared first on EDN.

Nordic debuts multiple cellular IoT products

EDN Network - Thu, 03/05/2026 - 19:05

Nordic Semiconductor expands its ultra-low-power cellular IoT portfolio with Cat 1 bis, satellite NTN, and advanced LTE-M/NB-IoT with edge AI. Leveraging the proven nRF91 series, the nRF92 and nRF93 deliver a scalable, secure platform for global connectivity.

The nRF92 LTE-M/NB-IoT and satellite NTN series introduces the company’s smallest, most highly integrated, and power-efficient cellular solution. It combines a high-performance application MCU with Axon neural processing units, a multi-constellation GNSS receiver, Wi-Fi positioning, and sensor coprocessing. Lead customer sampling is underway, with general availability expected in early 2027.

The nRF93M1 is an LTE Cat 1 bis cellular IoT module with integrated MCU, LTE modem, GNSS receiver, and Wi-Fi positioning. It supports up to 10 Mbps downlink and 5 Mbps uplink, offers global LTE coverage, and is designed for low-power, compact applications. The module is compatible with nRF Cloud for device management, firmware updates, and location services. Lead customers are currently developing products with the nRF93M1, with general availability starting mid-2026.

Additionally, Nordic has enhanced the nRF91 LTE-M/NB-IoT series with 3GPP-compliant GEO and LEO satellite NTN connectivity and sub-GHz fallback to maintain connectivity when public networks are unavailable. The company also introduced the nRF91M1 module, a compact Smart Modem that simplifies adding cellular connectivity to host–modem designs.

Nordic Semiconductor 

The post Nordic debuts multiple cellular IoT products appeared first on EDN.

📌 Registration for NMT-2026 has opened

News - Thu, 03/05/2026 - 18:14

The first stage of preparation for the National Multisubject Test is under way: registration, which runs through 2 April inclusive.

What do Ukrainian applicants need to do?

Smartphone shipments to fall 7% in 2026 amid memory constraints and geopolitical pressures

Semiconductor today - Thu, 03/05/2026 - 15:54
Based on assumptions on first-quarter memory prices (which indicate that pricing pressure and constrained supply will begin to ease in second-half 2026), Omdia’s latest outlook forecasts that global smartphone shipments will fall by about 7% year-on-year in 2026...

Circuits Integrated Hellas and Reach Power sign multi-year strategic MOU

Semiconductor today - Thu, 03/05/2026 - 15:43
Satellite communication (Satcom) technology provider Circuits Integrated Hellas (CIH) of Athens, Greece and wireless power-at-a-distance technologies provider Reach Power of Redwood City, CA, USA, have announced a memorandum of understanding (MOU) establishing a multi-year strategic alliance. Focused on joint development of integrated radio frequency (RF)/millimeter-wave (mmWave) and wireless power and data transfer (WPDT) solutions, the alliance will target Satcom, defense, energy transfer, and other phased-array applications...

EV system design from components to modules to software

EDN Network - Thu, 03/05/2026 - 15:01

Electric vehicle (EV) design at the system level is a rapidly evolving landscape encompassing components, hardware modules, and software platforms. So, on the first day of Automotive Tech Forum 2026, which was dedicated to EV designs, a panel titled “Powering the Electric Vehicle: From Semiconductors to Systems” took a deep dive into the system-level intricacies of EV designs.

Carsten Himmele, marketing manager for automotive at Allegro MicroSystems, highlighted the growing presence of silicon carbide (SiC) in traction inverters due to its ability to deliver higher bandwidth and efficiency. However, while talking about motor control for EV traction, he also mentioned challenges in operating in harsher electrical environments.

“SiC brings in higher bandwidth for motor control, but it also makes the electrical environment somewhat harsher,” he said. Himmele added that advanced phase-current sensing and inductive rotor-position sensing are essential for overcoming these challenges. “Moreover, system-grade building blocks reduce the number of external components and improve design efficiency,” he concluded.

That’s where gallium nitride (GaN) offers key advantages, said Alex Lidow, CEO and co-founder of Efficient Power Conversion (EPC). “GaN is smaller, more efficient, and more rugged compared to silicon and SiC,” he said. “It’s particularly effective in 48-V systems, which complement the emerging 800-V architectures.”

Lidow added that while EVs with 48-V systems are now leading the way, GaN devices are 5 to 7 times more efficient than their MOSFET predecessors. “GaN is powering onboard chargers, DC/DC converters, battery cooling pumps, steering systems, and infotainment.”

Rohan Samsi, VP of GaN Business Division at Renesas, also talked about the paradigm shift GaN brings to power converters, enabling simplified single-stage designs. “The bidirectional switch allows you to take out something that was a multi-stage converter and replace it with a single stage.” On integration, Samsi emphasized that GaN’s strengths in current sensing, temperature sensing, and gate drive enable holistic EV solutions.

Finally, Kerry Grand, marketing manager for Simulink Automotive at MathWorks, turned the discussion toward the software aspects of design. He was asked to brief the panel on the latest developments in EV traction from a system-integration standpoint, and on what hardware testing reveals about the present and future of EV drivetrains.

Grand began with an insight into EV system-level design through simulation and model-based design. Then he identified enduring challenges in EV system design, including high-voltage isolation, battery life optimization, and thermal management. “Simulating detailed thermal systems offers automotive OEMs the ability to trade off temperature limits without compromising system performance.”

At a time when EV design building blocks like traction inverters and battery management systems (BMS) are continually adding functionality, system-level challenges are a critical area to watch. The panel discussion at Automotive Tech Forum 2026 offers a glimpse of the design challenges and viable solutions in this realm.

You can watch this session along with all sessions from the Automotive Tech Forum 2026 virtual event on demand at www.automotiveforum.eetimes.com.

Related Content

The post EV system design from components to modules to software appeared first on EDN.

Cardiac monitors: Inconspicuous, robust data collectors

EDN Network - Thu, 03/05/2026 - 15:00

As a follow-up to last month’s narrative of a cardiac abnormality thankfully detected by wearable devices, this engineer details the monitoring system he subsequently donned for a month.

Two-plus years ago, my contributor-colleague John Dunn described his most recent experience with a wearable cardiac monitor. And, as any of you who read one of my blog posts last month already know, I more recently followed in his footsteps. I don’t yet know the outcome of my heart health study; my follow-up appointment with the cardiologist is a week away as I type these words. Regardless, I thought you might still find it interesting to learn about the gear I toted around, stuck to my chest (and in my pocket) for 30 days, and my experiences using it.

The system I used was Philips’ MCOT (Mobile Cardiac Telemetry), specifically its “patch” variant:

Here’s an overview video; others, plus documentation, are at the product support page:

I took several “selfies” of the sensor in place on my chest but ultimately decided to save you all the abject horror of seeing any of them. Instead, I’ll stick with these stock images:

My initial scheduled meeting with the cardiologist took place on December 12, 3+ weeks after our “introduction” at the emergency room. I’d been on both beta blockers (to regulate my heartbeat) and blood thinners (in case my prior irregular rhythm had resulted in the formation of a clot) since my initial visit to the hospital in mid-November. The cardiologist ordered the monitor, which arrived a bit more than a week later; I began wearing it the day after Christmas.

Here’s the box that the system comes in:

Open sesame:

The first thing I saw was the initial sensor patch, along with the return shipping packaging bag. Below it was the template I used for proper placement each time I stuck a patch on my chest:

The bulk of the contents were contained in two inner boxes, the first labeled “Getting Started” and the second referred to as “Monitoring”. Inside the first:

were several primary items:

along with installation and operation overview instructions:

The monitoring device, both here and in subsequent photos accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

whose dimensions and Android operating system foundation, along with the legacy presence of an analog headphone jack alongside the USB-C port:

and a multi-camera rear array in a specific arrangement:

suggest it to be a custom-software derivative of Samsung’s Galaxy A52 smartphone, introduced in March 2021:

It came with the translucent green case pre-installed, by the way. Here are some other overview images of the smartphone…err…monitoring device (its left side was unmemorable so I didn’t bother):

Next up was a small scrub pad used to further prepare my chest for patch application, after initial hair shaving. And, of course, there was the sensor itself:

Its edge arrived already abraded; I’m guessing that it had already been popped open, with its rechargeable battery subsequently replaced, at least once prior to its arrival at my residence:

Now for box #2:

More instructions, of course:

along with more patches, a more detailed instruction booklet, and the dual-charging unit:

The AC/DC adapter has two USB-A outputs:

which can be used in parallel:

One, connected to a red USB-A to USB-C cable, is used for daily recharge of the “monitoring device” (smartphone). The other (black, this time) cable terminates in a charging dock for the sensor, which I used every five days in conjunction with (and in-between) the patch removal and replacement steps:

Here’s how the initial “monitoring device” bootup went (since this was a custom Android-plus-app build, I wasn’t able to grab screenshots directly from the smartphone, perhaps obviously):

After initial charging of both the monitoring device and sensor, I continued the setup process:

Here’s what a patch looks like when you first take it out of the package; top:

and bottom:

Pressing down on the sensor while aligned with the patch base snaps it into place:

A briefly illuminated LED subsequently indicates that the sensor is correctly installed, at which point the monitoring device is able to “see” it (broadcasting over Bluetooth, presumably Low Energy):

At this point, you can peel away the protective clear plastic cover over the back side adhesive:

All that’s left is to press it into place on your chest…and then peel off the existing patch, pop out and recharge the sensor and redo the installation process five days later:

Lather, rinse, and repeat until the total 30-day cycle is over, which the system thoughtfully tracks on your behalf. Then ship it all back to the manufacturer.

The monitoring device, which regularly receives data transmissions from the sensor, then periodically uploads the data to the “cloud” server over an LTE or EV-DO cellular data connection.

If you forget to keep the monitoring device close by, data won’t be lost, at least for a while. There’s an unknown amount of memory onboard the sensor (yes, I searched for a teardown, alas unsuccessfully), albeit presumably not the full 2 GBytes allocated to this alternative device designed solely for local data logging. But the monitoring device will still alert you (both visually and audibly) to the lost wireless (again, presumably Bluetooth’s LE variant) connection:
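The store-and-forward behavior described above can be sketched as a bounded ring buffer; this is a hypothetical illustration only, since the sensor’s actual firmware, buffer capacity, and record format are unknown:

```python
from collections import deque

class SensorBuffer:
    """Hypothetical sketch of on-sensor buffering: samples accumulate in a
    bounded ring buffer while the wireless link to the monitoring device is
    down, then drain once the connection is restored."""

    def __init__(self, capacity):
        # Once capacity is reached, the oldest samples are silently dropped,
        # which is why an extended separation eventually loses data.
        self._buf = deque(maxlen=capacity)

    def record(self, sample):
        self._buf.append(sample)

    def drain(self):
        # On reconnect, hand everything buffered to the uplink and reset.
        samples = list(self._buf)
        self._buf.clear()
        return samples

buf = SensorBuffer(capacity=3)
for ecg_sample in [101, 102, 103, 104]:  # link down: keep buffering locally
    buf.record(ecg_sample)
print(buf.drain())  # link restored: [102, 103, 104] (oldest sample lost)
```

The key design point is the bounded capacity: buffering protects against brief separations, not indefinite ones.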

You’ll also be alerted if the sensor’s integrated battery drops to a low level and recharge is necessary (I proactively did this every five days, as previously noted, since I’d received six total patches):

If you feel like something’s amiss with your “ticker” (heart pounding, fatigue, etc.) you can tap on the icon at the center of the display and the monitoring device will send an alert “flag” for subsequent correlation with the potential cardiac arrhythmia data collected at that same time:

And in closing, here are some shots of other monitoring device display screens that I captured:

By the time you see this, assuming I don’t need to reschedule for some reason, I will have met with my cardiologist and gotten the (hopefully positive) results. I’ll follow up in the comments. And please also share your thoughts there! Thanks as always for reading.

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

 

The post Cardiac monitors: Inconspicuous, robust data collectors appeared first on EDN.

Volta initiates bioleaching gallium recovery study with Laurentian University

Semiconductor today - Thu, 03/05/2026 - 12:49
Mineral exploration company Volta Metals Ltd of Toronto, Canada (which owns, has optioned and is currently exploring a critical minerals portfolio of rare-earths, gallium, lithium, cesium and tantalum projects in Ontario) has begun laboratory-scale bioleaching recovery test work primarily targeting gallium and secondarily rare-earth elements (REEs) at Dr Vasu Appanna’s laboratory of Biomine Research and Development at Laurentian University in Sudbury, Ontario. Laurentian University is recognized for its applied research expertise in mining, mineral processing, and earth sciences...

Semtech expands data-center portfolio by acquiring HieFo for $34m

Semiconductor today - Thu, 03/05/2026 - 11:28
High-performance semiconductor, Internet of Things (IoT) systems and cloud connectivity service provider Semtech Corp of Camarillo, CA, USA has acquired HieFo Corp of Alhambra, CA – which manufactures indium phosphide (InP) optoelectronic devices for optical transceivers used across data-center interconnects (DCI) and intra-data-center interconnects – for about $34m in cash...

Navitas and EPFL demo 250kW solid-state transformer

Semiconductor today - Thu, 03/05/2026 - 11:19
In booth #2027 at the IEEE Applied Power Electronics Conference (APEC 2026) in San Antonio, Texas (22–26 March), Navitas Semiconductor Corp of Torrance, CA, USA is exhibiting a 250kW solid-state transformer (SST) platform developed by the Power Electronics Laboratory of Switzerland’s École Polytechnique Fédérale de Lausanne (EPFL) that enables the grid architecture required by next-generation data centers, eliminating bulky low-frequency transformers while improving end-to-end efficiency...

Kyiv Polytechnic receives additional grant support from Amazon Web Services

News - Thu, 03/05/2026 - 09:54

Amazon Web Services (AWS) has provided KPI with its second grant since the start of the full-scale war. In 2022 the university received an initial emergency grant, which made it possible to quickly migrate its infrastructure to the cloud environment. At that time, its digital services ran in the partner environment of the EPAM company. The university later moved fully to its own AWS account.

Arrow Electronics and Infineon introduce 240W USB-C PD 3.2 reference design for battery-powered motor control applications

ELE Times - Thu, 03/05/2026 - 08:50

Arrow Electronics and Infineon Technologies AG have announced REF_ARIF240GaN, a 240W USB Power Delivery (PD) 3.2 reference design for battery-powered motor control applications that require high performance and power efficiency in a compact form factor. This design complements the existing portfolio of joint reference design solutions from Arrow and Infineon, supporting the ongoing migration of customer designs to USB-C technology.

REF_ARIF240GaN is specifically designed to support the launch of EZ-PD™ PMG1-B2, Infineon’s newest USB PD 3.2 controller, featuring up to 240W USB sink capability and integrated buck-boost functionality in a compact single package. It provides developers with a ready-to-use platform for implementing high-power USB-C charging alongside efficient motor drive control features. It brings fast charging capabilities for 2- to 12-cell Li-ion battery packs, simplifying the overall design and reducing component count.

Motor control functionality is delivered using Infineon’s PSOC C3, a 180MHz Arm Cortex-M33 microcontroller, and highly efficient 100V CoolGaN G5 transistors. By combining a fully interoperable USB-C PD stack with high-performance sensor and sensorless GaN motor control on a single platform, the reference design enables compact, high-efficiency battery-powered systems while shortening development time, reducing bill of materials cost and space required.

Target applications include light electric vehicles (e-bikes, e-scooters and personal mobility devices), along with power tools, vacuum cleaners, kitchen appliances, garden equipment and robotics.

The reference design can be obtained upon request. Advanced technical support and customisation services are available from Arrow’s engineering solutions centre (ESC).

Visitors to embedded world 2026 can see the joint Arrow and Infineon solutions for motor control and battery-powered applications at Arrow’s stand 4A-342.

About Arrow Electronics
Arrow Electronics (NYSE:ARW) sources and engineers technology solutions for thousands of leading manufacturers and service providers. With 2025 sales of $31 billion, Arrow’s portfolio enables technology across major industries and markets. Learn more at arrow.com.

The post Arrow Electronics and Infineon introduce 240W USB-C PD 3.2 reference design for battery-powered motor control applications appeared first on ELE Times.

Robotics Engineering: The Architectural Evolution Behind IT–OT Convergence

ELE Times - Thu, 03/05/2026 - 08:23

Factories today operate as dense mechanical ecosystems, whether in automotive assembly lines or semiconductor fabrication units. Traditionally, each robotic and mechanical element performed predefined, deterministic functions within isolated automation cells. However, as shop floors become increasingly machine-intensive and interconnected, operational complexity rises proportionally. Managing these environments now requires more than mechanical precision—it demands architectural coordination across layers of control and intelligence.

In this context, the convergence of Information Technology (IT) and Operational Technology (OT) is fundamentally reshaping robotics engineering. Data processing layers—analytics engines, business logic systems, and enterprise platforms—are no longer separated from operational control systems. At the same time, the physical layer, comprising sensors, actuators, servo drives, and Programmable Logic Controllers (PLCs), is becoming increasingly tightly integrated with edge compute and network infrastructure. Robotics systems are no longer designed as standalone motion units; they are engineered as nodes within a larger, connected control ecosystem.

“Traditional automation tools were built for a high-volume, low-variability environment. But today’s market demands agility,” says Ujjwal Kumar, Former Group President of Teradyne Robotics.

This architectural integration is shifting robotics engineering from a purely mechanical discipline toward system-level design—where communication protocols, deterministic networking, cybersecurity, and software orchestration are as critical as torque curves, kinematics, and payload specifications.

Adaptive Systems

At the core of this transformation lies the emergence of adaptive robotic systems. In practical terms, adaptability on the shop floor means the ability to reconfigure, scale, and modify operational behavior through software-defined control and network orchestration, rather than through mechanical redesign. Modern robots are no longer confined to fixed, pre-programmed routines. Equipped with AI models, IIoT connectivity, and high-resolution sensor feedback, they can interpret environmental inputs, process real-time data streams, and dynamically adjust execution parameters.

“The big difference is that traditional automation was a custom-made, perfect solution for one application. The new age of AI-integrated robotics has standard products serving multiple applications. You go into multiple applications through software and some end-of-arm tooling differences,” says Ujjwal Kumar, Former Group President of Teradyne Robotics.

As manufacturers pursue higher efficiency alongside greater product diversity, such adaptability becomes essential. Integrated control and data layers allow robots to transition between production tasks or product variants with minimal downtime, supporting high-mix manufacturing environments. Simultaneously, context-aware operations enable robotic systems to respond to signals from enterprise platforms such as ERP and MES, aligning execution with demand fluctuations, material availability, and downstream constraints.

The Build Architecture: Sensors, Control, and Communication Layers

To understand the engineering behind IT–OT convergence, it is useful to examine the architectural layers that define modern shop-floor robotics. Traditionally, industrial systems followed hierarchical models such as ISA-95, where field devices, control systems, and enterprise platforms operated in structured tiers with limited cross-layer interaction. Today’s robotic systems, however, are increasingly designed around a more unified Industrial Internet of Things (IIoT) architecture—where sensing, control, computation, and enterprise integration operate within a tightly interconnected framework.

“The groundbreaking automation innovations of the future won’t come from one single company but from close cross-technology ecosystem collaborations,” says Ujjwal Kumar, Former Group President of Teradyne Robotics.

At the foundation lies the physical and sensing layer. Modern robots are embedded with dense networks of encoders, force–torque sensors, high-resolution vision systems, vibration monitors, and environmental sensors—particularly critical in semiconductor manufacturing. Unlike earlier generations, where sensors primarily supported local closed-loop motion control, today’s sensing infrastructure generates continuous, time-synchronised data streams. These data flows serve a dual purpose: ensuring precision motion control while simultaneously feeding analytics and optimisation engines upstream.

Above this sits the control and communication layer, where deterministic execution remains paramount. PLCs, motion controllers, industrial PCs, and real-time operating systems govern microsecond-level synchronisation of servo drives and actuators. However, this layer has evolved from rigid, ladder-logic-driven hierarchies to hybrid architectures that combine deterministic control with networked intelligence. Industrial Ethernet, fieldbus systems, and increasingly Time-Sensitive Networking (TSN) ensure that motion commands and data packets coexist without compromising latency or jitter requirements. Control systems are no longer isolated—they are communicative nodes within a broader industrial network.

The next shift occurs at the edge. Edge computing nodes now preprocess high-frequency sensor data, execute AI inference models, and filter operational information before it propagates upward. Event-driven architectures and publish–subscribe communication patterns allow machines to update a shared operational state across the plant continuously. Rather than relying solely on hierarchical polling mechanisms, modern factories operate through near real-time data dissemination, enabling contextual awareness across production assets.
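The event-driven pattern described above can be sketched with a minimal in-process publish–subscribe bus. This is a toy illustration under stated assumptions: the topic name and vibration threshold are hypothetical, and a real plant would use a broker-based protocol such as MQTT, OPC UA PubSub, or DDS rather than in-process callbacks:

```python
from collections import defaultdict

class PlantBus:
    """Minimal in-process publish-subscribe bus (illustrative sketch only)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a callback to be invoked for every message on `topic`.
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Fan the payload out to all subscribers of `topic`.
        for callback in self._subscribers[topic]:
            callback(payload)

# An edge node filters high-frequency vibration samples and publishes only
# threshold-crossing events upstream, rather than forwarding every sample.
bus = PlantBus()
events = []
bus.subscribe("cell3/vibration/alert", events.append)

VIBRATION_LIMIT = 4.0  # hypothetical threshold, mm/s RMS
for sample in [1.2, 2.8, 4.6, 3.1]:
    if sample > VIBRATION_LIMIT:
        bus.publish("cell3/vibration/alert", {"rms": sample})

print(events)  # [{'rms': 4.6}]
```

The design choice this illustrates is decoupling: publishers need not know who consumes an event, so analytics engines, digital twins, and maintenance systems can subscribe to the same shared operational state without changes to the edge node.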

James Davidson, Chief Artificial Intelligence Officer, Teradyne Robotics, says, “AI is transforming robots from tools into intelligent collaborators that can perceive, learn, and adapt.”

At the enterprise integration level, robotics systems increasingly interact with MES and ERP platforms, digital twin environments, and predictive maintenance engines. Data flow is no longer unidirectional. Demand signals, material constraints, and quality metrics can influence robotic execution parameters in near real time. This bidirectional exchange is the practical manifestation of IT–OT convergence—where business logic and machine logic intersect.

Underpinning all these layers is a security and infrastructure framework that ensures resilience. As robots become connected assets, cybersecurity, network segmentation, device authentication, and secure firmware management become integral engineering considerations rather than afterthoughts. Connectivity without security would undermine determinism and operational continuity.

Redefining the Core of Robotics Engineering 

For decades, robotics engineering on shop floors was largely centred on mechanical excellence. Engineers focused on motion accuracy, payload capacity, repeatability, structural rigidity, and cycle-time optimisation. The primary goal was to design a robot that could execute a defined task with precision and reliability within a controlled cell.

That foundation still matters—but it is no longer enough. As IT–OT convergence reshapes shop floors, robotics engineering now extends far beyond mechanical design. Engineers must integrate advanced sensors, real-time communication networks, edge computing systems, AI-driven analytics, and enterprise software interfaces into the robot’s architecture. A robot is no longer just a mechanical arm with a controller; it is a connected, data-producing, and data-consuming system embedded within a larger digital ecosystem.

This means engineering decisions are no longer confined to gears, motors, and control loops. Network latency can influence motion stability. Data accuracy affects predictive maintenance outcomes. Software updates can modify operational behaviour. Cybersecurity vulnerabilities can interrupt production. Mechanical performance is now intertwined with software reliability and network integrity.

“Physical AI equips robots with the capacity to perceive and respond to the real world, providing the versatility and problem-solving capabilities that are often required by complex use cases that have been out of scope until now,” says James Davidson, Chief AI Officer, Teradyne Robotics.

In practical terms, robotics engineers are moving from designing machines to designing intelligent systems. They must think about interoperability, data structures, communication protocols, and secure integration—alongside torque curves and kinematics. The robot is no longer an isolated automation asset; it is part of a coordinated production architecture that responds to real-time information from across the enterprise.

The shift is clear: robotics engineering is evolving from a purely mechanical discipline into a multidisciplinary field where mechanics, electronics, networking, and software operate as a unified whole.

Conclusion 

As factories continue to evolve into connected, data-driven environments, robotics can no longer be engineered as standalone mechanical systems. The convergence of IT and OT is embedding intelligence, connectivity, and responsiveness directly into the core of robotic architecture. What was once a discipline defined by mechanical precision is now defined by system integration. 

“Taking a modern Industry 5.0 approach requires prioritisation of adaptability, empowering line workers with robots that can be reprogrammed and redeployed as demand shifts, which is the biggest benefit of having these very flexible systems coming online quickly,” says Ujjwal Kumar, Former Group President of Teradyne Robotics.

The competitive edge will not belong merely to the fastest or strongest robots, but to those designed as intelligent, interoperable components of a unified production ecosystem. In this new industrial reality, robotics engineering is no longer just about motion—it is about orchestration.

The post Robotics Engineering: The Architectural Evolution Behind IT–OT Convergence appeared first on ELE Times.

Pages

Subscribe to the Department of Electronic Engineering (Кафедра Електронної Інженерії) content aggregator