Feed aggregator

New LX4580 – Highly Integrated 24‑Channel Mixed‑Signal IC for Aviation & Defence Actuation Systems

ELE Times - 9 hours 49 min ago
Microchip Technology announces the LX4580, a 24‑channel mixed‑signal IC designed to streamline high‑reliability actuation control systems for aviation and defence applications. The LX4580 is highly integrated and replaces multiple discrete components with a single device that supports synchronised data acquisition, fault monitoring and motor control—reducing system size, weight and complexity.
The LX4580 is offered in a compact 144‑pin LQFP package and developed for applications including More Electric Aircraft (MEA), guided defence systems, drones and launch platforms. The LX4580 integrates pressure sensing, temperature measurement, PWM motor drive outputs, current sensing, Hall effect sensor inputs, dual LVDT/resolver interfaces and dual high‑speed SAR ADCs. This level of integration delivers broad sensor coverage, precise timing alignment and improved reliability compared to multi‑device architectures.
“The LX4580 brings together an exceptional level of functionality in a single device, allowing our customers to simplify designs that previously required multiple ICs,” said Ronan Dillion, director of Microchip’s high-reliability and RF business unit. “By reducing system complexity and providing robust evaluation tools, we’re making it easier for engineers to accelerate development and deliver the next generation of reliable actuation systems.”
The device’s redundant architecture is tailored for mission‑critical environments that demand fault tolerance and deterministic performance. By consolidating functions commonly spread across MCUs, ADCs, DACs, driver ICs and regulators, the LX4580 reduces board space and wiring complexity, supporting manufacturers’ goals to minimise overall system weight while meeting demanding safety and certification requirements.

The post New LX4580 – Highly Integrated 24‑Channel Mixed‑Signal IC for Aviation & Defence Actuation Systems appeared first on ELE Times.

Last-level cache has become a critical SoC design element

EDN Network - 10 hours 36 min ago

As AI workloads extend across nearly every technology sector, systems must move more data, use memory more efficiently, and respond more predictably than traditional design methodologies allow. These pressures are exposing limitations in conventional system-on-chip (SoC) architectures as compute becomes increasingly heterogeneous and traffic patterns become more complex.

Modern SoCs integrate CPUs, GPUs, NPUs, and specialized accelerators that must operate concurrently, placing unprecedented strain on memory hierarchies and interconnects. Keeping processing units fully utilized requires high-bandwidth, low-latency access to data, making the memory hierarchy as critical to overall system effectiveness as raw compute performance.

On-chip interconnects move data quickly and predictably, but once requests reach external memory, latency increases, and timing becomes less consistent. As more data accesses go off chip, the gap between compute throughput and data availability widens. In these conditions, processing engines stall while waiting for memory transactions to complete, creating data starvation.

 

The role of last-level cache

To mitigate this imbalance, SoC designers are increasingly turning to last-level cache (LLC). Positioned between external memory and internal subsystems, LLC stores frequently accessed data close to compute resources, allowing requests to be served with significantly lower latency.

Unlike static buffers, an LLC dynamically fetches and evicts cache lines based on runtime behavior without direct CPU intervention. When deployed effectively, this architectural layer delivers measurable benefits, including substantial reductions in external memory traffic and power consumption.
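The fetch-and-evict behavior described above can be sketched in a few lines. This is a toy, fully associative LRU model for illustration only (the class name `SimpleLLC` and all parameters are hypothetical, not an Arteris or vendor API); real LLCs are set-associative and banked, but the hit/miss/evict mechanics are the same in spirit:

```python
from collections import OrderedDict

class SimpleLLC:
    """Toy fully associative LRU cache illustrating runtime fetch/evict."""
    def __init__(self, capacity_lines):
        self.capacity = capacity_lines
        self.lines = OrderedDict()  # address -> present, kept in recency order
        self.hits = 0
        self.misses = 0

    def access(self, addr):
        if addr in self.lines:
            self.hits += 1
            self.lines.move_to_end(addr)        # mark most recently used
        else:
            self.misses += 1
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)  # evict least recently used line
            self.lines[addr] = True             # fetch line from external memory

llc = SimpleLLC(capacity_lines=4)
for addr in [1, 2, 3, 1, 2, 4, 5, 1]:
    llc.access(addr)
print(llc.hits, llc.misses)  # 3 5
```

Note that no CPU intervention appears anywhere in the access path: the cache decides what to keep purely from the observed address stream, which is the property the article attributes to an LLC versus a static buffer.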

Simply including an LLC does not guarantee improved performance. Configuring the cache correctly is a complex task that must account for workload characteristics, compute-unit behavior, and real-time constraints. Poorly chosen parameters can waste area without meaningful gains, while under-provisioned configurations may fail to alleviate memory bottlenecks.

Architects must carefully determine cache capacity, the number of cache instances, and internal banking structures to support sufficient parallelism. Partitioning strategies must also be defined to ensure that individual IP blocks receive the bandwidth and predictability they require. While some settings can be adjusted later through software, foundational decisions on cache size, banking, and associativity must be finalized early in the development cycle.

The role of last-level cache is shown in successful designs. Source: Arteris

Factors influencing cache behavior

Banking configuration illustrates this trade-off clearly. Increasing the number of cache banks improves internal parallelism and throughput, but it also increases silicon area. Workloads with largely sequential access patterns may see limited benefit from aggressive banking.

In contrast, highly parallel workloads, especially those driven by AI accelerators or GPUs, require substantial internal concurrency to maintain utilization. Because these characteristics vary by application, banking decisions must be informed by realistic workload analysis during the architectural phase.

Cache capacity is just as important. A cache that is too small struggles to achieve acceptable hit rates, pushing excessive traffic to external memory. Conversely, oversizing the cache often yields diminishing returns relative to the additional area consumed. The optimal balance depends on actual runtime behavior rather than theoretical assumptions.

In practice, acceptable hit rates vary widely. Some systems can tolerate moderate miss rates if latency and power reductions outweigh the cost, while real-time applications demand consistently high hit rates to maintain deterministic behavior.
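The trade-off between hit rate and miss cost can be made concrete with the standard average-memory-access-time (AMAT) formula. The latency figures below are illustrative assumptions, not measurements from any specific SoC:

```python
def amat(hit_latency_ns, miss_penalty_ns, hit_rate):
    """Average memory access time: hit latency plus miss-rate-weighted penalty."""
    return hit_latency_ns + (1.0 - hit_rate) * miss_penalty_ns

# Assumed numbers for illustration: 10 ns LLC hit, 100 ns external-DRAM penalty.
for hr in (0.50, 0.80, 0.95):
    print(f"hit rate {hr:.0%}: average access {amat(10, 100, hr):.0f} ns")
```

Even a modest move from 50% to 80% hits halves the average latency under these assumptions, which is why some systems tolerate moderate miss rates while real-time systems push for consistently high hit rates.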

This variability underscores why no single LLC configuration is universally optimal. Mobile devices may require only a few megabytes of cache to balance power efficiency and responsiveness, while servers and HPC platforms often deploy tens or hundreds of megabytes to reduce DRAM pressure. Despite these differences, successful designs rely on a common principle: cache parameters are derived from the workloads the system will actually execute.

Managing shared caches

Diversity in system demands further complicates how an LLC must be structured. Automotive chips built around concurrent vision processing and strict timing requirements operate under very different constraints than data-center platforms optimized for accelerator-heavy inference at scale. Even within a single chip, CPUs, accelerators, and I/O subsystems generate distinct access patterns with different latency sensitivities.

The LLC must accommodate all of them without allowing one workload to interfere with another’s real-time guarantees. This makes early understanding of system-level access behavior essential, since cache configuration otherwise becomes speculative at best.

Partitioning provides a powerful mechanism for preserving determinism in such environments. By allocating portions of cache capacity to specific clients, architects can prevent high-bandwidth workloads from starving latency-sensitive subsystems. This capability is particularly critical in environments that must meet strict timing guarantees. Partition sizes must be tuned carefully, as oversizing wastes area while undersizing risks violating latency requirements.
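Partitioning can be sketched as giving each client its own quota of lines, so evictions never cross client boundaries. This is a minimal illustrative model (the `PartitionedLLC` class and the client names are hypothetical, not a CodaCache interface); real designs typically partition by ways or sets:

```python
from collections import OrderedDict

class PartitionedLLC:
    """Toy partitioned cache: each client evicts only within its own quota."""
    def __init__(self, quotas):
        self.quotas = quotas                      # client -> max cache lines
        self.parts = {c: OrderedDict() for c in quotas}

    def access(self, client, addr):
        part = self.parts[client]
        if addr in part:
            part.move_to_end(addr)                # refresh LRU position
            return "hit"
        if len(part) >= self.quotas[client]:
            part.popitem(last=False)              # evict this client's LRU line only
        part[addr] = True
        return "miss"

llc = PartitionedLLC({"npu": 3, "camera": 1})
# A streaming NPU churns through 100 addresses but cannot evict the
# camera's working set, preserving the camera's latency guarantee:
for a in range(100):
    llc.access("npu", a)
llc.access("camera", 0xC0)           # camera miss fills its own partition
print(llc.access("camera", 0xC0))    # hit, untouched by the NPU's stream
```

The quota sizes play exactly the role the article describes: oversizing a partition wastes capacity other clients could use, while undersizing it risks misses on the latency-critical path.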

Configuring a last-level cache is ultimately a multidimensional challenge shaped by workload demands, compute topology, latency requirements, and silicon constraints. Achieving the right balance between performance, determinism, power, and area depends on understanding how an SoC behaves under real operating conditions.

To address this, SoC teams increasingly rely on system-level simulation using realistic data flow profiles generated by multiple on-chip request sources. This approach allows teams to evaluate cache behavior before key architectural decisions are finalized. It helps identify bottlenecks, validate cache sizing, and determine when isolation mechanisms such as partitioning are required to preserve real-time guarantees.

Arteris developed its CodaCache IP, which operates as a configurable last-level cache between on-chip initiators and different types of external memories such as DDR-DRAM, HBM and even NVM for execute-in-place (XiP) use cases. With CodaCache, architects can equip their SoC fabric with the optimal configuration to address intelligent, scalable, and automated data management in a wide range of applications.

Andre Bonnardot is product marketing manager at Arteris.



Marktech adds 230nm and 265nm deep UV, 310nm UVB and 340nm UVA LEDs

Semiconductor today - 11 hours 32 min ago
Marktech Optoelectronics Inc of Latham, NY, USA has announced an expanded portfolio of high-power UV LED light sources with new UV emitters spanning 230nm to 400nm. These devices strengthen the firm’s offerings across the ultraviolet light spectrum — from deep-UV LED (DUV LED, far-UVC LED) sterilization wavelengths to longer UVB and UVA wavelengths for curing, phototherapy and forensics...

TI redoubles advancement of next-gen physical AI with NVIDIA

ELE Times - 12 hours 56 min ago

Texas Instruments announced that it is working with NVIDIA to accelerate the safe deployment of humanoid robots into the real world. By combining TI’s real-time motor control, sensing, radar and power technologies with NVIDIA’s advanced robotics compute, Ethernet-based sensing and simulation technologies, robotics developers can validate perception, actuation and safety earlier and more accurately. TI connects NVIDIA physical AI compute to real-world applications with deterministic control, sensing, power, and safety at every joint and subsystem. This partnership will help developers move faster from virtual development to production-ready, scalable and safety-compliant systems.

As part of this collaboration, TI designed a sensor fusion solution by integrating its mmWave radar technology with NVIDIA Jetson Thor using NVIDIA Holoscan Sensor Bridge to enable low-latency, 3D perception and safety awareness for humanoid robots. TI will showcase the solution at NVIDIA GTC, March 16–19, 2026, in San Jose, California.

“The next generation of physical AI requires more than just advanced compute – it demands seamless integration between sensing, control, power and safety systems,” said Giovanni Campanella, general manager of industrial automation and robotics at TI. “TI’s comprehensive portfolio bridges the gap between NVIDIA’s powerful AI compute and real-world applications, enabling developers to validate complete humanoid systems earlier in development. This integrated approach will help accelerate the evolution from prototypes to commercially viable humanoid robots operating safely alongside humans.”

“The safe operation of humanoid robots in unpredictable environments requires a massive leap in processing power to synchronise complex AI models with real-time sensor data and motor controls,” said Deepu Talla, vice president of robotics and edge AI at NVIDIA. “The integration of Texas Instruments’ sensing and power management technologies with the NVIDIA Jetson Thor platform provides developers with a functional safety-capable foundation to accelerate the deployment of next-generation physical AI.”

Enabling safer humanoid robots with real-time sensor fusion technology

TI’s mmWave radar sensor, IWR6243, connected via Ethernet to NVIDIA Jetson Thor, enables scalable low-latency, 3D perception and safety awareness for physical AI applications. By fusing camera and radar data, the solution improves object detection, localisation, and tracking while reducing false positives for confident, real-time decision-making in humanoid robots.

This solution enables human-like perception that works reliably in challenging conditions – from low light and bright glare to fog and dust indoors and outdoors – and addresses a critical safety gap that has limited real-world deployment of humanoid robots. For example, while cameras may not reliably detect glass doors or reflective surfaces, radar provides consistent detection of these transparent obstacles, enabling smooth navigation in places like office buildings, hospitals and retail environments.

TI at NVIDIA GTC 

TI will present its technologies at NVIDIA GTC in booth 169 at the San Jose McEnery Convention Centre. TI and D3 Embedded’s live demonstration, “Real-time sensor fusion for reliable robotic perception with Holoscan,” showcases how TI’s mmWave radar technology integrates with NVIDIA’s Jetson Thor and Holoscan ecosystem using an end-to-end software processing chain and visualisation from D3 Embedded.

On Wednesday, March 18, from 3:00-3:40 p.m. PT, TI’s Giovanni Campanella will participate in a lightning talk, “The Edge of the Edge: Redefining GPU-Enabled AI Sensor Processing.” Campanella will discuss how the tight integration of sensing, networking and GPUs is enabling real-time physical AI at the edge of industrial systems.


Everspin Advances High-Reliability xSPI MRAM Portfolio With Complete Production Qualification for 64Mb MRAM

ELE Times - 13 hours 18 min ago

Everspin Technologies, the world’s leading developer and manufacturer of magnetoresistive random access memory (MRAM) persistent memory solutions, announced continued progress across its high-reliability (HR) PERSYST xSPI STT-MRAM portfolio, including the completion of full production qualification for its 64Mb MRAM and the expansion of the family to a new 256Mb density.

The HR 64Mb xSPI STT-MRAM has now completed full production qualification for the AEC-Q100 Grade 1 specification. It is currently available for customer orders and supports high-volume production programs, with inventory available through Everspin’s authorised distributors worldwide.

The 128Mb xSPI STT-MRAM is expected to complete production qualification in May 2026, and a new 256Mb option is scheduled to complete full production qualification in July 2026, with volume availability expected in the second half of 2026.

“Advancing our high-reliability product family through production qualification and expanding density options reflects steady progress against our technology roadmap,” said Sanjeev Aggarwal, president and CEO of Everspin Technologies. “Customers designing long-lifecycle systems require validated memory solutions with predictable performance, and we are extending the PERSYST platform to meet those needs across a wider range of densities.”

The addition of the 256Mb density enables higher-capacity persistent memory designs within the same xSPI-based architecture. Together with the 64Mb and 128Mb xSPI STT-MRAM products, the expanded Hi-Rel portfolio provides scalable options for applications operating across extended temperature ranges and demanding reliability environments.

“Production qualification provides the level of confidence required for space and satellite programs moving into long-term deployment,” said Billy Wahng, Chief Technology Officer at Astro Digital. “Everspin’s focus on endurance, data integrity and radiation tolerance addresses the challenges of operating in unpredictable environments.”

These milestones represent continued execution of Everspin’s roadmap to broaden its HR MRAM portfolio for aerospace, defence, automotive, industrial and other mission-critical applications.


Rohde & Schwarz enables rapid validation of next-gen Wi-Fi 8 networking platforms, including 5×5 MIMO capabilities

ELE Times - 13 hours 22 min ago

Qualcomm Technologies has used the CMP180 radio communication tester from Rohde & Schwarz to validate advanced multi-antenna capabilities that are designed into its next-generation Wi-Fi 8 platforms, including support for 5×5 MIMO in the 2.4, 5, and 6 GHz bands. Advanced 5×5 MIMO architectures help Wi‑Fi 8 platforms deliver higher capacity and more reliable connectivity across a wider range of real‑world deployment scenarios.

The industry‑leading CMP180 delivers full bandwidth and seamless scalability for testing leading Wi‑Fi 8 chipsets across the entire device lifecycle — from development to production. As a result of this collaboration, Rohde & Schwarz now offers pre‑built test routines and early access to key resources, enabling device manufacturers to accelerate the time‑to‑market of their products.

Wi-Fi 8, based on the IEEE 802.11bn specification, builds on the foundation of Wi-Fi 7 to deliver next-level reliability, efficiency, and seamless mobility. New PHY and MAC layer technologies work together to extend range, improve spectrum utilization, reduce latency, and enable coordinated access across dense environments, setting the stage for ultra-high reliability (UHR) performance. Advanced antenna architectures such as 5×5 MIMO help enhance spatial efficiency and link robustness and provide a more consistent performance in real-world environments.
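The throughput benefit of adding a fifth spatial stream can be illustrated with an idealized Shannon-capacity estimate. This is a textbook upper bound under assumed conditions (independent streams, equal SNR), not a Wi-Fi 8 measurement, and the channel width and SNR values below are assumptions for illustration:

```python
import math

def mimo_capacity_gbps(n_streams, bandwidth_mhz, snr_db):
    """Idealized Shannon capacity with n_streams independent spatial streams."""
    snr = 10 ** (snr_db / 10)            # convert dB to linear SNR
    bits_per_hz = math.log2(1 + snr)     # spectral efficiency per stream
    return n_streams * bandwidth_mhz * 1e6 * bits_per_hz / 1e9

# Illustrative comparison: 4 vs 5 spatial streams, 320 MHz channel, 25 dB SNR.
print(f"4x4: {mimo_capacity_gbps(4, 320, 25):.1f} Gb/s")
print(f"5x5: {mimo_capacity_gbps(5, 320, 25):.1f} Gb/s")
```

Under this idealization, capacity scales linearly with stream count, so 5×5 offers a 25% ceiling increase over 4×4; in real deployments the gain depends on channel conditions and antenna correlation, which is why over-the-air validation with a tester like the CMP180 matters.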

This new feature set of Wi-Fi 8 will accelerate the wireless LAN performance at home, in offices, venues, and factories, and enable applications like extended reality (XR), AI-assisted applications, real-time cloud gaming, and ultra-high-definition content streaming. To realize these benefits, test equipment must support all bands, full channel bandwidths, multi-antenna operation (MIMO), and deliver best-in-class measurement accuracy at benchmarking test efficiency. Rohde & Schwarz has designed the CMP180 radio communication tester with these capabilities in mind.

The CMP180 enables Qualcomm Technologies to validate essential features of its latest Wi-Fi innovation, including:

  • 5×5 MIMO performance to further improve maximum data throughput per link.
  • Advanced modulation and coding schemes that enable fine‑grained adaptation to real‑time radio conditions.
  • Distributed-tone resource units to improve uplink performance under regulatory limits.

Goce Talaganov, Vice President Mobile Radio Testers at Rohde & Schwarz, said: “We are excited to strengthen our long-time collaboration with Qualcomm Technologies to provide a unique testing solution for the next area of Wi-Fi innovations. The CMP180’s advanced features and our close collaboration will empower device manufacturers to bring innovative Wi-Fi 8 products to market quickly and confidently.”

Ganesh Swaminathan, Vice President and General Manager, Wireless Infrastructure and Networking, Qualcomm Technologies, Inc., said: “Qualcomm Technologies’ Wi-Fi 8 portfolio is engineered to deliver next-level performance, reliability, and scalability across a broad range of networking use cases. As part of this portfolio approach, we are advancing innovations such as higher-order MIMO to help increase performance in real-world environments. Our collaboration with Rohde & Schwarz highlights the progress of these capabilities as the Wi-Fi 8 ecosystem builds momentum.”


R&S acquires SRS, specialists in SDR communications solutions

ELE Times - 13 hours 46 min ago

Rohde & Schwarz acquired Software Radio Systems (SRS), a specialist in 5G software-defined radio systems. This acquisition further strengthens the position of Rohde & Schwarz in the cellular and wireless communications software market and accelerates development in AI-based test solutions for satellite and next-generation 6G wireless technologies. SRS will continue to operate under its own name as a Rohde & Schwarz group company, maintaining its product roadmap, leadership team and strategic focus. 

The management team at Software Radio Systems, Paul Sutton, Ismael Gomez and Andre Puschmann, will remain. Together with their team at SRS and their new colleagues at Rohde & Schwarz, they will drive innovation in software-defined mobile communications and contribute to the long-term success of the group.

With the acquisition of Software Radio Systems (SRS), Rohde & Schwarz is expanding its software-defined radio (SDR) technology portfolio, strengthening its position in the market for mobile and radiocommunications software, especially in the emerging fields of non-terrestrial networks (NTN) and 6G. The acquisition became effective as of March 5, 2026. Founded in 2012 and headquartered in Ireland, with branches in Barcelona and the USA, SRS has grown from a startup into a globally recognised innovator in mobile and radiocommunications software. The company has established itself as a key player in a highly competitive market through deep technical expertise, product innovation and a strong commitment to open and interoperable network architectures.

By integrating SRS, Rohde & Schwarz decisively expands its capabilities in software-defined radio technology to better serve existing market segments and tap new business opportunities. At the same time, SRS enters a new phase of accelerated growth. With access to deep technical resources from Rohde & Schwarz, global reach and long-term strategic stability, SRS is ideally positioned to scale its technology, expand internationally and execute its long-term mission. The combination of Rohde & Schwarz leadership in test and measurement and RAN expertise from SRS creates powerful synergies that will further advance software-defined mobile network solutions.

Goce Talaganov, Vice President Mobile Radio Testers at Rohde & Schwarz, explains: “I am very pleased that Software Radio Systems is becoming part of the Rohde & Schwarz group. SRS combines extensive telecommunications and wireless expertise with agile software development, making it the ideal complement to our mobile radio testing portfolio, which will benefit the customers of both companies. What SRS and its employees have built up over the past few years is a real success story that you don’t often see. From now on, we will work together to push the boundaries of the technically possible even further, to leverage new synergies and to make an impact on the market.”

Paul Sutton, Chief Executive Officer and co-founder of Software Radio Systems, is ready for the future. “Joining the Rohde & Schwarz group is a proud moment for everyone at SRS. We will continue to operate as SRS with our existing leadership team and a clear commitment to our strategic priorities. What changes is our ability to scale with speed and accelerate the execution of our roadmap, including key initiatives such as the OCUDU project. With Rohde & Schwarz, we gain the strength of a global technology leader whose deep technical expertise, worldwide presence and long-term perspective enable us to grow faster and think bigger. Together, we will expand the impact of our software-defined solutions and deliver innovation that truly makes a difference for our customers.”


Differentiating Between LPDDR6, LPDDR5, and LPDDR5X

ELE Times - 14 hours 4 min ago

Courtesy: Synopsys

Advances in memory standards are driving faster and more power-efficient mobile and connected devices, from smartphones and tablets to ultra-thin laptops and wearables.

One such standard is Low Power Double Data Rate (LPDDR), which plays a crucial role in balancing high performance with energy efficiency. The latest iteration of the standard, LPDDR6, represents a big step forward in memory management. Comparing LPDDR6 to its predecessors, LPDDR5 and LPDDR5X, reveals just how quickly mobile memory technology is evolving — and what these advances mean for next-generation devices.

The role of LPDDR memory

LPDDR acts as the main system memory inside electronic devices. Working hand-in-hand with device processors and other components to store and access frequently used data, it helps keep applications, media, and multitasking features running smoothly. LPDDR is optimised for low power usage, compact footprint, and fast data transfer, making it ideal for portable, battery-powered devices.

LPDDR can integrate with Inline Memory Encryption (IME) modules to ensure data confidentiality — both in-use and when stored in off-chip memory. This is achieved through standards-compliant independent cryptographic support for read and write operations, providing robust protection against unauthorised access.

LPDDR memory is also available as automotive-grade Synchronous Dynamic Random-Access Memory (SDRAM), making it the preferred DRAM solution for automotive applications that require strict compliance with automotive standards.

LPDDR5 and LPDDR5X: the previous benchmarks

LPDDR5 marked a big step up in mobile memory when it was introduced in 2019. It delivered data rates up to 6.4 Gbps with improved energy efficiency (through features such as Dynamic Voltage Scaling) and smarter data handling. These upgrades led to longer battery life and better support for demanding applications like 5G connectivity, high-resolution media, and the initial wave of artificial intelligence (AI).

LPDDR5 also added new reliability features and smarter error handling, helping stabilise performance under complex workloads. As a result, devices using LPDDR5 delivered noticeable gains in both speed and overall user experience compared to devices using previous generations of LPDDR SDRAMs.

Introduced in 2021, LPDDR5X offered increased performance (up to 10.67 Gbps) and minor enhancements to LPDDR5’s features. LPDDR5X SDRAMs represent the vast majority of LPDDR SDRAMs shipping today.

LPDDR6: the next generation

Published in July 2025, the new LPDDR6 specification and compliant SDRAMs deliver even more performance, efficiency, and features — all designed to meet the growing demands of next-generation mobile and connected devices. LPDDR6 offers:

  • Faster data rates. LPDDR6 is expected to reach up to 14.4 Gbps, a significant increase from LPDDR5X. This extra speed is essential for power-hungry applications like augmented reality, ultra-high-definition video streaming, advanced AI, and automotive electronics, all of which depend on rapid data processing.
  • Wider bandwidth. Using 24-bit channels (up to 96 bits per package with 4 channels total), LPDDR6 effectively doubles LPDDR5X’s bandwidth per package. In addition, two 12-bit sub-channels in each channel help improve latency and access.
  • Enhanced power management. LPDDR6 introduces more precise control over voltage and power states. This upgrade helps devices run more efficiently and extends their battery life as a result.
  • Improved reliability and error correction. As the speed and footprint of LPDDR rise, so too does the risk of data errors — especially in data centres. LPDDR6 addresses this challenge with enhanced RAS (Reliability, Availability, and Serviceability) capabilities, providing robust error correction via Metadata, Advanced ECC, and Link ECC features. These improvements help minimise system glitches and stabilise device performance.
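The bandwidth claim above follows directly from the channel figures the article gives. A quick calculation from those numbers (the 64-bit LPDDR5X package width used for comparison is an assumption on my part, a common configuration rather than a figure from this article):

```python
def peak_bandwidth_gBps(data_pins, rate_gbps_per_pin):
    """Peak package bandwidth in GB/s: total data pins x per-pin rate / 8 bits per byte."""
    return data_pins * rate_gbps_per_pin / 8

# LPDDR6 package per the article: 4 x 24-bit channels = 96 data pins at up to 14.4 Gbps/pin.
lpddr6 = peak_bandwidth_gBps(96, 14.4)
# Assumed comparison point: a 64-bit LPDDR5X package at 10.667 Gbps/pin.
lpddr5x = peak_bandwidth_gBps(64, 10.667)
print(f"LPDDR6:  {lpddr6:.1f} GB/s")   # 172.8 GB/s
print(f"LPDDR5X: {lpddr5x:.1f} GB/s")
```

Under these assumptions the LPDDR6 package delivers roughly twice the peak bandwidth, consistent with the "effectively doubles" claim in the bullet above.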

While LPDDR6 builds on LPDDR5 and LPDDR5X’s foundations, some legacy mechanisms were streamlined or replaced to support higher speeds and tighter power control. For example, earlier voltage scaling and command encoding schemes have been reworked to enable more granular power states and improved signal integrity. These changes mean LPDDR6 prioritises advanced efficiency and reliability features over older approaches that were optimised for lower data rates.

The implications for memory design and mobile devices

The improved performance, efficiency, and features of LPDDR6 will have wide-ranging impacts. From a technical perspective, LPDDR6 introduces a variety of upgrades to memory architecture:

  • Signal integrity and bank management. Smarter signalling and improved memory bank management reduce latency and maximise data throughput.
  • Ultra-low power modes. New power-saving states allow devices to conserve energy when idle, a big advantage for wearables and Internet of Things (IoT) products that run on small batteries.
  • Seamless integration. The new specification is engineered to work with the latest processors and chipsets, making it easier for manufacturers to integrate LPDDR6 into their next-generation devices.

These upgrades will enable the creation of mobile devices that offer:

  • Faster, smoother performance. Higher data rates mean apps open quicker, multitasking is more efficient, and device operation is smoother.
  • Better battery life. Improved power management reduces energy consumption, allowing devices to run longer between charges.
  • Greater system stability. Stronger error correction improves reliability and reduces the risk of crashes and data loss.
  • Future-proofing. LPDDR6 enables devices to support future advances in mobile computing, connectivity, and multimedia.

The Impact of LPDDR6 on smartphones, laptops, and wearables

LPDDR6 represents a significant step forward in mobile memory technology, delivering faster speeds, increased capacity, improved reliability, and better energy efficiency.

Leveraging silicon-proven interface IP and verification IP solutions — which have also been successfully validated at 10.667 Gb/s for SDRAM — device manufacturers are already upgrading their flagship smartphones, high-end laptops, and innovative wearables with LPDDR6-based memory.

But the transition from LPDDR5X/5 to LPDDR6 is more than just a technical upgrade — it enables new possibilities in mobile computing. As manufacturers adopt the new standard, users can expect devices that are faster, more reliable, and ready to support the next wave of on-device and cloud-connected experiences.


I designed my own Morse code trainer

Reddit:Electronics - 16 hours 17 min ago
I designed my own Morse code trainer

Demo at https://www.youtube.com/watch?v=sKtSpykOBXY

This is the Morse code trainer I designed. It runs on an AVR128DA48 microcontroller with a 2.42 inch 128x64 OLED and a custom-designed capacitive touch sensor PCB straight key. It also includes an NRF24L01+ radio module to allow 2-way send and receive of Morse code between nearby devices. The whole thing is powered by a rechargeable 3.7V 800mAh LiPo battery. I also designed the enclosure and 3D-printed it out of PET-G filament.
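The post doesn't include the device firmware, but the core of any Morse trainer is a lookup table like the one below. This is a hypothetical host-side sketch of the encoding such a device performs, not code from the project:

```python
# International Morse code table (letters only, for brevity).
MORSE = {
    "A": ".-",   "B": "-...", "C": "-.-.", "D": "-..",  "E": ".",
    "F": "..-.", "G": "--.",  "H": "....", "I": "..",   "J": ".---",
    "K": "-.-",  "L": ".-..", "M": "--",   "N": "-.",   "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.",  "S": "...",  "T": "-",
    "U": "..-",  "V": "...-", "W": ".--",  "X": "-..-", "Y": "-.--",
    "Z": "--..",
}

def encode(text):
    """Encode letters to Morse; letters separated by spaces, words by ' / '."""
    words = text.upper().split()
    return " / ".join(
        " ".join(MORSE[c] for c in word if c in MORSE) for word in words
    )

print(encode("CQ CQ"))  # -.-. --.- / -.-. --.-
```

On the actual hardware the dots and dashes would come from timing the capacitive key presses (a dash is conventionally three dot-lengths), with the NRF24L01+ carrying the encoded symbols between paired units.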

Happy to answer any questions!

submitted by /u/neverlogout891231902

Apple’s spring 2026 soirée: The rest of the story

EDN Network - 20 hours 1 min ago

With smartphone and tablet news already discussed, what else did Apple unveil this week? Read on for all the goodies and their details.

As I teased at the end of my prior piece, computers and displays were also on the plate for Apple’s “big week of news” announcements suite. With today’s (as I write this on Wednesday in the late afternoon) New York, London, and Shanghai “Experience” in-person events now concluded:

(No, alas, I wasn’t invited)

I’m guessing that Apple’s wrapped up its rollouts for now, therefore compelling me to revisit my keyboard for concluding part 2. That said, I realized in retrospect that there was one additional earlier hardware announcement that, had I remembered at the time (and in time), I would have also included in part 1, since it also covered mobile devices. So, let’s start there.

AirTag 2

In late April 2021, Apple introduced its first-generation AirTag trackers, leveraging Bluetooth LE connectivity to mate them with owner-paired smartphones and tablets and, more broadly (when a tagged device is lost), the Find My crowdsourced network ecosystem to assist in identifying their whereabouts and monitoring their movements. Integrated ultrawideband (UWB) support, when also comprehended by the paired mobile device, affords even more precise location discernment (i.e., not just somewhere in the living room, but having fallen between the sofa cushions). And built-in NFC support lets anyone who finds a tag (and whatever it's attached to) notify its owner. Here’s my first-gen teardown.

Nearly five years later, and quoting Wikipedia:

An updated model with the U2 chip, upgraded Bluetooth, and a louder speaker was released in January 2026 [editor note: Monday the 26th, to be precise]. It has enhanced range for precision detection with iPhones equipped with a U2 chip such as the iPhone 15/Pro or later (excluding iPhone 16e), and also allows an Apple Watch with a U2 chip such as the Apple Watch Series 9 or later, or Apple Watch Ultra 2 or later (excluding Apple Watch SE), to precisely locate items.

Now fast-forwarding a month-plus to this week’s announcements…

The M5 Pro and Max SoCs

2.5 years back, within my coverage of Intel’s then leading-edge and first-time chiplet-implemented Meteor Lake CPU architecture:

I noted that the company was, to at least some degree, following in the footsteps of AMD and Apple, both having already productized chiplet-based designs. In AMD’s case, I was on solid footing with my stance, as the company had already been embedding and interconnecting discrete processors, graphics, and other logic circuits for several years. In Apple’s case, conversely, my definition of a chiplet implementation was a bit more loosey-goosey, at least at the time:

Above is a de-lidded photo of Apple’s M1 SoC. At left is the single-die implementation of the entirety of the logic circuitry, plus cache. And on the right are two DRAM memory chips. Admittedly, the “Ultra” variant of the eventual M1 product family, at far right:

upped the ante a bit more, “stitching together two distinct M1 Max die via a silicon interposer”. But I’ve long wondered when Apple would go “full monty” on disaggregation, mixing-and-matching various slivers of logic silicon attached to and interconnected via a shared packaging substrate, to keep each die’s dimensions to a reasonable manufacturing-yield size and to afford fuller implementation flexibility. To wit, the points I made back in September 2023 remain valid:

  • Leading-edge processes have become incredibly difficult and costly to develop and ramp into high-volume production,
  • That struggle and expense, coupled with the exponentially growing transistor counts on modern ICs, have negatively (and significantly so) impacted large-die manufacturing yields not only during initial semiconductor process ramps but also long-term, and
  • Desirable variability both in process technology (DRAM versus logic, for example), process optimization (low power consumption versus high performance) and IC sourcing (internal fab versus foundry), not to mention the attractiveness of being able to rapidly mix-and-match various feature set combinations to address different (and evolving) market needs, also enhance the appeal of a multi- vs monolithic-die IC implementation.
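The manufacturing-yield point above is the crux of the chiplet argument, and it can be illustrated with the textbook Poisson die-yield approximation. This is a hypothetical sketch with made-up defect-density numbers, not anyone's actual process data:

```python
import math

def poisson_yield(area_cm2: float, d0: float) -> float:
    """Textbook Poisson die-yield approximation: Y = exp(-A * D0)."""
    return math.exp(-area_cm2 * d0)

d0 = 0.2  # hypothetical defects/cm^2 on a young leading-edge process
for area in (1, 2, 4, 8):
    print(f"{area} cm^2 die: {poisson_yield(area, d0):.1%} yield")
# Yield falls off exponentially with die area, so two individually tested
# ("known-good") small dies waste far less silicon per defect than one
# monolithic large die -- the economics driving disaggregation.
```

Doubling die area doesn't halve yield, it squares the survival probability, which is why very large monolithic dies become disproportionately expensive on new processes.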

That time is now, branded as the “Fusion Architecture” and ironically foreshadowed by a then-subtle Apple online store tweak a month ago. Quoting from the press release subhead:

M5 Pro and M5 Max are built using the new Apple-designed Fusion Architecture that connects two dies with advanced IP blocks into a single SoC, delivering significant performance increases that push the limits of what’s possible…

In an interesting twist from the past, this time the two product proliferations seemingly share a common processor die, although the variety and number of guaranteed-functional cores vary both between the two devices and within a given device’s binning variants. Conversely, the graphics core counts diverge more substantially between the two devices. To some degree this is reflective of the high-end “Max” device’s professional content creator target demographic, although I’d wager that it more broadly affords more robust on-device deep learning inference capabilities in conjunction with the chips’ presumed-still-existent neural processing cores. And what of an “Ultra” variant of the M5…is it on the way? Maybe.

Tomato, tomahto

Speaking of cores, by the way…sigh. Look back at my M5 SoC (and initial devices based on it) coverage from last October, and you’ll see that, just as with prior generations of both A- and M-based Apple-developed silicon, it contains a mix of both performance (speed-optimized) and efficiency (power consumption-tuned) cores. Here’s the specific press release quote again:

M5 features the world’s fastest performance core, with up to a 10-core CPU made up of six efficiency cores and up to four performance cores. Together, they deliver up to 15 percent faster multithreaded performance over M4.

All well and good; the Arm-developed architecture analogy is big.LITTLE. Revisiting that page on Arm’s website just now, however, I curiously noticed that whereas it historically called out two different types of cores, now there are apparently three. Check out the subhead:

Arm big.LITTLE technology is a heterogeneous processing architecture that uses up to three types of processors. LITTLE processors are designed for maximum power efficiency, while big processors are designed to provide efficient, sustained compute performance.

Keep in mind that Apple is an Arm architecture licensee, so it develops its own (still instruction set-compatible, of course) cores. That said, beginning with the M5 Pro/Max processing chiplet, Apple has also developed a third core type, an intermediate half-step between the performance and efficiency endpoints. You might think that Apple would call this new one the “balanced” core, say. But alas, you’d be wrong. Here’s long-time Apple observer Jason Snell, quoted in a post from another Apple prognosticator, “graybeard” John Gruber:

With every new generation of Apple’s Mac-series processors, I’ve gotten the impression from Apple execs that they’ve been a little frustrated with the perception that their “lesser” efficiency cores were weak sauce. I’ve lost count of the number of briefings and conversations I’ve had where they’ve had to go out of their way to point out that, actually, the lesser cores on an M-series chip are quite fast on their own, in addition to being very good at saving power! Clearly they’ve had enough of that, so they’re changing how those cores are marketed to emphasize their performance, rather than their efficiency.

What did Apple decide to do instead, including a retrofit of published M5 documentation?

  • The prior-named “Performance” core is now instead called, believe it or not, “Super.”
  • The “Efficiency” core retains its original name, in a brief moment of sanity.
  • And the new in-between “balanced” core? It gets the recycled “Performance” moniker.

The following summary table originated with another recent John Gruber post; I’ve simplified the SoC options, reordered the CPU core columns, and added a column for GPU core counts:

 

            CPU (Super)   CPU (Performance)   CPU (Efficiency)   GPU
M5          3-4           N/A                 6                  8-10
M5 Pro      5-6           10-12               N/A                16-20
M5 Max      6             12                  N/A                32-40

That’s just…super. Sigh.

(More) M5 MacBook Pros

(nifty video animation, eh?)

“Super” SoCs inside aside, the new 14” and 16” MacBook Pros are effectively identical to their M4-based forebears (note that the sole M5 version initially announced last fall was the 14” model). The only other items of particular note both involve memory. Baseline and upgraded DRAM capacity option prices remain the same as last time, despite current industry memory supply constraints; an upper-end 64 GByte option for the M5 Pro has even been added. And regarding flash memory, Apple has obsoleted last November’s entry-level 512 GByte SSD option for the baseline 14” M5 MacBook Pro, making the new capacity starting point for that product (1 TByte) more expensive than before. That said, it’s now $100 lower than the 1 TByte variant price at intro just a few months ago, and capacity-upgrade prices have also decreased.

The M5 MacBook Air(s)

Here’s another example of not being able to tell, based solely on external appearances, which generation of devices you’re looking at. Coming, as with its M3- and M4-based forebears, in both 13” and 15” versions, the M5 MacBook Air also upgrades to Apple’s N1 network connectivity chip. But, speaking once again of (flash, specifically) memory, and akin to the product line option slimming for the 14” M5 MacBook Pro mentioned in the prior section, the lowest available capacity for the new devices is 512 GBytes, versus 256 GBytes in the previous generation. I’m guessing that the reasoning is two-fold this time: as with the 14” M5 MacBook Pro’s option-culling, the company’s “hiding” its higher flash memory costs by only offering more profitable capacity choices to customers. Plus, by doing so, Apple can more clearly differentiate the MacBook Air from its other products. Speaking of which…

The MacBook Neo

I’ll kick off this section with a few history lessons. Back in 2015, Apple introduced the “new MacBook” (also commonly referred to as the 12” MacBook), with a Retina-resolution display and based on Intel m-series (and later, i-series) CPUs. It slotted between the then-non-Retina MacBook Air and the high-end MacBook Pro in Apple’s product portfolio from a pricing standpoint, even though its processing performance undershot that of the notably less expensive MacBook Air. Plus, it was hampered by the unreliable “butterfly” keyboard. It was discontinued after only three hardware iterations and four years of production.

In addition to its unfavorable price comparison to the MacBook Air, the “new MacBook” was also still competing to a degree against then-popular Windows-based “netbooks”, which were even lower priced. Back in late 2008, then-CEO Steve Jobs had (in)famously quipped regarding netbooks, “We don’t know how to make a $500 computer that’s not a piece of junk.” Hold that thought.

My last history lesson is, conversely, a Steve Jobs success story. Back in mid-1999, two years (and change) after Jobs’ return to Apple and less than a year after launching the consumer-tailored iMac desktop, Apple unveiled the iBook laptop:

which came in multiple eye-catching, intentionally non-“business” color options:

Quoting Wikipedia:

The line targeted entry-level, consumer and education markets, with lower specifications and prices than the PowerBook, Apple’s higher-end line of laptop computers. It was the first mass consumer product to offer Wi-Fi network connectivity, which was then branded by Apple as AirPort.

Look again at the image of the iBook’s color options. Now look at the photo at the beginning of this section. See where I’m going?

The newly unveiled MacBook Neo comes in two price tiers: $599 (with a further $100 discount for education customers; take that, Chromebooks) and $699. The higher-end variant gets you twice the SSD capacity—512 GBytes versus 256 GBytes—along with a Touch ID fingerprint reader built into the keyboard. That’s it. 8 GBytes of DRAM, with no upgrade option. No Thunderbolt, only two USB-C ports, one of them supporting only USB 2 speeds. The first-time use of an A-series processor, the (Apple Intelligence-capable) A18 Pro (albeit with one fewer graphics core enabled than the initial version in the iPhone 16 Pro series); that said, it seems to benchmark (at least) roughly on par with the M1 that until recently was still being sold by Walmart in the MacBook Air. And a networking subsystem rumored to come from MediaTek, versus developed internally.

In closing, at least for this section: what’s with the name? Some folks had forecasted that it’d just be called the “MacBook”, but as I’ve already noted, that particular name is now “damaged goods”. Others thought that an “iBook” resurrection was in the cards, but Apple stopped referring to devices via “i” monikers a while ago. That said, “Neo” was definitely not on my bingo card. Maybe someone in Cupertino is a fan of The Matrix, but thought that “MacBook Mr. Anderson” would be too ponderous?

Displays

Having already passed through 2,000 words, I’m going to keep this section short. Apple announced two new Studio Display models, its first updates to this particular product category in many years. They’re both 27” in size, with 5K Retina resolutions, although their refresh rates, dynamic ranges, and other image quality measures vary. The “inexpensive” one starts at $1,599, with its pricier sibling beginning at $3,299; both are available in standard or (upgrade) nano-texture glass options, and mounting and other accessories are also available. And interestingly, at least to me, they don’t work with legacy Intel-based Macs, even the scant few models (one of which I’m currently typing on) that are still supported by MacOS 26. For more details, check out the press release.

And what about…

The M5 Mac mini, whose possibility I alluded to yesterday? Didn’t happen, even though the current M4-based models are popular with the agentic AI enthusiast community (and others). That said, in revisiting my prognostication yesterday afternoon, I remembered that Apple had also skipped the M3 Mac mini generation, and that the time-consuming form factor redesign from the M2 to the M4 might at least partly explain that gap.

And what of the upgrade to the “vanilla” iPad that lots of folks were forecasting would happen this week? Another nope. The primary rationale here was that it was the only remaining member of Apple’s current product line whose CPU (the A16) doesn’t support Apple Intelligence. But there was no evidence of the telltale indicator of a new product’s arrival: depleted retail inventories of the current model. My guess: Apple will be happily talking about AI again at this year’s WWDC, now that Google’s on board as the company’s development partner, and that’d be a perfect time to announce the “iPad 12”…or maybe “iPad Neo”? I jest (I hope).

Time to put down my cyber-pen and turn it over to you for your thoughts in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post Apple’s spring 2026 soirée: The rest of the story appeared first on EDN.

My home lab as a 13.7 year old

Reddit:Electronics - Thu, 03/05/2026 - 23:14

Helloo

I have been building my setup for a year or two, and I participated in ISEF last year. As an Iraqi participant in my second year at ISEF, I really like my setup. Any suggestions?

submitted by /u/Maximus_robotics

Wolfspeed launches first commercially available 10kV SiC power MOSFET

Semiconductor today - Thu, 03/05/2026 - 19:54
Wolfspeed Inc of Durham, NC, USA — which makes silicon carbide (SiC) materials and power semiconductor devices — has announced what it claims is the industry’s first commercially available 10kV SiC power MOSFET. The firm says this unlocks architectural freedom, delivers unprecedented system durability, and advances access to reliable and sustainable power for the most demanding applications. The advance challenges conventions in power conversion technology, delivering a solution to modernize the grid and critical power infrastructure, to accelerate industrial electrification, and to unleash the potential for AI data-center growth, it adds...

MCU enables ASIL D safety and control

EDN Network - Thu, 03/05/2026 - 19:06

Built on a 28-nm process, the Renesas RH850/U2C automotive microcontroller delivers robust connectivity and security for modern E/E architectures. This 32-bit MCU expands the RH850 lineup with a cost-optimized option for chassis and safety systems, battery management, body control, and other ASIL D–rated applications.

The device integrates four RH850 CPU cores running at up to 320 MHz, including two lockstep cores, and up to 8 MB of on-chip flash memory. It combines 10BASE-T1S and TSN Ethernet (1 Gbps/100 Mbps), CAN XL, and I3C with widely used interfaces such as CAN FD, LIN, UART, CXPI, I2C, I2S, and PSI5.

In addition to functional safety support up to ASIL D under ISO 26262, the RH850/U2C meets current cybersecurity requirements in accordance with ISO/SAE 21434. The MCU integrates hardware acceleration for cryptographic algorithms, ranging from post-quantum cryptography (PQC) to those mandated by current Chinese and other international regulations.

The RH850/U2C is available in BGA292 and HLQFP144 packages.

RH850/U2C product page 

Renesas Electronics 

The post MCU enables ASIL D safety and control appeared first on EDN.

VNAs perform production test up to 9 GHz

EDN Network - Thu, 03/05/2026 - 19:06

With typical measurement speeds of 25 µs/point, Copper Mountain’s three SC series VNAs enable efficient testing in both R&D and manufacturing environments. The SC0402, SC0602, and SC0902 two-port analyzers share a common 9-kHz start frequency, with upper limits of 4.5 GHz, 6.5 GHz, and 9 GHz, respectively.

These instruments offer a typical dynamic range of 130 dB (10 Hz IF BW) for precise characterization of RF components and complex systems. Output power can be adjusted from -50 dBm to +5 dBm, with up to 500,001 measurement points/sweep. Measured parameters include S11, S21, S12, and S22.
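Note that the 130 dB dynamic range is specified at a narrow 10 Hz IF bandwidth; widening the IF filter for faster sweeps raises the noise floor by roughly 10·log10 of the bandwidth ratio. A back-of-the-envelope sketch of that trade-off and of minimum sweep time (my own rule-of-thumb arithmetic, not from Copper Mountain's documentation):

```python
import math

def dynamic_range_db(dr_at_10hz_db: float, if_bw_hz: float) -> float:
    """Noise floor rises ~10*log10(BW / 10 Hz), eating into dynamic range."""
    return dr_at_10hz_db - 10 * math.log10(if_bw_hz / 10.0)

def min_sweep_time_s(points: int, per_point_s: float = 25e-6) -> float:
    """Lower bound on sweep time from the quoted 25 us/point speed."""
    return points * per_point_s

print(f"DR at 10 kHz IF BW: {dynamic_range_db(130, 10e3):.0f} dB")
print(f"201-point sweep minimum: {min_sweep_time_s(201) * 1e3:.2f} ms")
```

At a 10 kHz IF bandwidth the estimate drops to about 100 dB, which is why production test setups pick the widest IF bandwidth that still leaves adequate margin over the measurement's required dynamic range.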

Standard software capabilities, available without a paid license, include linear and logarithmic sweeps, power sweeps, and time-domain conversion with gating. Additional functions include S-parameter embedding and de-embedding, limit testing, frequency offset, and vector mixer calibration.

Automation is supported through LabVIEW, Python, MATLAB, .NET, and other programming environments, allowing up to 16 independent channels with 16 traces/channel. A manufacturing test plug-in is available as an add-on to integrate the VNA software into existing automated manufacturing and QA processes.

The SC series VNAs carry MSRPs of $13,995 (SC0402), $15,995 (SC0602), and $17,995 (SC0902).

Copper Mountain Technologies 

The post VNAs perform production test up to 9 GHz appeared first on EDN.

MCU brings USB-C power to embedded devices

EDN Network - Thu, 03/05/2026 - 19:05

Infineon’s EZ-PD PMG1-B2 MCU integrates a single-port USB Type-C PD controller with a 55-V buck-boost controller for charging 2- to 12-cell Li-ion battery packs. Compliant with the latest USB Type-C and PD specifications, the device accepts an input voltage range of 4.5 V to 55 V with switching frequencies programmable from 200 kHz to 700 kHz.
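Working the cell counts against those voltage limits shows why a buck-boost stage (rather than a plain buck) is required. A quick sanity check, assuming typical Li-ion limits of 3.0–4.2 V/cell (generic chemistry figures, not Infineon's numbers):

```python
def pack_voltage_range(cells: int, v_min: float = 3.0, v_max: float = 4.2):
    """Terminal-voltage span of an n-series Li-ion pack (typical cell limits)."""
    return cells * v_min, cells * v_max

for n in (2, 12):
    lo, hi = pack_voltage_range(n)
    print(f"{n}S pack: {lo:.1f} V discharged to {hi:.1f} V full")
# A 12S pack reaches 50.4 V at full charge -- above a 48 V USB PD EPR
# source, so the charger must boost; a 2S pack (8.4 V max) charged from a
# 20 V source must buck. Hence the buck-boost topology.
```

Depending on the adapter negotiated over USB PD, the pack voltage can sit either above or below the input at different points in the charge cycle, which a single buck or boost stage alone cannot cover.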

The MCU targets USB-C-powered embedded devices in consumer, industrial, and communications markets, where devices make use of its integrated functions. Typical applications include cordless power and gardening tools, vacuum cleaners, kitchen appliances, e-bikes, drones, and robots.

The EZ-PD PMG1-B2 features a 32-bit Arm Cortex-M0 processor with 128 KB of flash and 8 KB of SRAM for customizable embedded applications. It integrates analog and digital peripherals—including ADCs, PWMs, UART/I2C/SPI interfaces, and timers—reducing PCB space and BOM. A comprehensive SDK and software suite simplify development and system design.

Production of the EZ-PD PMG1-B2 is expected to begin in the second quarter of 2026. Samples, technical documentation, and evaluation boards are available upon request.

EZ-PD PMG1-B2 product page 

Infineon Technologies 

The post MCU brings USB-C power to embedded devices appeared first on EDN.

Passive limiter shields electronics from RF threats

EDN Network - Thu, 03/05/2026 - 19:05

Teledyne Microwave UK’s B3LT98026 is a passive wideband limiter designed to protect sensitive receiver front ends in defense and military communication systems. It operates from 0.1 GHz to 20 GHz and withstands up to 10 W peak input power under defined pulse width and duty cycle conditions.

The device enhances the survivability of Radar Electronic Support Measures (R-ESM) and Electronic Warfare (EW) systems operating in complex threat environments. It provides continuous, always-on protection against high-power RF and emerging Directed Energy Weapons (DEWs).

Across the operating band, the limiter maintains a maximum insertion loss/noise figure of 2.0 dB and a maximum input/output VSWR of 1.5:1. A fast 40-ns recovery time enables rapid return to nominal sensitivity following high-power events. The device operates over a temperature range of −20°C to +85°C, supporting deployment in demanding environments.
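Those matching figures translate directly into reflected power. As a quick conversion using standard transmission-line arithmetic (my own worked numbers, not vendor data):

```python
import math

def vswr_to_gamma(vswr: float) -> float:
    """Reflection-coefficient magnitude from VSWR."""
    return (vswr - 1) / (vswr + 1)

def return_loss_db(vswr: float) -> float:
    """Return loss in dB corresponding to a given VSWR."""
    return -20 * math.log10(vswr_to_gamma(vswr))

gamma = vswr_to_gamma(1.5)
print(f"|Gamma| = {gamma:.2f}, return loss = {return_loss_db(1.5):.1f} dB")
# A 1.5:1 VSWR reflects |0.2|^2 = 4% of incident power (~14 dB return
# loss); the rest passes through, less the 2.0 dB maximum insertion loss.
```

In other words, a 1.5:1 worst-case match keeps mismatch loss well under the limiter's quoted insertion-loss budget across the band.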

The compact SMA-based housing supports straightforward integration into existing architectures without requiring system redesign. The B3LT98026 is also compatible with Teledyne’s Phobos mast top unit and can accommodate additional RF elements, such as filters, when required.

The B3LT98026 is now available for evaluation in defense and EW systems.

B3LT98026 product page 

Teledyne Microwave UK 

The post Passive limiter shields electronics from RF threats appeared first on EDN.

Nordic debuts multiple cellular IoT products

EDN Network - Thu, 03/05/2026 - 19:05

Nordic Semiconductor expands its ultra-low-power cellular IoT portfolio with Cat 1 bis, satellite NTN, and advanced LTE-M/NB-IoT with edge AI. Leveraging the proven nRF91 series, the nRF92 and nRF93 deliver a scalable, secure platform for global connectivity.

The nRF92 LTE-M/NB-IoT and satellite NTN series introduces the company’s smallest, most highly integrated, and power-efficient cellular solution. It combines a high-performance application MCU with Axon neural processing units, a multi-constellation GNSS receiver, Wi-Fi positioning, and sensor coprocessing. Lead customer sampling is underway, with general availability expected in early 2027.

The nRF93M1 is an LTE Cat 1 bis cellular IoT module with integrated MCU, LTE modem, GNSS receiver, and Wi-Fi positioning. It supports up to 10 Mbps downlink and 5 Mbps uplink, offers global LTE coverage, and is designed for low-power, compact applications. The module is compatible with nRF Cloud for device management, firmware updates, and location services. Lead customers are currently developing products with the nRF93M1, with general availability starting mid-2026.

Additionally, Nordic has enhanced the nRF91 LTE-M/NB-IoT series with 3GPP-compliant GEO and LEO satellite NTN connectivity and sub-GHz fallback to maintain connectivity when public networks are unavailable. The company also introduced the nRF91M1 module, a compact Smart Modem that simplifies adding cellular connectivity to host–modem designs.

Nordic Semiconductor 

The post Nordic debuts multiple cellular IoT products appeared first on EDN.

📌 Registration for NMT-2026 has started

News - Thu, 03/05/2026 - 18:14

The first stage of preparation for the National Multi-Subject Test (НМТ) is getting under way: registration, which will remain open through April 2 inclusive.

What do Ukrainian applicants need to do?

Smartphone shipments to fall 7% in 2026 amid memory constraints and geopolitical pressures

Semiconductor today - Thu, 03/05/2026 - 15:54
Based on assumptions on first-quarter memory prices (which indicate that pricing pressure and constrained supply will begin to ease in second-half 2026), Omdia’s latest outlook forecasts that global smartphone shipments will fall by about 7% year-on-year in 2026...


Subscribe to the Кафедра Електронної Інженерії (Department of Electronic Engineering) content aggregator