Feed aggregator

🎥 Eco-Techno Ukraine 2025

Новини - Fri, 02/28/2025 - 23:03

The second (final) round of Eco-Techno Ukraine 2025 took place on 25–28 February 2025. This major competition is the national stage of Regeneron ISEF 2025, the prestigious International Science and Engineering Fair for school students held in the USA.

AC-Line Safety Monitor Brings Technical, Privacy Issues

EDN Network - Fri, 02/28/2025 - 21:24

There’s a small AC-line device that has received a lot of favorable media coverage lately. It’s called Ting from Whisker Labs, Inc. and its purpose is to monitor the home AC line, Figure 1. It then alerts the homeowner via smartphone to surges, brownouts, and arcing (arc faults) which could lead to house fires. It’s even getting glowing click-bait testimonials such as “This Device Saved My House From an Electrical Fire. And You Might Be Able to Get It for Free.” Let’s face it, accolades don’t get much better than that.

Figure 1 The Ting voltage monitor is a small, plug-in box with no user buttons except a reset. Source: Whisker Labs

(“Arcing”—which can ignite nearby flammable substances—occurs when electrical energy jumps across a gap between conductors; it usually but not always occurs at a connector and is often accompanied by sparks, buzzing sounds, and overheating; if it’s in a wall or basement, you might not know about it.)

The $99 device plugs into any convenient outlet—more formally, a receptacle—and once set up with your smartphone, it continuously monitors the AC line for conditions which may be detrimental. It needs no additional sensors or special wiring and looks like any other plug-in device. The vendor claims that over a million homes have been protected, aggregating over 980,000 “home years” of coverage, and that four out of five electrical fires are prevented.

When the Ting unit identifies a problem it recognizes, the owner receives an alert through the Ting app that provides advice on what to do, Figure 2. Depending on the issue, a live member of the company’s Fire Safety Team may contact you to walk you through whatever remediation steps might be required. In addition, if Ting finds a problem, the company will coordinate service by a licensed electrician and cover costs to remedy the problem up to $1,000.

Figure 2 All interaction between the homeowner and the Ting unit for alerts and reporting is via Wi-Fi to a smartphone. Source: Wirecutter/New York Times

It all seems so straightforward and beneficial. However, whenever you are dealing with the AC line, there’s lots of room for oversimplification, misunderstanding, and confusion. Just look at the National Electrical Code (NEC) in the US (other countries have similar codes) and you’ll see that there’s more to safety in wiring than just using the appropriate gauge wire, making solid connections, and insulating obvious points. The code is complicated and there are good reasons for its many requirements and mandates.

My first thought on seeing this was “this is a great idea.” Then my natural skepticism kicked in and I wondered: does it really do what they claim? Exactly what does it do, and is that actually meaningful? And then the extra credit question: what else does it do that might not be so good or desirable?

For example, some home-insurance companies are offering it for free and waive the monthly fee for the first year. Is that a genuine benefit users should weigh, or a clever subscription-service hook?

There is lots of laudatory and flowery language associated with the marketing of this device, but solid technical details are scant, see “How Ting Works.” They state, “Ting pinpoints and identifies the unique signals generated by tiny electrical arcs, the precursors to imminent fire risks. These signals are incredibly small but are clearly visible thanks to Ting’s advanced detection technology.”

Other online postings say that Ting samples the AC line at 30 megasamples/second, looking for anomalies.
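
For illustration only, here’s a minimal sketch of the kind of per-cycle screening such a monitor might perform on a sampled waveform: track RMS and high-frequency band energy and flag statistical outliers. This is not Whisker Labs’ algorithm; the sample rate comes from the posting above, while the band edge and threshold are assumptions.

import numpy as np

FS = 30e6                     # 30 Msamples/s, per the posting above
F_LINE = 60.0                 # US line frequency
N = int(FS / F_LINE)          # samples per AC cycle

def cycle_features(v):
    # Per-cycle RMS plus high-frequency band energy, a crude proxy for
    # the broadband "hash" that arcing superimposes on the 60-Hz wave.
    rms, hf = [], []
    freqs = np.fft.rfftfreq(N, 1 / FS)
    for k in range(len(v) // N):
        cyc = v[k * N:(k + 1) * N]
        rms.append(np.sqrt(np.mean(cyc ** 2)))
        spec = np.abs(np.fft.rfft(cyc))
        hf.append(np.sum(spec[freqs > 100e3] ** 2))   # assumed >100 kHz band
    return np.array(rms), np.array(hf)

def flag_outliers(x, z_thresh=5.0):
    # Flag cycles deviating from the median by more than z_thresh sigmas.
    z = (x - np.median(x)) / (np.std(x) + 1e-12)
    return np.where(np.abs(z) > z_thresh)[0]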

Let’s face it: the real-world AC line looks nothing like the smoothly undulating textbook sine wave with a steady RMS value. Instead, Figure 3 shows some of the voltage-level variations that the vendor says Ting has captured.

Figure 3 The real-world AC line has voltage variation, spikes, surges, and dropouts. Source: F150 Lightning Forum

As for arcing, that’s more complicated than just a low or high-voltage assessment, as it produces RF emissions which can be captured and analyzed.

I was about to sign up to try one out myself but realized the pointlessness of that. First, a sample of one doesn’t prove much. Also, how could I “inject” known faults into the system (my house wiring) to evaluate it? That would be difficult, risky, foolish, and almost meaningless.

Consider the split supply phases

Instead, I looked around the web to see what others said, knowing that you can’t believe everything you read there. One electrician noted that it is only monitoring one side of the two split phases feeding the house, so there’s a significant coverage gap. Another one responded by saying that it was true, but most issues come across on the neutral wire that is shared by both phases.

Even Ting addressed this “one side” concern with a semi-technical response: “The signals that Ting is looking for can be detected throughout the home’s electrical system even though it is installed on a single 120V phase. Fundamentally, Ting is designed to detect the tiny electro-magnetic emissions associated with micro-arcing characteristics of potential electrical faults and does so at very high frequencies. At high frequencies, your home wiring acts like a communications network.”

They continued: “Since each phase shares a common neutral back at your main breaker panel, arcing signals from one phase can be detected by Ting even if it is on the opposite phase. Thus, each outlet in the home will see the signal no matter its location of origin to some degree. With its sensitive detector and powerful post-processing algorithms, Ting can separate the signal from the noise and detect if there is unusual electrical activity. So, you only need one Ting for your home.”

This response brought yet another online response: “monitoring the voltage of both sides of the split phase would be far more ideal. One of the more common types of electrical fires is a damaged or open neutral coming from the transformer. This could send one side of your split phase low and the other high frying equipment and starting fires. But if you’re only monitoring one side of the split phase, you will only see a high or low voltage and have no way of knowing if that is from a neutral issue or voltage sagging on the street.”
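
To see why two-leg visibility matters, here’s a toy classifier (not any vendor’s method; the thresholds are arbitrary assumptions): given the RMS voltage of both 120-V legs, a failing neutral shows up as the legs diverging, while a utility sag pulls both legs down together.

def classify_split_phase(v_a, v_b, nominal=120.0, tol=0.08):
    # Crude diagnosis from the RMS voltage of each 120-V leg;
    # the +/-8% window around nominal is an assumed value.
    lo, hi = nominal * (1 - tol), nominal * (1 + tol)
    if (v_a > hi and v_b < lo) or (v_a < lo and v_b > hi):
        return "legs diverging: suspect open or loose neutral"
    if v_a < lo and v_b < lo:
        return "both legs low: likely utility sag or brownout"
    if v_a > hi and v_b > hi:
        return "both legs high: likely utility overvoltage"
    if not (lo <= v_a <= hi) or not (lo <= v_b <= hi):
        return "single-leg deviation: check branch loading"
    return "normal"

print(classify_split_phase(138.0, 103.0))  # -> legs diverging: suspect open or loose neutral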

As for arcing, every house built since 1999 in the US has been required by code to use AFCI (arc fault circuit interrupter) outlets; those can stop an electrical fire in nearly all cases, not just report it. However, using a single Ting is less costly and presumably has some value for an older home that isn’t going to be renovated or updated to code.

How big is the problem?

Data on house fires is collected and analyzed by various organizations including the National Fire Protection Association (NFPA), individual insurance companies, and industry-insurance consortiums. Are house fires due to electrical faults a problem? The answer is that it depends on how you look at it.

Depending on who you ask and what you count, there are about 1.5 million fires each year—but many are outdoor barbeque or backyard wood-pile fires. The blog “Predict & Prevent: From Data to Practical Insight” from the Insurance Information Institute deals with electrical house fires and Ting in a generally favorable way (of course, you have to consider the blog’s source) with some interesting numbers: The 10 years from 2012 through 2021 saw reduced cooking, smoking, and heating fires; however, electrical fires saw an 11 percent increase over that same period, Figure 4. Fire ignitions with an undetermined cause also increased by 11 percent.  

Figure 4 The causes of house fires have changed in recent years; electrical fires have increased while others have decreased. Source: U.S. Fire Administration via the Insurance Information Institute

Specific hazards are also detailed, Figure 5:

Figure 5 For those fires whose source has been identified, connected devices and appliances are the source of about half while infrastructure wiring is at about one quarter. Source: Whisker Labs via Insurance Information Institute

The blog also points out that there are many misconceptions regarding electrical fires. It’s easy to assume that most fires are due to older home-wiring infrastructure. However, their data found that 50 percent of home electrical-fire hazards are due to failing or defective devices and appliances, with the other half attributed to home wiring and outlets.

Further, it seems obvious that older homes carry higher risk. That holds only if all other things are equal when weighing the effects of age and use on existing wiring infrastructure, and they rarely are. The data suggests the assumption is suspect once you factor in materials, build quality, and the standards and codes in force when the house was built.

Other implications

If you get this unit through an insurance company (free or semi-free), there’s yet another player in the story in addition to the homeowner and Whisker Labs. First, one poster claimed “Digging through the web pages I found each device sends 160 megabytes back to Ting every month…So that means you have to have a stable WiFi router to do the upload. As far as I know, the homeowner does not get a copy of the report uploaded to Ting, but the insurance company does.”

Further, there’s a clause in the agreement between the insurance company that supplied the unit and the homeowner. It says they “may also use the data for purposes of insurance underwriting, pricing, claims handling, and other insurance uses.” Will this information be used to increase your rates or, worse, cancel your home insurance over imperfect wiring?

It’s not easy to say whether the Ting project is a good or bad idea, as that assessment depends on many technical factors and personal preferences. One thing is clear: it may be very useful for collecting and analyzing “big data” across the wiring of millions of homes, AC-line performance, and the relationships between house specifics and electrical risks (hello, AI). However, things get tricky when it starts looking at microdata related to a single residence: that data can tell others more about your lifestyle than you would like them to know, or affect how the insurance company rates your house.

What’s your sense of this device and its technical validity?  What about larger-scale technical data-collection value? Finally, how do you feel about personal security and privacy implications?

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.


The post AC-Line Safety Monitor Brings Technical, Privacy Issues appeared first on EDN.

Latest issue of Semiconductor Today now available

Semiconductor today - Fri, 02/28/2025 - 19:12
For coverage of all the key business and technology developments in compound semiconductors and advanced silicon materials and devices over the last month...

Chiplets and Heterogeneous Integration: The Future of Semiconductor Design

ELE Times - Fri, 02/28/2025 - 13:57

As semiconductor scaling approaches fundamental limits, the industry is increasingly adopting chiplet-based architectures and heterogeneous integration to drive performance, power efficiency, and functionality. This shift is enabling new computing paradigms, from high-performance computing (HPC) to artificial intelligence (AI) accelerators and edge devices. This article explores the latest developments in chiplets, their role in modern semiconductor design, the challenges that lie ahead, and the technical innovations driving this revolution.

The Rise of Chiplet-Based Architectures

Traditional monolithic chip designs are facing bottlenecks due to escalating fabrication costs, yield issues, and power constraints. Chiplets offer a modular approach, enabling manufacturers to:

  • Improve Yield: Smaller dies are less likely to contain a killer defect, improving overall yield and lowering per-unit cost (see the yield-model sketch after this list).
  • Enhance Performance: Optimized chiplets for different functions allow greater efficiency and performance scaling.
  • Reduce Costs: Advanced nodes can be selectively used for performance-critical chiplets while other functions remain on mature nodes to balance cost and efficiency.
  • Enable Scalability: Chiplets allow seamless integration of different process nodes and functionalities, ensuring adaptability across multiple applications.
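
As a rough, purely illustrative sketch of the yield argument (a Poisson defect model with made-up defect density and die areas, not foundry data):

import math

def poisson_yield(area_mm2, d0_per_mm2):
    # Classic Poisson die-yield model: Y = exp(-A * D0).
    return math.exp(-area_mm2 * d0_per_mm2)

D0 = 0.001       # assumed defect density, defects/mm^2
A_MONO = 800.0   # one monolithic die, mm^2
A_CHIP = 200.0   # one of four equivalent chiplets, mm^2

# Silicon area fabricated per *good* product, assuming chiplets are
# tested before assembly (known-good die) and 99% yield per die attach:
area_mono = A_MONO / poisson_yield(A_MONO, D0)
area_chip = 4 * A_CHIP / poisson_yield(A_CHIP, D0) / (0.99 ** 4)

print(f"monolithic: {area_mono:.0f} mm^2 per good unit")   # ~1781 mm^2
print(f"4 chiplets: {area_chip:.0f} mm^2 per good unit")   # ~1017 mm^2

The economics come from discarding one bad chiplet instead of a whole monolithic die, not from any change in the defect density itself.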

The flexibility of chiplet-based designs is enabling complex computing architectures, where compute, memory, interconnect, and I/O functionalities are independently designed and integrated into a heterogeneous multi-die system.

Heterogeneous Integration: The Next Evolution in Semiconductor Design

Heterogeneous integration refers to the assembly of multiple dissimilar semiconductor components into a single package. This includes logic, memory, power management, RF, photonics, and sensors, combined to optimize system performance.

Key benefits of heterogeneous integration:

  1. Increased Performance Density – More transistors can be packed per unit area without the constraints of monolithic die sizes.
  2. Energy Efficiency – Improved power management through advanced interconnect technologies and proximity of critical functions.
  3. Customizable Architectures – Modular design allows for application-specific optimizations in AI, HPC, and embedded systems.
  4. Multi-Node Manufacturing – Different components can be fabricated using different technology nodes, enabling cost and performance trade-offs.

Key Technologies Enabling Chiplets and Heterogeneous Integration

1. Advanced Packaging Technologies

The success of chiplet integration depends on sophisticated packaging methodologies that ensure low-latency, high-bandwidth interconnects while maintaining power efficiency. The latest packaging technologies include:

  • 2.5D Integration: Uses an interposer (silicon or organic) to connect multiple chiplets, offering high-speed interconnects with reduced power consumption.
  • 3D Stacking: Enables vertical stacking of dies using Through-Silicon Vias (TSVs), achieving high interconnect density and bandwidth.
  • Fan-Out Wafer-Level Packaging (FOWLP): Enhances signal integrity by reducing interconnect length and improving thermal performance.
  • Wafer-to-Wafer and Die-to-Wafer Bonding: Enables ultra-dense 3D integration for logic-memory co-packaging and AI processors.

2. High-Speed Interconnects and Chiplet Standards

Efficient interconnects are critical for seamless communication between chiplets. Recent advancements include:

  • Universal Chiplet Interconnect Express (UCIe) – An industry-standard interface for connecting chiplets from different vendors with minimized latency.
  • Advanced Interface Bus (AIB) – Developed by Intel, enabling high-bandwidth chiplet communication for FPGA and AI accelerators.
  • Bunch of Wires (BoW) – A low-power interconnect standard optimized for edge computing and AI applications.
  • Silicon Photonics Interconnects – Optical interconnects enable ultra-high-speed data transfer between chiplets in HPC environments.

3. Power Delivery and Thermal Management

As chiplet architectures increase integration density, power and thermal constraints become critical challenges:

  • Advanced Power Distribution Networks (PDNs) optimize efficiency across chiplets, ensuring stable voltage regulation.
  • Thermal Interface Materials (TIMs) and liquid cooling solutions mitigate heat buildup in densely packed chiplet systems.
  • On-Package Voltage Regulation (OPVR) reduces power loss in multi-die systems and enhances dynamic power allocation.

Industry Adoption and Notable Implementations

AMD’s Chiplet Approach

AMD pioneered the chiplet strategy with its Zen architecture, integrating multiple core complex dies (CCDs) with an I/O die (IOD). The approach enhances yield and scalability while maintaining high performance.

Intel’s Heterogeneous Integration with Foveros

Intel’s Foveros 3D packaging allows high-performance logic stacking, demonstrated in products like the Meteor Lake processors, which integrate high-performance and power-efficient cores within a single package.

TSMC’s CoWoS and SoIC

TSMC’s Chip-on-Wafer-on-Substrate (CoWoS) and System on Integrated Chips (SoIC) technologies provide cutting-edge 2.5D and 3D integration solutions for AI accelerators and HPC applications.

NVIDIA’s Hopper Architecture

NVIDIA’s Hopper GPU integrates multiple HBM stacks and logic dies using TSMC’s CoWoS-S technology, demonstrating the potential of chiplet-based HPC solutions.

Challenges in Chiplet and Heterogeneous Integration

Despite the benefits, challenges remain:

  1. Interconnect Latency and Bandwidth – Efficient, low-latency interconnect solutions are required for high-speed data exchange between chiplets.
  2. Standardization Issues – Lack of universal standards complicates cross-vendor chiplet integration and interoperability.
  3. Design Complexity – Optimizing power, thermal efficiency, and routing in multi-die architectures requires advanced EDA (Electronic Design Automation) tools.
  4. Manufacturing Costs – While chiplets can reduce per-unit costs, the added complexity in packaging and interconnects can offset savings.
  5. Security and Reliability – Multi-vendor chiplet integration introduces security risks and potential failure points that require robust testing methodologies.

The Future of Chiplets and Heterogeneous Integration

The industry is rapidly evolving towards fully modular semiconductor designs, driven by:

  • AI and Machine Learning – Custom chiplets optimized for AI workloads are expected to dominate future architectures.
  • 3D Heterogeneous Computing – Next-generation chips will feature tightly integrated compute and memory stacks for high-speed processing.
  • Chiplet Ecosystem Growth – Collaboration among semiconductor giants is leading to open standards like UCIe for universal chiplet interoperability.
  • Quantum and Neuromorphic Computing – Emerging computing paradigms are leveraging chiplets for specialized, high-performance computation.
  • AI-Assisted Chiplet Design – Machine learning and AI-driven automation are revolutionizing semiconductor design, optimizing layouts for power and performance efficiency.

Conclusion

Chiplets and heterogeneous integration represent the next frontier in semiconductor design, overcoming the limitations of traditional monolithic scaling. With industry leaders like AMD, Intel, TSMC, and NVIDIA driving advancements, we are entering an era of unprecedented performance and efficiency in computing architectures. While challenges remain in standardization, interconnects, and thermal management, continued innovation promises a future where chiplets become the fundamental building blocks of next-generation processors, ushering in a new era of modular, high-performance computing.

The post Chiplets and Heterogeneous Integration: The Future of Semiconductor Design appeared first on ELE Times.

Arm’s AI pivot for the edge: Cortex-A320 CPU

EDN Network - Fri, 02/28/2025 - 13:55

With artificial intelligence (AI) at the edge moving from basic tasks like noise reduction and anomaly detection to more sophisticated use cases such as large models and AI agents, Arm has launched a new CPU core, the Cortex-A320, as part of the Armv9 architecture. Combined with Arm’s Ethos-U85 NPU, the Cortex-A320 enables generative and agentic AI use cases in Internet of Things (IoT) devices. EE Times’ Sally Ward-Foxton provides details of this AI-centric CPU upgrade while also highlighting key features like better memory access, KleidiAI, and software compatibility.

Read the full story at EDN’s sister publication, EE Times.


The post Arm’s AI pivot for the edge: Cortex-A320 CPU appeared first on EDN.

Graphene Electronics and Miniaturization: The Future of Nano-Scale Devices

ELE Times - Fri, 02/28/2025 - 13:50

The relentless drive toward miniaturization in electronics has led to a growing demand for materials that can sustain high performance at the nanoscale. Graphene, a two-dimensional allotrope of carbon, has emerged as a game-changer due to its exceptional electrical, thermal, and mechanical properties. This article explores the latest advancements in graphene-based electronics, focusing on its role in enabling ultra-miniaturized devices, challenges in fabrication, and future prospects.

Graphene’s Unique Properties for Electronics

Graphene’s exceptional properties make it an ideal candidate for miniaturized electronics:

  • High Electrical Conductivity: Graphene exhibits carrier mobilities exceeding 200,000 cm²/V·s, significantly surpassing silicon, due to its unique Dirac cone band structure allowing ballistic transport over micrometer scales (a back-of-envelope sheet-resistance estimate follows this list).
  • Atomic Thickness: At just one atom thick (0.34 nm), graphene enables extreme device miniaturization, significantly reducing the short-channel effects encountered in silicon transistors.
  • High Thermal Conductivity: With values up to 5000 W/m·K, graphene efficiently dissipates heat, crucial for high-performance electronics, especially in applications requiring ultra-high power density.
  • Mechanical Strength: Graphene is over 200 times stronger than steel, ensuring durability in nano-scale applications and enabling mechanically flexible devices.
  • Quantum Effects: Graphene’s electronic properties are governed by relativistic Dirac fermions, enabling high-speed transistors, valleytronic devices, and novel quantum computing architectures.
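
To put that mobility figure in circuit terms, here is a back-of-envelope sheet-resistance estimate. The carrier density is an assumed typical value, and the 200,000 cm²/V·s figure applies to suspended graphene; on a substrate, mobility is far lower.

Q = 1.602e-19   # electron charge, C

def sheet_resistance(mobility_cm2_vs, n_per_cm2):
    # Rs = 1 / (q * n * mu), in ohms per square.
    return 1.0 / (Q * n_per_cm2 * mobility_cm2_vs)

n = 1e12  # assumed carrier density, cm^-2
print(sheet_resistance(10_000, n))    # ~625 ohm/sq (substrate-limited mobility)
print(sheet_resistance(200_000, n))   # ~31 ohm/sq (near-ideal, suspended case)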

Graphene in Transistors and Logic Devices

Graphene Field-Effect Transistors (GFETs)

Graphene-based transistors, or GFETs, are at the forefront of miniaturization due to their ultra-high carrier mobility and near-ballistic transport.

  • Recent advances include dual-gated GFETs, which enhance carrier modulation and energy efficiency by reducing contact resistance and improving subthreshold slope.
  • Researchers at MIT have demonstrated graphene-based sub-5nm transistors, showcasing potential replacements for conventional MOSFETs and FinFETs.
  • The integration of graphene with high-k dielectrics such as HfO₂ has shown improved gate control and reduced leakage current.

Graphene Nano-Ribbons (GNRs) for Bandgap Engineering

One challenge with graphene is its lack of an intrinsic bandgap, making it difficult to use in digital logic. Narrowing graphene into nano-ribbons (GNRs) introduces a bandgap, allowing for graphene-based semiconductors.

  • IBM has developed 5nm GNR transistors, which exhibit superior switching behavior compared to conventional silicon devices.
  • Recent studies on doping GNRs with boron and nitrogen have further improved bandgap tunability and transistor performance.
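
For a rough feel for the scaling these results exploit: first-order models put the GNR gap inversely proportional to ribbon width. The coefficient below is an assumed order-of-magnitude value, since reported values vary widely with edge type and ribbon family.

ALPHA_EV_NM = 1.0   # assumed coefficient, eV*nm; reported values span roughly 0.2-1.5

def gnr_bandgap_ev(width_nm):
    # First-order inverse-width scaling of the GNR bandgap.
    return ALPHA_EV_NM / width_nm

for w in (2, 5, 10, 20):
    print(f"w = {w:2d} nm  ->  Eg ~ {gnr_bandgap_ev(w):.2f} eV")
# A ~5-nm ribbon lands near 0.2 eV: enough to switch, but well below silicon's 1.1 eV.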

Graphene in Memory and Storage Devices

Graphene’s potential in memory applications stems from its ability to form ultra-thin, high-capacity storage solutions with fast switching characteristics.

Graphene-Based Resistive RAM (RRAM)

Graphene oxide (GO)-based RRAM enables high-speed, low-power memory.

  • Samsung and research institutions have demonstrated graphene-based non-volatile memory capable of replacing NAND flash storage with endurance exceeding 10¹² write cycles.

Graphene Supercapacitors for Fast-Charging Memory

Graphene supercapacitors provide ultra-fast charging and discharging, making them ideal for next-generation RAM and hybrid storage solutions.

  • The incorporation of graphene aerogels and MXenes in supercapacitors has drastically improved capacitance and retention characteristics.

Graphene in Flexible and Wearable Electronics

The push toward wearable and bendable electronics demands materials that maintain high conductivity while being flexible. Graphene’s high mechanical flexibility and optical transparency make it ideal for:

  • Flexible Displays: Graphene-based OLEDs and micro-LEDs enable ultra-thin, foldable screens.
  • Wearable Sensors: Graphene-based biosensors detect physiological changes in real-time, with high sensitivity and selectivity.
  • Smart Textiles: Integrated graphene circuits enable e-textiles for healthcare monitoring and human-machine interface applications.

Challenges in Graphene Electronics

Despite its potential, graphene electronics face challenges:

  1. Scalability: Large-area, defect-free graphene synthesis remains difficult. Current CVD processes often introduce grain boundaries affecting electron transport.
  2. Bandgap Engineering: Lack of a natural bandgap limits its application in digital logic. Research into graphene bilayers and heterostructures aims to address this.
  3. Integration with CMOS: Seamless integration into existing silicon-based processes is challenging. Efforts in 2D material stacking with TMDs like MoS₂ show promise.
  4. Fabrication Costs: High-quality graphene production methods such as CVD (Chemical Vapor Deposition) and mechanical exfoliation are expensive and require optimization.

Recent Breakthroughs and Solutions

  • Graphene-Silicon Hybrid Chips: Researchers at the University of Manchester have demonstrated graphene-silicon hybrid devices, improving compatibility with existing chip technologies.
  • Graphene-Doped 2D Materials: Heterostructures with h-BN (hexagonal boron nitride) and MoS₂ (molybdenum disulfide) provide tunable electronic properties and enhanced stability.
  • AI-Assisted Material Design: Machine learning models are now accelerating the discovery of optimal graphene-based transistor architectures.
  • Twistronics: The controlled twisting of graphene bilayers at specific angles (e.g., the magic angle ~1.1°) has enabled the discovery of superconducting states, opening doors for quantum computing applications.

The Future of Graphene in Electronics

The integration of graphene into commercial electronics is closer than ever. Major developments include:

  • 5G and 6G Communications: Graphene antennas and RF components enable ultra-fast wireless networks with reduced energy consumption.
  • Neuromorphic Computing: Graphene’s quantum properties contribute to brain-inspired computing architectures, with memristive behavior suitable for AI applications.
  • Quantum Electronics: Graphene-based qubits and topological insulators are being explored for scalable quantum computing architectures.
  • Spintronics: Graphene’s spin-orbit interactions are being leveraged for the next generation of low-power spintronic devices.

Conclusion

Graphene electronics is pushing the boundaries of miniaturization, promising a future of ultra-small, high-performance devices. While challenges remain in fabrication and integration, ongoing research and industry collaborations are accelerating progress. With continued advancements in materials engineering, device physics, and quantum mechanics, graphene may soon replace silicon as the foundation of next-generation nanoelectronics.

The post Graphene Electronics and Miniaturization: The Future of Nano-Scale Devices appeared first on ELE Times.

🌸 Visit the exhibition of works by students and teachers of the Eastern European branch of the Ikenobo school

Новини - Fri, 02/28/2025 - 12:41

From 3 to 8 March, visit the Ukrainian-Japanese Center and draw inspiration from the works of students and teachers of the Eastern European branch of the Ikenobo school.

🕔 Digest of current events and competitions from the Academic Mobility Office

Новини - Fri, 02/28/2025 - 10:00

The Academic Mobility Office regularly publishes academic mobility opportunities for students and faculty. Follow the announcements on the office’s website and in its Telegram channel.

VNA enables fast, accurate RF measurements

EDN Network - Thu, 02/27/2025 - 19:58

With high measurement speed and stability, the R&S ZNB3000 vector network analyzer (VNA) supports large-scale RF component production. Its PCB-based frontend minimizes thermal drift, enabling reliable measurements for days without recalibration. The analyzer is also useful in RF labs.

The ZNB3000 is available with two or four ports and covers frequency ranges of 9 kHz to 4.5 GHz, 9 GHz, 20 GHz, and 26.5 GHz. R&S states that it offers the highest dynamic range and output power in its class, achieving up to 150 dB of dynamic range with trace noise below 0.0015 dB RMS and providing +11 dBm output power at 26.5 GHz. Further, the VNA completes a 1-MHz to 26.5-GHz frequency sweep with 1601 points, 500-kHz IF bandwidth, and two-port error correction in 21.2 ms.
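
As a back-of-envelope sanity check on that sweep figure (ideal acquisition time only; real sweeps add settling, retrace, and error-correction overhead):

points = 1601
if_bw_hz = 500e3
sweep_ms = 21.2

ideal_ms = points / if_bw_hz * 1e3                  # ~3.2 ms of pure acquisition
overhead_us = (sweep_ms - ideal_ms) / points * 1e6  # everything else, per point
print(f"ideal acquisition time: {ideal_ms:.1f} ms")
print(f"implied per-point overhead: ~{overhead_us:.0f} us")  # ~11 us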

Understanding measurement uncertainty under test conditions is essential. Previously, calculating uncertainty for DUT S-parameters was only possible in a metrology lab. With the R&S ZNB3-K50(P) option, developed with METAS, the R&S ZNB3000 now calculates and displays uncertainty bands alongside measured S-parameters.

The ZNB3000 VNA is available now. To request pricing information, use the link to the product page below.

ZNB3000 product page 

Rohde & Schwarz 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post VNA enables fast, accurate RF measurements appeared first on EDN.

Multiprotocol SoCs ease IoT integration

EDN Network - Thu, 02/27/2025 - 19:58

Silicon Labs’ MG26 family of wireless SoCs enables mesh IoT connectivity through Matter, OpenThread, and Zigbee protocols. By supporting concurrent multiprotocol capabilities, the MG26 chips simplify the integration of smart home and building devices—such as LED lighting, switches, sensors, and locks—into both Matter and Zigbee networks simultaneously.

The MG26 SoCs offer up to 3 MB of flash and 512 KB of RAM, doubling the memory of other Silicon Labs multiprotocol devices. Powered by an Arm Cortex-M33 CPU with dedicated cores for radio and security subsystems, these devices offload tasks from the main core, optimizing performance for customer applications. Embedded AI/ML hardware acceleration enables up to 8x faster processing of machine learning algorithms, consuming just 1/6th the power compared to running them on the CPU.

Silicon Labs’ Secure Vault and Arm TrustZone meet all Matter security requirements. Secure OTA firmware updates and secure boot protect against malicious software installation and enable vulnerability patching. Through Silicon Labs’ Custom Part Manufacturing Service, MG26 devices can be programmed with customer-specific Matter device attestation certificates, security keys, and other features during fabrication.

The MG26 family of wireless SoCs is now generally available through Silicon Labs and its distribution partners.

MG26 series product page

Silicon Labs 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Multiprotocol SoCs ease IoT integration appeared first on EDN.

MPUs enhance HMI application performance

EDN Network - Thu, 02/27/2025 - 19:58

Microchip’s SAMA7D65 MPUs, based on an Arm Cortex-A7 core running up to 1 GHz, integrate a 2D GPU, LVDS, and MIPI DSI. These features enhance data transmission and processing for improved graphics performance, optimizing HMI applications in industrial, medical, and transportation markets.

The SAMA7D65 microprocessors feature dual Gigabit Ethernet MACs with Time Sensitive Networking (TSN) support, ensuring precise synchronization and low-latency communication for industrial and building automation HMI systems. This enables seamless data exchange and deterministic networking, essential for responsive user interfaces.

Microchip also offers a system-in-package (SiP) variant of the SAMA7D65 MPU, the SAMA7D65D2G, which integrates a 2-Gb DDR3L DRAM for high-speed synchronization. Its low-voltage design reduces power consumption and optimizes energy efficiency. SiPs streamline development by addressing high-speed memory interface challenges and simplifying memory supply, accelerating time to market. Additionally, a system-on-module (SOM) variant is available for early access.

SAMA7D65 MPUs are available now in production quantities.

SAMA7D65 product page

Microchip Technology

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post MPUs enhance HMI application performance appeared first on EDN.

GNSS receivers achieve precise positioning

EDN Network - Thu, 02/27/2025 - 19:58

TeseoVI GNSS receivers from STMicroelectronics integrate multi-constellation and quad-band signal processing on a single die. This series of ICs and modules provides centimeter-level accuracy for high-volume automotive and industrial applications, such as ADAS, autonomous driving, asset trackers, and mobile robots for home deliveries.

Three standalone chips—the STA8600A, STA8610A, and STA9200MA—include dual independent Arm Cortex-M7 processing cores for local control of IC functions, along with ST’s phase-change memory to remove external memory needs. The STA9200MA runs dual cores in lockstep, providing hardware redundancy that meets ISO26262 ASIL-B functional safety requirements.

The TeseoVI family also includes two GNSS automotive modules, the VIC6A (16×12 mm) and ELE6A (17×22 mm), which integrate the chipset along with key external components—TCXO, RTC, SAW filter, and RF frontend—into a larger package with fewer pins and an EMI shield. These modules simplify development by eliminating the need for RF path design.

Samples of the TeseoVI GNSS receivers are available on request. Read the blogpost here.

TeseoVI product page 

STMicroelectronics

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post GNSS receivers achieve precise positioning appeared first on EDN.

GaN, DPD tech improve 5G RU energy efficiency

EDN Network - Thu, 02/27/2025 - 19:57

MaxLinear and RFHIC have collaborated on a power amp solution for high-power macro cell radio units (RUs) that lowers power consumption, weight, and volume. The setup combines RFHIC’s GaN power amplifiers with MaxLinear’s digital predistortion (DPD) technology running on the Sierra radio SoC. The companies will showcase the solution at next week’s Mobile World Congress 2025.

MaxLinear’s DPD technology (MaxLIN) and Sierra radio chip, combined with RFHIC’s ID19801D GaN power amplifier and SDM19007-30H drive amplifier, achieve 55.2% line-up power efficiency with ACLR < -61 dBc and EVM < 3% at 49.6 dBm (91 W). The setup operates in the PCS band (1930–1995 MHz) with 2×NR10MHz carriers.
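
For readers new to DPD, below is a minimal, generic memoryless-polynomial predistorter using the standard indirect-learning least-squares fit. It is not MaxLinear’s MaxLIN, whose internals aren’t public; the PA model and polynomial order are illustrative.

import numpy as np

def basis(x, order=7):
    # Odd-order terms x*|x|^(k-1), k = 1, 3, 5, 7: a memoryless polynomial.
    return np.column_stack([x * np.abs(x) ** (k - 1)
                            for k in range(1, order + 1, 2)])

def fit_postinverse(pa_in, pa_out):
    # Indirect learning: fit a polynomial mapping the (gain-normalized)
    # PA output back to its input, then reuse it as the predistorter.
    g = np.mean(np.abs(pa_in)) / np.mean(np.abs(pa_out))
    coeffs, *_ = np.linalg.lstsq(basis(pa_out * g), pa_in, rcond=None)
    return coeffs

def predistort(x, coeffs):
    return basis(x) @ coeffs

# Toy PA with third-order gain compression (real-valued for brevity;
# a production DPD works on complex baseband and adds memory terms):
pa = lambda x: x - 0.15 * x * np.abs(x) ** 2
x = 0.9 * (2 * np.random.rand(10_000) - 1)
c = fit_postinverse(x, pa(x))
print(np.std(pa(x) - x), np.std(pa(predistort(x, c)) - x))
# The distortion residual drops sharply once the predistorter is applied.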

The Sierra radio SoC supports all major RU applications, including conventional macro, massive MIMO, pico, and all-in-one small cells. It integrates an RF transceiver supporting up to 8 transmitters, digital frontend with MaxLIN, low-PHY baseband processor, O-RAN split 7.2x fronthaul interface, and Arm Cortex-A53 quad-core CPU subsystem.

RFHIC’s ID series GaN power transistors operate from 1.8 GHz to 4.2 GHz, delivering saturated power levels of 410 W, 460 W, 700 W, and 800 W. The SDM series two-stage GaN hybrid drive amplifiers, internally matched to 50 Ω, cover 1.8 GHz to 4.1 GHz with output power options of 40 W, 60 W, and 80 W.

MaxLinear

RFHIC

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post GaN, DPD tech improve 5G RU energy efficiency appeared first on EDN.

1 A, 20V PWM DAC current source with tracking preregulator

EDN Network - Thu, 02/27/2025 - 19:32

This design idea reprises another “1A, 20V, PWM controlled current source.” Like the earlier circuit, this design integrates an LM3x7 adjustable regulator with a PWM DAC to make a programmable 20 V, 1 A current source. It profits from the accurate internal voltage reference and the overload and thermal protection features of this time-proven Bob Pease masterpiece!

However, unlike the earlier design idea that requires a floating, fixed-output 24-VDC power adapter, this sequel incorporates a ground-referred boost preregulator that can run from a 5-V regulated or unregulated supply rail. The previous linear design has limited power efficiency that drops to single-digit percentages when driving low-voltage loads. The preregulator in this version fixes that by tracking the input-output voltage differential across the LM3x7, maintaining it at a constant 3 V. This provides adequate dropout-suppressing headroom for the LM3x7 while minimizing wasted power and unnecessary heat.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Here’s how it works. LM317 fans will recognize Figure 1 as the traditional LM317 constant current source topology that maintains Iout = Vadj/Rs by forcing the ADJ pin to be 1.25 V more negative (a.k.a. less positive) than the OUT pin. It has worked great for 50 years, but of course the only way you can vary Iout is by changing Rs.

Figure 1 A classic LM317 constant current source where: Iout = Vadj/Rs = 1.25 V/Rs.

Figure 2 shows another (easier) way to make Iout programmable. The circuit enables control of ampere-scale Iout with only milliamps of Ic control current. 

Figure 2 A modification that makes the current source variable where: Iout = (Vadj – IcRc)/Rs – Ic.

Figure 3 shows this idea fleshed out and put to practical use. Note that Rs = R4 and Rc = R5.

Figure 3 U2 current source programmed by U1 PWM DAC and powered by U3 tracking preregulator.

Figure 2’s Ic control current is provided by the Q2/Q3 complementary pair. Since Q3 provides tempco compensation for Q2, it should be closely thermally coupled with its partner. Q4 provides some nonlinearity compensation, applying curvature correction to Q2’s Ic control current generation. The daisy chain of three 1N4001 diodes provides bias for Q2 and Q4.

The PWM input frequency is assumed to be 10 kHz or thereabouts. Ripple filtering is the purpose of C1 and C2 and gets some help from an analog subtraction cancellation trick first described in “Cancel PWM DAC ripple with analog subtraction.”

About that tracking preregulator thing: Control of U3 to maintain the 3 V of headroom required to hold U2 safe from dropout relies on Q1 acting as a simple differential amplifier. Q1 drives U3’s Vfb voltage feedback pin to maintain Vfb = 1.245 V. Therefore (if Vbe = Q1’s base-emitter bias, typically ~0.6 V for Ie = ~500 µA)

Vfb/R7 = ((U2in – U2out) – Vbe)/R6
1.245 V = (U2in – U2out – 0.6 V)/(5100/2700)
U2in – U2out = (5100/2700) × 1.245 V + 0.6 V ≈ 3 V

 Note, if you want to use this circuit with a different preregulator with a different Vfb, just adjust:

R7 = R6 × Vfb/2.4 V
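
A quick numeric check of that arithmetic, using the Figure 3 resistor values and the ~0.6-V Vbe estimate above:

VFB, VBE = 1.245, 0.6      # volts
R6, R7 = 5100.0, 2700.0    # ohms, per Figure 3

headroom = (R6 / R7) * VFB + VBE
print(f"U2in - U2out = {headroom:.2f} V")   # ~2.95 V, i.e., the ~3 V target

# Re-solving R7 for a preregulator with a different feedback voltage:
for vfb in (0.6, 0.8, 1.245):
    print(f"Vfb = {vfb:.3f} V -> R7 = {R6 * vfb / 2.4:.0f} ohms")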

Finally, a note about overvoltage. Current sources have the potential (no pun!) for output voltage to soar to damaging levels (destructive of U3’s internal switch and downstream circuitry too) if deprived of a proper load. R11 and R12 protect against this by utilizing U3’s built-in OVP feature to limit the maximum open-circuit voltage to about 30 V if the load is lost.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.



The post 1 A, 20V PWM DAC current source with tracking preregulator appeared first on EDN.

The Apple iPhone 16e: No more fiscally friendly “SE” for thee (or me)

EDN Network - Thu, 02/27/2025 - 19:10
A new “entry level” iPhone

Truthfully, I originally didn’t plan on covering the new iPhone 16e, Apple’s latest “entry-level” phone preceded by three generations’ worth of iPhone SE devices.

I knew a fourth-generation offering was coming (and sooner vs later), since European regulations had compelled Apple to phase out the SEs’ proprietary Lightning connector in favor of industry-standard USB-C. The iPhone SE 3, announced in 2022, had already been discontinued at the end of last year in Europe, in fact, along with the similarly Lightning-equipped iPhone 14, both subsequently also pulled from Apple’s product line across the rest of the world coincident with the iPhone 16e’s unveiling on February 19, 2025. Considering the heavy company-wide Apple Intelligence push, the iPhone SE 3 was also encumbered by its sub-par processor (A15 Bionic) and system memory allocation (4 GBytes), both factors suggesting the sooner-vs-later appearance of a replacement.

But how exciting could a new “entry-level” (translation: cost-optimized trailing-edge feature set) smartphone be, after all? Instead, I was planning on covering Amazon’s unveiling of its new AI-enhanced (and Anthropic-leveraging, among others) Alexa+ service, which happened earlier today as I write these words on the evening of February 26. That said, as Amazon’s launch event date drew near, I began hearing that no new hardware would be unveiled, just the upgraded service (in spite of the fact that Amazon referred to it as a “Devices & Services event”), and that what we now know of as Alexa+ would only be beta-demoed, not actually released until weeks or months later. Those rumors unfortunately all panned out; initial user upgrades won’t start until sometime in March, more broadly rolling out over an unspecified-duration period of “time”.

What those in attendance in New York (no virtual livestream was offered) saw were only tightly scripted, albeit reportedly impressive (when they worked, that is, which wasn’t always the case), demos. As we engineers know well, translating from curated demos to real-world diverse usage experiences rarely goes without a hitch or few. Then there were the indications of longstanding (and ongoing) struggles with AI “hallucinations”, another big-time knock against the technology. Add in the fact that Alexa+ won’t run on any of the numerous, albeit all geriatric, Amazon devices in my abode, and I suspect that, at least for a while, I’ll be putting my coverage plans on hold.

Pricing deviations from prior generations

Back to the iPhone 16e then, which I’m happy to report ended up being far more interesting than I’d anticipated, both for Apple’s entry-level and broader smartphone product line and more generally for the company’s fuller hardware portfolio. Let’s begin with the name. “SE” most typically in the industry refers to “Special Edition”, I’ve found, but Apple has generally avoided clarifying the meaning here, aside from a brief explanation that Phil Schiller, Apple’s then-head of Worldwide Product Marketing (and now Apple Fellow), gave a reporter back in 2016 at the first-generation iPhone SE unveiling.

And in contrast to the typical Special Edition reputation, which comes with an associated price tag uptick, Apple’s various iPhone SE generations were historically priced lower than the mainstream and high-end iPhone offerings that accompanied them in the product line at any point in time. To accomplish this, they were derivations of prior-generation mainstream iPhones, for which development costs had already been amortized. The iPhone SE 3, for example, was form factor-reminiscent of the 2017-era, LCD-based iPhone 8, albeit with upgraded internals akin to those in the 2021-era iPhone 13.

The iPhone 16e marks the end of the SE generational cadence, at least for now. So, what does “e” stand for? Once again, Apple isn’t saying. I’m going with “economy” for now, although reality doesn’t exactly line up with that definitional expectation. The starting price for the iPhone SE 3 at introduction was $429. In contrast, the iPhone 16e begins at $599 and goes up from there, depending on how much internal storage you need. Not only did Apple ratchet up the price tag, as it’s more broadly done in recent years, it also broke through the perception-significant $499 barrier, which frankly shocked me. In contrast, if you’ll indulge a bit of snark, I chuckled when I noticed Google’s response to Apple’s news: a Pixel 8a price cut to $399.

Upgrades

That said, RAM jumps from 4 GBytes on the iPhone SE 3 to (reportedly: as usual, Apple didn’t reveal the amount) 8 GBytes. The iPhone SE 3’s storage started at 64 GBytes; now it’s 128 GBytes minimum. The 4.7” diagonal LCD has been superseded by a 6.1” OLED; more generally, Apple no longer sells a single sub-6” smartphone. And the front and rear cameras are both notably resolution-upgraded from those in the iPhone SE. The front sensor array also now supports TrueDepth for (among other things) FaceID unlock, replacing the legacy Touch ID fingerprint sensor built into the no-longer-present Home button, and the rear one, although still only one, includes 2x optical zoom support.

Turning now to the internals, there are three particularly notable (IMHO) evolutions that I’ll focus on. Unsurprisingly, the application processor was upgraded for the Apple Intelligence era, from the aforementioned A15 Bionic to the A18. But this version of the newer SoC is different than that in the iPhone 16, only enabling 4 GPU cores versus 5 on the mainstream iPhone 16 (and 6 on the iPhone 16 Pro). Again, as I mentioned before, I suspect that all three A18 variants are sourced from the same sliver of silicon, with the iPhone 16e’s version detuned to maximize usable wafer yield. Similarly, there may also be clock speed variations, another spec that Apple unfortunately doesn’t make public, between the three A18 versions.

In-house 5G chip

More significant to me is that this smartphone marks the initial unveil of Apple’s first internally developed LTE-plus-5G cellular subsystem. A quick history lesson: as regular readers already know, the company generally prefers to be vertically integrated versus external supplier-dependent, when doing so makes sense. One notable example was the transition from Intel x86 to Apple Silicon Arm-based computer chipsets that began in 2020. Notable exceptions (at least to date) to this rule, conversely, include volatile (DRAM) and nonvolatile (flash) memory, and image sensors. As with Intel in CPUs, Apple has long had a “complicated” (among other words) relationship with Qualcomm for cellular chipsets. Specifically, back in April 2019, the two companies agreed to drop all pending litigation between them, shortly after Qualcomm had won a patent infringement lawsuit in a dispute that had begun two years earlier. Three months later, Apple dropped $1B to buy the bulk of Intel’s (small world, eh?) cellular modem business.

Six years later, the premiere of the C1 cellular modem marks the fruits (Apple? Fruit? Get it?) of the company’s longstanding labors. Initial testing results on pre-release devices are encouraging from performance and network-compatibility standpoints, and Apple’s expertise in power consumption coupled with the tight coupling potential with other internally developed silicon subsystems, operating systems and applications are also promising. That said, this initial offering is absent support for ultra-high-speed—albeit range-restrictive, interference-prone, and coverage-limited—mmWave, i.e., ultrawideband (UWB) 5G. For that matter, speaking of wireless technologies, there’s no short-range UWB support for AirTags and the like in the iPhone 16e, either.

Whose modem—Apple’s own, Qualcomm’s, or a combination—will the company be using in its next-generation mainstream and high-end iPhone 17 offerings due out later this year? Longer term, will Apple integrate the cellular modem—at minimum, the digital-centric portions of it—into its application processors, minimally at the common-package or perhaps even the common-die level? And what else does the company have planned for its newfound internally developed technology; cellular service-augmented laptops, perhaps? Only time will tell. Apple is rumored to also be developing its own Wi-Fi transceiver silicon, with the aspiration of supplanting today’s Broadcom-supplied devices in the future.

Wireless charging support

Speaking of wireless—and cellular modems—let’s close out with a mention of wireless charging support. The iPhone 16e still has it. But in a first since the company initially rolled out its MagSafe-branded wireless charging capabilities with the iPhone 12 series in October 2020, there are no embedded magnets this time around (or in future devices as well?).

Initial speculation suggested that perhaps they got dropped because they might functionally conflict with the C1 cellular modem, a rumor that Apple promptly squashed. My guess, instead, is that this was a straightforward bill-of-materials cost reduction move on the company’s part, perhaps coupled with aspirations toward system weight and thickness reductions, and maybe even a desire to otherwise devote the available internal cavity volume for expanded battery capacity and the like. After all, as I’ve mentioned before, anyone using a case on their phone needs to pick a magnet-inclusive case option anyway, regardless of whether magnets are already embedded in the device. That all said, I’m still struck by the atypical-for-Apple backstep the omission of magnets represents, not to mention the Android-reminiscent aspect of it.

Future announcements?

The iPhone 16e isn’t the only announcement that Apple has made so far in 2025. Preceding it were, for example:

And what might be coming down the pike? Well, with today’s heavy discounts on current offerings as one possible indication of the looming next-generation queue’s contents, there’s likely to be:

  • An M4 upgrade to the 13” and 15” MacBook Air, and
  • An Apple Intelligence-supportive hardware update to the baseline iPad

Further down the road, I’m guessing we’ll also see:

You’ll note that I mindfully omitted a Vision Pro upgrade from the 2025 wishlist 😉 Stay tuned for more press release-based unveilings to come later this spring, the yearly announcement-rich WWDC this summer, and the company’s traditional yearly smartphone and computer family generational-upgrade events this fall. I’ll of course cover the particularly notable stuff here in the blog. And for now, I welcome your thoughts on today’s coverage in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.



The post The Apple iPhone 16e: No more fiscally friendly “SE” for thee (or me) appeared first on EDN.

Former ams Osram CEO Alexander Everke to succeed Kim Schindelhauer as chairman of Aixtron’s Supervisory Board

Semiconductor today - Thu, 02/27/2025 - 18:15
Deposition equipment maker Aixtron SE in Herzogenrath, near Aachen, Germany has announced Alexander Everke, who has been a member of its Supervisory Board since May 2024, as the successor to the Supervisory Board chairmanship. Existing chairman Kim Schindelhauer will resign from the position at the conclusion of the Annual General Meeting on 15 May, when Aixtron will also propose the election of Ingo Bank as a new member of the Supervisory Board...


Subscribe to Кафедра Електронної Інженерії aggregator