Feed aggregator

Redefining Edge Computing: How the STM32V8 18nm Node Outperforms Legacy 40nm MCUs

ELE Times - 6 hours 44 min ago

STMicroelectronics held a virtual media briefing on November 17, 2025, hosted by Patrick Aidoune, General Manager of ST's General Purpose MCU Division. The briefing was held ahead of the company's flagship event, the STM32 Summit, where it launched the STM32V8, a new generation of STM32 microcontrollers.

STMicroelectronics recently introduced its new-generation microcontroller, the STM32V8, within the STM32 family. Built on an innovative 18nm process with FD-SOI and embedded phase-change memory (PCM), this microcontroller is the first of its kind in the world: the first sub-20nm process MCU to combine FD-SOI with embedded PCM.

FD-SOI Technology

FD-SOI is a silicon technology, co-developed by ST, that has driven innovation in aerospace and automotive applications. The 18nm process, co-developed with Samsung Foundry, provides a cost-competitive leap in both performance and power consumption.

FD-SOI technology provides strong robustness against ionising particles and reliability in harsh operating environments, making it particularly suitable for the intense radiation exposure found in Earth-orbit systems. FD-SOI also helps reduce static power consumption and allows operation from a lower supply voltage, while withstanding harsh industrial environments as well.

Key Features

The STM32V8's Arm Cortex-M85 core, combined with the 18nm process, delivers clock speeds of up to 800MHz, making it the most powerful STM32 ever shipped. It also embeds up to 4 Mbytes of user memory in a dual-bank configuration, allowing bank swapping for seamless code updates.

Keeping developers' needs in mind, the STM32V8 provides more compute headroom, stronger security, and improved efficiency. Compared with its 40nm process-node predecessors using the same technologies, the STM32V8 brings improved performance, higher density, and better power efficiency.

Industrial Applications

This new microcontroller is a multipurpose system to benefit several industries:

  • Factory Automation and Robotics
  • Audio Applications
  • Smart Cities and Buildings
  • Energy Management Systems
  • Healthcare and Biosensing
  • Transportation (e-bikes)

Achievements

ST's new microcontroller has been selected by SpaceX for the high-speed connectivity system of its Starlink satellite network.

“The successful deployment of the Starlink mini laser system in space, which uses ST’s STM32V8 microcontroller, marks a significant milestone in advancing high-speed connectivity across the Starlink network. The STM32V8’s high computing performance and integration of large embedded memory and digital features were critical in meeting our demanding real-time processing requirements, while providing a higher level of reliability and robustness to the Low Earth Orbit environment, thanks to the 18nm FD-SOI technology. We look forward to integrating the STM32V8 into other products and leveraging its capabilities for next-generation advanced applications,” said Michael Nicolls, Vice President, Starlink Engineering at SpaceX.

The STM32V8, like its predecessors, is expected to draw significant benefit from ST's edge AI ecosystem, which continues to expand. Currently, the STM32V8 is in early access for selected customers, with availability for key OEMs from the first quarter of 2026 and broader availability to follow.

Apart from unveiling the new generation microcontroller, ST also announced the expansion of its STM32 AI Model Zoo, which is part of the comprehensive ST Edge AI Suite of tools. The STM32 AI Model Zoo has more than 140 models from 60 model families for vision, audio, and sensing AI applications at the edge, making it the largest MCU-optimised library of its kind.

The AI Model Zoo has been designed with the requirements of both data scientists and embedded systems engineers in mind: a model must be accurate enough to be useful while also fitting within energy and memory constraints.

The STM32 AI Model Zoo is the richest in the industry: it offers not only multiple models but also scripts to easily retrain models, evaluate accuracy, and deploy on boards. ST has also introduced native support for PyTorch models, complementing existing support for the TensorFlow and Keras AI frameworks and the LiteRT and ONNX formats, and giving developers additional flexibility in their development workflow. ST is also introducing more than 30 new families of models that can use the same deployment pipeline. Many of these models have already been quantised and pruned, meaning they offer significant memory-size and inference-time optimisations while preserving accuracy.
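To make "quantised and pruned" concrete, here is a generic PyTorch sketch of post-training pruning and dynamic quantisation. It is illustrative only: the tiny model and the 50% sparsity figure are arbitrary assumptions, and this is not ST's Model Zoo pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in for a small audio/sensing classifier (hypothetical architecture).
model = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 10),
)

# Prune: zero out the 50% smallest-magnitude weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the sparsity into the weights

# Quantise: store Linear weights as int8 for smaller memory and faster inference.
quantised = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantised(torch.randn(1, 64)).shape)  # torch.Size([1, 10])
```

Pruning shrinks the effective parameter count, while quantisation cuts each remaining weight from 32 bits to 8; together these are where the memory-size and inference-time savings described above come from.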

Additionally, ST announced the release of STM32 Sidekick, its new AI agent on the ST Community, available 24/7. The agent is trained on official STM32 documentation (datasheets, reference manuals, user manuals, application notes, wiki entries, and community knowledge base articles) to help users locate relevant technical data, obtain concise summaries of complex topics, and discover insights and documents. Alongside it, ST announced the STM32WL3R, a version of its STM32WL3 tailored for remote control applications supporting the 315 MHz band. The STM32WL3R is a sub-GHz wireless microcontroller with an ultra-low-power radio.

~ Shreya Bansal, Sub-Editor


Vitrealab closes $11m Series A financing round

Semiconductor today - 6 hours 45 min ago
Vitrealab GmbH of Vienna, Austria, a developer of photonic integrated circuits (PICs) for laser–LCoS-based augmented reality (AR) light engines, has closed a significantly oversubscribed $11m Series A financing round, led by LIFTT Italian Venture Capital and LIFTT EuroInvest with participation from Constructor Capital, aws Gründungsfonds, Gateway Ventures, PhotonVentures, xista Science Ventures, Moveon Technologies, and Hermann Hauser Investment...

🎓 Winter 2026 admission at KPI: the 'zero year' preparatory course 'Open Path to Higher Education'

News - 6 hours 55 min ago

Starting in February 2026, Igor Sikorsky Kyiv Polytechnic Institute is opening winter enrolment for its 'zero year' preparatory department, 'Open Path to Higher Education'.

“‘Bharat’ will become a major player in entire electronics stack…”, Predicts Union Minister, Ashwini Vaishnaw

ELE Times - 7 hours 38 min ago

Union Electronics and IT Minister Ashwini Vaishnaw predicted that ‘Bharat’ will become a major player in the entire electronics stack, in terms of design, manufacturing, operating system, applications, materials, and equipment.

In an X post, the Union Minister highlighted a major milestone for Prime Minister Narendra Modi's 'Make in India' initiative and for making India a major producer economy: Apple shipped $50 billion worth of mobile phones in 2025.

“Electronics production has increased six times in the last 11 years. And electronics exports have grown 8 times under PM Modi’s focused leadership. This progress has propelled electronics products among the top three exported items,” Vaishnaw noted.

He further noted that 46 component manufacturing projects, along with laptop, server, and hearables manufacturers, have been added to the ecosystem, making electronics manufacturing a major driver of the manufacturing economy.

“Four semiconductor plants will start commercial production this year. Total jobs in electronics manufacturing are now 25 lakh, with many factories employing more than 5,000 employees in a single location. Some plants employ as many as 40,000 employees in a single location,” the minister informed, adding that “this is just the beginning”.

Last week, the industry welcomed the approval of 22 new proposals under the third tranche of the Electronics Components Manufacturing Scheme (ECMS) by the government, saying that it marks a decisive inflexion point in India’s journey towards deep manufacturing and the creation of globally competitive Indian champions in electronics components.

With this, the total number of ECMS-approved projects rises to 46, taking cumulative approved investments to over Rs 54,500 crore. Earlier tranches saw seven projects worth Rs 5,532 crore approved on October 22 and 17 projects amounting to Rs 7,172 crore on November 17. The rapid scale-up across tranches underscores the strong industry response and the growing confidence in India’s components manufacturing vision.
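As a quick consistency check on the tranche figures above (simple arithmetic on the reported counts; the third-tranche investment amount is not stated separately in the source, so only project counts are checked):

```python
# Sanity-check the ECMS tranche project counts reported above.
tranches = {"Oct 22": 7, "Nov 17": 17, "third tranche": 22}
print(sum(tranches.values()))  # 46, matching the cumulative figure
```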

According to the IT Ministry, the 22 projects approved in the third tranche are expected to generate production worth Rs 2,58,152 crore and create 33,791 direct jobs.


NVIDIA’s Jetson T4000 for Lightweight & Stable Edge AI Unveiled by EDOM

ELE Times - 10 hours 4 min ago

EDOM Technology announced the introduction of the NVIDIA Jetson T4000 edge AI module, addressing the growing demand from system integrators, equipment manufacturers, and enterprise customers for balanced performance, power efficiency, and deployment flexibility. With powerful inference capability and a lightweight design, NVIDIA Jetson T4000 enables faster implementation of practical physical AI applications.

Powered by NVIDIA Blackwell architecture, NVIDIA Jetson T4000 supports Transformer Engine and Multi-Instance GPU (MIG) technologies. The module integrates a 12-core Arm Neoverse-V3AE CPU, three 25GbE network interfaces, and a wide range of I/O options, making it well-suited for low-latency, multi-sensor, and real-time computing requirements. In addition, Jetson T4000 features a third-generation programmable vision accelerator (PVA), dual encoders and decoders, and an optical flow accelerator. These dedicated hardware engines allow stable AI inference even under constrained compute and power budgets, making the platform particularly suitable for mid-range models and real-time edge applications.

For system integrators (SIs), the modular architecture of Jetson T4000, combined with NVIDIA’s mature software ecosystem, enables rapid integration of vision, sensing, and control systems. This significantly shortens development and validation cycles while improving project delivery efficiency, especially for multi-site and scalable edge AI deployments.

For equipment manufacturers, Jetson T4000’s compact form factor and low-power design allow flexible integration into a wide range of end devices, including advanced robotics, industrial equipment, smart terminals, machine vision systems, and edge controllers. These capabilities help manufacturers bring stable AI inference into products with limited space and power budgets, accelerating intelligent product upgrades.

Enterprise users can deploy Jetson T4000 across diverse scenarios such as smart factories, smart retail, security, and edge sensor data processing. By performing inference and data pre-processing at the edge, organisations can reduce system latency, lower cloud workloads, and improve overall operational efficiency—while maintaining system stability and deployment flexibility.

In robotics and automation applications, Jetson T4000 features low power consumption, high-speed I/O and a compact footprint, making it an ideal platform for small mobile robots, educational robots, and autonomous inspection systems, delivering efficient and reliable AI computing for a wide range of automation use cases.

The NVIDIA Jetson product lineup spans lightweight to high-performance modules, including Jetson T4000 and T5000, addressing diverse requirements ranging from compact edge devices and industrial control systems to higher-performance inference applications. With NVIDIA's comprehensive AI development tools and SDKs, developers can rapidly port models, optimise inference performance, and seamlessly integrate AI capabilities into existing system architectures.

Beyond supplying Jetson T4000 modules, EDOM Technology leverages its extensive ecosystem of partners across chips, modules, system integration, and application development. Based on the specific development stages and requirements of system integrators, equipment manufacturers, and enterprise customers, EDOM provides end-to-end support—from early-stage planning and technical consulting to ecosystem enablement. By sharing ecosystem expertise and practical experience, EDOM helps both existing customers and new entrants to the edge AI domain quickly build application capabilities and deploy edge AI solutions tailored to real-world scenarios.


Anritsu to Bring the Future of Electrification Testing at CES 2026

ELE Times - 10 hours 1 min ago

Anritsu Corporation will exhibit the RZ-X2-100K-HG Battery Cycler and Emulation Test System, planned for sale in the North American market as an evaluation solution for eMobility, at CES 2026 (Consumer Electronics Show), one of the world's largest technology exhibitions, held in Las Vegas, USA, from January 6 to January 9, 2026.

The launch of the RZ-X2-100K-HG in the North American market represents the first step in the global expansion efforts of TAKASAGO, LTD., which holds a significant share in the domestic EV development market, and it is an important measure looking ahead to future global market growth.

At CES 2026, a concept exhibition will showcase the Power HIL evaluation system combining the RZ-X2-100K-HG with dSPACE’s HIL simulator, demonstrating a new direction for the EV evaluation process.

Additionally, the power measurement solutions from DEWETRON, which joined the Anritsu Group in October 2025, will also be exhibited. Using a three-phase motor performance evaluation demonstration, we will present example applications.

About the RZ-X2-100K-HG

The RZ-X2-100K-HG is a test system developed by TAKASAGO, LTD. of the Anritsu Group, equipped with functions for charge-discharge testing and battery emulation that support high voltage and large current. It is a model based on the RZ-X2-100K-H, which has a proven track record in Japan, adapted to comply with the United States safety standards and input power specifications. This system is expected to be used for testing the performance, durability, and safety of automotive batteries and powertrain devices in North America.

About Power HIL

Power HIL (Power Hardware-in-the-Loop) is an extended simulation technology that combines virtual and real elements by adding a “real power supply function” to HIL (Hardware-in-the-Loop). Power HIL creates a virtual vehicle environment with real power, reproducing EV driving tests and charging tests compatible with multiple charging standards under conditions close to reality. This allows for high-precision and efficient evaluation of battery performance, safety, and charging compatibility without using an actual vehicle.

Terminology Explanation

  • Battery Emulation Test System

A technology that simulates the behaviour of real batteries (voltage, current, internal resistance, etc.) using a power supply device to evaluate how in-vehicle equipment operates.
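To make the concept concrete, below is a toy model of what a battery emulator reproduces: the terminal voltage a real cell would present for a given load current, open-circuit voltage, and internal resistance. This is an illustrative sketch only, not TAKASAGO's implementation; all numeric values are arbitrary assumptions.

```python
# Toy battery-emulation law: V_terminal = OCV(SoC) - I_load * R_internal.
# All values below are illustrative, not from any real test system.
def terminal_voltage(soc: float, i_load: float, r_internal: float = 0.05) -> float:
    ocv = 3.0 + 1.2 * soc        # crude open-circuit-voltage curve, 3.0-4.2 V
    return ocv - i_load * r_internal

# Half-charged cell sourcing 10 A through 50 mOhm internal resistance:
print(terminal_voltage(soc=0.5, i_load=10.0))  # 3.6 - 0.5 = 3.1 V
```

A real emulation system runs a far richer electrochemical model in a fast control loop and commands a bi-directional power stage to track it, but the input-output relationship it reproduces is of this kind.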


Keysight’s Software Solution for Reliable AI Deployment in Safety-Critical Environments

ELE Times - 10 hours 52 min ago

Keysight Technologies, Inc. introduced Keysight AI Software Integrity Builder, a new software solution designed to transform how AI-enabled systems are validated and maintained to ensure trustworthiness. As regulatory scrutiny increases and AI development becomes increasingly complex, the solution delivers transparent, adaptable, and data-driven AI assurance for safety-critical environments such as automotive.

AI systems operate as complex, dynamic entities, yet their internal decision processes often remain opaque. This lack of transparency creates significant challenges for industries, such as automotive, that must demonstrate safety, reliability, and regulatory compliance. Developers struggle to diagnose dataset or model limitations, while emerging standards — such as ISO/PAS 8800 for automotive and the EU AI Act — mandate explainability and validation without prescribing clear methods. Fragmented toolchains further complicate engineering workflows and heighten the risk of conformance gaps.

Keysight AI Software Integrity Builder introduces a unified, lifecycle-based framework that answers the critical question: “What is happening inside the AI system, and how do I ensure it behaves safely in deployment?” The solution equips engineering teams with the evidence needed for regulatory conformance and enables continuous improvement of AI models. Unlike fragmented toolchains that address isolated aspects of AI testing, Keysight’s integrated approach spans dataset analysis, model validation, real-world inference testing, and continuous monitoring.

Core capabilities of Keysight AI Software Integrity Builder include:

  • Dataset Analysis: Analyses data quality using statistical methods to uncover biases, gaps, and inconsistencies that may affect model performance (a generic illustration follows this list).
  • Model-Based Validation: Explains model decisions and uncovers hidden correlations, enabling developers to understand the patterns and limitations of an AI system.
  • Inference-Based Testing: Evaluates how models behave under real-world conditions, detects deviations from training behaviour, and recommends improvements for future iterations.
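The kind of statistical dataset check named in the first bullet can be sketched generically as below. This illustrates the idea only; it is not Keysight's product API, and the tiny dataset is invented for the example.

```python
import pandas as pd

# Stand-in labelled training set with a gap (missing value) and a bias
# (heavy class imbalance) -- both invented for illustration.
df = pd.DataFrame({
    "speed": [30, 32, 31, 90, None, 33],
    "label": ["car", "car", "car", "truck", "car", "car"],
})

print(df.isna().mean())                          # fraction missing per column
print(df["label"].value_counts(normalize=True))  # class distribution
```

Here the missing 'speed' sample flags a data gap, and the under-represented 'truck' class flags a potential bias that could skew model performance.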

While open-source tools and vendor solutions typically address only isolated aspects of AI testing, Keysight closes the gap between training and deployment. The solution not only validates what a model has learned, but also how it performs in operational scenarios — an essential requirement for high-risk applications such as autonomous driving.

Thomas Goetzl, Vice President and General Manager of Keysight’s Automotive & Energy Solutions, said: “AI assurance and functional safety of AI in vehicles are becoming critical challenges. Standards and regulatory frameworks define the objectives, but not the path to achieving a reliable and trustworthy AI deployment. By combining our deep expertise in test and measurement with advanced AI validation capabilities, Keysight provides customers with the tools to build trustworthy AI systems backed by safety evidence and aligned with regulatory requirements.”

With AI Software Integrity Builder, Keysight empowers engineering teams to move from fragmented testing to a unified AI assurance strategy, enabling them to deploy AI systems that are not only performant but also transparent, auditable, and compliant by design.


Molecular Beam Epitaxy (MBE) Growth of GaAs-Based Devices

ELE Times - 12 hours 10 min ago

Courtesy: Orbit & Skyline

In the semiconductor ecosystem, we are familiar with the chips that go into our devices. Of course, they do not start as chips but reach that familiar form once the process is complete. It is easy to imagine how to arrive at that end in silicon-based technology, but things are far more interesting in the III-V world. Here, we must first produce the III-V film using a thin-film deposition method. This film forms the bedrock of the device, so quality is critical: minimal defects, the highest possible mobility, and a growing list of demands from advancing technology have made this step extremely important in today's world.

In this blog, we will cover how Molecular Beam Epitaxy (MBE) enables the growth of GaAs-based devices, its history, advantages, challenges, and the wide range of optoelectronic applications it supports. Looking to optimise thin-film growth or improve device yield? Explore our Semiconductor FAB Solutions for end-to-end support across Equipment, Process, and Material Supply.

What Is Molecular Beam Epitaxy (MBE)?

Molecular Beam Epitaxy (MBE) is a well-known thin-film growth technique developed in the 1960s. Using ultra-high vacuum (UHV) conditions, it grows high-purity thin films with atomic-level control over the thickness and doping concentration of the layers. This provides excellent control to tune device properties and, in the case of III–V films, bandgap engineering. Such sought-after features make MBE widely renowned for producing the best-quality films, which currently lead device performance in applications such as LEDs, solar cells, sensors, detectors, and power electronics.

However, its major drawbacks include high costs and slow growth rates, limiting large-scale industry adoption. Need support with MBE tool installation, calibration, or fab floor setup? Our Global Field Engineering and Fab Facility Solutions teams can help.

A Brief History of MBE Technology

The concept of Molecular Beam Epitaxy was first introduced by K.G. Günther in a 1958 publication. Even though his films were not epitaxial (they were deposited on glass), John Davey and Titus Pankey expanded his ideas to demonstrate the now-familiar MBE process, depositing GaAs epitaxial films on single-crystal GaAs substrates in 1968.

The technology took its final form with Arthur and Cho in the late 1960s, who observed the MBE process in situ using Reflection High-Energy Electron Diffraction (RHEED). If you work with legacy MBE platforms or require upgrade support, our Legacy Tool Management Services ensure continuity and extended tool life.

Why GaAs? The First Semiconductor Grown by MBE

The first semiconductor material to be grown using MBE, gallium arsenide (GaAs), is one of the leading III-V semiconductors in high-performance optoelectronics such as solar cells, photodetectors, and lasers. Thanks to several attractive properties, such as a direct band gap of 1.43 eV, high mobility, a high absorption coefficient, and radiation hardness, it finds use in sophisticated applications such as space photovoltaics as well as infrared detectors and next-generation quantum devices.
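A quick worked number shows why that band gap matters for optoelectronics. The emission/absorption edge follows from the standard photon-energy relation (this calculation is ours, not the article's):

```python
# Photon wavelength at the GaAs band gap: lambda = h*c / E_g, with h*c ~= 1240 eV*nm.
E_g = 1.43                # GaAs band gap in eV
print(round(1240 / E_g))  # ~867 nm -- near-infrared, hence IR emitters/detectors
```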

Since GaAs was the first material grown by MBE, it is far better understood, with decades of device research behind it. The efficiency of heterojunction solar cells grown on substrates such as Ge reached 15-20% as early as the 1980s. Although current numbers are the best in the industry, using MBE to grow GaAs solar cells comes with its own set of challenges and advantages:

  • Throughput and cost: Commercially, MBE is not as viable as some other vapour-phase growth techniques, since it is a slow and expensive process. MBE growth rates are usually in the range of ~1.0 μm/h, far behind CVD rates of up to ~200 μm/h (see the quick comparison after this list).
  • Thickness and uniformity: Solar cell structures require absorber layers with thicknesses of the order of several microns. Maintaining uniformity over such a range is not trivial.
  • Defect management: Thin films are beset by a range of defects, such as dislocations, antisite defects, point defects, and background impurities. Optoelectronic devices suffer heavily in the presence of defects, as carrier lifetimes drop and, with them, open-circuit voltage and fill factor. Careful control of multiple factors, such as substrate quality, interface sharpness, and growth conditions, is therefore mandatory.
  • Doping and alloy incorporation: MBE is one of the best techniques for doping and alloying, especially for III-V compounds. Band-gap engineering to expand the available bandwidth for solar absorption is one of the most important advantages of using MBE. When making multi-junction or tandem cells, however, growth issues such as phase separation, strain, and exact control of each layer's composition remain challenging.
  • Surface and interface quality: Interfacial strain is one of the major causes of loss of carriers due to recombination. When making solar cell stacks, there are multiple layers where interfaces are required, such as window layers, tunnel junctions, and passivation layers. MBE is excellent at providing abrupt interfaces due to its fast shutter speed and ultra-high vacuum conditions, resulting in high-performance devices.
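The throughput gap in the first bullet is easy to quantify. Assuming, purely for illustration, a 3 μm absorber layer (a value we picked from the "several microns" range mentioned above):

```python
# Back-of-the-envelope growth-time comparison for a 3 um absorber layer.
thickness_um = 3.0
mbe_rate_um_h, cvd_rate_um_h = 1.0, 200.0  # rates quoted in the article

print(thickness_um / mbe_rate_um_h)        # 3.0 hours by MBE
print(thickness_um / cvd_rate_um_h * 60)   # ~0.9 minutes by CVD
```

Hours versus minutes per wafer is the core of MBE's commercial-throughput problem.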

Many of MBE's advantages are offset by these challenges, which limits it in industrial applications. This has driven the use of higher-throughput methods such as MOVPE/MOCVD, along with hybrid approaches, to improve efficiency.

Other Optoelectronic Devices Grown Using MBE

In III-V materials and beyond, MBE has excelled in growing device-quality layers of several other types of optoelectronic structures:

  • LASERs and VCSELs: One of the stacks most frequently grown by MBE is the AlGaAs/GaAs heterostructure for quantum-well lasers and vertical-cavity surface-emitting lasers (VCSELs). AlGaAs/GaAs multi-quantum-well VCSELs with distributed Bragg reflectors (DBRs) have been demonstrated with low threshold currents, continuous-wave operation at elevated temperatures, GHz modulation speeds, and more.
  • Quantum Cascade LASERs (QCLs): The same GaAs/AlGaAs heterostructures have been fabricated for application in mid-infrared QCLs using MBE. Its specialty in producing abrupt interfaces and controlled doping is used in growth methods to reduce interface roughness and improve performance.
  • Infrared Photodetectors: A leading IR photodetector material today is HgCdTe (MCT), which has been grown using MBE on GaAs substrates. GaSb-based nBn detectors are also grown using InAs/GaSb superlattices, with buffer layers used to reduce lattice mismatch.
  • High-mobility 2D electron gas heterostructures: One of the most important discoveries of recent decades has been the two-dimensional electron gas (2DEG), which has led to applications such as the high-electron-mobility transistor (HEMT). AlGaAs/GaAs heterostructures support the formation of this 2DEG, where the purity of the source material is critical. MBE-grown films have shown mobilities as high as ~35 × 10^6 cm²/V·s.

Conclusion

MBE is a complex, slow process that has traditionally been confined to R&D labs. However, the quality of the deposited layers is unparalleled and has helped improve existing devices and discover new ones. In the last decade or so, MBE has seen partial adoption in industry thanks to its cutting-edge device quality. Mass adoption remains unlikely, however, given the small number of wafers that can be grown at a time, and so MBE's role remains that of discovering the next generation of devices.


Cambridge GaN Devices appoints Fabio Necco as new CEO

Semiconductor today - Tue, 01/06/2026 - 22:54
Fabless firm Cambridge GaN Devices Ltd (CGD) — which was spun out of the University of Cambridge in 2016 to design, develop and commercialize power transistors and ICs that use GaN-on-silicon substrates — has appointed Fabio Necco as chief executive officer. The move is designed to drive forward CGD’s entry into key markets...

2 decade old SoC

Reddit:Electronics - Tue, 01/06/2026 - 21:59

This is an SoC camera sensor and controller from an old webcam, likely manufactured in the early 2000s; the chip is dated 2004 (the year I was born, lol). I found this camera in my grandparents' house a decade ago, grabbed it as a kid because I thought it was cool, disassembled it, and threw it in a big plastic bag along with my cool junk collection.

A decade later I found its PCB (the shell is nowhere to be found, lol), desoldered its components, and found that SoC chip, which I thought was pretty cool!

submitted by /u/inevitable_47

CES 2026: Wi-Fi 8 silicon on the horizon with an AI touch

EDN Network - Tue, 01/06/2026 - 17:49

While Wi-Fi 7 adoption is accelerating among enterprises, Wi-Fi 8 routers and mesh systems could arrive as early as summer 2026. It’s important to note that the IEEE 802.11bn standard, widely known as Wi-Fi 8, is expected to be ratified in 2028. So, the gap between Wi-Fi 7’s launch and the potential availability of Wi-Fi 8 products in mid-2026 could shorten the typical cycle between Wi-Fi generations.

At CES 2026 in Las Vegas, Nevada, wireless chip vendors like Broadcom and MediaTek are unveiling their Wi-Fi silicon offerings. ASUS is also conducting real-world throughput tests of its Wi-Fi 8 concept routers at CES 2026.

Figure 1 Wi-Fi 8 aims to deliver a system-wide upgrade across speed, capacity, reach, and reliability. Source: Broadcom

Wi-Fi 8—aimed at boosting reliability and reducing latency in dense, interference-prone environments—marks a shift in Wi-Fi evolution. While Wi-Fi 8 maintains the same theoretical maximum data rate as Wi-Fi 7, it aims to improve effective throughput, reduce packet loss, and decrease latency for time-sensitive applications.

Another notable feature of Wi-Fi 8 designs is the incorporation of AI ingredients. Below is a short profile of an AI accelerator chip that claims to facilitate real-time agentic applications for residential consumers.

AI accelerator for Wi-Fi 8

Wi-Fi 8 proponents are quick to point out that it connects the wireless world with the AI future through highly reliable connectivity and low-latency responsiveness. Real-time, latency-sensitive applications are increasingly seeking to employ agentic AI, and for that, Wi-Fi 8 aims to prioritize consistent performance under challenging conditions.

Broadcom’s new accelerated processing unit (APU), unveiled at CES 2026, combines compute and networking ingredients with AI acceleration in a single silicon device. BCM4918—a system-on-chip (SoC) device blending compute acceleration, advanced networking, and security—aims to deliver high throughput, low latency, and intelligent optimization needed for the emerging AI-driven connected ecosystem.

The new AI accelerator for Wi-Fi 8 integrates a neural engine for on-device AI/ML inference and acceleration. It also incorporates networking engines to offload both wired and wireless data paths, enabling complete CPU bypass of all networking traffic. For built-in security, cryptographic protocol acceleration ensures end-to-end data protection without performance compromise.

“Our new BCM4918 APU, along with our full portfolio of Wi-Fi 8 chipsets, form the foundation of an AI-ready platform that not only enables immersive, intelligent user experiences but also does so with efficiency, security, and sustainability at its core,” said Mark Gonikberg, senior VP and GM of Broadcom’s Wireless and Broadband Communications Division.

Figure 2 When paired with BCM6714 and BCM6719 dual-band radios, BCM4918 APU allows designers to develop a unified compute-and-connectivity architecture. Source: Broadcom

AI compute plus connectivity

The BCM4918 APU is paired with two new dual-band Wi-Fi 8 radio devices: BCM6714 and BCM6719. While combining 2.4 GHz and 5 GHz operation into a single piece of silicon, these Wi-Fi 8 radios also feature on-chip 2.4-GHz power amplifiers, reducing external components and improving RF efficiency.

These dual-band radios, when paired with the BCM4918 APU, allow design engineers to quickly develop a unified compute-and-connectivity architecture that enables edge-AI processing, real-time optimization, and adaptive intelligence. The APU and dual-band radios for Wi-Fi 8 are now available to early access customers and partners.

Broadcom's Gonikberg says that Wi-Fi 8 represents a turning point where broadband, connectivity, compute, and intelligence truly converge. In his view, the fact that it is arriving ahead of schedule is a testament to those convergence merits; it is more than a speed upgrade and could transform connection stability and responsiveness.



Simple speedy single-slope ADC

EDN Network - Tue, 01/06/2026 - 15:00

Ages ago, humankind crawled out of the primordial analog ooze and began to do digital. They soon noticed and quantified a fundamental need to interconnect their new quantized numerical novelties with the classic continuum of the ancestral engineer’s world. Thus arose the ADC.

Of course, there were (and are) an abundance of ADC schemes and schematics. One of the earliest and simplest of these was the single-slope type.

Single slope ADCs come in two savory flavors. In one, a linear analog voltage ramp is generated and compared to the input signal. The time required for the ramp to rise from zero (or near) to equality with the input is proportional to the input’s amplitude and taken as its digital conversion. 

We recently saw an example contributed by Dr. Jordan Dimitrov to our own friendly Design Idea (DI) corner in “Voltage-to-period converter offers high linearity and fast operation.”

In a different cultivar of the single sloper, a capacitor is charged to the input voltage, then linearly ramped down to zero. The time required to do that is proportional to Vin and counts (pun!) as the conversion result. An (extremely!) simple and cheap example of this type was published here about two and a half years ago in “A “free” ADC.”


While simple and cheap are undeniably good things, too much of a good thing is sometimes not such a good thing. The circuit in Figure 1 adds a few refinements (and a bit more cost) to that basic design in pursuit of an order of magnitude (or two) better accuracy and perhaps a bit more speed.

Figure 1 Simple speedy single-slope (SSSS) ADC biphasic conversion cycle.

Here’s how it works:

  1. (CONVERT = 1) switch U1 charges C1 to Vin
  2. (CONVERT = 0) C1 is linearly discharged by 100 µA current sourced by Z1Q1

Note: Z1, C1, and R2 should be precision types.

Conversion occurs in two phases, selected by one GPIO bit configured for output (CONVERT/ACQUIRE).

During the ACQUIRE (1) interval SPDT switch U1 connects integrator capacitor C1 to the input source, charging it to Vin. The acquisition time constant of the charging is:

C1 × (RsZ1 + U1 Ron + Q2's input impedance) = ~10 µs

To complete the charge to ½-lsb-precision at 12-bit resolution, this needs an ACQUIRE interval of:

10 µs × ln(2^(12+1)) ≈ 90 µs
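A quick check of that settling arithmetic (the ½-LSB criterion gives the ln(2^(n+1)) multiplier; this snippet is ours, using the τ = 10 µs and 12-bit figures above):

```python
import math

# Settling to 1/2 LSB at n bits needs t = tau * ln(2^(n+1)).
tau_us, bits = 10.0, 12
print(round(tau_us * math.log(2 ** (bits + 1))))  # ~90 us ACQUIRE interval
```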

The controlling microcontroller can then return CONVERT to zero, which switches the input side of C1 to ground, driving the base of the comparator transistor negative for a voltage step of –Vin, plus a “smidgen” (~12 mV).

This last is contributed by C2 to compensate for the zero offset that would otherwise accrue from Q2’s finite voltage gain and storage time.

Q1’s emergence from saturation drives INTEGRATE positive. Here it remains until the discharge of C1 is complete and Q1 turns back ON. This interval is:

Vin × C1 / 100 µA = 200 µs/V = 1 ms maximum

If the connected counter/peripheral runs at 20 MHz, then the max-count accumulation and conversion resolution will be 4000, or 11.97 bits.
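As a cross-check of those numbers (our bookkeeping: 4000 counts is the 200 µs/V ramp rate times the 20-MHz clock, i.e. the counts accumulated per volt of input):

```python
import math

# Interpreting the 4000-count figure as counts accumulated per volt of Vin.
f_clk = 20e6            # counter clock, Hz
ramp_s_per_v = 200e-6   # discharge ramp rate from Vin*C1/100uA

counts_per_volt = f_clk * ramp_s_per_v
print(counts_per_volt)             # 4000.0
print(math.log2(counts_per_volt))  # ~11.97 bits, as quoted
```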

This 1-ms, or ~12-bit, conversion cycle is sketched in Figure 2.  Note that good integral nonlinearity (INL) and differential nonlinearity (DNL) are inherent.

Figure 2 The SSSS ADC waveshapes. The ACQUIRE duration (12 bits) is 90 µs. The INTEGRATE duration is 1 ms max (Vin × C1 / Iq1 = 200 µs/V). Amplitude is 5 Vpp.

 Of course, not all signal sources will gracefully tolerate the loading imposed by this conversion sequence, and not all applications will find the tolerance of available LM4041 references and R1C1 adequately precise.

Figure 3 shows fixes for both of these limitations. A typical RRIO CMOS amplifier for A1 eliminates the input loading problem, and the R5 trim provides a convenient means for improving conversion calibration.

Figure 3 A1 input buffer unloads Vin, and R5 calibration trim improves accuracy.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.



Don’t Let Your RTL Designs Get Bugged!

ELE Times - Tue, 01/06/2026 - 13:08

Courtesy: Cadence

Are you still relying solely on simulation to validate your RTL design? Is there any more validation required?

Simulation has been a cornerstone of hardware verification for decades. Its ability to generate random stimuli and validate RTL across diverse scenarios has helped engineers uncover countless issues and ensure robust designs. However, simulation is inherently scenario-driven, which means certain rare corner cases can remain undetected despite extensive testing.

This is where formal verification adds significant value. Formal doesn't merely sample scenarios; it mathematically analyses the entire state space of your design, checking every possible value and transition your design could ever encounter and providing exhaustive coverage that complements simulation. No corner case is left unchecked. No bug is left hiding. Together, they form a powerful verification strategy.

Why Formal Matters in Modern Validation

Any modern validation effort needs to take advantage of formal verification, where the apps in the Jasper Formal Verification Platform analyse a mathematical model of the RTL design and find corner-case design bugs without needing test vectors. This adds value across the design and validation cycle. Let's look at some standout Jasper applications. Jasper's Superlint and Visualise help designers quickly find potential issues or examine RTL behaviours without formal expertise. Jasper's FPV (Formal Property Verification) allows formal experts to create a formal environment and sign off on the IP, delivering the highest design quality and better productivity than block-level simulation. Jasper's C2RTL is used to exhaustively verify critical math functions in CPUs, GPUs, TPUs, and other AI accelerator chips.

Jasper enables thorough validation in various targeted domains, including low power, security, safety, SoC integration, and high-level synthesis verification.

“The core benefit of formal exhaustive analysis is its ability to explore all scenarios, especially ones that are hard for humans to anticipate and create tests for in simulation.”

Why Formal? Why Now?

Here’s why formal verification matters now:

  • No more test vectors or random stimuli. Formal mathematically and automatically explores all reachable states; verification can start as soon as RTL is available, without the need to create a simulation testbench (a toy illustration of exhaustive reachability follows this list).
  • Powerful for exploring corner-case bugs. Exhaustive formal analysis can catch corner case bugs that escape even the most creative simulation testbenches.
  • Early design bring-up made easy. Validate critical properties and interfaces before your full system is ready.
  • Debugging is a breeze. When something fails, formal provides a precise counterexample, often with the shortest trace, eliminating the need for endless log hunting.
  • Perfect partnership with simulation. Simulation and formal aren’t rivals; they are partners. Use simulation for broad system-level checks, and Formal for exhaustive property checking and signoff of critical blocks. Merge formal and simulation coverage for complete verification signoff.
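The "explores all reachable states" idea can be illustrated with a deliberately tiny example. The sketch below is conceptual only (it is not how Jasper works internally): it exhaustively enumerates the reachable states of a 3-bit counter by breadth-first search and reports the first state that violates an assertion, which is also why formal tools can return the shortest counterexample trace.

```python
from collections import deque

def next_states(s: int) -> list[int]:
    return [(s + 1) % 8]          # the "design": a 3-bit up-counter

def property_holds(s: int) -> bool:
    return s != 7                 # assertion: state 7 is never reached (false!)

seen, frontier = {0}, deque([0])  # start from the reset state
while frontier:                   # BFS visits every reachable state exactly once
    s = frontier.popleft()
    if not property_holds(s):
        print(f"counterexample: state {s} is reachable")
        break
    for n in next_states(s):
        if n not in seen:
            seen.add(n)
            frontier.append(n)
```

Real designs have astronomically more states, which is why formal engines use symbolic methods rather than explicit enumeration, but the exhaustiveness guarantee is the same.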

Conclusion

As RTL designs grow in complexity and stakes rise across power, safety, and performance, relying on simulation alone is no longer enough. While simulation remains indispensable for system-level validation, formal verification fills the critical gaps by exhaustively exploring every reachable state and uncovering corner-case bugs that would otherwise slip through. By integrating formal early and throughout the design cycle, teams can accelerate bring-up, improve debug efficiency, and achieve higher confidence at signoff. In today’s silicon landscape, the most robust verification strategy isn’t about choosing between simulation and formal—it’s about combining both to ensure no bug goes unnoticed and no risk is left unchecked.


Adapting Foundation IP to Exceed 2 nm Power Efficiency in Next-Gen Hyperscale Compute Engines

ELE Times - Tue, 01/06/2026 - 12:17

Courtesy: Synopsys

Competing in the booming data centre chip market often comes down to one factor: power efficiency. The less power a CPU, GPU, or AI accelerator requires to produce results, the more processing it can offer within a given power budget.

With data centres and their commensurate power needs growing exponentially, the energy consumption of each chip directly impacts the enormous costs of running gigawatt-scale AI data centres, where power and cooling account for 40–60% of operational expenditures.

To reduce the energy consumption of its workloads and gain a competitive edge, one software and cloud computing titan has made the strategic bet to design its own next-gen hyperscale System-on-Chip (SoC). By combining the advantages of new 2 nm-class process nodes with advanced, customised chip design techniques, the company is doubling down on the belief that innovation spanning process, design, and architecture can unlock new levels of power and cost efficiency.

 

Power play

To offer a compelling alternative in the market, the company knew that any new 2 nm design must push beyond the performance and efficiency process entitlement already baked into the scaling factors of the latest transistor fabrication methods. The transition to the 2 nm process is expected to provide 25–30% power reduction relative to the previous 3 nm node.

The company set an ambitious goal of achieving an additional 5% improvement on the 2 nm baseline. Through close collaboration with Synopsys — combining EDA software flow enhancements with our optimised Foundation IP logic library — the company exceeded its goal, achieving:

  • 34% reduced power consumption with the same baseline flow.
  • 51% reduced power consumption with an optimised flow.
  • 5% silicon area advantage over baseline with ISO performance.

The company also evaluated our 2 nm embedded memories, which exceeded SRAM scaling expectations compared to our 3 nm product. On average, the 2 nm memory instances delivered 12% higher speed, occupied 8% less area, and consumed 12% less power than their 3 nm counterparts.

Expert collaboration

Because the transition to 2 nm brings a shift from FinFET to GAA transistor architecture, the company's SoC developers faced a particularly steep learning curve, with increased complexity and new technology to assimilate.

They engaged our team in the early stages of the project — the byproduct of a trusted working relationship that spans more than four generations of AI chip designs — and even licensed our Foundation IP before the availability of any silicon reports.

The company used our IP, reference methodology, and Fusion Compiler tool to explore all commercially available options for achieving their power budget requirements. While the early development cycles produced the silicon area advantage, they did not achieve the power scaling targets the company sought.

Adaptation and optimisation

Seeking additional assistance, the company inquired whether our EDA tools and IP could be leveraged to push the design’s performance further.

R&D experts from our IP and EDA groups began collaborating on the design. Starting with the standard logic libraries, the IP group worked closely with the company’s designers to adapt and optimise the libraries with new cells and updated modelling. Over several iterations, the teams delivered the 7.34% power benefit, with Synopsys PrimePower used for final power analysis.

Our Technology and Product Development Group then helped the company take it a step further. By developing new algorithms for Fusion Compiler, and after many trials based on the latest recommended power recipe, design flow optimisations produced a 9.51% combined power benefit.

At the same time, our application engineers worked closely with the company to provide the best solution from our broad portfolio of memory compilers. Weighing performance requirements with power and area targets, we were able to extend the benefit of 2 nm beyond instance-level scaling. In one key scenario, power was reduced by an additional 25% by using an alternative configuration that met the 2 nm requirements.

Conclusion

As hyperscale compute continues its relentless push toward higher performance within ever-tighter power envelopes, success at advanced nodes like 2 nm will hinge on more than process scaling alone. This collaboration demonstrates how tightly integrated innovation across Foundation IP, EDA flows, and design methodology can unlock efficiency gains well beyond baseline node benefits. By adapting standard libraries, optimising tool algorithms, and co-engineering memory configurations, the company not only surpassed its power-efficiency targets but also achieved meaningful area and performance advantages. The outcome underscores a broader industry lesson: at 2 nm and beyond, early engagement, deep expertise, and holistic optimisation across the silicon stack will be critical to building the next generation of power-efficient hyperscale compute engines.


Delta Electronics to Provide 110 MW to Prostarm Info Systems for Energy Storage Projects in India

ELE Times - Tue, 01/06/2026 - 11:07
Delta Electronics India, a provider of power management and smart green solutions, announced an agreement to supply 100 units of its ‘Make-in-India’ 1.1 MW bi-directional Power Conditioning Systems (PCS) to Prostarm Info Systems Ltd’s Battery Energy Storage System (BESS) projects across India, including undertakings for Bihar State Power Generation Company Ltd (BSPGCL) and Adani Electricity Mumbai Limited (AEML). By deploying advanced energy infrastructure in both metropolitan and regional markets, this collaboration supports India’s renewable integration, grid stability, and overall energy resilience.
Mr. Niranjan Nayak, Managing Director, Delta Electronics India, said, “India’s energy transition journey calls for strong collaborations that combine global technology leadership with local market expertise. Through this engagement with Prostarm for AEML’s BESS initiative, Delta reaffirms its commitment to building long-term and customer-centric collaboration that supports the nation’s sustainable growth. This initiative marks the largest-scale deployment so far of our made-in-India power conditioning systems for the country’s fast-evolving energy storage sector.”
Mr. Ram Agarwal, Whole Time Director & CEO, Prostarm Info Systems Ltd., said, “At Prostarm, we are committed to bringing advanced energy solutions that empower utilities and drive India’s clean energy transition. Partnering with Delta Electronics India for AEML’s BESS project reflects our shared vision of delivering technology-led reliability and performance at scale. This collaboration not only strengthens our portfolio in energy storage but also sets a benchmark for strategic partnerships in India’s evolving power sector.”
The bi-directional PCS units (totalling 110 MW) will be deployed by Prostarm across multiple projects, including Bihar State Power Generation Company Ltd. (BSPGCL) & Adani Electricity Mumbai Limited’s (AEML) 11 MW/22 MWh BESS project in Mumbai and standalone BESS projects being developed on BESSPD (Battery Energy Storage Solution Power Developer) mode by Prostarm in the state of Bihar.
Mr. Rajesh Kaushal, Vice President, Energy Infrastructure Business Group, Delta Electronics India, added, “This is a significant milestone for our Power Conditioning Systems business in India. Our collaboration with Prostarm reflects a strong strategic relationship built on trust and shared vision. By delivering reliable and customised bi-directional PCS solutions, developed with a focus on localisation and Make-in-India manufacturing, Delta is well positioned to strengthen its role in enabling India’s evolving energy landscape.”
Mr. Prateek Srivastava, Vice President and BU-Head, Prostarm Info Systems Ltd., said, “The transition to clean energy is an investment in our future. We are fully committed to driving the green revolution by delivering cutting-edge technology, customised products, and innovative solutions designed for long-term performance and reliability to ensure the highest level of customer satisfaction. At PROSTARM, we firmly believe in promoting Make-in-India initiatives, collaboration, and knowledge sharing, and partnering with a strong technology leader like Delta is truly a feather in our cap.”
Delta’s Power Conditioning Systems are produced at its own manufacturing site in Krishnagiri, Tamil Nadu, and are designed for utility-grade energy storage and microgrid applications, especially for key functions such as peak shaving, PV smoothing, and grid ancillary control. The system boasts up to 98.5% energy conversion efficiency, output power capacity as high as 1160 kVA, and scalability up to 5 units in parallel.
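As a quick sanity check on the headline capacity (simple arithmetic from the figures above; note the per-unit 1160 kVA rating and the 1.1 MW figure are related through power factor, which the article does not state):

```python
# Fleet capacity implied by the order: 100 units x 1.1 MW per PCS.
units, mw_per_unit = 100, 1.1
print(units * mw_per_unit)  # 110.0 MW, matching the headline figure
```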


TI’s vast automotive portfolio: Shift towards autonomous vehicles

ELE Times - Tue, 01/06/2026 - 09:36

Texas Instruments (TI) has introduced new automotive semiconductors and development resources to enhance safety and autonomy across vehicle models. TI’s scalable TDA5 high-performance computing system-on-a-chip (SoC) family offers power- and safety-optimised processing and edge artificial intelligence (AI) that supports up to Society of Automotive Engineers Level 3 vehicle autonomy. TI also unveiled the AWR2188, a single-chip, eight-by-eight 4D imaging radar transceiver, to help engineers simplify high-resolution radar systems. These devices, alongside the DP83TD555J-Q1 10BASE-T1S Ethernet physical layer (PHY), join TI’s broader automotive portfolio for next-generation advanced driver assistance systems (ADAS) and software-defined vehicles (SDVs). TI will be debuting these products at CES 2026, Jan. 6-9, in Las Vegas, Nevada.

“The automotive industry is moving toward a future where driving doesn’t require hands on the wheel,” said Mark Ng, director of automotive systems at TI. “Semiconductors are at the heart of bringing this vision of safer, smarter and more autonomous driving experiences to every vehicle. From detection and communication to decision-making, engineers can use TI’s end-to-end system offering to innovate what’s next in automotive.”

High-performance compute SoCs enable safe, scalable AI across vehicle models

To enhance safety and autonomy in next-generation vehicles, automakers are adopting central computing systems that support AI and sensor fusion for real-time decision-making. Designed for high-performance computing, TI’s TDA5 SoC family offers edge AI acceleration from 10 trillion operations per second (TOPS) to 1200 TOPS with power efficiency beyond 24 TOPS/W. This scalability, enabled by their chiplet-ready design with Universal Chiplet Interconnect Express interface technology, allows designers to implement different feature sets and support up to Level 3 autonomous driving using a single portfolio. Building on over two decades of experience in automotive processing, the family expands the performance of TI’s existing portfolio to enable automakers to centralise their computing architectures and process advanced AI models.
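For scale, the quoted range implies the following power envelopes if, purely for illustration, the 24 TOPS/W figure held at each operating point (the article only states efficiency "beyond 24 TOPS/W", so these are upper-bound estimates of ours, not TI specifications):

```python
# Implied power ceiling if 24 TOPS/W applied across the quoted range.
for tops in (10, 1200):
    print(f"{tops} TOPS -> {tops / 24:.1f} W")  # ~0.4 W to 50.0 W
```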

By integrating the latest generation of TI’s C7 neural processing unit (NPU), TDA5 SoCs provide up to 12 times the AI computing of previous generations with similar power consumption, eliminating the need for costly thermal solutions. This performance supports billions of parameters within language models and transformer networks, increasing in-vehicle intelligence while maintaining cross-domain functionality. The family features the latest Arm Cortex-A720AE cores, allowing automakers to integrate more safety, security and computing applications.

TDA5 SoCs reduce system complexity and costs by supporting cross-domain fusion of ADAS, in-vehicle infotainment and gateway systems within a single chip. Their safety-first architecture further simplifies systems by helping automakers meet Automotive Safety Integrity Level D safety standards without external components.

To simplify complex vehicle software management, TI is partnering with Synopsys to provide a Virtualiser development kit for TDA5 SoCs. The kit’s digital twin capabilities help engineers accelerate time-to-market for their SDVs by up to 12 months.

Single-chip, eight-by-eight radar transceiver achieves earlier, more accurate detection

With enhanced perception and reliability in any weather condition, radar is a fundamental technology for sophisticated ADAS and greater vehicle autonomy. Designed to meet global market needs, TI’s AWR2188 4D imaging radar transceiver integrates eight transmitters and eight receivers into a single launch-on-package chip. This integration simplifies higher-resolution radar systems because eight-by-eight configurations do not require cascading, while scaling up to higher channel counts requires fewer devices. The transceiver supports both satellite and edge architectures, offering automakers the flexibility to simplify and accelerate the global deployment of ADAS features across entry-level to premium vehicles.

The AWR2188 features enhanced analogue-to-digital converter data processing and a radar chirp signal slope engine, both supporting 30% faster performance than currently available solutions. This level of performance powers advanced radar use cases such as detecting lost cargo, distinguishing between closely positioned vehicles and identifying objects in high-dynamic-range scenarios. The transceiver can detect objects with greater accuracy at distances >350m, altogether enabling safer, more autonomous driving.

10BASE-T1S technology extends Ethernet to vehicle edge nodes

The acceleration toward SDVs and higher levels of autonomy is prompting a fundamental shift in subsystem architectures. Ethernet is an important enabler for this evolution, as it allows systems to collect and transmit more data across vehicle zones in real time through a simple, unified network architecture. TI’s new DP83TD555J-Q1 10BASE-T1S Ethernet Serial Peripheral Interface PHY with an integrated media access controller offers nanosecond time synchronisation, industry-leading reliability and Power over Data Line capabilities. These features enable engineers to extend high-performance Ethernet to vehicle edge nodes while reducing cable design complexity and costs.

With TI’s end-to-end system offering, which includes technologies for advanced sensing, reliable in-vehicle networking and efficient AI processing, automakers can develop systems that improve safety and automation levels across different vehicle models.

TI at CES 2026

In the Las Vegas Convention Centre North Hall, meeting room No. N115, TI will showcase how innovation across its analogue and embedded processing portfolios is reshaping what’s next in how people move, live and work. Demonstrations include advancements in vehicle technology and advanced mobility, smart homes and digital health, energy infrastructure, robotics, and data centres. See ti.com/CES.

Package, availability and pricing

  • The TDA54 software development kit is now available on TI.com to help engineers get started with the TDA54 Virtualiser development kit. Samples of the TDA54-Q1 SoC, the first device in the family, will be available to select automotive customers by the end of 2026.
  • Preproduction quantities of the AWR2188 transceiver and an evaluation module are now available upon request at TI.com.
  • Preproduction quantities of the DP83TD555J-Q1 10BASE-T1S Ethernet PHY and an evaluation module are now available upon request at TI.com.


Made a dual rail transformer using a binocular core.

Reddit:Electronics - Mon, 01/05/2026 - 20:16

Not sure if this is a normal way to use these cores, as I have no knowledge about them, but I came up with a way to get 2 isolated outputs from 1 input. The input winding goes in the middle, so from hole to hole, and the 2 other windings are on the sides. This specific core gave 5.4 V on the output with a 5 V input. It was just put together from scraps to see if it works, and it did really well.

submitted by /u/Whyjustwhydothat

Mission Microwave to design and deliver solid-state power block upconverters for Telesat Lightspeed

Semiconductor today - Mon, 01/05/2026 - 15:12
Mission Microwave Technologies LLC of Cypress, CA, USA has been awarded a contract to design and deliver solid-state power block upconverters (BUCs) for the Telesat Lightspeed Landing Stations...

Amazon’s Smart Plug: Getting inside requires more than just a tug

EDN Network - Mon, 01/05/2026 - 15:00

Amazon wisely doesn’t want naïve consumers poking around inside its high-voltage AC-switching devices. This engineer was also thwarted in his exploratory efforts…initially, at least.

Early last month, within a post detailing my forced-by-phaseout transition from Belkin’s Wemo smart plugs to TP-Link’s Kasa and Tapo devices, I mentioned that I’d originally considered a different successor:

Amazon was the first name that came to mind, but although its branded Smart Plug is highly rated, it’s only controllable via Alexa. I was looking for an ecosystem that, like Wemo, could be broadly managed, not only by the hardware supplier’s own app and cloud services but also by other smart home standards…

A curiosity-satisfying return-on-(minimal) investment

Even though I ended up going elsewhere, I still had a model #HD34BX Amazon Smart Plug sitting on my shelf. I’d bought it back in late November 2020 on sale for $4.99, 80% off the usual $24.99 price (and in response to, I’m guessing, per the purchase date, a Black Friday promotion). Regular readers already know what comes next: it’s teardown time!

Let’s start with some outer box shots, as usual (as with subsequent images), accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

Note that, per my prior writeup’s “specific hardware requirement that needed to be addressed,” it supports (or at least claims to) up to 15A of current:

  • Input: 100-120V, 60 Hz, 15A Max
  • Output:
    • 120V, 60 Hz, 15A, resistive load
    • 120V, 60 Hz, 10A, inductive load
    • 120V, 60 Hz, 1/2 HP, motor load
    • 120V, 60 Hz, TV-5, incandescent
  • Operating Temperature: 0-35°C
  • IP Rating: IP30

thereby being capable of power-controlling not only low-wattage lamps but also coffee makers, curling irons, and the like:
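A quick sanity check on those ratings, running P = V × I on the label's own figures:

# Power implied by the label ratings (P = V * I; for the inductive case
# this is apparent power in VA, with real watts set by the power factor).
V = 120.0
for name, amps in {"resistive": 15.0, "inductive": 10.0}.items():
    print(f"{name} load: {V * amps:.0f}")
# resistive load: 1800  (watts -- plenty for a coffee maker or curling iron)
# inductive load: 1200  (volt-amperes)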

See that translucent strip of tape at the upper right?

Wave buh-bye to it; it’s time to look inside:

Nifty cardboard-based device-retention mechanism left over at the bottom:

The bottom left literature snippet is the usual warranty, regulatory and other gobbledygook:

The one at right is a wisp of a quick-start guide:

But neither of them, trust me I already realize, is the fundamental motivation for why you’re here today. Instead, it’s our dissection subject (why was I having flashbacks to the recently viewed and greatly enjoyed 2025 version of Frankenstein as I wrote those prior words?):

Underneath the hole at far left is an activity-and-status LED. And rotating the smart plug 90°:

there’s the companion switch, which not only allows for manual power control of whatever’s plugged into it but also initiates a factory reset when pressed and held for an extended period.
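Firmware typically implements this dual-function behaviour by timing how long the input stays asserted. Here's a generic sketch of that pattern, with an assumed five-second threshold; the actual hold time in Amazon's firmware isn't documented here.

# Generic short-press/long-press classifier -- thresholds are assumed,
# not recovered from the Smart Plug's firmware.
import time

LONG_PRESS_S = 5.0  # assumed factory-reset hold time

def classify_press(read_button):
    # Block while the button is held; escalate to a factory reset once
    # the hold time crosses the threshold.
    start = time.monotonic()
    while read_button():                 # True while physically held
        if time.monotonic() - start >= LONG_PRESS_S:
            return "factory_reset"
        time.sleep(0.01)                 # 10 ms poll interval
    return "toggle"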

Around back are specs-and-such, including the always-insightful FCC ID (2ALBG-2017), along with the line (“hot”) and neutral source blades and ground pin (Type B NEMA 5-15 in this case):

In contrast to its left-side sibling, the right side is comparatively bland (i.e., to clarify, there’s nothing under the penny):

as are the bottom:

and the top, for that matter, unless you’re into faintly embossed Amazon logos:

Tenuous adhesive

My first (few…seemingly few dozen…) attempts to get inside via the visible seam around the backside edges, trying out various implements of destruction in the process, were for naught:

Though the efforts weren’t completely wasted, as they motivated me to finally break out the Dremel set that had been sitting around unused and collecting dust since…yikes…mid-2005, my Amazon order history just informed me:

and which delivered ugly but effective results (albeit leaving the smart plug headed for nowhere but the landfill afterwards):

First step: unscrew and disconnect the wire going from the front panel socket’s load (“hot”) slot to the PCB (where it’s soldered):

Like I said before…ugly but effective:

At the top (in this photo, to the left when originally assembled) are the light pipe that routes the LED (yet to be seen but presumably on the PCB) output to the front panel, along with the mechanical assembly for the left-side switch:

You’ve already seen one top view of the insides, three photos ago. Here’s another, this time standalone and rotated:

And here are four of the five other perspectives; the back view will come later. Front:

Left side, showing the PCB-mounted portion of the switch assembly:

Right behind the switch is the outward-pointing LED whose location I’d just prognosticated:

Right side:

And bottom:

Electron routing and switching

Onward. The ground pin from the back panel routes directly to the front panel socket’s ground slot, not interacting with any intermediary circuitry en route:

You’ve probably already noticed that the “PCB” is actually a three-PCB assembly: smaller ones at top and bottom, both 90°-connected to the main one at the back. To detach the latter from the back chassis panel requires removal of another screw:

Houston, we have liftoff:

This is interesting, at least to me. The neutral wire is attached to its corresponding back-panel blade with a screw, though soldered to the PCB at the other end:

but the line (“hot”) wire is soldered at both ends:

This seemingly inconsistent approach likely makes complete sense to those of you more versed in power electronics than me; please share your thoughts in the comments. For now…snip:

Assuming, per my earlier comments, that you’ve already noticed the three-PCB assembly, you might have also noticed some white tape on both sides of the mini-PCB located at the bottom. Wondering what’s underneath it? Me too:

The answer: not much of anything!

What’s the frequency, Kenneth?

(At least) one more mystery to go. We’ve already seen plenty of predictable AC switching and AC-to-DC conversion circuitry, but where’s all the digital and RF stuff that controls the AC switching, along with wirelessly communicating with the outside world? For the answer, I’ll direct your attention to the mini-PCB at the top, which you may recall initially glimpsing earlier:

What you’re looking at on the other side is the WCBN4520R, a Wi-Fi-plus-Bluetooth Low Energy module discussed in-depth in an informative Home Assistant forum thread I found.

Forum participants had identified the PCB containing the module as the WN4520L from LITE-ON Technology, with Realtek’s RTL8821CSH single-chip wireless controller and Rockchip Electronics’ RKNanoD dual Arm Cortex-M3 microcontroller supposedly inside the module. But a different teardown I found right before finalizing this piece instead shows MediaTek’s MT7697N:

A highly integrated single chip offering an application processor, low power 1T1R 802.11 b/g/n Wi‑Fi, Bluetooth subsystem and power management unit. The application processor subsystem contains an ARM Cortex‑M4 with floating point unit. It also supports a range of interfaces including UART, I2C, SPI, I2S, PWM, IrDA, and auxiliary ADC. Plus, it includes embedded SRAM/ROM.

as the main IC inside the module, accompanied by a Macronix 25L3233F 32 Mbit serial flash memory. I’m going with the latter chip inventory take. Regardless, to the left of the module is a visible silhouette of the PCB-embedded antenna, and there’s also an SMA connector on the board for tethering to an optional external antenna, not used in this particular design.

And there you have it! As always, sound off with your thoughts in the comments, please!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.


The post Amazon’s Smart Plug: Getting inside requires more than just a tug appeared first on EDN.
