Feed aggregator

Why gold-plated tactile switches matter for reliability

EDN Network - 2 hours 59 min ago

In electronic product design, the smallest components often have the biggest impact on system reliability. Tactile switches—used in control panels, wearables, medical devices, instrumentation, and industrial automation—are a prime example. These compact electromechanical devices must deliver a precise tactile response, stable contact resistance, and long service life despite millions of actuations and a wide range of operating conditions.

For design engineers, one of the most critical choices influencing tactile switch reliability is contact plating. Among available materials, gold plating offers unmatched advantages in conductivity, corrosion resistance, and mechanical stability. While its cost is higher than silver plating—and tin when used for terminal finishes—gold’s performance characteristics make it indispensable for mission-critical applications in which failure is not an option.

Understanding the role of plating in switch performance

The function of a tactile switch relies on momentary metal-to-metal contact closure. Over repeated actuation, environmental exposure, and mechanical wear can increase contact resistance or even lead to intermittent operation. Plating serves as a barrier layer, protecting the base metal (often copper, brass, or stainless steel) from corrosion and wear while also influencing the switch’s electrical behavior.

Different plating materials exhibit markedly different behaviors:

  • Tin (used only for terminal plating) offers low cost and good solderability but oxidizes quickly, raising contact resistance in low-current circuits.
  • Silver provides excellent conductivity, but it tarnishes in the presence of sulfur or humidity, forming insulating silver sulfide films.
  • Gold, though softer and more expensive, is chemically inert and does not oxidize or tarnish. It maintains stable, low contact resistance even under micro-ampere currents where other metals fail.

This property is crucial for tactile switches used in low-level signal applications, such as microcontroller input circuits, communication modules, or medical sensors, in which switching currents may be in the microamp to milliamp range. At such levels, even a thin oxide film can impede electron flow, creating unreliable or noisy signals.

The science behind gold’s stability

Gold’s chemical stability stems from its electronic configuration: Its filled d-orbitals make it resistant to oxidation and most chemical reactions. Its noble nature prevents formation of insulating oxides or sulfides, meaning the surface remains metallic and conductive throughout the switch’s service life.

From a materials engineering standpoint, plating thickness and uniformity are key. Gold layers used in tactile switches typically range from 0.1 to 1.0 µm, depending on required durability and environmental conditions. Thicker plating layers provide greater wear resistance but increase cost. Engineers should verify that the plating process, often electrolytic or autocatalytic, ensures full coverage on complex contact geometries to avoid thin spots that could expose the base metal.

Many switch manufacturers, such as C&K Switches, use gold-over-nickel systems. The nickel layer acts as a diffusion barrier, preventing copper migration into the gold and preserving long-term contact integrity. Without this barrier, copper atoms could diffuse to the surface over time, leading to porosity and surface discoloration that undermine conductivity.

When to specify gold plating

Selecting the right contact material for your tactile switch can make or break long-term reliability. Gold plating isn’t always necessary, but in the right applications, it’s indispensable.

  • Low-level or signal circuits: When switching currents fall below 100 mA, even thin oxide films can prevent reliable conduction. Gold’s inert surface ensures clean, consistent contact resistance for microcontroller inputs, logic circuits, sensors, and communication interfaces.
  • Mission-critical reliability: If system uptime or safety compliance is essential—such as in medical devices, aerospace, defense, or industrial safety systems—gold-plated switches prevent oxidation-related failures that could disrupt operations or endanger users.
  • Harsh or uncontrolled environments: Designs exposed to moisture, sterilization cycles, or outdoor weathering benefit from gold’s corrosion resistance. Examples include surgical tools, outdoor telecom nodes, and HVAC or factory automation controls.
  • Long lifecycle or high actuation counts: Gold plating resists fretting corrosion and wear, maintaining stable performance through hundreds of thousands to millions of actuations, critical in applications such as automotive HMI controls or consumer appliances with frequent use.
  • Signal integrity and noise sensitivity: In instrumentation, medical sensing, and precision measurement, gold’s smooth, oxide-free surface minimizes contact noise and bounce, ensuring clean signal transitions and reducing the need for debouncing circuitry.
  • Mixed-metal interfaces: Avoid combining gold with tin or silver on mating surfaces—galvanic reactions can accelerate corrosion. When other components use gold contacts, matching them with gold-plated tactile switches maintains uniform conductivity and compatibility.

Choose gold-plated tactile switches when reliability, environmental resistance, or low-current signal integrity outweighs incremental cost. In these cases, gold is not a luxury; it’s engineering insurance.

Reliability in harsh and low-signal environments

Gold plating’s reliability benefits become evident under extreme environmental or electrical conditions.

Medical devices and sterilization environments

Surgical and diagnostic instruments often undergo repeated steam autoclaving or chemical sterilization cycles. Moisture and elevated temperatures accelerate corrosion in conventional materials. Gold’s nonreactive surface resists degradation, ensuring consistent actuation force and electrical performance across hundreds of sterilization cycles. This reliability directly impacts patient safety and device regulatory compliance.

Outdoor telecommunications and IoT

Field-mounted communication hardware—base stations, gateways, or outdoor routers—encounters moisture, pollution, and temperature fluctuations. In such applications, tin or silver plating can oxidize within months, leading to noisy signals or switch failure. Gold-plated tactile switches preserve contact integrity, maintaining low and stable resistance even after prolonged environmental exposure.

Industrial automation and control

Industrial environments expose components to dust, vibration, and cleaning solvents. Gold’s smooth, ductile surface resists micro-pitting and fretting corrosion, while its low coefficient of friction contributes to predictable mechanical wear. As a result, switches maintain consistent tactile feedback over millions of actuations, a vital factor in HMI panels in which operator confidence depends on feel and repeatability.

Aerospace, defense, and safety-critical systems

In avionics and safety systems, even transient failures are unacceptable. Gold’s resistance to oxidation and its stable performance across −40°C to 125°C enable designers to meet MIL-spec and IPC reliability standards. The material’s immunity to metal whisker formation, common in tin coatings, eliminates one of the most insidious causes of short-circuits in mission-critical electronics.

Automation and robotics equipment benefit from gold-plated tactile switches that deliver long electrical life and immunity to oxidation in high-cycle production environments. (Source: Shutterstock)

Tackling common mechanical and electrical issues

Contact bounce reduction

Mechanical contacts inherently produce bounce, a rapid, undesired make-or-break sequence that occurs as the metal contacts settle. Bounce introduces signal noise and may require software or hardware debouncing. Gold’s micro-smooth surface reduces surface asperities, shortening bounce duration and producing cleaner signal transitions. This improves response time and may simplify firmware filtering or eliminate RC snubber circuits.
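To make the firmware side of this concrete, here is a minimal illustrative sketch (not from the article) of a common counting debouncer in C: a change in the raw switch level is accepted only after several consecutive agreeing samples, which suppresses the make-and-break chatter while the contacts settle. The 1-ms polling tick, the sample threshold, and the simulated input pattern are assumptions made for the example.

    /* Illustrative polled debouncer: a raw input change is accepted only after
     * DEBOUNCE_SAMPLES consecutive agreeing samples, filtering contact bounce. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define DEBOUNCE_SAMPLES 5              /* ~5 ms of agreement at a 1-ms poll rate */

    /* Feed one raw sample per tick; returns the current debounced state. */
    static bool debounce(bool raw_sample)
    {
        static bool stable_state = false;   /* last accepted (debounced) state          */
        static uint8_t run = 0;             /* consecutive samples disagreeing with it  */

        if (raw_sample == stable_state) {
            run = 0;                        /* agreement: reset the run counter         */
        } else if (++run >= DEBOUNCE_SAMPLES) {
            stable_state = raw_sample;      /* sustained new level: accept it           */
            run = 0;
        }
        return stable_state;
    }

    int main(void)
    {
        /* Simulated bouncy press: chatter for a few ticks, then a solid '1'. */
        const bool samples[] = {0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1};
        for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++)
            printf("tick %2zu  raw=%d  debounced=%d\n", i, samples[i], debounce(samples[i]));
        return 0;
    }

Because gold contacts bounce for a shorter time, the sample threshold (and therefore the added input latency) can often be reduced compared with switches using contact materials that oxidize.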

Metal whisker mitigation

Tin and zinc surfaces can spontaneously grow metallic whiskers under stress, causing shorts or leakage currents. Gold plating’s crystalline structure is stable and does not support whisker growth, a key reliability advantage in fine-pitch or high-density electronics.

Thermal and mechanical stability

Gold has a low coefficient of thermal expansion mismatch with typical nickel underplates, minimizing stress during thermal cycling. It does not harden or crack under high temperatures, allowing switches to function consistently from cold-storage conditions (−55°C) to high-heat appliance environments (>125°C surface temperature).

Electrical characteristics: low-level signal switching

Many engineers underestimate how contact material impacts performance in low-current circuits. When switching below approximately 100 mA, oxide film resistance dominates contact behavior. Non-noble metals can form surface barriers that block electron tunneling, leading to contact resistance in the tens or hundreds of ohms. Gold’s stable surface keeps contact resistance in the 10- to 50-mΩ range throughout the product’s life.
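To put those numbers in perspective, consider an illustrative comparison (not from the article) at an assumed 10-mA signal current, using Ohm’s law:

    \[ V = IR:\qquad 10\,\mathrm{mA}\times 50\,\mathrm{m\Omega} = 0.5\,\mathrm{mV} \quad\text{vs.}\quad 10\,\mathrm{mA}\times 100\,\Omega = 1\,\mathrm{V} \]

A half-millivolt drop across a gold contact is negligible, whereas a 1-V drop across an oxidized contact is a large fraction of a 3.3-V logic rail and can corrupt the sensed level.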

Additionally, gold’s low and stable contact resistance minimizes contact noise, which can be especially important in digital logic and analog sensing circuits. For instance, in a patient monitoring device using microvolt-level signals, a transient resistance increase of just a few ohms can cause erroneous readings or false triggers. Gold plating ensures clean signal transmission even at the lowest currents.

Balancing cost and performance

It’s true that gold plating adds material and process costs. However, lifecycle analysis often reveals a compelling return on investment. In applications in which switch replacement or failure results in downtime, service calls, or warranty claims, the incremental cost of gold plating is negligible compared with the total system value.

Manufacturers help designers manage cost by offering hybrid switch portfolios. For example, C&K’s KMR, KSC, and KSR tactile switch families include both silver-plated and gold-plated versions. This allows designers to standardize on a footprint while selecting the appropriate contact material for each function: gold for logic-level or safety-critical inputs, silver for higher-current or less demanding tasks.

KSC2 Series tactile switches, available with gold-plated contacts, combine long electrical life and stable actuation in compact footprints for HVAC, security, and home automation applications. (Source: C&K Switches)

Design considerations and best practices

When specifying gold-plated tactile switches, engineers should evaluate both electrical and environmental parameters to ensure the plating delivers full value:

  • Current rating and load type: Gold excels in “dry circuit” switching below 100 mA. For higher currents (>200 mA), arcing can erode gold surfaces; mixed or dual plating (gold plus silver) may be more appropriate.
  • Environmental sealing: Use sealed switch constructions (IP67 or higher) when exposure to fluids or contaminants is expected. This complements gold plating and extends operating life.
  • Plating thickness: For harsh environments or long lifecycles (>1 million actuations), specify a thicker gold layer (≥0.5 µm). Thinner flash layers (0.1 µm) are adequate for indoor or low-stress use.
  • Base metal compatibility: Always ensure the plating stack includes a nickel diffusion barrier to prevent copper migration.
  • Mating surface design: Gold-to-gold contacts perform best. Avoid mixing gold with tin on the mating side, which can cause galvanic corrosion.
  • Actuation force and feel: Gold’s lubricity affects tactile response slightly; designers should verify that chosen switches maintain the desired haptic feel across temperature and wear cycles.

By integrating these considerations early in the design process, engineers can prevent many reliability issues that otherwise surface late in validation or field deployment.

Lifecycle testing and qualification standards

High-reliability applications frequently require validation under standards such as:

  • IEC 60512 (electromechanical component testing)
  • MIL-DTL-83731F (for aerospace-grade switches)
  • AEC-Q200 (automotive passive component qualification)

Gold-plated tactile switches often exceed these standards, maintaining consistent contact resistance after 10⁵ to 10⁶ mechanical actuations, temperature cycling, humidity exposure, and vibration. Some miniature switch series, such as the C&K KSC2 and KSC4 families, can endure as many as 5 million actuations, highlighting how material selection plays a critical role in overall system durability.

Practical benefits: From design efficiency to end-user experience

For engineers, specifying gold-plated tactile switches yields several tangible advantages:

  • Reduced maintenance: Longer life and fewer field failures minimize warranty and service costs.
  • Simplified circuit design: Low and stable contact resistance can eliminate the need for additional filtering or conditioning circuits.
  • Enhanced system reliability: Predictable behavior across temperature, humidity, and lifecycle improves compliance with functional-safety standards such as ISO 26262 or IEC 60601.
  • Improved user experience: Consistent tactile feel and reliable operation translate to higher perceived quality and brand reputation.

For the end user, these benefits manifest as confidence—buttons that always respond, equipment that lasts, and interfaces that feel precise even after years of use.

Designing for a connected, reliable future

As electronic systems become smarter, smaller, and more interconnected, tolerance for failure continues to shrink. A single faulty switch can disable a medical device, interrupt a network node, or halt an industrial process. Choosing gold-plated tactile switches is therefore not simply a materials decision; it’s a reliability strategy.

Gold’s unique combination of chemical inertness, electrical stability, and mechanical durability ensures consistent performance across millions of cycles and the harshest conditions. For design engineers striving to deliver long-lived, premium-quality products, gold plating provides both a technical safeguard and a competitive edge.

In the end, reliability begins at the contact surface—and when that surface is gold, the connection is built to last.

About the author

Michaela Schnelle is a senior associate product manager at Littelfuse, based in Bremen, Germany, covering the C&K tactile switches portfolio. She joined Littelfuse 16 years ago and works with customers and distributors worldwide to support design activities and new product introductions. She focuses on product positioning, training, and collaboration to help customers bring reliable designs to market.

The post Why gold-plated tactile switches matter for reliability appeared first on EDN.

CES 2026: Multi-link, 20-MHz IoT boost Wi-Fi 7 prospects

EDN Network - 3 hours 58 min ago

Wi-Fi 7 enters 2026 with a crucial announcement made at CES 2026 in Las Vegas, Nevada. The Wi-Fi Alliance is introducing the 20-MHz device category for Wi-Fi 7, aimed at addressing the needs of the broader Internet of Things (IoT) ecosystem. Add Wi-Fi 7’s multi-link IoT capability to this, and you have a more consistent, always‑connected experience for applications such as security cameras, video doorbells, alarm systems, medical devices, and HVAC systems.

The 802.11be standard, widely known as Wi-Fi 7, was drafted in 2024, and the formal standard followed in 2025. From Wi-Fi 1 to Wi-Fi 5, the focus was on increasing the connection’s data rate. But then the industry realized that a mere increase in speed was no longer enough on its own.

“The challenge shifted to managing traffic on the network as more devices were coming onto the network,” said Sivaram Trikutam, senior VP of wireless products at Infineon Technologies. “So, the focus in Wi-Fi 6 shifted toward increasing the efficiency of the network.”

The industry then took Wi-Fi 7 to the next level in terms of efficiency over the past two years, especially with the emergence of high-performance applications. The challenge shifted to how multiple devices on the network could share spectrum efficiently so they could all achieve a useful data rate.

The quest to support multiple devices, at the heart of Wi-Fi 7 design, eventually led to the Wi-Fi Alliance’s announcement that even a 20 MHz IoT device can now be certified as a Wi-Fi 7 device. The Wi-Fi 7 certification program, expanded to include 20-MHz IoT devices, could have a profound impact on this wireless technology’s future.

Figure 1 Wi-Fi 7 in access points and routers is expected to overtake Wi-Fi 6/6E in 2028. Source: Infineon

20-MHz IoT in Wi-Fi 7’s fold

Unlike notebooks and smartphones, 20-MHz devices don’t require a high data rate. IoT applications like door locks, thermostats, security cameras, and robotic vacuum cleaners need to be connected, but they don’t require gigabit data rates; they typically need around 15 Mbps. What they do demand is high-quality, reliable connectivity, as these devices often sit in locations that are challenging from a wireless perspective.

At CES 2026, Infineon unveiled what it calls the industry’s first 20-MHz Wi-Fi 7 device for IoT applications. ACW741x, part of Infineon’s AIROC family of multi-protocol wireless chips, integrates a tri-radio encompassing Wi-Fi 7, Bluetooth LE 6.0 with channel sounding, and IEEE 802.15.4 Thread with Matter ecosystem support in a single device.

Figure 2 ACW741x integrates radios for Wi-Fi 7, Bluetooth LE 6.0, and IEEE 802.15.4 Thread in a single chip. Source: Infineon

The ACW741x tri-radio chip also integrates wireless sensing capabilities, adding contextual awareness to IoT devices and facilitating home automation and personalization applications. Here, Wi-Fi Channel State Information (CSI) based on the 802.11bf standard enables enhanced Wi-Fi sensing with intelligence sharing between same-network devices. Next, channel sounding delivers accurate, secure, and low-power ranging with centimeter-level accuracy.

ACW741x is optimized for a 20-MHz design to support battery-operated applications such as security cameras, door locks, and thermostats that require ultra-low Wi-Fi-connected standby power. It bolsters link reliability with adaptive band switching to mitigate congestion and interference.

Adaptive band switching without disconnecting from the network opens the door to Wi-Fi 7 multi-link for IoT devices while maintaining concurrent links across 2.4 GHz, 5 GHz, and 6 GHz frequency bands. ACW741x supports Wi-Fi 7 multi-link for IoT, enhancing robustness in congested environments.

Multi-link for IoT devices

Wi-Fi operates in three bands—2.4 GHz, 5 GHz, and 6 GHz—and when a device connects to an access point, it must choose a band. Once connected, it cannot change that band, even if it gets congested. That changes with Wi-Fi 7, which can connect virtually to all three bands with a single RF chain at no extra system cost.

Wi-Fi 7 operates in the best frequency band, enhancing robustness against congestion in home networks and interference from neighboring networks. “Multi-link for IoT allows establishing connections at all bands, and a device can dynamically select which band to use at a given point via active band switching without disconnecting from the network,” said Trikutam. “And you can move from one band to another by disconnecting and reconnecting within 7 to 10 seconds.”

That’s crucial because the number of connected devices in a home is growing rapidly: from 10 to 15 devices just after the pandemic to more than 50 devices in a typical U.S. or European home in 2025. Add this to the introduction of 20-MHz IoT devices in Wi-Fi 7’s fold, and you have a rosy picture for this wireless technology’s future.

Figure 3 Multi-link for IoT enables wireless connections across all three frequency bands. Source: Infineon

According to the Wi-Fi Alliance, shipments of access points supporting the standard rose from 26.3 million in 2024 to a projected 66.5 million in 2025. And ABI Research projects that the transition to Wi-Fi 7 will accelerate further in 2026, with a forecast annual shipment number of Wi-Fi 7 access points at 117.9 million.

Related Content

The post CES 2026: Multi-link, 20-MHz IoT boost Wi-Fi 7 prospects appeared first on EDN.

LiDAR’s power and size problem

EDN Network - 5 hours 13 min ago

Awareness of LiDAR and advanced laser technologies has grown significantly in recent years. This is in no small part due to their use in autonomous vehicles such as those from Waymo, Nuro, and Cruise, plus those from traditional brands such as Volvo, Mercedes, and Toyota. It’s also making its way into consumer applications; for example, the iPhone Pro (12 and up) includes a LiDAR scanner for time-of-flight (ToF) distance calculations.

The potential of LiDAR technologies extends beyond cars, including applications such as range-finding in golf and hunting sights. However, the nature of the technology used to power all these systems means that solutions currently on the market tend to be bulkier and more power-intensive than is ideal. Even within automotive, the cost, power consumption, and size of LiDAR modules continue to limit adoption.

Tesla, for example, has chosen to leave out LiDAR completely and rely primarily on vision cameras. Waymo does use LiDAR, but has reduced the number of sensors in its sixth-generation vehicles: from five to four.

Overcoming the known power and size limitations in LiDAR design is critical to enabling scalable, cost-effective adoption across markets. Doing so also creates the potential to develop new application sectors, such as bicycle traffic or blind-spot alerts.

In this article, we’ll examine the core technical challenges facing laser drivers that have tended to restrict wider use. We’ll also explore a new class of laser driver that is both smaller and significantly more power efficient, helping to address these issues.

Powering ToF laser drivers

The main power demand within a LiDAR module comes from the combination of the laser diode and its associated driver that together generate pulsed emissions in the visible or near-infrared spectrum. Depending on the application, the LiDAR may need to measure distances up to several hundred meters, which can require optical power of 100-200 W. Since the efficiency of the laser diodes is typically 20-30%, the peak driving power delivered to the laser must be around 1 kW.
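As a quick sanity check of those figures (an illustrative calculation using the upper end of the optical power and the lower end of the diode efficiency quoted above):

    \[ P_{\mathrm{drive}} \approx \frac{P_{\mathrm{optical}}}{\eta_{\mathrm{laser}}} = \frac{200\,\mathrm{W}}{0.20} = 1\,\mathrm{kW} \]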

On the other hand, the pulse duration must be short to ensure accuracy and adequate resolution, particularly for objects at close distances. In addition, since the peak optical power is high, limiting the pulse duration is critical to ensure the total energy conforms to health guidelines for eye safety. Fulfilling all these requirements typically calls for pulses of 5 ns or less.

Operating the laser thus requires the driver to switch a high current at extremely high speed. Standing in the designer’s way is the inductance associated with circuit connections, board parasitics, and the bondwires of IC packages, which is enough to prevent the current from changing instantaneously.

These small parasitic inductances are intrinsic to the circuit and cannot be eliminated. However, by introducing a parallel capacitance, it is possible to create a resonant circuit that takes advantage of this inductance to achieve a short pulse duration. If the overall parasitic inductance is about 1 nH and the pulse duration is to be a few nanoseconds, the capacitance can be only a few nanofarads or less. With such a low value of capacitance, the applied voltage must be on the order of 100 V to achieve the desired peak power in the laser. This must be provided by boosting the available supply voltage.
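A rough back-of-envelope check of these values, assuming the pulse approximates a half-cycle of the LC resonance (an idealization; the actual waveform depends on the driver and diode characteristics):

    \[ t_{\mathrm{pulse}} \approx \pi\sqrt{LC} = \pi\sqrt{1\,\mathrm{nH}\times 2.5\,\mathrm{nF}} \approx 5\,\mathrm{ns} \]

which is consistent with the few-nanosecond pulses and few-nanofarad capacitance described above.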

Discrete laser driver

Figure 1 shows the circuit diagram for a resonant laser-diode driver, including the resonant capacitor (Csupply) and effective circuit inductance (Lbond). A boost regulator provides the high voltage needed to operate the resonant circuit.

Figure 1 Resonant gate driver and boost regulator, including the resonant capacitor (Csupply) and effective circuit inductance (Lbond). (Source: Silanna Semiconductor)

The circuit requires a boost voltage regulator, depicted as Boost voltage regulator (VR) in the diagram, to provide the high voltage needed at Csupply to deliver the required energy. The circuit as shown contains a discrete gate driver for the main switching transistor (FET), which must be controlled separately to generate the desired switching signals.

In addition, isolation resistance is needed between Cfilter and Csupply, shown in the diagram, to ensure the resonant circuit can operate properly. This is relatively inefficient, as no more than 50% of the energy is transferred from the filter side to Csupply.

Handheld equipment limitations

In smaller equipment types, such as handheld ranging devices and action cameras, the high voltage must be derived from a small battery of low nominal voltage—typically a 3-V CR2 or a 3.7-V (nominal voltage, up to 4.2 V) lithium battery—which is usually the main power source.

Figure 2 shows a comparable schematic for a laser-diode driver powered from a 3.7-V rechargeable lithium battery. Achieving the required voltage using a discrete boost VR and laser-diode driver is complex, and designers need to be very careful about efficiency.

Multiple step-up converters are often used, but efficiency drops rapidly. If two stages are used, each with an efficiency of 90%, the combined efficiency across the two stages is only 81%.

Figure 2 A laser driver operated from a rechargeable lithium battery; with two step-up stages, the combined efficiency is roughly 80%. (Source: Silanna Semiconductor)

In addition, there are stringent constraints on enclosure size, and the devices are often sealed to prevent dust or water ingress. On the other hand, sealing also prevents cooling airflow, thereby making thermal management more difficult. In addition, high overall efficiency is essential to maximize battery life while ensuring the high optical power needed for long range and high accuracy.

Circuit layout and size

The high speeds and slew rates involved in making the LiDAR transmitter work call for proper consideration of circuit layout and component selection. A gallium nitride (GaN) transistor is typically preferred for its ability to support fast switching at high voltage compared to an ordinary silicon MOSFET. Careful attention to ground connections is also required to prevent voltage overshoots and ground bounce from disrupting proper transistor switching and potentially damaging the transistor.

Also, a compact module design is difficult to achieve due to efficiency limitations and thermal management challenges. The inefficiencies in the discrete circuit implementation mean operating at high power produces high losses and increased self-heating that can cause the operating temperature to rise. However, while short pulses can reduce the average thermal load, current slew rates must be extremely high. If this cannot be maintained consistently, extra losses, more heat, and degraded performance can result.

A heatsink is the preferred thermal management solution, although a large heatsink can be needed, leading to a larger overall module size and increased bill of materials cost. In addition, ensuring eye safety calls for a fast shutdown in the event of a circuit fault.

Bringing the boost stage, isolation, GaN FET driver, and control logic into a single compact IC (see Figure 3) achieves greater functional integration and offers a route to higher efficiency, smaller form factors, and enhanced safety through nanosecond-level fault response.

Figure 3 An integrated driver designed for resonant capacitor charging combines short pulse width with high power and efficiency. This circuit was implemented with Silanna SL2001 dual-output driver. (Source: Silanna Semiconductor)

While leveraging resonant-capacitor charging to achieve short, tightly controlled pulse duration, this integration avoids the energy losses incurred in the capacitor-to-capacitor transfer circuitry. The fault sensing and reporting can be brought on-chip, alongside these timing and control features.

This approach is seen in LiDAR driver ICs like the Silanna FirePower family, which integrate all the functions needed for charging and firing edge-emitting laser (EEL) or vertical-cavity surface-emitting laser (VCSEL) resonant-mode laser diodes at sub-3-ns pulse width. Figure 4 shows how an experimental setup produced a 400-W pulse of 2.94 ns, operating with a capacitor voltage boosted to 120 V with a resonant capacitor value of 2.48 nF.

Figure 4 Test pulse produced using integrated driver and circuit configuration as in Figure 3. (Source: Silanna Semiconductor)

The driver maintains control of the resonant capacitor energy and eliminates any effects of input voltage fluctuations, while on-chip logic sets the output power and performs fault monitoring to ensure eye safety. The combined effects of advanced integration and accurate logic-based control can save 90% of charging power losses compared to a discrete implementation and realize an overall charging efficiency of 85%. The control logic and fault monitoring are configured through an I2C connection.

Of the two devices in this family, the SL2001 works with a supply voltage from 3 V to 24 V and provides a dual GaN/MOS drive that enables peak laser power greater than 1000 W with a pulse-repetition frequency up to several MHz. The second device, the SL2002, is a single-channel driver targeted for lower power applications and is optimized for low input voltage (3 V-6 V) operation. Working off a low supply voltage, this driver’s 80-V laser diode voltage and 1 MHz repetition rate are suited to handheld applications such as rangefinders and 3D mapping devices. Figure 5 shows how the SL2002 can simplify the driving circuit for a battery-operated ranging device powered from a 3.7 V lithium battery.

Figure 5 Simplified circuit diagram for low-voltage battery-operated ranging. (Source: Silanna Semiconductor)

Shrinking LiDAR modules

LiDAR has been a key component in the success of automated driving, working in conjunction with other sensors, including radar, cameras, and ultrasonic detectors, to complete the vehicle’s perception system. However, LiDAR modules must become smaller and more energy-efficient to earn their place in future vehicle generations and fulfil opportunities beyond the automotive sphere.

Focusing innovation on the laser-driving circuitry unlocks the path to next-generation LiDAR that is smaller, faster, and more energy-efficient than before. New, single-chip drivers that deliver high optical output power with tightly controlled, nanosecond pulse width enable LiDAR to address tomorrow’s cars as well as handheld devices such as rangefinders.

Ahsan Zaman is Director of Marketing at Silanna Semiconductor, Inc. for the FirePower™ Laser Drivers line of products. He joined the company in 2018 through the acquisition of Appulse Power, a Toronto, Canada-based startup for AC-DC power supplies, where he was a co-founder and VP of Engineering. Prior to that, Ahsan received his B.A.Sc., M.A.Sc., and Ph.D. degrees in Electrical Engineering from the University of Toronto, Canada, in 2009, 2012, and 2015, respectively. He has more than a decade of experience in power converter architectures, mixed-signal IC design, low-volume and high-efficiency power management solutions for portable electronic devices, and advanced control methods for high-frequency switch-mode power supplies. Ahsan has previously collaborated with industry-leading semiconductor companies such as Qualcomm, TI, NXP, and EXAR, has co-authored more than 20 IEEE conference and journal publications, and holds several patents in this field.

Related Content

The post LiDAR’s power and size problem appeared first on EDN.

Germanium Mining Corp joins US National Defense Industrial Association

Semiconductor today - 5 hours 22 min ago
Publicly traded mineral exploration company Germanium Mining Corp of Vancouver, BC, Canada has been accepted as a new member of the US National Defense Industrial Association (NDIA), which supports collaboration across industry, government and academia to strengthen US national security and the defense industrial base. Germanium Mining Corp is already a member of the Nevada Mining Association...

Redefining Edge Computing: How the STM32V8 18nm Node Outperforms Legacy 40nm MCUs

ELE Times - 8 hours 3 min ago

STMicroelectronics held a virtual media briefing, hosted by Patrick Aidoune, General Manager, General Purpose MCU Division at ST, on November 17, 2025. The briefing was held before their flagship event, the STM32 Summit, where they launched STM32V8, a new generation of STM32 microcontrollers.

STMicroelectronics recently introduced its new-generation microcontroller, STM32V8, within the STM32 family. Built on an innovative 18nm process technology that combines FD-SOI with embedded phase-change memory (PCM), this microcontroller is the first of its kind in the world: the first sub-20nm process MCU to use FD-SOI along with embedded PCM technology.

FD-SOI Technology

FD-SOI is a silicon technology co-developed by ST that has brought innovation to aerospace and automotive applications. The 18nm process, co-developed with Samsung Foundry, provides a cost-competitive leap in both performance and power consumption.

FD-SOI technology provides strong robustness against ionising particles and reliability in harsh operating environments, making it particularly suitable for the intense radiation exposure found in earth-orbit systems. FD-SOI also helps reduce static power consumption and allows operation from a lower supply voltage, while still withstanding harsh industrial environments.

Key Features

STM32V8’s Arm Cortex-M85 core, together with the 18nm process, gives it a clock speed of up to 800MHz, making it the most powerful STM32 ever shipped. It also embeds up to 4 Mbytes of user memory in a dual-bank configuration, allowing bank swapping for seamless code updates.

Keeping in mind the needs of developers, the STM32V8 provides more compute headroom, along with more security and improved efficiency. Compared with 40nm process nodes using the same technologies, the STM32V8 brings improved performance, higher density, and better power efficiency.

Industrial Applications

This new microcontroller is a multipurpose system to benefit several industries:

  • Factory Automation and Robotics
  • Audio Applications
  • Smart Cities and Buildings
  • Energy Management Systems
  • Healthcare and Biosensing
  • Transportation (ebikes)

Achievements

ST’s new microcontroller has been selected by SpaceX for its high-speed connectivity system in the Starlink Satellite System.

“The successful deployment of the Starlink mini laser system in space, which uses ST’s STM32V8 microcontroller, marks a significant milestone in advancing high-speed connectivity across the Starlink network. The STM32V8’s high computing performance and integration of large embedded memory and digital features were critical in meeting our demanding real-time processing requirements, while providing a higher level of reliability and robustness to the Low Earth Orbit environment, thanks to the 18nm FD-SOI technology. We look forward to integrating the STM32V8 into other products and leveraging its capabilities for next-generation advanced applications,” said Michael Nicolls, Vice President, Starlink Engineering at SpaceX.

STM32V8, like its predecessors, is expected to draw significant benefit from ST’s edge AI ecosystem, which is under continued expansion. Currently, the STM32V8 is in early-stage access for selected customers, with availability to key OEMs in the first quarter of 2026 and broader availability to follow.

Apart from unveiling the new generation microcontroller, ST also announced the expansion of its STM32 AI Model Zoo, which is part of the comprehensive ST Edge AI Suite of tools. The STM32 AI Model Zoo has more than 140 models from 60 model families for vision, audio, and sensing AI applications at the edge, making it the largest MCU-optimised library of its kind.

The AI Model Zoo has been designed with the requirements of both data scientists and embedded systems engineers in mind: models that are accurate enough to be useful and that also fit within their energy and memory constraints.

The STM32 AI Model Zoo is the richest in the industry, offering not only multiple models but also scripts to easily retrain models, evaluate accuracy, and deploy on boards. ST has also introduced native support for PyTorch models. This complements the existing support for the TensorFlow and Keras AI frameworks and the LiteRT and ONNX formats, giving developers additional flexibility in their development workflow. ST is also introducing more than 30 new families of models, which can use the same deployment pipeline. Many of these models have already been quantised and pruned, meaning that they offer significant memory-size and inference-time optimisations while preserving accuracy.

Additionally, they announced the release of STM32 Sidekick, their new AI agent on the ST Community, available 24/7. This new AI agent is trained on official STM32 documentation (datasheets, reference manuals, user manuals, application notes, wiki entries, and community knowledge base articles) to help users locate relevant technical data, obtain concise summaries of complex topics, and discover insights and documents. They also announced STM32WL3R, a version of their STM32WL3 tailored for remote-control applications supporting the 315 MHz band. The STM32WL3R is a sub-GHz wireless microcontroller with an ultra-low-power radio.

~ Shreya Bansal, Sub-Editor

The post Redefining Edge Computing: How the STM32V8 18nm Node Outperforms Legacy 40nm MCUs appeared first on ELE Times.

Vitrealab closes $11m Series A financing round

Semiconductor today - 8 hours 3 min ago
Vitrealab GmbH of Vienna, Austria, a developer of photonic integrated circuits (PICs) for laser–LCoS-based augmented reality (AR) light engines, has closed a significantly oversubscribed $11m Series A financing round, led by LIFTT Italian Venture Capital and LIFTT EuroInvest with participation from Constructor Capital, aws Gründungsfonds, Gateway Ventures, PhotonVentures, xista Science Ventures, Moveon Technologies, and Hermann Hauser Investment...

🎓 Winter admission 2026 at KPI: the zero-year course “Open Path to Higher Education”

News - 8 hours 14 min ago

Starting in February 2026, Igor Sikorsky Kyiv Polytechnic Institute is opening winter enrollment for the zero-year course, its preparatory department “Open Path to Higher Education”.

“‘Bharat’ will become a major player in entire electronics stack…”, Predicts Union Minister, Ashwini Vaishnaw

ELE Times - 8 hours 56 min ago

Union Electronics and IT Minister Ashwini Vaishnaw predicted that ‘Bharat’ will become a major player in the entire electronics stack, in terms of design, manufacturing, operating system, applications, materials, and equipment.

In an X post, the Union Minister drew attention to a major milestone for Prime Minister Narendra Modi’s ‘Make in India’ initiative and for making India a major producer economy, as Apple shipped $50 billion worth of mobile phones in 2025.

“Electronics production has increased six times in the last 11 years. And electronics exports have grown 8 times under PM Modi’s focused leadership. This progress has propelled electronics products among the top three exported items,” Vaishnaw noted.

He further noted that 46 component manufacturing projects, along with laptop, server, and hearables manufacturers, have been added to the ecosystem, making electronics manufacturing a major driver of the manufacturing economy.

“Four semiconductor plants will start commercial production this year. Total jobs in electronics manufacturing are now 25 lakh, with many factories employing more than 5,000 employees in a single location. Some plants employ as many as 40,000 employees in a single location,” the minister informed, adding that “this is just the beginning”.

Last week, the industry welcomed the approval of 22 new proposals under the third tranche of the Electronics Components Manufacturing Scheme (ECMS) by the government, saying that it marks a decisive inflexion point in India’s journey towards deep manufacturing and the creation of globally competitive Indian champions in electronics components.

With this, the total number of ECMS-approved projects rises to 46, taking cumulative approved investments to over Rs 54,500 crore. Earlier tranches saw seven projects worth Rs 5,532 crore approved on October 22 and 17 projects amounting to Rs 7,172 crore on November 17. The rapid scale-up across tranches underscores the strong industry response and the growing confidence in India’s components manufacturing vision.

According to the IT Ministry, the 22 projects approved in the third tranche are expected to generate production worth Rs 2,58,152 crore and create 33,791 direct jobs.

The post “‘Bharat’ will become a major player in entire electronics stack…”, Predicts Union Minister, Ashwini Vaishnaw appeared first on ELE Times.

NVIDIA’s Jetson T4000 for Lightweight & Stable Edge AI Unveiled by EDOM

ELE Times - 11 hours 22 min ago

EDOM Technology announced the introduction of the NVIDIA Jetson T4000 edge AI module, addressing the growing demand from system integrators, equipment manufacturers, and enterprise customers for balanced performance, power efficiency, and deployment flexibility. With powerful inference capability and a lightweight design, NVIDIA Jetson T4000 enables faster implementation of practical physical AI applications.

Powered by NVIDIA Blackwell architecture, NVIDIA Jetson T4000 supports Transformer Engine and Multi-Instance GPU (MIG) technologies. The module integrates a 12-core Arm Neoverse-V3AE CPU, three 25GbE network interfaces, and a wide range of I/O options, making it well-suited for low-latency, multi-sensor, and real-time computing requirements. In addition, Jetson T4000 features a third-generation programmable vision accelerator (PVA), dual encoders and decoders, and an optical flow accelerator. These dedicated hardware engines allow stable AI inference even under constrained compute and power budgets, making the platform particularly suitable for mid-range models and real-time edge applications.

For system integrators (SIs), the modular architecture of Jetson T4000, combined with NVIDIA’s mature software ecosystem, enables rapid integration of vision, sensing, and control systems. This significantly shortens development and validation cycles while improving project delivery efficiency, especially for multi-site and scalable edge AI deployments.

For equipment manufacturers, Jetson T4000’s compact form factor and low-power design allow flexible integration into a wide range of end devices, including advanced robotics, industrial equipment, smart terminals, machine vision systems, and edge controllers. These capabilities help manufacturers bring stable AI inference into products with limited space and power budgets, accelerating intelligent product upgrades.

Enterprise users can deploy Jetson T4000 across diverse scenarios such as smart factories, smart retail, security, and edge sensor data processing. By performing inference and data pre-processing at the edge, organisations can reduce system latency, lower cloud workloads, and improve overall operational efficiency—while maintaining system stability and deployment flexibility.

In robotics and automation applications, Jetson T4000 features low power consumption, high-speed I/O and a compact footprint, making it an ideal platform for small mobile robots, educational robots, and autonomous inspection systems, delivering efficient and reliable AI computing for a wide range of automation use cases.

NVIDIA Jetson product lineup spans from lightweight to high-performance modules, including Jetson T4000 and T5000, addressing diverse requirements ranging from compact edge devices and industrial control systems to higher-performance inference applications. With NVIDIA’s comprehensive AI development tools and SDKs, developers can rapidly port models, optimise inference performance, and seamlessly integrate AI capabilities into existing system architectures.

Beyond supplying Jetson T4000 modules, EDOM Technology leverages its extensive ecosystem of partners across chips, modules, system integration, and application development. Based on the specific development stages and requirements of system integrators, equipment manufacturers, and enterprise customers, EDOM provides end-to-end support—from early-stage planning and technical consulting to ecosystem enablement. By sharing ecosystem expertise and practical experience, EDOM helps both existing customers and new entrants to the edge AI domain quickly build application capabilities and deploy edge AI solutions tailored to real-world scenarios.

The post NVIDIA’s Jetson T4000 for Lightweight & Stable Edge AI Unveiled by EDOM appeared first on ELE Times.

Anritsu to Bring the Future of Electrification Testing at CES 2026

ELE Times - 11 hours 49 min ago

Anritsu Corporation will exhibit Battery Cycler and Emulation Test System RZ-X2-100K-HG, planned for sale in the North American market as an evaluation solution for eMobility, at CES 2026 (Consumer Electronics Show), one of the world’s largest technology exhibitions to be held in Las Vegas, USA, from January 6 to January 9, 2026.

The launch of the RZ-X2-100K-HG in the North American market represents the first step in the global expansion efforts of TAKASAGO, LTD., which holds a significant share in the domestic EV development market, and it is an important measure looking ahead to future global market growth.

At CES 2026, a concept exhibition will showcase the Power HIL evaluation system combining the RZ-X2-100K-HG with dSPACE’s HIL simulator, demonstrating a new direction for the EV evaluation process.

Additionally, the power measurement solutions from DEWETRON, which joined the Anritsu Group in October 2025, will also be exhibited. Using a three-phase motor performance evaluation demonstration, we will present example applications.

About the RZ-X2-100K-HG

The RZ-X2-100K-HG is a test system developed by TAKASAGO, LTD. of the Anritsu Group, equipped with functions for charge-discharge testing and battery emulation that support high voltage and large current. It is a model based on the RZ-X2-100K-H, which has a proven track record in Japan, adapted to comply with the United States safety standards and input power specifications. This system is expected to be used for testing the performance, durability, and safety of automotive batteries and powertrain devices in North America.

About Power HIL

Power HIL (Power Hardware-in-the-Loop) is an extended simulation technology that combines virtual and real elements by adding a “real power supply function” to HIL (Hardware-in-the-Loop). Power HIL creates a virtual vehicle environment with real power, reproducing EV driving tests and charging tests compatible with multiple charging standards under conditions close to reality. This allows for high-precision and efficient evaluation of battery performance, safety, and charging compatibility without using an actual vehicle.

Terminology Explanation

  • Battery Emulation Test System

A technology that simulates the behaviour of real batteries (voltage, current, internal resistance, etc.) using a power supply device to evaluate how in-vehicle equipment operates.

The post Anritsu to Bring the Future of Electrification Testing at CES 2026 appeared first on ELE Times.

Keysight’s Software Solution for Reliable AI Deployment in Safety-Critical Environments

ELE Times - 12 hours 11 min ago

Keysight Technologies, Inc. introduced Keysight AI Software Integrity Builder, a new software solution designed to transform how AI-enabled systems are validated and maintained to ensure trustworthiness. As regulatory scrutiny increases and AI development becomes increasingly complex, the solution delivers transparent, adaptable, and data-driven AI assurance for safety-critical environments such as automotive.

AI systems operate as complex, dynamic entities, yet their internal decision processes often remain opaque. This lack of transparency creates significant challenges for industries, such as automotive, that must demonstrate safety, reliability, and regulatory compliance. Developers struggle to diagnose dataset or model limitations, while emerging standards such as ISO/PAS 8800 for automotive and the EU AI Act mandate explainability and validation without prescribing clear methods. Fragmented toolchains further complicate engineering workflows and heighten the risk of conformance gaps.

Keysight AI Software Integrity Builder introduces a unified, lifecycle-based framework that answers the critical question: “What is happening inside the AI system, and how do I ensure it behaves safely in deployment?” The solution equips engineering teams with the evidence needed for regulatory conformance and enables continuous improvement of AI models. Unlike fragmented toolchains that address isolated aspects of AI testing, Keysight’s integrated approach spans dataset analysis, model validation, real-world inference testing, and continuous monitoring.

Core capabilities of Keysight AI Software Integrity Builder include:

  • Dataset Analysis: Analyses data quality using statistical methods to uncover biases, gaps, and inconsistencies that may affect model performance.
  • Model-Based Validation: Explains model decisions and uncovers hidden correlations, enabling developers to understand the patterns and limitations of an AI system.
  • Inference-Based Testing: Evaluates how models behave under real-world conditions, detects deviations from training behaviour, and recommends improvements for future iterations.

While open-source tools and vendor solutions typically address only isolated aspects of AI testing, Keysight closes the gap between training and deployment. The solution not only validates what a model has learned, but also how it performs in operational scenarios — an essential requirement for high-risk applications such as autonomous driving.

Thomas Goetzl, Vice President and General Manager of Keysight’s Automotive & Energy Solutions, said: “AI assurance and functional safety of AI in vehicles are becoming critical challenges. Standards and regulatory frameworks define the objectives, but not the path to achieving a reliable and trustworthy AI deployment. By combining our deep expertise in test and measurement with advanced AI validation capabilities, Keysight provides customers with the tools to build trustworthy AI systems backed by safety evidence and aligned with regulatory requirements.”

With AI Software Integrity Builder, Keysight empowers engineering teams to move from fragmented testing to a unified AI assurance strategy, enabling them to deploy AI systems that are not only performant but also transparent, auditable, and compliant by design.

The post Keysight’s Software Solution for Reliable AI Deployment in Safety-Critical Environments appeared first on ELE Times.

Molecular Beam Epitaxy (MBE) Growth of GaAs-Based Devices

ELE Times - 13 hours 28 min ago

Courtesy: Orbit & Skyline

In the semiconductor ecosystem, we are familiar with the chips that go into our devices. Of course, they do not start as chips but are made into the familiar form once the process is complete. It is easy to imagine how to arrive at that end in silicon-based technology, but things are far more interesting in the III-V tech world. Here, we must first achieve the said III-V film using a thin-film deposition method. It is obvious that this would form the bedrock of the device, and quality is critical. Minimal defects, the highest possible mobility, and a growing list of demands from advancing technology have made this aspect extremely important in today’s world.

In this blog, we will cover how Molecular Beam Epitaxy (MBE) enables the growth of GaAs-based devices, its history, advantages, challenges, and the wide range of optoelectronic applications it supports. Looking to optimise thin-film growth or improve device yield? Explore our Semiconductor FAB Solutions for end-to-end support across Equipment, Process, and Material Supply.

What Is Molecular Beam Epitaxy (MBE)?

Molecular Beam Epitaxy (MBE) is a well-known thin-film growth technique developed in the 1960s. Using ultra-high vacuum (UHV) conditions, it grows high-purity thin films with atomic-level control over the thickness and doping concentration of the layers. This provides excellent control to tune device properties and, in the case of III–V films, bandgap engineering. Such sought-after features make MBE widely renowned for producing the best-quality films, which currently lead device performance in applications such as LEDs, solar cells, sensors, detectors, and power electronics.

However, its major drawbacks include high costs and slow growth rates, limiting large-scale industry adoption. Need support with MBE tool installation, calibration, or fab floor setup? Our Global Field Engineering and Fab Facility Solutions teams can help.

A Brief History of MBE Technology

The concept of Molecular Beam Epitaxy was first introduced by K.G. Günther in a 1958 publication. Even though his films were not epitaxial (they were deposited on glass), John Davey and Titus Pankey expanded his ideas to demonstrate the now-familiar MBE process for depositing GaAs epitaxial films on single-crystal GaAs substrates in 1968.

The technology took its final form with Arthur and Cho in the late 1960s, who observed the MBE process in situ using Reflection High-Energy Electron Diffraction (RHEED). If you work with legacy MBE platforms or require upgrade support, our Legacy Tool Management Services ensure continuity and extended tool life.

Why GaAs? The First Semiconductor Grown by MBE

The first semiconductor material to be grown using MBE, gallium arsenide (GaAs for short), is one of the leading III-V semiconductors in high-performance optoelectronics such as solar cells, photodetectors, and lasers. Due to several interesting properties, such as a direct band gap of 1.43 eV, high mobility, a high absorption coefficient, and radiation hardness, it finds use in sophisticated applications such as space photovoltaics as well as infrared detectors and next-generation quantum devices.

Since GaAs was the first material to be studied using the MBE method, it is far better understood with decades of research on devices. The efficiency of heterojunction solar cells grown on substrates such as Ge was as high as 15-20% in the 1980s. Although the current numbers are the best in the industry, using MBE for growing GaAs solar cells comes with its own set of challenges and advantages:

  • Throughput and cost: Commercially, it is not as viable as some of the other vapor phase growth techniques since it is a slow and expensive process. Growth rates of MBE films are usually in the range of ~1.0 μm/h, which are far behind the CVD achieved rates of up to ~200 μm/h.
  • Thickness and uniformity: Solar cell structures require absorber layers with thicknesses of the order of several microns. Maintaining uniformity over such a range is not trivial.
  • Defect management: Thin films are beset with a range of defects such as dislocations, antisite defects, point defects, background impurities, and so on. Optoelectronic devices suffer heavily in the presence of defects, as carrier lifetimes are reduced and, consequently, so are open-circuit voltage and fill factor. Therefore, careful control of multiple factors such as substrate quality, interface sharpness, and growth conditions is mandatory.
  • Doping and alloy incorporation: MBE is one of the best techniques for doping and making alloys, especially when it comes to III-V compounds. Band gap engineering to expand the available bandwidth for solar absorption is one of the most important advantages of using MBE. When making multiple junctions or tandem cells, however, several growth issues arise, such as phase separation, strain, and exact control of the composition of each layer.
  • Surface and interface quality: Interfacial strain is a major cause of carrier loss through recombination. Solar cell stacks contain many layers that require well-formed interfaces, such as window layers, tunnel junctions, and passivation layers. MBE excels at providing abrupt interfaces, thanks to its fast shutter action and ultra-high vacuum conditions, resulting in high-performance devices.
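To put the throughput gap in perspective, here is a back-of-envelope estimate (an illustrative sketch, not from the original article: the growth rates are the figures quoted above, while the ~3 µm absorber thickness is an assumed, typical single-junction value).

    # Rough throughput comparison for growing a GaAs absorber layer.
    # Growth rates are the approximate figures quoted in the text; the 3-um
    # absorber thickness is an assumed, typical single-junction value.
    MBE_RATE_UM_PER_H = 1.0    # ~1.0 um/h typical MBE growth rate
    CVD_RATE_UM_PER_H = 200.0  # ~200 um/h upper-end CVD growth rate
    ABSORBER_UM = 3.0          # assumed absorber thickness in microns

    mbe_hours = ABSORBER_UM / MBE_RATE_UM_PER_H          # ~3 hours by MBE
    cvd_minutes = ABSORBER_UM / CVD_RATE_UM_PER_H * 60   # under a minute by CVD

    print(f"MBE: {mbe_hours:.1f} h, CVD: {cvd_minutes:.1f} min")

Hours per layer versus roughly a minute is the kind of gap that keeps MBE out of high-volume solar production.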

In industrial settings, many of MBE's advantages are offset by these challenges, so it tends to be used selectively rather than as a mainstream production technique. Higher-throughput methods such as MOVPE/MOCVD dominate instead, sometimes in hybrid schemes with MBE aimed at improving efficiency.

Other Optoelectronic Devices Grown Using MBE

In III-V materials and beyond, MBE has excelled in growing device-quality layers of several other types of optoelectronic structures:

  • Lasers and VCSELs: One of the most frequently grown MBE stacks is the AlGaAs/GaAs heterostructure for quantum-well lasers and vertical-cavity surface-emitting lasers (VCSELs). AlGaAs/GaAs multi-quantum-well VCSELs with distributed Bragg reflectors (DBRs) have been demonstrated with low threshold currents, continuous-wave operation at elevated temperatures, and GHz modulation speeds.
  • Quantum cascade lasers (QCLs): The same GaAs/AlGaAs heterostructures have been grown by MBE for mid-infrared QCLs. MBE's strength in producing abrupt interfaces and controlled doping helps reduce interface roughness and improve performance.
  • Infrared photodetectors: HgCdTe (MCT), a leading IR photodetector material, has been grown by MBE on GaAs substrates. GaSb-based nBn detectors are also grown using InAs/GaSb superlattices, with buffer layers used to accommodate the lattice mismatch.
  • High-mobility 2D electron gas heterostructures: One of the most important discoveries of recent decades is the two-dimensional electron gas (2DEG), which has enabled devices such as the high-electron-mobility transistor (HEMT). AlGaAs/GaAs heterostructures support the formation of this 2DEG, and the purity of the source material is critical. MBE-grown films have shown mobilities as high as ~35 × 10^6 cm²/V·s.

Conclusion

MBE is a complex, slow process that has traditionally been confined largely to R&D labs. However, the quality of its deposited layers is unparalleled and has helped both improve existing devices and enable new ones. Over the last decade or so, industry has partially adopted MBE because of the cutting-edge device quality the tool delivers. Mass adoption remains unlikely, however, given the small number of wafers that can be grown at a time, so MBE's role stays centered on discovering the next generation of devices.

The post Molecular Beam Epitaxy (MBE) Growth of GaAs-Based Devices appeared first on ELE Times.

Cambridge GaN Devices appoints Fabio Necco as new CEO

Semiconductor today - Tue, 01/06/2026 - 22:54
Fabless firm Cambridge GaN Devices Ltd (CGD) — which was spun out of the University of Cambridge in 2016 to design, develop and commercialize power transistors and ICs that use GaN-on-silicon substrates — has appointed Fabio Necco as chief executive officer. The move is designed to drive forward CGD’s entry into key markets...

2 decade old SoC

Reddit:Electronics - Tue, 01/06/2026 - 21:59

This is an SoC camera sensor and controller from an old webcam, likely manufactured in the early 2000s; the chip itself is dated 2004 (the year I was born in, lol). I found the camera in my grandparents' house a decade ago, grabbed it as a kid because I thought it was cool, disassembled it, and threw it in a big plastic bag along with my cool junk collection.

A decade later I found its PCB (the shell is nowhere to be found, lol), desoldered its components, and found that SoC chip, which I thought was pretty cool!

submitted by /u/inevitable_47

CES 2026: Wi-Fi 8 silicon on the horizon with an AI touch

EDN Network - Tue, 01/06/2026 - 17:49

While Wi-Fi 7 adoption is still accelerating among enterprises, Wi-Fi 8 routers and mesh systems could arrive as early as summer 2026. It's worth noting that the IEEE 802.11bn standard, widely known as Wi-Fi 8, is not expected to be ratified until 2028. If Wi-Fi 8 products do appear in mid-2026, the gap between Wi-Fi 7's launch and Wi-Fi 8 availability would be noticeably shorter than the typical cycle between Wi-Fi generations.

At CES 2026 in Las Vegas, Nevada, wireless chip vendors like Broadcom and MediaTek are unveiling their Wi-Fi silicon offerings. ASUS is also conducting real-world throughput tests of its Wi-Fi 8 concept routers at CES 2026.

Figure 1 Wi-Fi 8 aims to deliver a system-wide upgrade across speed, capacity, reach, and reliability. Source: Broadcom

Wi-Fi 8—aimed at boosting reliability and reducing latency in dense, interference-prone environments—marks a shift in Wi-Fi evolution. While Wi-Fi 8 maintains the same theoretical maximum data rate as Wi-Fi 7, it aims to improve effective throughput, reduce packet loss, and decrease latency for time-sensitive applications.

Another notable feature of Wi-Fi 8 designs is the incorporation of AI ingredients. Below is a short profile of an AI accelerator chip that claims to facilitate real-time agentic applications for residential consumers.

AI accelerator for Wi-Fi 8

Wi-Fi 8 proponents are quick to point out that it connects the wireless world with the AI future through highly reliable connectivity and low-latency responsiveness. Real-time, latency-sensitive applications are increasingly seeking to employ agentic AI, and for that, Wi-Fi 8 aims to prioritize consistent performance under challenging conditions.

Broadcom’s new accelerated processing unit (APU), unveiled at CES 2026, combines compute and networking ingredients with AI acceleration in a single silicon device. BCM4918—a system-on-chip (SoC) device blending compute acceleration, advanced networking, and security—aims to deliver high throughput, low latency, and intelligent optimization needed for the emerging AI-driven connected ecosystem.

The new AI accelerator for Wi-Fi 8 integrates a neural engine for on-device AI/ML inference and acceleration. It also incorporates networking engines to offload both wired and wireless data paths, enabling complete CPU bypass of all networking traffic. For built-in security, cryptographic protocol acceleration ensures end-to-end data protection without performance compromise.

“Our new BCM4918 APU, along with our full portfolio of Wi-Fi 8 chipsets, form the foundation of an AI-ready platform that not only enables immersive, intelligent user experiences but also does so with efficiency, security, and sustainability at its core,” said Mark Gonikberg, senior VP and GM of Broadcom’s Wireless and Broadband Communications Division.

Figure 2 When paired with BCM6714 and BCM6719 dual-band radios, BCM4918 APU allows designers to develop a unified compute-and-connectivity architecture. Source: Broadcom

AI compute plus connectivity

The BCM4918 APU is paired with two new dual-band Wi-Fi 8 radio devices: BCM6714 and BCM6719. While combining 2.4 GHz and 5 GHz operation into a single piece of silicon, these Wi-Fi 8 radios also feature on-chip 2.4-GHz power amplifiers, reducing external components and improving RF efficiency.

These dual-band radios, when paired with the BCM4918 APU, allow design engineers to quickly develop a unified compute-and-connectivity architecture that enables edge-AI processing, real-time optimization, and adaptive intelligence. The APU and dual-band radios for Wi-Fi 8 are now available to early access customers and partners.

Broadcom's Gonikberg says that Wi-Fi 8 represents a turning point where broadband, connectivity, compute, and intelligence truly converge. The fact that it's arriving ahead of schedule, he argues, is a testament to those convergence merits; Wi-Fi 8 is more than a speed upgrade and could transform connection stability and responsiveness.

Related Content

The post CES 2026: Wi-Fi 8 silicon on the horizon with an AI touch appeared first on EDN.

Simple speedy single-slope ADC

EDN Network - Tue, 01/06/2026 - 15:00

Ages ago, humankind crawled out of the primordial analog ooze and began to do digital. They soon noticed and quantified a fundamental need to interconnect their new quantized numerical novelties with the classic continuum of the ancestral engineer’s world. Thus arose the ADC.

Of course, there were (and are) an abundance of ADC schemes and schematics. One of the earliest and simplest of these was the single-slope type.

Single slope ADCs come in two savory flavors. In one, a linear analog voltage ramp is generated and compared to the input signal. The time required for the ramp to rise from zero (or near) to equality with the input is proportional to the input’s amplitude and taken as its digital conversion. 

We recently saw an example contributed by Dr. Jordan Dimitrov to our own friendly Design Idea (DI) corner in “Voltage-to-period converter offers high linearity and fast operation.”

In a different cultivar of the single sloper, a capacitor is charged to the input voltage, then linearly ramped down to zero. The time required to do that is proportional to Vin and counts (pun!) as the conversion result. An (extremely!) simple and cheap example of this type was published here about two and a half years ago in “A “free” ADC.”

Wow the engineering world with your unique design: Design Ideas Submission Guide

While simple and cheap are undeniably good things, too much of a good thing is sometimes not such a good thing. The circuit in Figure 1 adds a few refinements (and a bit more cost) to that basic design in pursuit of an order of magnitude (or two) better accuracy and perhaps a bit more speed.

Figure 1 Simple speedy single-slope (SSSS) ADC biphasic conversion cycle.

Here’s how it works:

  1. (CONVERT = 1) switch U1 charges C1 to Vin
  2. (CONVERT = 0) C1 is linearly discharged by 100 µA current sourced by Z1Q1

Note: Z1, C1, and R2 should be precision types.

Conversion occurs in two phases, selected by one GPIO bit configured for output (CONVERT/ACQUIRE).

During the ACQUIRE (1) interval SPDT switch U1 connects integrator capacitor C1 to the input source, charging it to Vin. The acquisition time constant of the charging is:

C1 × (RsZ1 + U1 Ron + Q2's input impedance) = ~10 µs

To complete the charge to ½-LSB precision at 12-bit resolution, this needs an ACQUIRE interval of:

10 µs × ln(2^(12+1)) = ~90 µs
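As a quick sanity check on that figure (a minimal sketch using only the values quoted above), settling an RC charge to within ½ LSB of an N-bit result takes ln(2^(N+1)) time constants:

    import math

    # ACQUIRE settling-time estimate for the SSSS ADC.
    tau_us = 10.0   # ~10 us acquisition time constant quoted in the text
    n_bits = 12     # target resolution

    # Residual charging error after time t is exp(-t/tau); forcing it below
    # 1/2 LSB = 1/2^(n_bits+1) of full scale gives t = tau * ln(2^(n_bits+1)).
    t_acquire_us = tau_us * math.log(2 ** (n_bits + 1))
    print(f"ACQUIRE interval: {t_acquire_us:.0f} us")   # ~90 us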

The controlling microcontroller can then return CONVERT to zero, which switches the input side of C1 to ground, driving the base of the comparator transistor negative for a voltage step of –Vin, plus a “smidgen” (~12 mV).

This last is contributed by C2 to compensate for the zero offset that would otherwise accrue from Q2’s finite voltage gain and storage time.

Q2's emergence from saturation drives INTEGRATE positive. There it remains until the discharge of C1 is complete and Q2 turns back ON. This interval is:

Vin × C1 / 100 µA = 200 µs/V, or 1 ms maximum at a 5-V full-scale input

If the connected counter/peripheral counts at 4 MHz, then the maximum count accumulated over the 1-ms ramp, and hence the conversion resolution, will be 4000 (1 ms × 4 MHz), or 11.97 bits.
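The full-scale numbers hang together as follows (a minimal reconstruction from the values quoted above; the 20-nF value for C1 is implied by the 200-µs/V slope and the 100-µA discharge current rather than stated outright):

    import math

    # INTEGRATE-phase timing and resolution for the SSSS ADC.
    i_discharge = 100e-6         # Z1/Q1 current source, 100 uA
    slope_s_per_v = 200e-6       # 200 us/V discharge slope quoted in the text
    c1 = slope_s_per_v * i_discharge   # implied C1 = 20 nF
    v_full_scale = 5.0           # 5-V full scale (5 Vpp waveform in Figure 2)
    f_count = 4e6                # counter clock consistent with a 4000-count full scale

    t_max = v_full_scale * slope_s_per_v     # 1 ms maximum INTEGRATE time
    max_count = t_max * f_count              # 4000 counts
    bits = math.log2(max_count)              # ~11.97 bits

    print(f"C1 = {c1*1e9:.0f} nF, t_max = {t_max*1e3:.1f} ms")
    print(f"max count = {max_count:.0f} (~{bits:.2f} bits)")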

This 1-ms, or ~12-bit, conversion cycle is sketched in Figure 2.  Note that good integral nonlinearity (INL) and differential nonlinearity (DNL) are inherent.

Figure 2 The SSSS ADC waveshapes. The ACQUIRE duration (12 bits) is 90 µs. The INTEGRATE duration is 1 ms max (Vin × C1 / IQ1 = 200 µs/V). Amplitude is 5 Vpp.

 Of course, not all signal sources will gracefully tolerate the loading imposed by this conversion sequence, and not all applications will find the tolerance of available LM4041 references and R1C1 adequately precise.

Figure 3 shows fixes for both of these limitations. A typical RRIO CMOS amplifier for A1 eliminates the input loading problem, and the R5 trim provides a convenient means for improving conversion calibration.

Figure 3 A1 input buffer unloads Vin, and R5 calibration trim improves accuracy.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Simple speedy single-slope ADC appeared first on EDN.

Don’t Let Your RTL Designs Get Bugged!

ELE Times - Tue, 01/06/2026 - 13:08

Courtesy: Cadence

Are you still relying solely on simulation to validate your RTL design? Or is further validation required?

Simulation has been a cornerstone of hardware verification for decades. Its ability to generate random stimuli and validate RTL across diverse scenarios has helped engineers uncover countless issues and ensure robust designs. However, simulation is inherently scenario-driven, which means certain rare corner cases can remain undetected despite extensive testing.

This is where formal verification adds significant value. Formal verification mathematically analyses the entire state space of your design, checking every possible value and transition the design could ever encounter and providing exhaustive coverage that complements simulation. No corner case is left unchecked. No bug is left hiding. Together, simulation and formal form a powerful verification strategy.
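To make that contrast concrete, here is a toy sketch (plain Python, my own illustration, not Jasper's engine or any real RTL flow) of what exhaustive state-space exploration buys: it walks every reachable state under every possible input and returns a counterexample for a planted corner-case bug that a handful of random stimuli could easily miss.

    from collections import deque
    from typing import Optional

    # Toy model: a counter meant to wrap at 11 and never reach state 13,
    # with a deliberately planted corner-case bug (hypothetical design).
    def next_state(state: int, stimulus: int) -> int:
        if state == 11 and stimulus == 7:   # the rare corner case: the bug
            return 13
        return (state + 1) % 12             # intended behaviour: count 0..11 and wrap

    def exhaustive_check(initial: int = 0) -> Optional[int]:
        """Breadth-first search over every reachable state and every stimulus."""
        seen, queue = {initial}, deque([initial])
        while queue:
            s = queue.popleft()
            for stim in range(8):            # all possible inputs, not random ones
                nxt = next_state(s, stim)
                if nxt == 13:                # property "state != 13" violated
                    return s                 # counterexample state
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return None                          # property holds for every reachable state

    print("Violation reachable from state:", exhaustive_check())   # -> 11

Real formal tools work on the mathematical model of the RTL rather than a hand-written Python function, but the principle is the same: every reachable state and transition is examined, so the corner case cannot hide.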

Why Formal Matters in Modern Validation

Any modern validation effort needs to take advantage of formal verification: the apps in the Jasper Formal Verification Platform analyse a mathematical model of the RTL design and find corner-case design bugs without needing test vectors. This adds value across the design and validation cycle. Some standout Jasper applications:

  • Jasper's Superlint and Visualise help designers quickly find potential issues or examine RTL behaviours without formal expertise.
  • Jasper's FPV (Formal Property Verification) allows formal experts to create a formal environment and sign off on the IP, delivering the highest design quality and better productivity than block-level simulation.
  • Jasper's C2RTL is used to exhaustively verify critical math functions in CPUs, GPUs, TPUs, and other AI accelerator chips.

Jasper enables thorough validation in various targeted domains, including low power, security, safety, SoC integration, and high-level synthesis verification.

“The core benefit of formal exhaustive analysis is its ability to explore all scenarios, especially ones that are hard for humans to anticipate and create tests for in simulation.”

Why Formal? Why Now?

Here’s why formal verification matters now:

  • No more test vectors or random stimuli. Formal verification mathematically and automatically explores all reachable states; verification can start as soon as RTL is available, without the need to create a simulation testbench.
  • Powerful for exploring corner-case bugs. Exhaustive formal analysis can catch corner case bugs that escape even the most creative simulation testbenches.
  • Early design bring-up made easy. Validate critical properties and interfaces before your full system is ready.
  • Debugging is a breeze. When something fails, formal provides a precise counterexample, often with the shortest trace, eliminating the need for endless log hunting.
  • Perfect partnership with simulation. Simulation and formal aren't rivals; they are partners. Use simulation for broad system-level checks, and formal for exhaustive property checking and signoff of critical blocks. Merge formal and simulation coverage for complete verification signoff.

Conclusion

As RTL designs grow in complexity and stakes rise across power, safety, and performance, relying on simulation alone is no longer enough. While simulation remains indispensable for system-level validation, formal verification fills the critical gaps by exhaustively exploring every reachable state and uncovering corner-case bugs that would otherwise slip through. By integrating formal early and throughout the design cycle, teams can accelerate bring-up, improve debug efficiency, and achieve higher confidence at signoff. In today’s silicon landscape, the most robust verification strategy isn’t about choosing between simulation and formal—it’s about combining both to ensure no bug goes unnoticed and no risk is left unchecked.

The post Don’t Let Your RTL Designs Get Bugged! appeared first on ELE Times.

Adapting Foundation IP to Exceed 2 nm Power Efficiency in Next-Gen Hyperscale Compute Engines

ELE Times - Tue, 01/06/2026 - 12:17

Courtesy: Synopsys

Competing in the booming data centre chip market often comes down to one factor: power efficiency. The less power a CPU, GPU, or AI accelerator requires to produce results, the more processing it can offer within a given power budget.

With data centres and their commensurate power needs growing exponentially, the energy consumption of each chip directly impacts the enormous costs of running gigawatt-scale AI data centres, where power and cooling account for 40–60% of operational expenditures.

To reduce the energy consumption of its workloads and gain a competitive edge, one software and cloud computing titan has made the strategic bet to design its own next-gen hyperscale System-on-Chip (SoC). By combining the advantages of new 2 nm-class process nodes with advanced, customised chip design techniques, the company is doubling down on the belief that innovation spanning process, design, and architecture can unlock new levels of power and cost efficiency.

 

Power play

To offer a compelling alternative in the market, the company knew that any new 2 nm design must push beyond the performance and efficiency process entitlement already baked into the scaling factors of the latest transistor fabrication methods. The transition to the 2 nm process is expected to provide 25–30% power reduction relative to the previous 3 nm node.

The company set an ambitious goal of achieving an additional 5% improvement on the 2 nm baseline. Through close collaboration with Synopsys — combining EDA software flow enhancements with our optimised Foundation IP logic library — the company exceeded its goal, achieving:

  • 34% reduced power consumption with the same baseline flow.
  • 51% reduced power consumption with an optimised flow.
  • 5% silicon area advantage over baseline at iso-performance.

The company also evaluated our 2 nm embedded memories, which exceeded SRAM scaling expectations compared to our 3 nm product. On average, the 2 nm memory instances delivered 12% higher speed, occupied 8% less area, and consumed 12% less power than their 3 nm counterparts.
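For a sense of how such percentages stack up, note that successive reductions compound on the remaining power rather than adding directly. The arithmetic below is my own back-of-envelope illustration with an assumed node-level midpoint; the article does not spell out exactly how its baseline-flow and optimised-flow figures are accounted.

    # Hedged illustration: compounding two successive power reductions.
    node_reduction = 0.27     # assumed midpoint of the quoted 25-30% node benefit
    extra_reduction = 0.0734  # additional library-level gain cited later in the article

    remaining = (1 - node_reduction) * (1 - extra_reduction)
    print(f"Combined reduction vs. the 3 nm baseline: {1 - remaining:.1%}")   # ~32.4%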

Expert collaboration

Because the transition to 2 nm comes with a shift from FinFET to GAA architecture, the company's SoC developers faced a particularly steep learning curve, with greater design complexity and significant new technology to assimilate.

They engaged our team in the early stages of the project — the byproduct of a trusted working relationship that spans more than four generations of AI chip designs — and even licensed our Foundation IP before the availability of any silicon reports.

The company used our IP, reference methodology, and Fusion Compiler tool to explore all commercially available options for achieving their power budget requirements. While the early development cycles produced the silicon area advantage, they did not achieve the power scaling targets the company sought.

Adaptation and optimisation

Seeking additional assistance, the company inquired whether our EDA tools and IP could be leveraged to push the design’s performance further.

R&D experts from our IP and EDA groups began collaborating on the design. Starting with the standard logic libraries, the IP group worked closely with the company's designers to adapt and optimise the libraries with new cells and updated modelling. Over several iterations, the teams delivered a 7.34% power benefit, with Synopsys PrimePower used for final power analysis.

Our Technology and Product Development Group then helped the company take it a step further. By developing new algorithms for Fusion Compiler, and after many trials based on the latest recommended power recipe, design flow optimisations produced a 9.51% combined power benefit.

At the same time, our application engineers worked closely with the company to provide the best solution from our broad portfolio of memory compilers. Weighing performance requirements with power and area targets, we were able to extend the benefit of 2 nm beyond instance-level scaling. In one key scenario, power was reduced by an additional 25% by using an alternative configuration that met the 2 nm requirements.

Conclusion

As hyperscale compute continues its relentless push toward higher performance within ever-tighter power envelopes, success at advanced nodes like 2 nm will hinge on more than process scaling alone. This collaboration demonstrates how tightly integrated innovation across Foundation IP, EDA flows, and design methodology can unlock efficiency gains well beyond baseline node benefits. By adapting standard libraries, optimising tool algorithms, and co-engineering memory configurations, the company not only surpassed its power-efficiency targets but also achieved meaningful area and performance advantages. The outcome underscores a broader industry lesson: at 2 nm and beyond, early engagement, deep expertise, and holistic optimisation across the silicon stack will be critical to building the next generation of power-efficient hyperscale compute engines.

The post Adapting Foundation IP to Exceed 2 nm Power Efficiency in Next-Gen Hyperscale Compute Engines appeared first on ELE Times.

Delta Electronics to Provide 110 MW to Prostarm Info Systems for Energy Storage Projects in India

ELE Times - Tue, 01/06/2026 - 11:07
Delta Electronics India, a provider of power management and smart green solutions, announced an agreement to supply 100 units of its ‘Make-in-India’ 1.1 MW bi-directional Power Conditioning Systems (PCS) to Prostarm Info Systems Ltd's Battery Energy Storage System (BESS) projects across India, including projects for Bihar State Power Generation Company Ltd (BSPGCL) and Adani Electricity Mumbai Limited (AEML). By deploying advanced energy infrastructure in both metropolitan and regional markets, this collaboration supports India's renewable integration, grid stability, and overall energy resilience.
Mr. Niranjan Nayak, Managing Director, Delta Electronics India, said, “India’s energy transition journey calls for strong collaborations that combine global technology leadership with local market expertise. Through this engagement with Prostarm for AEML’s BESS initiative, Delta reaffirms its commitment to building long-term and customer-centric collaboration that supports the nation’s sustainable growth. This initiative marks the largest-scale deployment so far of our made-in-India power conditioning systems for the country’s fast-evolving energy storage sector.”
Mr. Ram Agarwal, Whole Time Director & CEO, Prostarm Info Systems Ltd., said
“At Prostarm, we are committed to bringing advanced energy solutions that empower utilities and drive India’s clean energy transition. Partnering with Delta Electronics India for the AEML’s BESS project reflects our shared vision of delivering technology-led reliability and performance at scale. This collaboration not only strengthens our portfolio in energy storage but also sets a benchmark for strategic partnerships in India’s evolving power sector.”
The bi-directional PCS units (totalling 110 MW) will be deployed by Prostarm across multiple projects, including Bihar State Power Generation Company Ltd. (BSPGCL) and Adani Electricity Mumbai Limited's (AEML) 11 MW/22 MWh BESS project in Mumbai, as well as standalone BESS projects being developed in BESSPD (Battery Energy Storage Solution Power Developer) mode by Prostarm in the state of Bihar.
Mr. Rajesh Kaushal, Vice President, Energy Infrastructure Business Group, Delta Electronics India, added, “This is a significant milestone for our Power Conditioning Systems business in India. Our collaboration with Prostarm reflects a strong strategic relationship built on trust and shared vision. By delivering reliable and customised bi-directional PCS solutions, developed with a focus on localisation and Make-in-India manufacturing, Delta is well positioned to strengthen its role in enabling India’s evolving energy landscape.”
Mr. Prateek Srivastava, Vice President and BU-Head, Prostarm Info Systems Ltd., said, “The transition to clean energy is an investment in our future. We are fully committed to driving the green revolution by delivering cutting-edge technology, customised products, and innovative solutions designed for long-term performance and reliability to ensure the highest level of customer satisfaction. At PROSTARM, we firmly believe in promoting Make-in-India initiatives, collaboration, and knowledge sharing, and partnering with a strong technology leader like Delta is truly a feather in our cap.”
Delta’s Power Conditioning Systems are produced at its own manufacturing site in Krishnagiri, Tamil Nadu, and are designed for utility-grade energy storage and microgrid applications, especially for key functions such as peak shaving, PV smoothing, and grid ancillary control. The system boasts up to 98.5% energy conversion efficiency, output power capacity as high as 1160 kVA, and scalability up to 5 units in parallel.

The post Delta Electronics to Provide 110 MW to Prostarm Info Systems for Energy Storage Projects in India appeared first on ELE Times.
