Feed aggregator

Nuvoton Introduces Automotive-grade, Filter-Free 3W Class-D Audio Amplifier NAU83U25YG

ELE Times - 3 hours 52 min ago

The New High-Efficiency Audio Solution Ideal for Dashboard, eCall, and T-Box Applications

Nuvoton announced the NAU83U25YG, a new automotive-grade Class-D audio amplifier. The stereo, digital-input device delivers up to 3 W (4 Ω load) or 1.7 W (8 Ω load) of output power at high efficiency. Featuring a two-wire gain-adjustment interface, it is well suited to automotive electronics applications such as dashboards, eCall, and T-Box systems.

As automotive electronics enter the era of the “smart cockpit,” vehicle intelligence has become a key industry focus. This trend is driving increasing functional requirements for audio solution providers in automotive applications. Nuvoton Technology strictly adheres to automotive industry standards, offering AEC-Q100 qualified products for automotive applications. To simplify system design, our solutions support digital I2S audio signal input from the vehicle’s main controller, reducing the need for external components and minimizing PCB size. Additionally, our digital amplifiers help prevent circuit interference and effectively solve EMI issues.

The NAU83U25YG stereo Class-D audio amplifier offers advanced features such as 80 dB PSRR, 90% efficiency, ultra-low quiescent current (2.1 mA at 3.7 V for two channels), and superior EMI performance. It delivers lower distortion, reduced background noise, and a wider dynamic range, and it supports comprehensive device protection.

NAU83U25YG Key Features

  1. Gain Setting via I²C interface, 22 dB to -62 dB
  2. Powerful Stereo Class-D Amplifier, 2ch x 3.0W (4Ω @ 5V, 10% THD+N)
  3. Low Output Noise: 18 μVrms @ 0 dB gain
  4. Comprehensive Device Protection:
  • Overcurrent Protection (OCP)
  • Undervoltage Lockout (UVLO)
  • Overtemperature Protection (OTP)
  • Clock Termination Protection (CTP)
  5. Click-and-Pop Suppression
  6. Package: QFN-20
  7. Operating Temperature Range: -40℃ ~ +105℃
  8. Automotive Grade: AEC-Q100 qualification & TS16949 compliant
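The I²C gain control can be pictured with a small helper that maps a requested gain to a register code. Only the 22 dB to -62 dB range comes from the feature list above; the 1 dB step size and the "code 0 = maximum gain" encoding are purely hypothetical stand-ins for the real register map in the NAU83U25YG datasheet.

```python
def gain_db_to_code(gain_db, max_db=22.0, min_db=-62.0, step_db=1.0):
    """Map a requested gain (dB) to a hypothetical I2C register code.

    Only the +22 dB to -62 dB range is taken from the feature list;
    the 1 dB step and the encoding direction are assumptions.
    """
    gain_db = max(min(gain_db, max_db), min_db)   # clamp to supported range
    return round((max_db - gain_db) / step_db)    # code 0 = maximum gain

code_full_gain = gain_db_to_code(22)    # 0
code_unity     = gain_db_to_code(0)     # 22
code_min_gain  = gain_db_to_code(-62)   # 84
```

A host driver would then write the resulting code to the amplifier's gain register over the two-wire bus; requests outside the supported range are simply clamped.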

Superior EMI Performance, Filter-Free
The NAU83U25YG amplifier stands out by eliminating the need for an external output filter, thanks to spread-spectrum oscillator technology and slew-rate control that effectively reduce electromagnetic interference (EMI). It also offers enhanced immunity and a power supply rejection ratio (PSRR) of more than 80 dB at 217 Hz, making the NAU83U25YG an excellent fit for Class-D audio amplification in wireless and AM (amplitude modulation) frequency band applications.

Leap Forward in Efficiency, Power
The Class-D topology represents a significant leap forward in both power efficiency and noise minimization for audio devices. A Class-D amplifier converts the audio signal into a binary square wave and amplifies it by switching its power devices fully on or off, so little power is dissipated in the output stage. The result is efficiency approaching 90%, well above the roughly 50% to 60% typical of Class-AB designs.
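To put the efficiency gap in concrete terms, the sketch below compares the heat each topology dissipates at a 3 W output level. The 90% Class-D figure is quoted in this article; the 55% Class-AB figure is an illustrative assumption for a typical music signal, not a measured value.

```python
def dissipated_power(p_out_w, efficiency):
    # Input power is p_out / efficiency; everything not delivered
    # to the load is dissipated as heat in the output stage.
    return p_out_w / efficiency - p_out_w

heat_class_d  = dissipated_power(3.0, 0.90)  # ~0.33 W of heat
heat_class_ab = dissipated_power(3.0, 0.55)  # ~2.45 W of heat (assumed 55%)
```

At the same 3 W output, the assumed Class-AB stage dissipates roughly seven times more heat, which is why filter-free Class-D parts fit small, enclosed automotive modules.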

The NAU83U25YG Class-D audio amplifier excels at driving a 4 Ω load, with output power of up to 3 W and a fast start-up time of just 14 ms.

NAU83U25YG Target Applications

The new Class-D audio amplifier is designed for automotive electronics applications including dashboards, eCall, ADAS (Advanced Driver Assist Systems) and T-Box.

The post Nuvoton Introduces Automotive-grade, Filter-Free 3W Class-D Audio Amplifier NAU83U25YG appeared first on ELE Times.

Cadence Accelerates Development of Billion-Gate AI Designs with Innovative Power Analysis Technology Built on NVIDIA

ELE Times - 4 hours 30 min ago

New Cadence Palladium Dynamic Power Analysis App enables designers of AI/ML chips and systems to create more energy-efficient designs and accelerate time to market

Cadence announced a significant leap forward in the power analysis of pre-silicon designs through its close collaboration with NVIDIA. Leveraging the advanced capabilities of the Cadence Palladium Z3 Enterprise Emulation Platform and the new Cadence Dynamic Power Analysis (DPA) App, Cadence and NVIDIA have achieved what was previously considered impossible: hardware-accelerated dynamic power analysis of billion-gate AI designs, spanning billions of cycles within a few hours with up to 97 percent accuracy. This milestone enables semiconductor and systems developers targeting AI, machine learning (ML) and GPU-accelerated applications to design more energy-efficient systems and accelerate their time to market.

The massive complexity and computational requirements of today’s most advanced semiconductors and systems present a challenge for designers, who have until now been unable to accurately predict their power consumption under realistic conditions. Conventional power analysis tools cannot scale beyond a few hundred thousand cycles without requiring impractical timelines. In close collaboration with NVIDIA, Cadence has overcome these challenges through hardware-assisted power acceleration and parallel processing innovations, enabling previously unattainable precision across billions of cycles in early-stage designs.

“Cadence and NVIDIA are building on our long history of introducing transformative technologies developed through deep collaboration,” said Dhiraj Goswami, corporate vice president and general manager at Cadence. “This project redefined boundaries, processing billions of cycles in as few as two to three hours. This empowers customers to confidently meet aggressive performance and power targets and accelerate their time to silicon.”

“As the era of agentic AI and next-generation AI infrastructure rapidly evolves, engineers need sophisticated tools to design more energy-efficient solutions,” said Narendra Konda, vice president, Hardware Engineering at NVIDIA. “By combining NVIDIA’s accelerated computing expertise with Cadence’s EDA leadership, we’re advancing hardware-accelerated power profiling to enable more precise efficiency in accelerated computing platforms.”

The Palladium Z3 Platform uses the DPA App to accurately estimate power consumption under real-world workloads, allowing functionality, power usage and performance to be verified before tapeout, when the design can still be optimized. Especially useful in AI, ML and GPU-accelerated applications, early power modeling increases energy efficiency while avoiding delays from over- or under-designed semiconductors. Palladium DPA is integrated into the Cadence analysis and implementation solution to allow designers to address power estimation, reduction and signoff throughout the entire design process, resulting in the most efficient silicon and system designs possible.

The post Cadence Accelerates Development of Billion-Gate AI Designs with Innovative Power Analysis Technology Built on NVIDIA appeared first on ELE Times.

NUBURU hits first milestone in Tekne acquisition with initial stake and launch of US JV

Semiconductor today - 5 hours 9 min ago
NUBURU Inc of Centennial, CO, USA — which was founded in 2015 and developed and previously manufactured high-power industrial blue lasers — has successfully executed the first milestone under the phased acquisition plan for Tekne S.p.A. established following formal notice received from the Italian government under the ‘Golden Power’ framework...

ams OSRAM doubles UV-C LED efficiency to 10.2%

Semiconductor today - 6 hours 8 min ago
ams OSRAM of Premstaetten, Austria and Munich, Germany has evaluated a new UV-C LED emitting at a wavelength of 265nm, with a lifespan exceeding 20,000 hours, that delivers over 10% efficiency at 200mW power, as validated by Germany’s National Metrology Institute Physikalisch-Technische Bundesanstalt (PTB). With these specifications, the UV-C LED can replace conventional mercury discharge lamps in the future...

Ranovus’ $100m investment to develop and scale optical semiconductor manufacturing in Ontario

Semiconductor today - 6 hours 17 min ago
Invest Ontario is supporting an investment of over $100m by Ranovus Inc of Ottawa, Ontario, Canada (which develops and manufactures multi-terabit photonics interconnect solutions for data-center and communications networks) to develop and scale optical semiconductors that are powering the next generation of artificial intelligence (AI) infrastructure. The project will expand Ranovus’ manufacturing facility in Ottawa and create 125 new jobs, more than doubling the existing workforce...

Federated Learning Definition, Types, Examples and Applications

ELE Times - 6 hours 18 min ago

A form of distributed machine learning known as “federated learning” uses data from edge devices, such as laptops, smartphones, and wearable technology, to train machine learning and deep learning algorithms without transferring the data to a central server.

Among the several advantages it confers are meeting latency constraints, promoting data privacy and security, and making parameter updates in a distributed manner.

It is thus a decentralized approach to machine learning in which data held across multiple organizations or devices can be used to collaboratively build models without anyone sharing the actual private data. Instead of moving raw data to a central server, only model updates or parameter values are exchanged, ensuring the privacy and security of the data.

Federated learning is an approach that thereby supports data privacy on the one hand, in that training data remains local and only aggregated insights are exchanged, while on the other hand, the federated data are used for improving model accuracy.

Types of Federated Learning:

  • Horizontal Federated Learning:

Horizontal federated learning protects privacy by allowing several parties with distinct users but comparable data attributes to work together to build a model without exchanging raw data.

  • Vertical Federated Learning

Vertical Federated Learning occurs when multiple clients share the same users but possess different features. It enables collaborative model training across organizations that hold complementary data about the same individuals, without exchanging raw data.

  • Federated Transfer Learning:

Federated transfer learning combines federated learning with transfer learning so that clients with different data can collaborate. Models can transfer knowledge even when clients have different features and user distributions, helping a common project optimize its performance without the exchange of raw data.

Federated learning can also be divided into two categories based on the size of the participating clients: Cross-Device Federated Learning and Cross-Silo Federated Learning.

How federated learning works:

Federated learning is a privacy-preserving machine learning technique in which multiple devices or organizations collaboratively build a shared model without disclosing any raw data. A central server starts the process by selecting a global model and distributing it to the client devices. Each client trains the model on its own private dataset, so sensitive information never leaves the device. When local training completes, clients submit only the updated model parameters (weights or gradients) to the server. The server then aggregates the clients’ updates, usually with an averaging operation known as Federated Averaging (FedAvg), to update the global model. The improved global model is redistributed for further rounds of training, and the cycle repeats. The model thus learns from several data sources while ensuring the privacy and security of the data, which is especially useful in healthcare, finance, and mobile apps.
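The round-based loop described above can be sketched in a few lines. This is an illustrative toy, fitting a one-parameter model y = w·x with plain gradient descent; the client data, learning rate, and round count are invented for the example:

```python
def local_train(w, data, lr=0.1):
    # One pass of gradient descent on a 1-D least-squares problem:
    # each client privately fits y = w * x on its own (x, y) pairs.
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    # FedAvg: average client models weighted by local dataset size;
    # only parameters cross the network, never the raw (x, y) data.
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Two clients whose private data follow y = 3x (toy example).
clients = [[(1.0, 3.0), (2.0, 6.0)], [(0.5, 1.5), (1.5, 4.5)]]
w_global = 0.0
for _ in range(20):                       # communication rounds
    updates = [local_train(w_global, d) for d in clients]
    w_global = fed_avg(updates, [len(d) for d in clients])
# w_global converges toward the true slope of 3
```

Note that the server only ever sees each client's trained weight, never the underlying data points, which is the entire privacy argument of the scheme.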

Applications of Federated Learning:

  • Autonomous Vehicle:

Federated learning enables self-driving cars to become safer and smarter through real-time awareness of road terrain, faster on-the-spot decisions, and continuous model updating. Vehicles share locally learned insights, such as hazards or weather changes, without sending raw data, allowing the onboard AI to react instantly while improving overall system accuracy over time.

  • Mobile and Edge Devices

FL enables more intelligent and private user experiences in mobile technologies. For instance, Google Gboard learns from user typing behaviors right on the device to enhance text predictions. Through local training, voice assistants such as Google Assistant and Siri improve speech recognition and customisation. Without jeopardizing user privacy, FL also offers individualized content recommendations.

  • Industrial IoT

Federated learning in IoT allows machines and sensors to train models locally on their own data without actually sharing it. Only the model updates are communicated to the central server, where they are aggregated into an improved global model. This supports predictive maintenance and anomaly detection while keeping operational data private and secure.

  • Finance

In the financial industry, FL lets banks and other financial institutions collaborate on fraud detection, creditworthiness assessment, market-risk evaluation, and more. The model is trained on distributed data sources, giving institutions a wider perspective while remaining compliant with data-sovereignty laws that protect customer data.

  • Cybersecurity

With the FL approach, one can detect anomalies and forecast malicious threats on the basis of locally observed attack patterns. This constitutes a decentralized approach to building defenses that ensures sensitive logs are never pooled. Biometric authentication systems go a step further, using local training to keep personal identifiers locked on the device.

Federated learning advantages:

Federated learning keeps data on the local device, which enhances privacy and security. It reduces bandwidth usage, supports personalized models, and allows learning from broader and more diverse data sources without centralizing sensitive information.

Federated learning disadvantages:

It demands significant on-device resources, must cope with inconsistent data distributions across users, and complicates coordination and debugging. Models may also train more slowly and be less accurate than centrally trained ones.

Federated Learning Examples:

Some of the use cases for Federated learning are:

  • Google Gboard: Improves predictive text and suggestions without uploading users’ typing data.
  • Healthcare: Hospitals train models on patient data without sharing sensitive records.
  • Finance: Banks employ federated learning to detect fraud across institutions without exposing customer data.
  • Google: Google uses FL to enhance on-device machine learning systems, such as the “Hey Google” detection in Google Assistant, enabling users to issue voice commands.

The post Federated Learning Definition, Types, Examples and Applications appeared first on ELE Times.

Novosense Debuts 3D Dual-Output Hall Latches for Auto Motor Control

AAC - 14 hours 55 min ago
The new series of dual-output Hall latches allows system designers to measure both speed and direction using a single integrated chip.

Three-level buck controllers boost USB-C efficiency

EDN Network - Thu, 08/21/2025 - 21:36

Two voltage controllers from Renesas feature a three-level buck topology for battery charging and voltage regulation in USB-C systems. With wide input and output voltage ranges, the RAA489300 and RAA489301 are well-suited for multiport USB-PD chargers, portable power stations, robots, drones, and other high-efficiency DC/DC applications.

The three-level topology adds two switches and a flying capacitor to a conventional buck converter. The capacitor lowers voltage stress on the switches, enabling the use of lower-voltage FETs with better efficiency and reducing conduction and switching losses. It also allows a smaller inductor, with ripple only about 25% of that in a two-level design, further cutting inductor losses.
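The "ripple only about 25%" claim can be checked with first-order buck equations. The sketch below compares worst-case inductor ripple for the two topologies; the component values are arbitrary illustrations, not Renesas recommendations, and the three-level formula assumes D < 0.5, where the switch node steps between 0 and Vin/2 at twice the switching frequency.

```python
VIN, L, FSW = 20.0, 2.2e-6, 400e3   # illustrative operating point

def ripple_2level(vin, vout, L, fsw):
    # Conventional buck: dI = Vout * (1 - D) / (L * fsw), with D = Vout/Vin
    d = vout / vin
    return vout * (1 - d) / (L * fsw)

def ripple_3level(vin, vout, L, fsw):
    # Three-level buck, D < 0.5: the flying capacitor holds Vin/2, so the
    # inductor sees half the voltage swing at twice the effective frequency,
    # with duty 2D within each half-period.
    d = vout / vin
    return vout * (1 - 2 * d) / (L * 2 * fsw)

# Worst case for each topology: D = 0.5 (two-level) vs. D = 0.25 (three-level)
worst_2l = ripple_2level(VIN, VIN / 2, L, FSW)
worst_3l = ripple_3level(VIN, VIN / 4, L, FSW)
ratio = worst_3l / worst_2l   # 0.25, matching the "about 25%" figure
```

The quarter-ripple result falls straight out of the topology: halving the voltage step and doubling the effective frequency each contribute a factor of two.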

In addition to the three-level buck configuration, the controllers offer passthrough mode in both forward and reverse directions, enabling high efficiency when input and output voltages are equal. Key parameters are programmable via an SMBus/I²C-compatible interface.

The devices differ in voltage range and switching frequency. The RAA489300 operates from an input of 4.5 V to 57.6 V and an output of 3 V to 54.912 V, with a programmable switching frequency up to 400 kHz (800 kHz at the switching node). The RAA489301 supports an input of 4.5 V to 24 V and output of 3 V to 21 V, with a programmable frequency up to 367 kHz (734 kHz at the switching node).

The RAA489300 and RAA489301 are available now in 4×4 mm, 32-lead TQFN packages.

RAA489300 product page 

RAA489301 product page 

Renesas Electronics

The post Three-level buck controllers boost USB-C efficiency appeared first on EDN.

Early access opens for BittWare 3U VPX cards

EDN Network - Thu, 08/21/2025 - 21:36

BittWare’s early access program helps customers speed development of systems using its upcoming 3U VPX cards with AMD Ryzen processors and Versal SoCs. Launching later this year, these ruggedized, SWaP-optimized cards support mission-critical aerospace and defense applications.

The 3U VPX products integrate Ryzen x86 embedded CPUs with Versal RF and Gen 2 adaptive SoCs for high-speed signal capture and real-time multi-sensor processing. They comply with Sensor Open Systems Architecture (SOSA) and VITA 48 standards, meeting the mechanical and cooling requirements for high-reliability deployments. The cards are well-suited for radar, sensor fusion, electronic warfare, signals intelligence, UAVs, and image-processing workloads.

Customers can apply for early access to gain exclusive technical details, roadmap visibility, and direct engagement with experts on next-generation designs. NDAs with both AMD and BittWare are required.

BittWare

The post Early access opens for BittWare 3U VPX cards appeared first on EDN.

GaN transistors drive long-pulse radar

EDN Network - Thu, 08/21/2025 - 21:35

Ampleon has launched four 700-W GaN-on-SiC RF transistors for S-band radar systems, operating between 2.7 GHz and 3.5 GHz. The CLS3H2731 and CLS3H3135 series leverage a radar-optimized GaN-on-SiC platform that combines frequency-specific design, long-pulse support, and robust thermal performance—features beyond standard GaN transistors.

The transistors span two frequency ranges: 2.7 GHz to 3.1 GHz and 3.1 GHz to 3.5 GHz. Devices in each range are offered in flanged (SOT502A) and leadless ceramic (SOT502B) packages to meet diverse mechanical and thermal requirements. The lineup includes:

Internally pre- and post-matched, the transistors offer high input impedance and support pulse lengths up to 300 µs with duty cycles of 10–20%. Low thermal resistance ensures reliable operation under high duty cycles. Designed for advanced radar transmitters, they are well-suited for air traffic control, ground and naval defense, weather monitoring, surveillance, and particle acceleration.

Now in mass production, the RF transistors are available through Ampleon’s global distributors, including DigiKey and Mouser.

Ampleon

The post GaN transistors drive long-pulse radar appeared first on EDN.

Clock buffers pair low jitter with I/O flexibility

EDN Network - Thu, 08/21/2025 - 21:35

Operating from DC to 3.1 GHz, the SKY53510, SKY53580, and SKY53540 clock buffers from Skyworks provide 10, 8, and 4 outputs, respectively. These low-jitter devices support high-speed communication infrastructure, including data centers, 5G networks, and PCIe 7.0.

Each device integrates a 3:1 input multiplexer that accepts two universal inputs—compatible with LVPECL, LVDS, S-LVDS, HCSL, CML, SSTL, and HSTL—as well as a crystal input (also usable with a single-ended clock). The inputs support slew rates down to 0.75 V/ns. Differential outputs are arranged in two banks, with each bank independently selectable as LVPECL, LVDS, HCSL, or tristate and powered by its own 1.8-V, 2.5-V, or 3.3-V supply.

The buffers achieve low additive jitter, specified at 35 fs typical (47 fs max) at 156.25 MHz and 3 fs at 100 MHz for PCIe 7. Multiple on-chip LDO regulators provide >70 dBc PSRR in noisy environments, while a -166 dBc/Hz noise floor allows operation with Synchronous Ethernet (SyncE) at 156.25 MHz.

Samples and production quantities of the SKY53510/80/40 clock buffers are available now.

SKY53510 product page 

SKY53580 product page 

SKY53540 product page 

Skyworks Solutions 

The post Clock buffers pair low jitter with I/O flexibility appeared first on EDN.

ESD diodes raise surge ratings to 540 W

EDN Network - Thu, 08/21/2025 - 21:35

Vishay’s single-line VGSOTxx and two-line VGSOTxxC ESD protection diodes offer reverse working voltages from 3.3 V to 36 V. Compared to earlier GSOTxx/xxC devices, the automotive-grade diodes provide improved heat dissipation, supporting higher peak pulse power ratings up to 540 W and current ratings up to 44 A for an 8/20-µs pulse.

Both series come in SOT-23 packages and serve as drop-in replacements for GSOT devices, providing unidirectional ESD protection. The VGSOTxxC series employs a dual common-anode configuration that also enables bidirectional protection. Its dual diodes can be paralleled to double surge power ratings, line capacitance, and reverse leakage current.

The RoHS-compliant devices deliver ESD immunity per IEC 61000-4-2 and ISO 10605 at ±30 kV air and contact discharge, and meet AEC-Q101 HBM class H3B at >8 kV. Their high power and current handling make them well-suited for automotive, industrial, consumer, communications, medical, and military applications.

Samples and production quantities of the ESD protection diodes are available now, with lead times of 12 weeks.

VGSOTxx product page 

VGSOTxxC product page 

Vishay Intertechnology 

The post ESD diodes raise surge ratings to 540 W appeared first on EDN.

Nvidia Issues Major Update of CUDA Toolkit to Accelerate CPUs and GPUs

AAC - Thu, 08/21/2025 - 20:00
CUDA Version 13 features new CPU resources, unified Arm platforms, and additional operating systems supported.

What initiates lightning? There’s a new and likely better answer.

EDN Network - Thu, 08/21/2025 - 16:31

Engineers across many disciplines are aware of and concerned with lightning—and for good reasons. A lightning strike can cause significant structural damage, house and forest fires, and severe electrical surges (Figure 1).

Figure 1 The intensity of a lightning strike is always awe-inspiring and represents a millisecond-level transient of hundreds of kiloamps. Source: Science Daily

Even if the strike is not directly on the equipment (in which case the unit is probably “fried”), the associated transients induced in nearby wires and paths can be damaging. Lightning can also be mystifying: some people who have been hit have no ill effects; others have temporary or long-lasting physical and mental impairments; and for some… well, you know how it ends.

Measuring lightning

For these reasons, protection against the effects of lightning, to the extent possible, is an important factor in many designs. These efforts can include lightning rods, which provide a low-impedance path to Earth ground (a near-infinite source and sink for electrons), as well as gas-discharge tubes (GDTs) and metal-oxide varistors (MOVs), among other devices. Implementing protection is especially challenging when there are multiple strikes, as they can erode the capabilities of the protective devices.

This natural phenomenon occurs most frequently during thunderstorms, but has also been observed during volcanic eruptions, extremely intense forest fires, and surface nuclear detonations. There are many available numbers for the voltages, currents, timing, and temperature ranges associated with lightning. While there is obviously no single lightning waveform, Figure 2 shows representative data; note the maximum current of several hundred kiloamps.

Figure 2 These are representative values for lightning-stroke current versus time and current magnitudes; these are not the only ones, of course. Source: Kingsmill Industries Ltd

Researchers have studied lightning for many decades, using a variety of techniques ranging from “man-made” lightning in controlled enclosures, to field measurements in lightning-prone areas, to instigating it with a grounded wire launched into a lightning-prone cloud. There’s also the futile quest to direct and capture lightning’s energy into some sort of practical store-and-use scheme. (For a fictional demonstration, see the 1931 classic Frankenstein, where lightning is used to energize the doctor’s monster-like creation, or the end of the 1985 classic Back to the Future, where lightning is captured by a rod on the clock tower and used to recharge the flux capacitor of the DeLorean time-travel vehicle 😉.)

The standard explanations for lightning and its initiation are like this one from Wikipedia: “Lightning is a powerful natural electrical discharge caused by a buildup of static electricity within storm clouds. This buildup occurs when ice crystals and water droplets collide in the turbulent environment of a cumulonimbus cloud, separating charges within the cloud. When the electrical potential becomes too great, it discharges, creating a bright flash of light and a loud sound known as thunder.”

But what really happens inside the cloud?

Well, maybe that’s only a partial answer, or perhaps it’s misleading. Why so? For decades, scientists have understood the mechanics of a lightning strike, but exactly what sets it off inside thunderclouds remained a lingering mystery. Apparently, it’s much more than static electricity potential finally reaching a “flashover” level.

That mystery may now be solved, as a team at Pennsylvania State University (Penn State) has produced what they say is the complete story. It’s far more complicated than just a huge static-electricity burst; it’s really a mixture of cosmic rays, X-rays, and high-energy electrons.

Their work involves some deep physics and complex analysis. It also introduced me to some new acronyms: initial breakdown pulses (IBPs), narrow bipolar events (NBEs), energetic in-cloud pulses (EIPs), terrestrial gamma-ray flashes (TGFs), flickering gamma-ray flashes (FGFs), and initial electric field change (IEC).

They combined historical lightning-related data (of which a great deal is available from multiple sources) with current measurements, presented a hypothesis, correlated the data, developed models, ran simulations, and put it all together. The result is a plausible explanation that seems to fit the facts, although with natural events such as lightning, you can never be completely sure.

The Penn State research team, led by professor of electrical engineering Victor Pasko, explained how intense electric fields within thunderclouds accelerate electrons. These fast-moving electrons collide with molecules such as nitrogen and oxygen, generating X-rays and sparking a rapid surge of new electrons and high-energy photons. This chain reaction then creates the necessary conditions for a lightning bolt to form, showing the link between X-rays, electric fields, and the physics of electron avalanches.

These electrons radiate energetic photons (X-rays) as they are scattered by the nuclei of nitrogen and oxygen atoms in the air. The X-rays radiate in all directions, and some fraction is emitted opposite to the direction of electron motion. These backward-directed X-rays seed new relativistic electrons via the photoelectric effect, strongly amplifying the original avalanche.

To validate their explanation, the team used mathematical modeling to simulate atmospheric events that match what scientists have observed in the field. These observations involve photoelectric processes in Earth’s atmosphere, where high-energy electrons—triggered by cosmic rays from space—multiply within the electric fields of thunderstorms and release short bursts of high-energy photons. This process, known as a terrestrial gamma-ray flash, consists of invisible but naturally occurring bursts of X-rays and associated very high frequency (VHF) radiation pulses (Figure 3).

Figure 3 A conceptual representation of the conditions required for transition from fast positive breakdown (FPB) to fast negative breakdown (FNB), based on the relationship between the relativistic feedback threshold E0/δ and the minimum negative streamer propagation field E−cr/δ. Source: Pennsylvania State University

They demonstrated how electrons, accelerated by strong electric fields in thunderclouds, produce X-rays as they collide with air molecules like nitrogen and oxygen, and create an avalanche of electrons that produce high-energy photons that initiate lightning. They used the model to match field observations—collected by other research groups using ground-based sensors, satellites, and high-altitude spy planes—to the conditions in the simulated thunderclouds.
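The feedback mechanism the team describes (avalanches whose backward X-rays seed further avalanches) can be caricatured as a branching process: each generation is an avalanche of M electrons per seed, and each avalanche electron photo-ionizes f new seeds. If M·f exceeds 1, the discharge becomes self-sustaining. The numbers below are purely illustrative and are not values from the Penn State paper.

```python
def avalanche_generations(multiplication, feedback, seeds=1.0, generations=10):
    # multiplication: electrons produced per seed in one avalanche (M)
    # feedback: new seeds created per avalanche electron via backward
    #           X-rays and the photoelectric effect (f)
    # Both parameters are illustrative, not measured values.
    counts = [seeds]
    for _ in range(generations):
        seeds = seeds * multiplication * feedback
        counts.append(seeds)
    return counts

dying   = avalanche_generations(1e5, 1e-6)  # M*f = 0.1: feedback too weak
growing = avalanche_generations(1e5, 2e-5)  # M*f = 2: self-sustaining growth
```

The toy model captures only the threshold behavior: below M·f = 1 the seed population decays geometrically, above it the population explodes, which is the qualitative condition the paper's relativistic-feedback threshold expresses.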

I’ll admit: it’s pretty intense stuff, as demonstrated by a read-through of their paper “Photoelectric Effect in Air Explains Lightning Initiation and Terrestrial Gamma Ray Flashes,” published in the Journal of Geophysical Research. (I do have one minor objection: I wish they had not used the term “photoelectric effect” in the title or body of the paper. Although that phrase is technically correct as they use it, I associate it with Einstein’s groundbreaking 1905 paper, which resolved the contradictions in the data on this phenomenon by proposing photons as energy quanta, for which he received the Nobel Prize.)

While the root causes of lightning, as delineated in the work of the Penn State team, are not directly relevant to engineers whose designs must tolerate nearby lightning strikes, it’s still interesting to see what is going on and how even our modern science may still not have all the answers to such a common occurrence. In other words, there’s still a lot to learn about basic natural events.

Have you ever been involved with a design that had to be lightning-tolerant? What standards did you try to follow? What techniques and components did you use? How did you test it to verify the performance?

Related content

References

The post What initiates lightning? There’s a new and likely better answer. appeared first on EDN.

📢 First-Year Students’ Day 2025

Новини - Thu, 08/21/2025 - 15:05

Dear first-year students, welcome to the big and friendly family of Kyiv Polytechnic! We invite you to join the university events on campus, where you can learn more about student life and your future studies.

Top 10 Deep Learning Companies in India

ELE Times - Thu, 08/21/2025 - 12:00

India has fast emerged as a global hub for AI and deep learning innovation. Rising demand has made it home to some of the most demanding deep learning applications in retail, healthcare, banking, and autonomous systems. Small, medium, and large enterprises alike are integrating artificial intelligence technologies to gain a competitive advantage in both domestic and overseas markets. This article explores the top 10 deep learning companies in India.

  1. Tata Consultancy Services (TCS)

With its Ignio platform, TCS is leading the way in enterprise-grade deep learning solutions. Neural networks are used for predictive analytics, intelligent automation, and anomaly detection. To improve operations and decision-making, it is extensively used in banking, retail, and healthcare.

  2. Infosys

Infosys Nia is an AI platform, powered by deep learning, developed by Infosys, that enables usage scenarios such as automation, business intelligence, and predictive modeling. It is used in industries to help streamline processes, predict trends, and improve customer service.

  3. Wipro AI

Wipro concentrates on deep learning techniques in NLP and computer vision. Its solutions target cybersecurity, cloud AI, and digital transformation, allowing clients to detect threats and automatically analyze visual data.

  4. Arya.ai

Arya.ai builds deep learning platforms such as BUDDHA to assist enterprises in deploying AI models with little human intervention. It specializes in automated architecture search, model explainability, and compliance-ready systems, particularly for regulated sectors like finance and insurance.

  5. HCL Tech

HCL Tech applies deep learning to predictive maintenance, healthcare diagnostics, and IT infrastructure management. Its models not only detect system failures before they occur but also assist in medical image analysis for faster diagnoses.

  6. Tech Mahindra

Tech Mahindra applies deep learning to telecom, 5G, and IoT ecosystems. Its AI-powered platforms enhance the customer experience through real-time personalization and optimize network performance via smart data modeling.

  7. Mad Street Den

Mad Street Den, through its platform Vue.ai, focuses on computer vision applications in retail automation. Its deep learning-based models enable visual search, automated tagging, and personalized styling, transforming the e-commerce experience.

  8. Fractal Analytics

Fractal Analytics applies deep learning to AI solutions for customer analytics, forecasting, computer vision, and natural language processing (NLP) in sectors such as healthcare and finance. It also provides AI training through its own institute, the Fractal Analytics Academy, and pursues machine learning techniques aimed at improving model efficiency and scalability.

  9. Haptik

Haptik's deep learning capabilities cover real-time analytics, customer self-service, and pre-sales guidance, giving enterprises a complete conversational AI experience.

  10. Zensar Technologies

Zensar Technologies advances deep learning across its AI and ML activities. The company applies deep learning within its Vinci AIOps platform, which improves IT operations through event correlation, anomaly detection, root cause analysis, and intelligent automation, using deep learning and NLP to learn from and respond intelligently to IT systems.

Conclusion:

India's deep learning ecosystem is growing at a rapid pace. Indian companies, from established IT giants such as TCS and Infosys to ambitious startups like Mad Street Den, are helping shape the global AI landscape with groundbreaking applications.

The post Top 10 Deep Learning Companies in India appeared first on ELE Times.

Active students and postgraduates of Igor Sikorsky Kyiv Polytechnic Institute honored by the Kyiv City State Administration and the Solomianskyi District State Administration!

Новини - Thu, 08/21/2025 - 11:23

📌 For outstanding contributions to the development of the capital, the hero city of Kyiv, the Kyiv Mayor's awards were presented to:

TI semiconductors enable advanced Earth-observation capabilities of ISRO’s first-of-its-kind NISAR mission

ELE Times - Thu, 08/21/2025 - 09:37

Decade-long partnership overcame complex payload design challenges to empower next-generation environmental research from space

  • A deeply coupled partnership between TI and SAC-ISRO helped enable the mission payloads for the NISAR satellite, which is currently orbiting Earth.
  • TI’s space-grade power management, mixed signal and analog technologies optimize system performance and allow the satellite to operate in the harsh environment of space over the mission’s lifetime.
  • NISAR is the first satellite to use dual-band synthetic aperture radar technology to monitor the Earth’s ecosystems, natural hazards and climate patterns.

Texas Instruments (TI) semiconductors are enabling the radar imaging and scientific exploration payloads for the NASA-Indian Space Research Organization (ISRO) synthetic aperture radar (NISAR) satellite, which was recently launched into orbit. The launch of the satellite culminates a decade-long partnership between TI and the ISRO to optimize the performance of the electronic systems responsible for this Earth-observation mission. NISAR is equipped with TI’s radiation-hardened and radiation-tolerant products that enable designers to maximize power density, precision and performance in their satellite systems.

Engineering a first-of-its-kind satellite for Earth observation

ISRO describes NISAR as the first Earth-observation mission to use dual-band synthetic aperture radar (SAR) technology, enabling the system to capture precise, high-resolution images day and night and in all weather conditions. TI's technology enables the satellite's next-generation capabilities through efficient power management, high-speed data transfer, and precise signal sampling and timing.

The NISAR satellite will image the entire planet every 12 days, offering scientists greater understanding of changes to Earth’s ecosystems, ice mass, vegetation biomass, sea-level rise and groundwater levels. The agencies also expect the data to improve real-time monitoring of natural hazards such as earthquakes, tsunamis, volcanoes and landslides.

“From selecting the right products to ensuring consistent support across development cycles, TI’s technical expertise helped us navigate complex payload requirements,” said Shri Nilesh Desai, Director, Space Applications Centre (SAC), ISRO. “A deeply coupled partnership, specifically focused on high-impact mixed signal and analog semiconductors, enabled ISRO to meet the system-level requirements for a satellite in low Earth orbit. Together, we achieved the space-grade performance standards needed for this important mission.”

Addressing complex design challenges with TI’s space-grade portfolio

Throughout the project life cycle, TI’s system expertise and space-grade semiconductors, which are designed to withstand the harshest space environments, helped enable the advanced S-band SAR capabilities of the NISAR mission. The company provided:

  • Radiation-hardened power-management die for the SAC-ISRO-developed point-of-load hybrid power module, helping optimize size, weight and power for the mission payloads.
  • Analog-to-digital converters with ultra-high sampling rates and high resolution, allowing the satellite payload to generate fine-grained radar imagery.
  • High-performance interface technology, which enables high-speed data transfer between different satellite subsystems to ensure reliable communication.
  • A clocking solution that enables the precise time alignment and synchronous, coherent sampling required for high-precision SAR systems.

“As the NISAR satellite is now in orbit, I reflect on the decade-long partnership that brought us here and how our teams are already looking to what’s next, developing new technologies that will enable future missions,” said Elizabeth Jansen, TI India’s sales and applications director. “Building on more than 60 years of expertise, TI’s radiation-hardened and radiation-tolerant semiconductors are ready to meet the evolving demands of the space market. Our broad and reliable space-grade portfolio is ever-expanding and pushing the limits of what’s possible in the next frontier.”

The post TI semiconductors enable advanced Earth-observation capabilities of ISRO’s first-of-its-kind NISAR mission appeared first on ELE Times.

Infineon strengthens startup ecosystem in India

ELE Times - Thu, 08/21/2025 - 08:05
  • Infineon India has signed a Memorandum of Understanding (MoU) with the Department for Promotion of Industry and Internal Trade (DPIIT)
  • This further strengthens Infineon’s long-standing commitment to foster the country’s startup ecosystem
  • Recent startup success stories contribute to energy-efficient e-mobility and smart e-health solutions

India is rapidly emerging as a hub for semiconductor innovation. As a global leader in power semiconductors and the Internet of Things (IoT), Infineon has been collaborating with Indian startups for years, recognizing their importance in accelerating innovation. With a focus on supporting advancement and entrepreneurship in the country, the company has formed partnerships with various organizations, including NITI Aayog, Startup India, and the Ministry of Electronics and Information Technology (MEITY), to promote the "Make in India" initiative and foster startup growth.

Memorandum sparks startup innovation in IoT, electromobility, and security

As part of its ongoing efforts, Infineon India has signed a Memorandum of Understanding (MoU) with the Department for Promotion of Industry and Internal Trade (DPIIT) this year. The MoU aims to develop, foster, and promote the country’s innovation ecosystem by encouraging and supporting engineering students, product startups, innovators, and entrepreneurs through design challenges using Infineon’s innovative products to address applications of relevance for India.

“We are committed to empowering India’s startup ecosystem in microelectronics”, said Vinay Shenoy, Managing Director of Infineon India. “Partnerships such as the MoU with DPIIT allow us to work with innovative startups, giving them access to state-of-the-art technologies and our local and global networks. In return, we tap into their agility and entrepreneurial spirit, driving mutual growth and strengthening India’s innovation ecosystem.”

Propelling the Indian startup ecosystem

Infineon India has collaborated with various incubators and innovation ecosystems for years, including the Foundation for Science Innovation & Development at IISC Bangalore, IIT Madras Incubation Cell, and Artpark, AI & Robotics Technology Park @IISC. These partnerships have enabled the company to support startups and innovators in the country, and provide them with access to resources, expertise, and funding. Some of the key initiatives undertaken by Infineon India include the AI Challenge with Startup India and AGNIi, the solar pump motor drive challenge, and the MoU with MEITY to support the MEITY startup hub. These initiatives have helped to promote innovation and entrepreneurship in the country and have provided a platform for startups and innovators to showcase their ideas and products.

Startup collaborations for sustainable e-mobility and smart e-health

Recent Infineon partnerships with startups such as e-Drift Electric, EYDelta, and Mimyk are successful examples of collaboration with significant impact on e-mobility and e-charging as well as smart health solutions.

As part of Infineon’s co-innovation program, e-Drift Electric is contributing to the development of electric vehicle (EV) charging infrastructure. The start-up is focusing on creating energy-efficient modules using Infineon’s Si-SiC-MOSFET portfolio. As the adoption of EVs accelerates, it is increasingly important to develop an energy-efficient and robust charging infrastructure to ensure a cleaner and greener future for transportation in India.

For EYDelta, the partnership with Infineon enables faster product development and manufacturing of electric motors and motor controllers for sectors such as e-mobility, drones, and aerospace. By integrating AI-driven diagnostics and cloud connectivity, the solutions enable smarter IoT ecosystems, help optimize energy consumption, reduce emissions, and drive sustainable transportation systems in India and abroad.

The cooperation with Mimyk, a startup spun out of the Indian Institute of Science Bangalore, focuses on metabolic health monitoring. Infineon provided its latest microcontrollers as well as access to the global Infineon semiconductor network. This partnership will accelerate development cycles and make health tracking smarter and more accessible for everyone.

This demonstrates how Infineon's co-innovation program fosters a strong ecosystem in India, empowering startups to grow while accelerating innovation-to-customer value.

The post Infineon strengthens startup ecosystem in India appeared first on ELE Times.
