Feed Aggregator

DIY Lighthouse tracker using custom PCB and ESP32-C3

Reddit:Electronics - 4 hours 3 min ago

Hey everyone,
I am currently developing a custom tracker using the Lighthouse base stations from a VR headset (HTC Vive). The end goal is tracking small robots indoors for ~$10-15 per unit.

For that I built a custom PCB in the simplest way possible, as I am still quite a beginner in electronics.

I am using 2 BPW-34 photodiodes. They have no IR filter built in, so I'm using floppy-disk film as a cheap IR bandpass, which works surprisingly well.

To amplify and filter the signal I used an op-amp, since better options such as the TS4231 were not easy for me to source. It seems most of these chips are sold out or hard to come by.

But even with just that, very basic tracking that captures the laser pulses from the lighthouse worked!
Next I will try to use at least 3 sensors, to hopefully position objects in space as well.
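
The timing math behind this kind of tracking is simple enough to sketch. Assuming a base station rotor sweeping a full revolution at 60 Hz (the nominal SteamVR 1.0 sweep rate) and microsecond timestamps from the photodiode circuit, the delay between the sync flash and the laser hit maps linearly to a beam angle. This is a simplified single-rotor model with illustrative names, not the project's actual firmware:

```python
import math

ROTOR_HZ = 60.0                 # assumed sweep rate of the base station rotor
PERIOD_US = 1e6 / ROTOR_HZ      # one full revolution in microseconds

def sweep_angle(sync_us: float, hit_us: float) -> float:
    """Map the delay between the sync flash and the laser hit on the
    photodiode to a beam angle in radians (simplified single-rotor model)."""
    dt = (hit_us - sync_us) % PERIOD_US
    return 2.0 * math.pi * dt / PERIOD_US

# A hit a quarter-period after sync corresponds to a quarter turn
print(sweep_angle(0.0, PERIOD_US / 4))  # pi/2
```

With two rotors (horizontal and vertical sweeps) each sensor yields a ray from the base station; intersecting rays from multiple sensors or stations is what allows full position estimation.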

https://youtu.be/bWUpHzh0yHs

submitted by /u/monkeydance26

KPI students win the Brave1 engineering challenge

News - 5 hours 7 min ago
kpi, Tue 03/17/2026 - 16:47

The team «Оленячі роги» ("Deer Antlers") from the Faculty of Electronics (FEL) of Igor Sikorsky Kyiv Polytechnic Institute won the Brave1 engineering challenge held as part of the Brave Students initiative.

Sivers, O-Net and Enablence partner to develop external light sources for AI data centers

Semiconductor Today - 7 hours 39 min ago
Sivers Semiconductors AB of Kista, Sweden (which supplies RF beam-former ICs and lasers for AI data-center, SATCOM, defense and telecom applications) has announced a strategic partnership with optical communication device, module and subsystem maker O-Net Technologies (Group) Co Ltd of Shenzhen, China and Enablence Technologies Inc of Ottawa, Ontario, Canada (which designs and manufactures optical components) to develop an advanced external light source (ELS) module with Sivers laser arrays to support co-packaged optics (CPO) roll-out in AI data centers and high-performance computing (HPC) systems. O-Net will serve as the ODM partner, integrating Sivers’ laser arrays and Enablence’s NxN Star Coupler to deliver a scalable ELS module for scale-out and scale-up optical systems...

Smart EV Charging in India: How AI and ML Are Optimising Grid, Pricing and Reliability

ELE Times - 7 hours 57 min ago

India’s electric mobility transition is entering a decisive phase. While early discourse focused on vehicle innovation and battery chemistry, the spotlight has now shifted toward charging infrastructure: specifically, how intelligent systems can make it scalable, reliable, and grid-compatible. Artificial Intelligence (AI) and Machine Learning (ML) are no longer experimental add-ons; they are becoming the operational backbone of modern EV charging ecosystems.

From predictive maintenance and grid-responsive load management to dynamic pricing and battery safety modelling, Indian charging operators are embedding AI at every layer of infrastructure. Industry leaders such as Tata Power, Statiq, ChargeZone, Bolt.Earth, Intellicar (Fabric IoT), and Coulomb AI are redefining what it means to deploy “smart” infrastructure in a high-growth, power-sensitive market like India.

AI-Driven Infrastructure Planning & Site Selection

Charging infrastructure planning in India can no longer rely on static demographic assumptions or simple traffic counts. With capital expenditure per fast-charging site running high, predictive intelligence has become central to ensuring ROI viability. AI-driven site selection models now ingest multi-layered datasets including vehicular density heatmaps, dwell-time patterns, telematics feeds, grid capacity data, and urban expansion forecasts to simulate demand even before physical deployment.

Such geospatial optimisation is particularly critical for India’s highway corridors and Tier-II cities, where deployment miscalculations can significantly impact utilisation rates. By integrating predictive analytics with grid feasibility mapping, operators are achieving measurable improvements in charger usage efficiency and long-term sustainability.
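
A minimal version of such a site-selection model is a weighted sum of normalised demand and feasibility features per candidate location. The feature names, weights, and candidate sites below are illustrative assumptions, not any operator's actual model:

```python
def score_site(features: dict, weights: dict) -> float:
    """Weighted sum of normalised (0-1) site features; higher is better."""
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

WEIGHTS = {                      # illustrative weights, chosen to sum to 1.0
    "traffic_density": 0.35,
    "dwell_time": 0.20,
    "grid_headroom": 0.30,
    "growth_forecast": 0.15,
}

candidates = {                   # hypothetical candidate sites
    "highway_plaza": {"traffic_density": 0.9, "dwell_time": 0.7,
                      "grid_headroom": 0.4, "growth_forecast": 0.6},
    "tier2_mall":    {"traffic_density": 0.6, "dwell_time": 0.9,
                      "grid_headroom": 0.8, "growth_forecast": 0.8},
}

best = max(candidates, key=lambda s: score_site(candidates[s], WEIGHTS))
print(best)  # the grid-friendly Tier-II site wins despite lower traffic
```

Real deployments replace the hand-set weights with models trained on utilisation data, but the structure (multi-layer features in, a ranking out) is the same.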

Predictive Maintenance & Reliability Enhancement

Reliability remains the most decisive performance metric in charging infrastructure. A single non-operational charger can undermine customer trust and disrupt fleet operations. AI-powered predictive maintenance is addressing this challenge by transforming chargers into continuously monitored, self-reporting assets.

Modern charging stations now incorporate IoT sensors that track temperature fluctuations, voltage irregularities, connector wear, vibration signatures, and cooling system performance. These data streams feed machine learning models capable of detecting anomaly patterns weeks before a component failure occurs.
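
One of the simplest anomaly-pattern detectors behind such systems is a rolling z-score over a sensor stream: flag any reading that sits several standard deviations away from the recent mean. This is a generic sketch with made-up temperature data, not any vendor's model:

```python
from collections import deque
import statistics

def zscore_anomalies(stream, window=20, threshold=3.0):
    """Yield (index, value) for readings more than `threshold` standard
    deviations from the mean of the previous `window` readings."""
    history = deque(maxlen=window)
    for i, x in enumerate(stream):
        if len(history) == window:
            mu = statistics.fmean(history)
            sigma = statistics.pstdev(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                yield i, x
        history.append(x)

# A steady ~40 C connector temperature with one 80 C spike
temps = [40.0 + 0.1 * (i % 3) for i in range(50)]
temps[30] = 80.0
print(list(zscore_anomalies(temps)))  # → [(30, 80.0)]
```

Production systems layer multivariate models on top, but even this baseline catches the spike weeks of drift would otherwise hide in aggregate dashboards.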

Operators such as ChargeZone are leveraging AI-driven network management systems to monitor thousands of charging points simultaneously, ensuring SLA compliance and minimising revenue loss from unexpected outages. The result is not just improved uptime but a tangible reduction in ‘charge anxiety’ among users.

Smart Charging & Dynamic Load Management

India’s distribution grids were not originally designed for high-density EV loads. Uncoordinated charging can create localised transformer stress and peak demand spikes. AI-driven smart charging systems are mitigating this risk by dynamically balancing load in real time.

By analysing grid capacity constraints, renewable energy availability, historical consumption curves, and user charging behaviour, AI systems intelligently stagger charging sessions without compromising user convenience. Time-of-Use (ToU) optimisation algorithms further encourage off-peak charging, reducing stress on urban feeders.
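
The staggering logic can be illustrated with a tiny greedy scheduler: given per-hour feeder headroom and a tariff curve, each session's energy is pushed into the cheapest hours that still have capacity. The capacities, tariffs, and session sizes are invented numbers for illustration:

```python
def schedule_sessions(demands_kwh, capacity_kw, tariff):
    """Greedily assign each session's energy to the cheapest hours with
    spare feeder capacity. Returns per-hour allocated load in kWh
    (1-hour slots, so kW capacity equals kWh per slot)."""
    hours = sorted(range(len(tariff)), key=lambda h: tariff[h])
    load = [0.0] * len(tariff)
    for demand in demands_kwh:
        for h in hours:
            if demand <= 0:
                break
            take = min(capacity_kw[h] - load[h], demand)
            load[h] += take
            demand -= take
    return load

tariff = [8.0, 8.0, 4.5, 4.5, 9.5, 9.5]   # assumed Rs/kWh per hour slot
capacity = [50, 50, 50, 50, 50, 50]        # assumed feeder headroom, kW
print(schedule_sessions([60, 30, 40], capacity, tariff))
# → [30.0, 0.0, 50.0, 50.0, 0.0, 0.0]  (off-peak slots 2 and 3 fill first)
```

Real systems add user deadlines and renewable forecasts as constraints, but the core idea is the same: shift energy into cheap, low-stress hours without denying anyone a full charge.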

Pratik Kamdar, Co-founder & CEO of Neuron Energy: “AI and advanced software are emerging as the backbone of the modern EV ecosystem… [enabling] features such as real-time monitoring, predictive diagnostics, and faster charging capabilities that are increasingly prioritised by customers”.

Battery-integrated charging hubs deployed by ChargeZone further demonstrate how AI can shave peak demand and buffer grid volatility, a critical capability as EV adoption accelerates.

Dynamic Pricing & Revenue Optimisation

The economics of EV charging depend heavily on utilisation efficiency and tariff structuring. Traditional flat-rate pricing models often fail to respond to fluctuating grid conditions or consumer demand patterns. AI-powered dynamic pricing engines are now enabling real-time tariff modulation.

By factoring in grid stress indicators, occupancy rates, historical usage behaviour, and localised demand forecasts, AI models optimise pricing structures that balance revenue maximisation with consumer fairness.
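
In its simplest form such a pricing engine is a bounded multiplier on a base tariff. The factor weights and clamping range below are illustrative assumptions; the clamp is what encodes the "consumer fairness" constraint:

```python
def dynamic_price(base_rate, grid_stress, occupancy, demand_forecast,
                  floor=0.8, ceiling=1.5):
    """Scale a base tariff by grid and demand signals (each in 0-1),
    clamped to [floor, ceiling] x base_rate for consumer fairness."""
    multiplier = 1.0 + 0.4 * grid_stress + 0.3 * occupancy + 0.2 * demand_forecast
    multiplier = max(floor, min(ceiling, multiplier))
    return round(base_rate * multiplier, 2)

# Quiet night vs evening peak (base tariff of Rs 12/kWh, assumed)
print(dynamic_price(12.0, 0.1, 0.2, 0.1))  # → 13.44, mild uplift
print(dynamic_price(12.0, 0.9, 0.9, 0.8))  # → 18.0, hits the 1.5x ceiling
```

ML-based engines learn the weights from historical utilisation rather than fixing them, but a hard ceiling typically remains as a policy guardrail.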

Raghav Bharadwaj, CEO of Bolt.earth, on operational optimisation: “EV charging cannot be treated like a pure software startup… Every station’s economics must be optimised from day one. We measure success by uptime, utilisation, and energy delivered” (Source: Industry Perspectives).

Machine learning also supports customer segmentation, allowing differentiated pricing for fleet operators, subscription users, and retail consumers, thereby strengthening long-term business sustainability.

Vehicle-to-Grid (V2G) Technology

Vehicle-to-Grid technology introduces a paradigm shift in which EVs function as distributed energy storage assets capable of feeding electricity back into the grid. While regulatory frameworks in India are still evolving, AI is already playing a central role in enabling safe and optimised bidirectional charging.

AI algorithms determine optimal discharge windows, forecast grid demand spikes, and ensure battery health parameters remain within safe thresholds during V2G cycles. Without such intelligent orchestration, bidirectional charging could accelerate battery degradation.
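
The discharge-window logic can be sketched as a filter: discharge only in hours where the forecast price beats a threshold and the projected state of charge stays above a safety floor, which is also what protects battery health. Thresholds, prices, and rates here are invented for illustration:

```python
def v2g_discharge_windows(price_forecast, soc, price_threshold=10.0,
                          soc_floor=0.3, discharge_per_hour=0.1):
    """Return the hours in which to discharge to the grid: the price must
    beat the threshold and state of charge must stay above the floor."""
    windows = []
    for hour, price in enumerate(price_forecast):
        if price > price_threshold and soc - discharge_per_hour >= soc_floor:
            windows.append(hour)
            soc -= discharge_per_hour
    return windows

# 6-hour price forecast (Rs/kWh), EV starting at 60% charge
print(v2g_discharge_windows([8.0, 12.0, 14.0, 9.0, 13.0, 15.0], soc=0.6))
# → [1, 2, 4]: the SoC floor stops discharge before hour 5's high price
```

An actual V2G orchestrator would also weigh degradation cost per cycle against revenue per kWh; this sketch only shows why the safety floor binds before the price signal does.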

As India moves toward distributed energy markets, AI-enabled V2G systems could unlock new revenue streams for EV owners and fleet operators alike.

Battery Safety & Thermal Management

Fast charging environments introduce elevated thermal risks, making battery safety paramount. AI-driven Battery Management Systems (BMS) are now capable of predicting thermal runaway scenarios before they escalate into critical failures.

Using chemistry-specific modelling and real-time telemetry data, machine learning algorithms estimate State-of-Charge (SoC) with accuracy exceeding 95% while simultaneously forecasting degradation patterns. This is particularly important given India’s mix of lithium iron phosphate (LFP) and nickel manganese cobalt (NMC) chemistries across vehicle categories.
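
The baseline that ML estimators refine is plain Coulomb counting: integrate current over time against rated capacity. A sketch with made-up numbers (real BMS estimators add chemistry-specific voltage models and learned corrections on top of this):

```python
def coulomb_count(soc_start, current_samples_a, dt_s, capacity_ah):
    """Integrate current (positive = charging, in amps) over fixed time
    steps to update state of charge, clamped to [0, 1]."""
    soc = soc_start
    for i_a in current_samples_a:
        soc += (i_a * dt_s / 3600.0) / capacity_ah
        soc = max(0.0, min(1.0, soc))
    return soc

# 50 Ah pack fast-charged at a constant 100 A for 15 min (900 x 1 s samples)
soc = coulomb_count(0.20, [100.0] * 900, dt_s=1.0, capacity_ah=50.0)
print(round(soc, 3))  # → 0.7
```

Pure integration drifts with sensor bias and temperature, which is exactly the residual error the >95%-accuracy ML models are trained to remove.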

Such advancements are not only improving safety but also extending battery lifecycle economics, a critical factor in total cost of ownership calculations.

User Experience Enhancement

Beyond engineering efficiency, AI is reshaping the end-user journey. Intelligent routing systems now guide drivers to available chargers based on real-time occupancy predictions. Machine learning models calculate accurate charge time estimations by factoring in battery health, ambient temperature, and charger capacity.

Anshuman Divyanshu, CEO – EVSE, Exicom: “Ease comes from thoughtful design. Chargers and apps should feel intuitive… Selecting a connector, activating a session, pairing with an app, and making a payment. These steps shouldn’t feel like a technical exercise. A properly designed charger should operate like familiar everyday technology.”

Meanwhile, Statiq integrates predictive booking systems that mitigate congestion during peak hours. AI personalisation engines recommend preferred stations based on historical behaviour, payment patterns, and travel routes, creating a frictionless digital experience.

Cloud Computing & Edge AI Integration

The scalability of AI-driven charging infrastructure depends on a hybrid architecture that balances edge responsiveness with cloud intelligence. Edge computing processes latency-sensitive operations such as load modulation and fault isolation in real time, while cloud platforms handle macro-level optimisation, fleet analytics, and model retraining.

Arvind Gopalakrishnan, CTO & CIO at SUN Mobility: “We are leveraging AI to build robust, data-driven platforms that optimise EV charging, routing, and energy distribution across urban and intercity networks… enabling real-time decision-making and improving grid efficiency”.

Cybersecurity frameworks are also increasingly AI-driven, employing anomaly detection algorithms to identify spoofing attempts and data breaches in highly connected charging ecosystems.

The Road Ahead: 2026–2030

As India moves toward deeper electrification, AI is poised to become the central nervous system of charging infrastructure. Self-healing networks, autonomous fleet charging depots, AI-integrated smart city command centres, and revenue-generating distributed energy marketplaces are no longer distant possibilities; they are emerging realities.

Khushboo Shrivastava, CEO of Coulomb AI, concludes, “The competitiveness of future charging networks will not be defined by hardware density alone, but by algorithmic intelligence. AI is what transforms infrastructure into an ecosystem.”

In the coming decade, India’s EV charging expansion will be defined less by the number of chargers deployed and more by the intelligence embedded within them. The evolution from hardware-centric infrastructure to software-defined energy ecosystems has already begun.

By: Shreya Bansal, Sub-Editor

The post Smart EV Charging in India: How AI and ML Are Optimising Grid, Pricing and Reliability appeared first on ELE Times.

Bengaluru Gets a World-Class Electronics Co-Innovation Hub as Henkel Launches Advanced Application Center

ELE Times - 8 hours 42 min ago

Henkel has announced the launch of its Customer Application Centre in Bengaluru, reinforcing its commitment to India’s rapidly expanding electronics manufacturing sector. The new facility will serve as a collaborative innovation hub where Henkel experts and customers can co-develop, test, and validate advanced adhesive and thermal management solutions for next-generation electronics manufacturing.

The new facility represents one of Henkel’s most significant application engineering commitments in the India, Middle East and Africa (IMEA) region, and is designed to address a critical gap in India’s electronics value chain: the absence of localized, world-class application testing and validation infrastructure that allows manufacturers to develop, qualify, and scale advanced materials solutions without the time and cost of sending work overseas.

India’s electronics manufacturing sector has grown nearly sixfold over the past decade. The momentum is accelerating, driven by the rapid build-out of data centre and AI computing infrastructure, 5G and fibre network expansion, electric vehicle charging systems, industrial automation, and advanced medical devices. Each of these sectors depends critically on high-performance adhesives, thermal management materials, and protective coatings, and each demands faster, more localised application engineering support than India’s ecosystem has traditionally been able to provide.

Bengaluru was a natural choice. The city’s concentration of semiconductor design talent, electronics R&D centres, and global OEM engineering teams makes it the single most important node in India’s electronics innovation ecosystem. Locating the centre here puts Henkel’s application expertise directly alongside the engineers and manufacturers who need it most.

“India’s electronics manufacturing ecosystem is at an inflexion point, and Bengaluru is at the centre of it,” said S. Sunil Kumar, Country President – India, Henkel. “What manufacturers across our focus sectors increasingly need is not just world-class materials, but a local partner who can co-develop, test, and validate those materials under real production conditions, and help them move from concept to market faster. That is precisely what this centre is designed to do. It is our most tangible expression yet of Henkel’s long-term commitment to India’s electronics future.”

The 5,000 sq. ft. facility, of which approximately 2,400 sq. ft. is dedicated laboratory and testing space, is built to replicate actual electronics manufacturing conditions, allowing customers to evaluate and optimise materials and processes before committing to production scale. Around 60-65% of the investment has gone into advanced lab and testing equipment, with 20-25% directed at customer co-development infrastructure.

The facility serves five high-growth sectors: telecom and 5G infrastructure, data centres and AI computing, power electronics and EV systems, industrial automation, and medical electronics. Its key capabilities span advanced thermal management testing, precision dispensing systems, electrical characterisation tools, and rapid-cure chambers, supporting the full journey from prototyping and material validation through to production readiness.

The centre directly supports India’s Make-in-India and Production-Linked-Incentive objectives by bringing application engineering, process optimisation, and reliability validation onshore. A substantial share of activities that Indian electronics manufacturers previously had to route through overseas facilities, or simply defer, can now be conducted locally, compressing development cycles and accelerating time to market.

Henkel application experts will work side-by-side with customer engineering teams at the facility, co-developing solutions tailored to specific device architectures and manufacturing requirements. This collaboration model is central to the centre’s design and is what distinguishes it from a conventional testing laboratory.

The post Bengaluru Gets a World-Class Electronics Co-Innovation Hub as Henkel Launches Advanced Application Center appeared first on ELE Times.

Navitas debuts 800V–6V DC–DC power delivery board at NVIDIA GTC

Semiconductor Today - 10 hours 2 min ago
Navitas Semiconductor Corp of Torrance, CA, USA — which provides GaNFast gallium nitride (GaN) and GeneSiC silicon carbide (SiC) power semiconductors — has announced its latest DC–DC power delivery board (PDB) powered by GaNFast technology, enabling direct conversion from 800V to 6V in one power stage. This eliminates the traditional 48V intermediate bus converter (IBC) stage within the compute server trays, maximizing system efficiency, reliability and valuable real-estate, to deliver a simple power delivery solution to support advanced NVIDIA AI infrastructure...

Milestone Systems Redefines the Open Platform for an AI-Native Era

ELE Times - 10 hours 11 min ago

Milestone Systems has announced significant advancements to its XProtect video management software (VMS) and BriefCam video analytics. The XProtect App Platform, a new containerised application platform for VMS, and a new BriefCam analytics engine are designed to deliver increased reliability, greater customisation, more efficient hardware utilisation, and full readiness for Generative AI and analytics, empowering security teams to stay ahead as demands evolve.

Cameras and sensors collect more data than ever before. Today, the challenge has shifted from capturing information to understanding it – and turning it into actionable insight. Surfacing the most urgent threats and the most valuable operational insights requires AI and analytics tools built for the scale of modern video.

Even as capabilities advance, integrating new functionality still requires time, expertise, and coordination. Even routine software updates introduce operational risk. The possibility of system downtime often forces security teams to delay the very innovations that would make their operations more effective.

For solution developers creating the next generation of VMS applications, building and distributing solutions across thousands of customer environments adds another layer of complexity.

Milestone has built its new solutions to address these challenges – without requiring customers to replace what already works.

Building the Future of Video Management with the XProtect App Platform

Milestone’s new XProtect App Platform is a component that brings the latest VMS applications – including solutions like AI, analytics, access control, and more – into a surveillance system without friction.

The XProtect App Platform amplifies existing infrastructure by enabling customers to unlock insight from new AI tools, customise their systems quickly and safely, and install updates without downtime.

Built on a Linux-based, containerised architecture, the platform runs alongside existing XProtect installations and extends what the system can do without changing how it operates. Because each application and service runs in its own container, isolated from the core VMS and from other apps, customers can install apps and updates without requiring a full system restart or disrupting live operations. 

Delivering next-generation analytics that scale with BriefCam’s new engine

BriefCam’s engine has been redesigned to deliver scalable analytics capabilities – with significant improvements to real-time processing, scalability, and workflow efficiency. Thanks to better resource utilisation, users will see an improvement of 38%* in real-time throughput. All processing can be run on-premise, with no cloud dependencies.

The new engine enables investigators to translate witness statements into searches using plain language instead of filters, identify key moments to reduce review time and turn fragmented video into a connected narrative, and train BriefCam with custom categorisations to match their organisational needs.

Andrew Burnett, Chief Technology Officer, Milestone Systems, said: 

“The rapid growth of AI in video security has created an urgent need for platforms that can keep pace. Together with our partners and customers, we are co-creating the next generation of our technology on our open platform foundation. The XProtect App Platform and the new BriefCam engine are two major steps forward – giving organisations the flexibility to adapt quickly and confidently, as well as powerful on-premise intelligence that doesn’t compromise data sovereignty or operational control.”

Innovation across the ecosystem: App Centre and Developer Portal

The XProtect App Platform runs applications from the Milestone App Centre — the home for applications developed by both Milestone and our technology partners. The App Centre enables customers to browse, test, and install verified applications that extend the capabilities of their XProtect VMS. This makes it easier to discover new functionality, add AI analytics, or test emerging innovations without risk to live operations.

To support this, Milestone is introducing a new set of tools for developers and technology partners across the ecosystem. The Milestone Developer Portal consolidates everything developers need to build applications for the open platform in one place — from idea to development to release — providing a single, simple path to reach Milestone customers worldwide. The portal will be generally available by the end of 2026.

The XProtect App Platform and the new BriefCam engine are available now for early access customers. General availability is currently planned for late 2026.

The post Milestone Systems Redefines the Open Platform for an AI-Native Era appeared first on ELE Times.

5N Plus announces changes to board

Semiconductor Today - 10 hours 16 min ago
Specialty semiconductor and performance materials producer 5N Plus Inc (5N+) of Montréal, Québec, Canada has announced upcoming changes to its board of directors. Michael Hanley will be proposed for election as a new independent director at the annual meeting of shareholders on 7 May. Jean-Marie Bourassa, a board member since 2007, is not standing for re-election...

🖼 Come see the exhibition «Права в наших руках» ("Rights in Our Hands")

News - 10 hours 42 min ago
kpi, Tue 03/17/2026 - 11:12

To mark Ukrainian Volunteer Day, the Borys Paton State Polytechnic Museum at Igor Sikorsky Kyiv Polytechnic Institute has opened the exhibition «Права в наших руках» ("Rights in Our Hands").

What the special section on chiplets design has to offer

EDN Network - 10 hours 43 min ago

Why chiplets and why now? A special section at EDN provides a detailed treatment of this revolutionary silicon technology that’s transforming the semiconductor industry at a time when AI is forcing every serious silicon team to modularize, mix-and-match, and move faster.

This special section will chart key building blocks of chiplet technology—3D ICs, advanced packaging, compute subsystems, heterogeneous integration, interconnects, memory wall, and more—while separating hype from reality.

Find out how system-on-chip (SoC) designs differ from multi-die systems and how standards are evolving in the multi-die chiplets world. Next, a senior executive from a chiplet startup shares how it’s advancing AI systems with HBM4- and SPHBM4-based DRAM solutions.

A technical piece takes a closer look at the chiplet-based design flow and its sequence of tasks, which on the surface appears nearly identical to that of a monolithic system-on-chip (SoC) design; in reality, chiplet designs diverge significantly from most SoC designs.

Another article will outline eight best practices for multi-die designs, given that these designs introduce new engineering complexities in areas such as packaging, verification, and thermal dynamics. For instance, it will show how designers can treat packaging as part of the design and engineer the interconnect like a subsystem.

Another article presents 3D ICs as a practical framework for heterogeneous integration. After listing the unique challenges of advanced packaging, it offers tips for efficient 3D IC design and an expert guide to heterogeneous integration.

Then there is a blog taking a sneak peek at co-packaged optics (CPO) challenges and how advances in photonics are aiming to overcome them, including signal integrity, thermal management, optical alignment, and cost. CPO offers a vital alternative to semiconductor packaging built around copper interconnects.

Stay tuned for this chiplets design summit, one article at a time.

Related Content

The post What the special section on chiplets design has to offer appeared first on EDN.

Researcher Vira Huskova: "Science is hard, but it is worth it"

News - 10 hours 54 min ago
Інформація КП (KP Information), Tue 03/17/2026 - 11:00

Today we are witnessing artificial intelligence permeate every sphere of life at an unprecedented pace, reshaping the labour market, scientific research, social communication, global security, and the economy. Igor Sikorsky Kyiv Polytechnic Institute was among the first to apply AI in its own operations. In particular, researchers at the Artificial Intelligence Department of the Institute for Applied System Analysis (НН ІПСА) are focused on keeping AI a tool of progress and a driver of sustainable development.

NXP and NVIDIA Collaborate to Deliver New Innovations for Advanced Physical AI

ELE Times - 12 hours 6 min ago

NXP Semiconductors N.V. announced innovative robotics solutions for reliable, secure, real-time data processing and transport and advanced networking, enabling sensor fusion, machine vision and precision motor control. First in a series of NXP’s foundational robotics solutions, these ready-to-deploy solutions were developed in collaboration with NVIDIA and implement NVIDIA Holoscan Sensor Bridge with NXP’s highly integrated SoCs. This reduces discrete components, significantly shrinking footprint, power and cost, while also simplifying the software complexity of robotic sensing and actuation, including humanoid form factors.

Physical AI is the next frontier of innovation, featuring systems that can sense, interpret, and interact with their surroundings with precision, reliability and safety. Humanoid robots are one of the most advanced embodiments of physical AI, requiring secure, reliable, low-latency data processing and transport throughout the robot body to enable synchronised motion, dense sensor fusion and advanced actuation.

NXP’s new integrated robot body solutions directly address this challenge, delivering powerful edge intelligence and low-latency networking to enable safe, secure, real-time communication. These solutions seamlessly integrate NVIDIA Holoscan Sensor Bridge into NXP’s software enablement, allowing developers to easily implement real-time processing and establish a direct transport route between the body and pre-specified regions of the robot brain, substantially reducing latency. This significantly simplifies the challenges of bringing AI into the physical world, where real-time decision-making is a critical requirement.

“Physical AI is redefining what machines can do in the real world, and humanoid robots represent the most complex expression of that revolution,” said Charles Dachs, Executive Vice President and General Manager, Secure Connected Edge, NXP Semiconductors. “By combining NXP’s deep expertise in edge processing, secure networking, functional safety and real-time control with NVIDIA robotics platforms, we are greatly simplifying physical AI development, enabling seamless connectivity between the physical AI edge and the central brain. This is just the beginning of what NXP will deliver to accelerate the ecosystem for physical AI.”

“The development of autonomous machines requires a high-performance computing architecture that can synchronise complex motor controls with real-time perception,” said Deepu Talla, vice president of robotics and edge AI at NVIDIA. “By integrating NVIDIA Holoscan Sensor Bridge into its edge portfolio, NXP is providing developers with a scalable foundation to accelerate the deployment of physical AI.”

The collaboration between NXP and NVIDIA helps define a unified architecture for full-body humanoid robotics. NXP’s edge processors, motor control MCUs, automotive-grade networking technology, high-throughput asymmetric data transport capability acquired through Aviva Links, and functional safety expertise built on decades of automotive experience, combined with NVIDIA AI infrastructure, create a flexible, energy-efficient system architecture for next-generation robots.

The first Holoscan Sensor Bridge-ready solutions in NXP’s robotics portfolio include a machine vision solution based on the i.MX 95 applications processor, delivering high-bandwidth data to the robot brain, and a motor control solution based on a kinematic chain of i.MX RT1180 crossover MCUs, aggregated by NXP’s S32J TSN switch that connects directly to the brain. The motor control solution features integrated support for popular industrial protocols such as EtherCAT® and TSN. These flexible, software-driven solutions are highly integrated to reduce footprint, power and cost without sacrificing performance, safety or security, providing a complete, scalable foundation for full-body humanoid robot design.

The post NXP and NVIDIA Collaborate to Deliver New Innovations for Advanced Physical AI appeared first on ELE Times.

EDOM Showcases Physical AI & Robotics Applications at GTC 2026

ELE Times - 12 hours 59 min ago

EDOM Technology will participate in NVIDIA GTC for the third consecutive year under the theme “From AI to Action: Physical AI in Motion.” Together with ecosystem partners, EDOM will showcase its latest AI computing platforms, key components, and system integration capabilities. At Booth #242, EDOM will present physical AI and robotics solutions powered by NVIDIA Jetson hardware and software resources, demonstrating how AI technologies are being developed and deployed across diverse fields such as smart healthcare, vision recognition, and speech understanding. The showcase will highlight EDOM’s capability to connect the AI ecosystem and accelerate industry innovation through integrated solutions.

During the GTC exhibition, EDOM Technology plans to demonstrate several innovative applications that combine physical AI with edge computing. With the introduction of the NVIDIA Jetson Thor, AI inference and control architectures can now be integrated into a single system, providing a high-performance computing foundation for real-time closed-loop control and multimodal sensing, and accelerating the real-world deployment of multimodal intelligent robots.

In the robot interaction showcase, EDOM collaborates with Algoltek to present “Dexterous Hand AI.” Powered by 4D AI Vision technology and vision-action models, the system can recognise and predict audience hand gestures in real time and respond accordingly. This demonstration highlights low-latency AI inference and instant feedback, while also presenting a complete Sim-to-Real workflow, from virtual simulation to physical deployment.

EDOM also partners with Nexuni to present “AI Workforce: Embodied Intelligence.” Built on the NVIDIA Thor platform, the system enables dual-arm robot manipulation through few-shot learning. The demonstration shows a service robot performing household tasks, including object recognition, grasping, and fabric folding. Because fabric is a highly deformable material, handling it requires complex visual perception, state estimation, and coordinated dual-arm control. By leveraging Jetson Thor’s edge AI inference and real-time dynamic path correction, the system continuously adjusts its movements during the folding process, improving task success rate and operational stability. This highlights the potential of physical AI in smart manufacturing, human-robot collaboration, and service robotics.

In the area of enterprise AI and smart biotech applications, EDOM collaborates with Avalanche Computing to showcase the “Secure Offline Generative AI” platform. Combining enterprise-grade private LLMs with real-time speech intelligence, the platform runs on the NVIDIA Jetson edge platform, enabling low-latency offline speech recognition and semantic analysis. This allows generative AI to deliver real-time interaction within highly secure enterprise environments.

EDOM is also working with CyteSi to present the “Software-Defined High-Throughput Wet Lab,” an AI-driven laboratory automation platform. CyteSi’s EWOD (Electrowetting-on-Dielectric) technology digitally controls micro-droplets with precision, supporting workflows such as NGS sample preparation, drug discovery, and synthetic biology research. The system uses NVIDIA Jetson as its edge computing core, integrating biochips from Japan Display Inc. (JDI) with real-time image analysis capabilities to provide an intuitive user interface and highly efficient experimental workflow, advancing AI adoption in smart biotech research and automated laboratories.

In addition, EDOM will showcase the “NVIDIA Jetson Thor Peripheral Ecosystem,” integrating a range of EDOM-certified peripheral components, including high-speed storage, Wi-Fi 6/6E and 5G communication modules, GMSL and MIPI cameras, 10G high-speed networking, sensors, camera modules, and high-speed I/O interfaces. This ecosystem helps developers rapidly build next-generation robotics and physical AI systems. Through comprehensive hardware integration and platform support, EDOM assists robotics developers in accelerating product design, deployment, and mass production, further expanding the physical AI and Jetson Thor ecosystem.

Jeffrey Yu, CEO of EDOM Technology, stated, “The key to physical AI lies not only in computing performance, but in comprehensive hardware–software integration and ecosystem collaboration. Through NVIDIA’s Three-Computer architecture and the Jetson Thor platform, we help customers build scalable, production-ready, end-to-end AI solutions—from model training and simulation validation to edge deployment.” He further noted that EDOM is not just a hardware supplier, but is committed to integrating peripheral modules, AI software frameworks, and partner resources to help enterprises accelerate the adoption of AI and robotics technologies.

The post EDOM Showcases Physical AI & Robotics Applications at GTC 2026 appeared first on ELE Times.

Deep Learning-Based Predictive Maintenance: The Backbone of Smart Manufacturing 4.0

ELE Times - 13 hours 23 min ago
Introduction: Why Downtime Is No Longer Acceptable

Unplanned downtime remains one of the most persistent and costly challenges in modern manufacturing. Studies and industry assessments from organisations such as Siemens and the Aberdeen Group have consistently shown that unexpected equipment failures cost global manufacturers tens of billions of dollars every year, with large automotive plants, semiconductor fabs, and energy facilities losing millions of dollars per hour during major production disruptions.

In the current manufacturing landscape, where production systems operate with minimal margins and global supply chains are under continuous pressure, downtime has evolved from a technical inconvenience to a significant strategic liability.

With the advent of Industry 4.0, manufacturing facilities have transitioned from isolated mechanical environments to complex digital ecosystems comprising interconnected machines, industrial electronics, sensors, software platforms, and automation. In this context, traditional maintenance approaches, such as reactive repairs or fixed-schedule servicing, are increasingly misaligned with contemporary operational requirements.

Predictive maintenance (PdM), enabled by deep learning and industrial artificial intelligence, is fundamentally transforming approaches to reliability in manufacturing. Rather than reacting to failures, organisations can now anticipate them, plan interventions proactively, and maintain uninterrupted production. Predictive maintenance, once considered a support function, is increasingly recognised as a core strategic capability.

From Rules to Learning: How Deep Learning Predicts Failures

Earlier predictive maintenance systems relied on fixed thresholds and rule-based logic—triggering alerts when temperature, vibration, or current crossed predefined limits. While effective for detecting obvious faults, these approaches were inherently reactive and struggled to capture the complex, nonlinear behaviour of modern equipment operating under variable loads and conditions.

Deep learning signifies a fundamental transition from rule-based systems to data-driven intelligence. Instead of relying on manually encoded expert assumptions, deep learning models extract knowledge directly from historical and real-time data, identifying subtle, multi-parameter patterns that precede failures, often weeks in advance and prior to the activation of conventional alarms. These early indicators are typically undetectable when individual signals are analysed in isolation.

In addition to enhancing prediction accuracy, deep learning facilitates a strategic shift toward probabilistic and horizon-based maintenance planning. Maintenance decisions are guided by remaining useful life estimates and associated confidence levels, rather than binary fault alerts, enabling teams to prioritise interventions, manage operational risk, and align maintenance actions with production objectives. Several deep learning techniques are now widely applied in industrial environments.
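As a concrete illustration of horizon-based planning, the sketch below turns a remaining-useful-life estimate and its uncertainty into a scheduling decision. The decision rule, the z-value, and the one-week horizon are all illustrative assumptions, not part of any specific product or standard:

```python
# Hypothetical decision rule: schedule maintenance when the lower
# confidence bound on remaining useful life (RUL) drops inside the
# planning window, rather than waiting for a binary fault alert.
def should_schedule(rul_hours, rul_std, horizon_hours=168, z=1.64):
    """Plan an intervention if the ~95% lower bound on RUL
    falls within the next planning window (one week here)."""
    lower_bound = rul_hours - z * rul_std
    return lower_bound < horizon_hours

print(should_schedule(400, 50))   # False: comfortably beyond the horizon
print(should_schedule(250, 60))   # True: 250 - 98.4 = 151.6 < 168
```

The same estimate with a wider confidence band can flip the decision, which is exactly why the article stresses confidence levels alongside point predictions.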

Convolutional Neural Networks (CNNs)

CNNs are commonly used to analyse vibration spectrograms, thermal images, acoustic signatures, and visual inspection data. Subtle changes in these signals—often undetectable to human operators—can indicate early-stage bearing wear, imbalance, or surface degradation.
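Before a CNN ever sees the data, the raw accelerometer trace is typically converted into the 2-D spectrogram "image" it consumes. A minimal numpy-only sketch of that step, using an entirely synthetic signal (the 50 Hz shaft tone and 820 Hz "fault" tone are illustrative, not real bearing fault frequencies):

```python
import numpy as np

def vibration_spectrogram(signal, win=256, hop=128):
    """Short-time FFT magnitude: the 2-D 'image' a CNN would consume.

    Each column is the spectrum of one windowed frame, so early-stage
    wear shows up as energy drifting into characteristic fault bands.
    """
    window = np.hanning(win)
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames).T  # shape: (freq_bins, time_frames)

# Synthetic accelerometer trace: 50 Hz shaft rotation plus a weak
# high-frequency tone standing in for an emerging bearing fault.
fs = 4096
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 820 * t)
spec = vibration_spectrogram(sig)
print(spec.shape)  # (129, 31)
```

The weak 820 Hz component is barely visible in the time domain but occupies its own row in the spectrogram, which is the kind of subtle, localised pattern convolutional filters pick up.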

Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) Networks

Manufacturing equipment continuously generates time-series data from sensors embedded in motors, pumps, gearboxes, and actuators. LSTM models are particularly effective at learning long-term temporal dependencies, making them well-suited for predicting gradual wear, fatigue accumulation, and performance drift.
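The gating mechanism behind that long-term memory can be sketched in a few lines of numpy. This is one forward step of a standard LSTM cell with arbitrary toy weights, purely to show how the forget gate lets slow degradation trends persist across a long sensor sequence:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step (toy weights, hidden size H).

    The forget gate f decides how much of the running cell state c
    survives each step, which is what lets the model accumulate
    gradual wear signals instead of forgetting them like a plain RNN.
    """
    H = h.shape[0]
    z = W @ x + U @ h + b            # stacked pre-activations, shape (4H,)
    i = sigmoid(z[0:H])              # input gate
    f = sigmoid(z[H:2 * H])          # forget gate
    o = sigmoid(z[2 * H:3 * H])      # output gate
    g = np.tanh(z[3 * H:4 * H])      # candidate cell update
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
H, D = 8, 3                          # hidden size, sensor features per reading
W = rng.normal(scale=0.1, size=(4 * H, D))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)

h, c = np.zeros(H), np.zeros(H)
for reading in rng.normal(size=(100, D)):  # a 100-step sensor sequence
    h, c = lstm_step(reading, h, c, W, U, b)
print(h.shape)  # final hidden state, shape (8,)
```

In a real RUL model this final hidden state would feed a regression head that outputs the remaining-useful-life estimate.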

Autoencoders for Anomaly Detection

Autoencoders learn the normal operating behaviour of machines. When incoming data deviates from this learned baseline, the system flags anomalies that may signal emerging faults—even when labelled failure data is limited.
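Because a linear autoencoder with a low-dimensional bottleneck learns the same subspace as PCA, the core idea can be sketched without a deep learning framework: fit the "normal" subspace from healthy data only, then score incoming readings by reconstruction error. All data below is synthetic and the 6-channel/2-factor setup is an arbitrary assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Normal" operation: 6 correlated sensor channels driven by 2 latent factors.
latent = rng.normal(size=(500, 2))
mix = rng.normal(size=(2, 6))
normal = latent @ mix + 0.05 * rng.normal(size=(500, 6))

# A linear autoencoder with a 2-unit bottleneck is equivalent to PCA,
# so we fit the normal-behaviour subspace with an SVD.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = Vt[:2]                       # shared encoder/decoder weights

def reconstruction_error(x):
    """Anomaly score: how badly the learned 'normal' model rebuilds x."""
    code = (x - mean) @ basis.T      # encode
    recon = code @ basis + mean      # decode
    return float(np.mean((x - recon) ** 2))

# Simple alarm threshold: worst error seen on healthy training data.
threshold = max(reconstruction_error(row) for row in normal)

healthy = latent[0] @ mix            # lies in the learned subspace
faulty = healthy + np.array([0, 0, 3.0, 0, 0, 0])  # one channel drifts
print(reconstruction_error(healthy) <= threshold)   # True
print(reconstruction_error(faulty) > threshold)     # True
```

Note that no labelled failure data was needed: the faulty reading is flagged purely because it no longer fits the learned baseline, which is the property the article highlights.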

Practically, these models serve as digital reliability engineers by continuously monitoring assets and providing early warnings well in advance of potential production disruptions.

However, the effectiveness of learned intelligence is fundamentally dependent on the quality of the physical systems responsible for sensing, capturing, and transmitting data from the factory floor.

The Electronics Foundation Behind Predictive Intelligence

Deep learning-based predictive maintenance does not exist in isolation. Its effectiveness depends critically on the quality, reliability, and consistency of data originating from industrial electronics and sensing infrastructure. The performance of an AI model is fundamentally bounded by the fidelity of the signals it receives, which determines how accurately physical degradation mechanisms—such as bearing wear, insulation breakdown, or mechanical imbalance—are reflected in the data domain.

High-precision MEMS vibration sensors, thermal imaging modules, acoustic sensors, pressure sensors, and current-monitoring ICs form the data backbone of predictive systems. If these sensors are poorly calibrated, noisy, or inconsistently sampled, even the most advanced deep learning models will learn misleading patterns.

At the edge, industrial gateways and AI-capable processors facilitate local, low-latency analytics, thereby reducing reliance on cloud connectivity. This capability is particularly critical in sectors such as semiconductor manufacturing, automotive robotics, and power generation, where even milliseconds of delay or brief connectivity interruptions can result in significant consequences. In this context, sensors, edge processors, and industrial communication networks are foundational enablers of predictive intelligence rather than merely supporting components.

Industry Adoption: From Pilots to Production

Across sectors, deep learning-driven predictive maintenance is moving steadily from pilot projects to full-scale deployment.

Automotive Manufacturing

Automotive manufacturers increasingly apply AI-driven analytics to robotic assembly lines, analysing torque, vibration, and process parameters. These systems reduce unplanned downtime, stabilise quality, and support a transition from fixed maintenance schedules to condition-based strategies.

Aerospace and Aviation

Rolls-Royce remains a reference point in this domain. Through its Engine Health Monitoring and TotalCare programs, the company uses advanced analytics to anticipate component degradation, improve fleet availability, and enhance safety—demonstrating the long-term value of predictive intelligence in mission-critical systems.

Energy and Utilities

Power plants rely on deep learning models to detect early signs of turbine imbalance, transformer insulation ageing, and rotating equipment faults. Early detection reduces outage risk and supports more reliable grid operations.

Electronics and Semiconductor Manufacturing

In semiconductor fabs, where uptime and yield are paramount, AI-based diagnostics monitor temperature stability, vibration, and process consistency. Predictive maintenance plays a central role in maintaining the precision required for advanced chip fabrication.

Industry Perspective: Insights from the Field

According to Sanjeev Srivastava, an industry spokesperson with extensive experience in industrial automation and intelligent manufacturing systems, the evolution of predictive maintenance reflects a deeper transformation in how manufacturers approach reliability and operational efficiency.

He observes that the transition from rule-based monitoring to learning-driven intelligence enables organisations to detect early-stage stress and degradation patterns that would otherwise remain invisible until failure. In this view, predictive maintenance is no longer a standalone analytics initiative but an integral part of how modern factories manage uptime, energy efficiency, and long-term asset performance.

This perspective aligns with a broader industry consensus that deep learning–based predictive maintenance is increasingly influencing strategic decision-making at the factory level, moving beyond experimental deployments.

Practical Challenges That Still Matter

Despite its advantages, implementing deep learning–based predictive maintenance presents challenges that extend beyond algorithmic development. In practice, organisational and data-related constraints often prove more formidable than the technology itself.

Data Quality and Consistency

Deep learning models require large volumes of reliable data. Poor sensor calibration, noise, and inconsistent sampling can significantly degrade prediction accuracy.

Legacy Equipment Integration

Many factories operate a heterogeneous mix of new and ageing equipment that was never designed for continuous data sharing. Retrofitting sensors and integrating AI insights with existing PLCs, ERP, and MES systems requires careful engineering and cross-functional coordination.

Model Transparency and Trust

Maintenance engineers with decades of experience are unlikely to act on AI recommendations without understanding their rationale. Explainable AI techniques are therefore essential for building trust and encouraging adoption on the factory floor.

Scalability Across Sites

Models trained in one plant may not transfer directly to another due to differences in equipment, operating conditions, and maintenance practices. Hybrid cloud–edge architectures and continuous retraining are essential for enterprise-wide deployment.

Cost constraints and return-on-investment timelines also significantly influence adoption, especially when predictive maintenance initiatives compete with other capital priorities within the plant.

The Road Ahead for Predictive Maintenance

Several trends are shaping the next phase of predictive maintenance:

  • Edge AI for real-time, low-latency predictions
  • Digital twins that simulate asset behaviour and support model training without disrupting production
  • Federated learning to enable collaborative model improvement while preserving data privacy
  • AI-driven maintenance orchestration linking predictions with scheduling and spare-part logistics
  • Greater alignment with industrial standards such as IEC 62443 for cybersecurity and ISO 55000 for asset management

Digital twins provide substantial advantages; however, their effectiveness is contingent upon model fidelity and close synchronisation with real operational data. When anchored in live data rather than functioning as standalone simulations, digital twins serve as powerful tools for model training and workforce preparation without incurring downtime risks.

Edge AI, meanwhile, brings intelligence closer to the machine, enabling resilient, real-time decision-making even in connectivity-constrained environments. Together, these technologies are shaping more autonomous, responsive, and scalable maintenance systems.

Importantly, the future of predictive maintenance does not entail replacing engineers. Rather, artificial intelligence augments human expertise by managing continuous monitoring, anomaly detection, and early warning tasks at a scale unattainable by human teams alone.

Conclusion: A Strategic Shift, Not a Technology Trend

Deep learning–based predictive maintenance is transforming the management of reliability, efficiency, and risk in the context of Smart Manufacturing 4.0. Through the integration of advanced algorithms, robust industrial electronics, and edge computing, organisations are able to anticipate failures, minimise downtime, extend asset lifespans, and enhance safety.

While challenges remain, momentum across industries is unmistakable. Advances in explainable AI, digital twins, and edge intelligence are accelerating adoption and lowering practical barriers.

Engineers continue to play a critical role in interpreting predictions, balancing safety with production priorities, and making high-impact decisions. It is in this collaboration between human judgment and machine intelligence that the promise of predictive maintenance is realised. Deep learning–based predictive maintenance should therefore be regarded not as an isolated artificial intelligence initiative, but as the foundational reliability backbone on which Smart Manufacturing 4.0 is being built.

The post Deep Learning-Based Predictive Maintenance: The Backbone of Smart Manufacturing 4.0 appeared first on ELE Times.

A battery charger that loudly hums: Dump it or just make it dumb?

EDN Network - Mon, 03/16/2026 - 19:37

An archaic DieHard device has seemingly died hard; is hacking it to resurrect a portion of its original function a worthwhile endeavor?

A decade-plus ago, shortly after moving (part-time, at the time) to Colorado, I came across a smoking (no pun intended…keep reading) deal at Sears: a 12V vehicle battery charger supporting both standard SLA (sealed lead acid) and AGM (absorbed glass mat) cells, along with 2A, 10A and 50A (!!!) charging current options, for $32.99. I bought two, one for me and the other for my then-girlfriend (and now-wife), since we had separate residences at the time.

I’ve held onto both—in spite of the fact that I also now own several newer microprocessor-controlled (versus this transformer-based model) chargers, not only significantly more compact but offering enhanced features such as desulfation support—primarily due to the 50A jump-start capability that only the old-school DieHard charger seemingly delivers.

Geriatric degradation

When I fired one of them up a few months back after not using either of them for a while, though, I noticed that it was making a loud humming sound—incrementally louder at the 2A, then 10A, and finally 50A settings, as I’d recollected from the past—but much louder at each output option than I’d remembered. To confirm, I pulled the other charger out of its box, which also hummed but at the noticeably lower din that I’d recalled. Plus, the first charger didn’t seem to be doing anything charging-wise, whereas the second still seemingly worked fine.

Here’s the first (loud humming) charger, which I re-hooked up just yesterday to my 2001 Volkswagen Eurovan Camper (which uses a standard SLA, not AGM, battery), at the 2A setting:

10A setting:

and 50A setting:

The gauge readings don’t seem to make sense in any of these cases. As background, I top off the charge (normally using one of my more modern chargers) on the battery in the in-storage van once a month at the beginning of the month. I took those photos a bit more than halfway through the month, after a small amount of leakage discharge had inevitably occurred (less than at the end of the month, but still not nothing). So, the full-charge indication seemingly doesn’t reflect reality. By comparison, the 2A- and 10A-setting displays when using the second (lower humming) charger are more in line with my expectations:

as is the second charger’s 50A-setting display, which I’ve shot as a video because this time, unlike previously, the LED is rapid-blinking as expected:

A 0V output isn’t always bad news

Just prior to taking the prior photos yesterday, I’d actually begun my investigation by hooking both chargers up to my multimeter to see what they were outputting. Here’s the first (loud humming) charger at its 2A, 10A, and 50A settings, first configured for use with an AGM battery:

and then set for a standard SLA battery:

The output levels were, I initially (albeit incorrectly) ascertained, in the ballpark of what one would expect for a 12V battery charging target, although perhaps a bit low. Now look at what happened when I hooked the second (lower humming) charger up, again at its 2A, 10A and 50A settings, first configured for use with an AGM battery and then a standard SLA battery:

I’ve saved you from looking at six consecutive images of the multimeter displaying the exact same thing: 0V. This initial outcome actually had me wondering whether the second (lower humming) charger was the one that had “gone south”, until I did a bit of online research and learned that this behavior is to be expected. Unless the charger detects that it’s connected to a correct-polarity battery that isn’t already drained (hold that thought), it will disable its output, among other reasons, to prevent sparking in the presence of hydrogen and other off-gassing.

Some amount of transformer hum is to be expected, of course, as many folks reading this already realize; the root-cause phenomenon is known as magnetostriction and results in a generated tone at twice the mains AC frequency (i.e., at 120 Hz in the U.S., for example):

Additional hum sources, quoting Wikipedia, are “stray magnetic fields causing the enclosure and accessories to vibrate.” And it’s also normal for the hum volume to increase somewhat under higher load. Abnormally loud hum and other noise, however, is the result of other, degradation-induced factors, such as progressive disintegration of the transformer’s core adhesive, resulting in separation of the laminated layers, or a rattle caused by loose component mounting bolts.
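The "twice the mains frequency" claim is easy to verify numerically: synthesize a hum whose core flexes twice per AC cycle and locate the spectral peak. A minimal numpy sketch with synthetic data (not a measurement of the DieHard charger):

```python
import numpy as np

# Magnetostriction flexes the core twice per AC cycle, so the dominant
# hum component lands at 2 x the mains frequency. A quick synthetic check:
fs = 8000                      # sample rate, Hz
mains = 60                     # U.S. mains frequency, Hz
t = np.arange(fs) / fs         # one second of "recorded" hum
# Core hum at 120 Hz plus a weaker 240 Hz harmonic and some noise.
hum = (np.sin(2 * np.pi * 2 * mains * t)
       + 0.3 * np.sin(2 * np.pi * 4 * mains * t)
       + 0.05 * np.random.default_rng(0).normal(size=fs))

spectrum = np.abs(np.fft.rfft(hum))
freqs = np.fft.rfftfreq(fs, d=1 / fs)
print(freqs[np.argmax(spectrum)])  # 120.0
```

With one second of data the FFT bins land exactly on integer frequencies, so the peak sits squarely at 120 Hz for 60 Hz mains (it would be 100 Hz on 50 Hz mains).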

(Sorta-) twin sons of different mothers

At this point, I’ll point out something else interesting (at least to me) that my research uncovered: there were (at least) two different internal designs that reached production for this particular DieHard charger. It’s the model 71222; as you can see from this closeup of the outer box, mine’s specifically a model 28.71222 (here’s a link to the user manual):

But in searching around, I also came across references to another version, the model 200.71222, along with another user manual link (this one even including a parts list and wiring diagram!). The two variants seem functionally identical from a high-level description standpoint and look similar from the outside, too, aside from a multicolor front panel motif in the model 200.71222:

versus my more monochrome model 28.71222. But the insides are a different matter…

At this point, I’ll point out another “information” (I’m using the term somewhat loosely) source that I came across during my research: this video:

Bonus points to Jason Hemphill, the video creator, for knowing (for example) the difference between the transformer’s primary and secondary sides, as well as for (sorta) explaining the purpose of the two diodes connected to the transformer’s center-tapped secondary. But when, in pointing out what he called the “little smart board”, he voiced the following elucidation:

These wires over here…they’re just control…they don’t do anything…

I admittedly started shaking my head. And when, with the charger still powered up, he then yanked the “little smart board’s” fourth (black) wire out of what it was plugged into at its other end (item 7, the 35A circuit breaker, if you’ve already cross-referenced the parts list and wiring diagram in the user manual I pointed out to you earlier), I about fell out of my chair. And then I realized that although his charger was also a DieHard model 71222, it didn’t look like mine on the inside; I hadn’t yet noticed the front-panel motif variance between the two.

Looking “under the hood”

At this point, I’ll transition to the teardown portion of my write-up, before returning and concluding. Beginning with the obligatory outer box shots:

Can’t forget this all-important one…😂

I next opened it up:

and then pulled out the contents (I later found the paper user manual in my filing cabinet):

Convenient carry handle:

Only after connecting the charger to the battery and selecting the desired settings should you, and I quote, “Plug the charger into a live AC power outlet”. Further, “Unplug the AC cord before disconnecting the battery clips”….as well as prior to unplugging internal cabling, yes?

Back off my soapbox

and back to the backside to uncoil the power cord:

Don’t worry, I won’t ascend the soapbox again. That said…

And now to dive inside. You may have noticed the four screw heads on the sides, two per side. Guess what comes next?

That got me partway there:

Oh yeah, there’s another screw head on the underside:

But grandma, what a big transformer you have!

At this point, I was still clinging to the delusion that this charger might be working (I hadn’t yet found Jason Hemphill’s video), so I didn’t disassemble it further. Still, I hope the photos of the internals of my model 28.71222 will be educational for you, not only standalone but also in comparison to Hemphill’s presumed model 200.71222. Here, first off, is the rear-located internal PCB, both much larger than the one in Hemphill’s charger and with an integrated circuit breaker (more accurately stated: fuse pair):

Check out the sizeable SCRs (silicon-controlled rectifiers) and discrete transistors bolted to metal heat-transfer plates on either side of the PCB!

An inner view of the front panel, with the charging current switch at lower left, the gauge at upper right and the AGM-vs-standard SLA switch below it:

And, last but definitely not least, the predominant contributor to the unit’s ~11 lb weight, the transformer. Here’s the primary winding:

And the secondary:

and finally, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes, perspectives of the top (primary at left, secondary at right):

and one side (ditto):

Old-school pros, cons and conclusions

Note that the output voltage Jason Hemphill was getting out of his charger prior to his “hack” is a close approximation of what I’m seeing with mine at its 50A setting. He indicated in his video that he exclusively uses his “fixed” unit at 50A, so I’m assuming his entire video was also shot with it configured that way (I couldn’t find a sufficiently clear video frame of the front panel to confirm). And by the way, in scrolling through the comments and his responses, I realized I owed him more credit than the little I’d initially allocated (with minor grammar tweaks by yours truly):

You are exactly right. It is not fixed. And you’re also right; it’s likely to be a 10-cent transistor. But most people will not be able to fix the transistor issue. They won’t spend the time to find it, order it and replace it. The solution I’m offering is to turn it into an old school charger. It takes out the safety technology. This solution is an option for people who are old school and are used to working with things that way. As I stated in the video, this isn’t an option to hook up to a battery and walk away. So, if you’re someone who can’t hook up a battery right or doesn’t understand the idea of overcharging and needs idiot-proof technology to do that for you, this isn’t your option: go buy a new one. But if you’re old school, this will do the job.

Further perusing the 100+ comments (resulting from 115,000+ views to date!) of Jason Hemphill’s video was not only educational but also entertaining. I learned, for example, that the DieHard model 200.71222 is internally identical to the Schumacher Electric (the original developer, I’m assuming) SE5212A charger. No idea who originally developed my DieHard model 28.71222, however. And even if I did, I’m not going to try to resurrect this one, even though plenty of other folks seemingly prefer fully manual chargers.

Sure, by bypassing the “little smart board,” the now-manual charger might attempt to resurrect a fully drained battery, but my more modern chargers already do the same thing. They, plus the still-working sibling to my malfunctioning model 28.71222, will also automatically shut off at the end of the charging cycle, versus overcharging and potentially ruining the battery (not to mention causing other potential broader problems).  And they’ll also save me from calamity should I distractingly hook up the charger to the battery in a reverse polarity state.

Thoughts on the topics discussed and internal circuitry revealed in today’s piece? Let me know in the comments! If you’re interested in inheriting this charger and converting it to a manual version yourself (note that I take no financial or other responsibility for any subsequent calamities), send me an email! And by the way, if you’re interested in finding out more about how car battery testers work, head here!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post A battery charger that loudly hums: Dump it or just make it dumb? appeared first on EDN.

Nagoya University and NU-Rei report first gallium oxide thin-film epi growth on silicon

Semiconductor today - Mon, 03/16/2026 - 18:43
At the Japan Society of Applied Physics (JSAP) Spring Meeting 2026 at the Institute of Science Tokyo (15-18 March), a research group from Nagoya University’s Center for Low-temperature Plasma Sciences, in collaboration with university spinout NU-Rei Co Ltd, is presenting six advances in the growth of gallium oxide (Ga2O3), which has strong potential for next-generation power devices used in electric vehicles, power conversion systems, and space applications. Gallium oxide is attracting growing interest in the power semiconductor industry because it can, in principle, produce higher-voltage devices with relatively abundant, lower-cost raw materials...

EPC91202 evaluation board added for three-phase BLDC motor drive inverter

Semiconductor today - Mon, 03/16/2026 - 18:31
Efficient Power Conversion Corp (EPC) of El Segundo, CA, USA — which makes enhancement-mode gallium nitride on silicon (eGaN) power field-effect transistors (FETs) and integrated circuits for power management applications — has introduced the EPC91202 evaluation board, a complete three-phase brushless DC (BLDC) motor drive inverter designed to accelerate the development of high-efficiency motor drive applications in robotics, e-mobility, drones, industrial automation, and battery-powered systems...
