ELE Times
Keysight to Showcase Quantum-AI Collaboration at GTC 2025 with NVIDIA NVQLink
Keysight Technologies, Inc. announced its support for the development of the new NVIDIA NVQLink open architecture, which enables low-latency coupling of quantum processors with AI supercomputing. Keysight is working with NVIDIA to advance hybrid quantum–AI computing through high-performance control systems and AI-driven infrastructure.
Disaggregated computer architecture is redefining the future of high-performance computing (HPC), enabling organizations to meet rapidly evolving computational demands with greater agility and efficiency. By decoupling compute, memory, storage, and networking into composable resource pools, this approach allows on-demand configuration and precise resource allocation, maximizing performance while optimizing both scalability and cost.
As industries push toward increasingly complex and data-intensive workloads spanning artificial intelligence, data analytics, large-scale simulations, and quantum computing, disaggregated systems deliver the flexible, future-proof foundation required to sustain innovation at scale. With seamless upgrades, improved utilization, and dynamic adaptability, these architectures are poised to become the cornerstone of next-generation HPC infrastructure, driving breakthroughs across science and industry.
With decades of experience designing and enabling large-scale systems, Keysight is advancing the integration of quantum and classical computing technologies to address evolving computational challenges. The company’s Quantum Control System (QCS) enables precise, scalable quantum experimentation and plays a vital role in the emerging quantum ecosystem. Working with NVIDIA NVQLink and NVIDIA CUDA-Q to explore how quantum control systems and classical accelerators can be harnessed together, Keysight is helping organizations prepare for a new era of hybrid computing—one that enables quantum-enhanced AI, ultra-precise simulations, and advanced modeling, while remaining adaptable to future advancements across both quantum and classical domains.
This initiative marks a significant milestone for Keysight, uniting high-performance control systems with AI-driven infrastructure to accelerate quantum research and hybrid compute development.
Dr. Eric Holland, General Manager, Keysight Quantum Engineering Solutions, said: “As the industry accelerates toward the next era of high-performance computing, leadership means more than building breakthrough technologies; it requires defining the standards that make these transformative technologies universally accessible. By working with NVIDIA to establish a framework for quantum–HPC hybrid compute, we are helping ensure that tomorrow’s heterogeneous engines, spanning quantum, AI, and classical HPC, operate seamlessly within modern data centers. Together, we’re shaping the future fabric of compute for scientific discovery and innovation at scale.”
Tim Costa, General Manager for Quantum at NVIDIA, said: “Driving breakthroughs in quantum computing requires quantum processors to integrate within AI supercomputers to run complex control tasks and deploy hybrid applications. Keysight is playing an integral role in solving this challenge, and NVQLink is the open unified interface for developing what comes next.”
The post Keysight to Showcase Quantum-AI Collaboration at GTC 2025 with NVIDIA NVQLink appeared first on ELE Times.
Centre Clears ₹5,532 Crore Investment for Seven Electronics Manufacturing Projects
In a significant effort to enhance India’s electronics ecosystem, the Union Government has cleared an investment of ₹5,532 crore for seven projects under the Electronics Components Manufacturing Scheme (ECMS). The initiative aims to further India’s transition from assembling imported components to manufacturing core electronic materials and parts within the country.
Union Electronics and IT Minister Ashwini Vaishnaw described the projects as a “transformational step” towards developing a self-reliant and innovation-led electronics manufacturing ecosystem.
The recently cleared projects, distributed across Tamil Nadu (5 units), Andhra Pradesh (1 unit), and Madhya Pradesh (1 unit), will create over ₹36,000 crore worth of component production and generate over 5,000 direct employment opportunities.
The ECMS will facilitate local manufacturing of key components like Multi-Layer and HDI PCBs, Camera Modules, Copper Clad Laminates (CCL), and Polypropylene Films. These components form the backbone of thousands of diverse products, ranging from smartphones and electric vehicles to medical devices and defence technology.
The cleared projects will satisfy some 20% of India’s domestic demand for PCBs and 15% of its camera module needs, while the production of CCLs will be entirely localized, with 60% of the output being export-focused.
The ECMS initiative has drawn a strong response from industry players, with 249 applications already submitted, signaling robust interest in the program. Combined, they amount to potential investments of ₹1.15 lakh crore, production value of ₹10.34 lakh crore, and 1.42 lakh job opportunities, the largest-ever pledge in India’s electronics industry.
The program is likely to sharply reduce import dependence, improve supply chain resilience, and attract high-skill employment in manufacturing and R&D. The components produced under ECMS will help feed key industries like defence, telecommunication, renewable energy, and electric vehicles.
Vaishnaw highlighted that ECMS synergizes with flagships such as the Production Linked Incentive (PLI) scheme and the India Semiconductor Mission (ISM).
“India is transforming from being an assembling country to a product country: designing, producing, and exporting sophisticated electronic gear. ECMS fills the critical gap between devices and components, and between manufacturing and innovation,” he added.
With this approval, India takes another decisive step towards becoming a global electronics manufacturing hub, driven by indigenous innovation, large-scale investment, and increasing self-reliance.
Nuvoton’s M55M1 AI MCU Debuts with Built-in NPU for Entry-Level AI Performance
Nuvoton Technology has launched its latest generation AI microcontroller, the NuMicro M55M1, specifically designed for edge applications such as AI data recognition and intelligent audio. Positioned as a rare entry-level AI solution in the market, the M55M1 integrates an NPU delivering up to 110 GOPS of AI computing power, providing over 100 times the inference performance of traditional 1 GHz MCUs. Paired with Nuvoton’s self-developed NuML Tool Kit, it enables developers to quickly get started with AI applications in a familiar MCU development environment. A variety of AI models are also available for trial, including face recognition, object detection, audio command recognition, and anomaly detection, effectively lowering the technical barrier and accelerating product deployment.
To meet diverse AI application scenarios, the M55M1 is a 32-bit microcontroller based on the Arm Cortex-M55 core, equipped with Arm Ethos-U55, offering up to 110 GOPS of computing power and a built-in Helium vector processor. Compared to Arm’s existing DSPs, it delivers up to 15 times higher performance. To address AI model requirements, it provides up to 1.5 MB of RAM, 2 MB of Flash, and supports external HyperRAM/OctoSPI expansion. The M55M1 not only features powerful computing capabilities and a flexible architecture but also offers a highly integrated development environment. Through the NuML Tool Kit, developers can easily port AI models to the M55M1 platform using familiar MCU firmware development methods. This architecture is suitable for a wide range of edge AI applications, such as predictive maintenance analysis for factory equipment, analysis for various home appliances and medical sensing devices, as well as endpoint AI applications like keyword spotting, echo cancellation, and image recognition.
On the general MCU operation side, the M55M1 is equipped with a Cortex-M55 core running at up to 220 MHz and offers five low-power modes. It also supports a wide range of peripherals, including CCAP, DMIC, I2C, SPI, Timer, UART, ADC, and GPIO, all of which can operate in low-power modes. In addition, the M55M1 features multi-level security mechanisms, including secure boot, Arm TrustZone, a hardware crypto engine, and Arm PSA Certified Level 2 compliance, providing reliable protection for IoT and embedded applications.
Anritsu Supports EU Market Expansion by Ensuring Safety and Compliance of 5G Wireless Devices
Anritsu Corporation has enhanced the functions of its New Radio RF Conformance Test System ME7873NR to support 5G wireless device conformance tests and compliance with the ETSI EN 301 908-25 standard under the European Radio Equipment Directive (RED).
By using these enhanced functions, manufacturers can ensure regulatory compliance for 5G wireless devices sold in the EU and guarantee product quality and reliability. Anritsu is dedicated to supporting smooth market entry for products into the EU.
RED is an EU legal framework defining the safety, electromagnetic compatibility (EMC), radio-spectrum efficiency, and cybersecurity requirements of wireless devices in the EU. With the spread of wireless technologies such as 5G, the ETSI EN 301 908-25 standard for 5G NR devices has been established based on 3GPP Release 15, which regulates 5G specifications, and wireless products now sold in the EU must comply with this standard.
Through this latest enhancement, Anritsu continues to play a key role in deployment of commercial 5G services, helping create a 5G-empowered society.
The New Radio RF Conformance Test System ME7873NR is a 5G test platform compliant with 3GPP standards and is certified by both the Global Certification Forum (GCF) and PCS Type Certification Review Board (PTCRB).
In addition to supporting Frequency Range 1 (FR1, Sub-6 GHz), combining the system with an OTA (CATR) chamber adds support for Frequency Range 2 (FR2, mmWave). The flexible configuration and customizable design provide an upgrade path from current ME7873LA systems, offering enhanced 5G compatibility at a lower capital cost.
High-Accuracy Time Transfer Solution Delivers Sub-Nanosecond Timing Up to 800 km via Long-Haul Optical Networks
Governments across the globe are urging critical infrastructure operators to adopt additional time sources alongside GNSS to enhance resilience and reliability, ensuring uninterrupted operations in the face of potential disruptions or service limitations. Microchip Technology announced the release of the TimeProvider 4500 v3 grandmaster clock (TP4500), designed to deliver sub-nanosecond accuracy for time distribution across 800 km of long-haul optical transmission.
This solution provides critical infrastructure operators with the missing link the industry has been waiting for in complementary Positioning, Navigation and Timing (PNT). The TP4500 provides a resilient, terrestrial source of precise timing in the absence of Global Navigation Satellite Systems (GNSS), alleviating the physical-obstruction, security, and signal-interference costs associated with GNSS-dependent deployments.
Most current deployments require GNSS at grandmaster sites, but the TP4500 enables highly resilient synchronization without relying on GNSS. The TP4500 supports time references provided by UTC(k), the UTC realizations maintained by national laboratories, and is the first grandmaster to offer a premium capability that delivers High Accuracy Time Transfer (HA-TT) as defined by ITU-T G.8271.1/Y.1366.1 (01/2024), meeting a 5 nanosecond (ns) time error over 800 km (equating to a 500 picosecond (ps) average per node, assuming 10 nodes) and setting a new industry benchmark for accuracy.
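The quoted per-node figure follows directly from dividing the end-to-end budget by the assumed hop count; a quick sanity check of the arithmetic (illustrative only, not a Microchip tool):

```python
# Sanity-check the per-node timing budget quoted for HA-TT:
# a 5 ns end-to-end budget over 800 km, split evenly across
# 10 cascaded nodes.
def per_node_budget_ps(total_budget_ns: float, nodes: int) -> float:
    """Average time-error budget per node, in picoseconds."""
    return total_budget_ns * 1000.0 / nodes

budget = per_node_budget_ps(total_budget_ns=5.0, nodes=10)
print(budget)  # 500.0 ps average per node, matching the quoted figure
```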
The TP4500 system can be configured with multiple operation modes to form an end-to-end architecture known as virtual PRTC (vPRTC), capable of delivering PRTC accuracy over a long-distance optical network. vPRTC is a carrier-grade architecture for terrestrial distribution of HA-TT, which has been widely deployed in operator networks throughout the world. HA-TT is a proven and cost-effective approach, as opposed to other alternative PNT solutions that have no wide adoption into critical infrastructure networks to date, have low Technology Readiness Levels (TRL) and are still dependent on GNSS as the ultimate source of time.
“The TimeProvider 4500 v3 grandmaster is a breakthrough solution that empowers operators to deploy a terrestrial, standards-based timing network with unprecedented accuracy and resilience,” said Randy Brudzinski, corporate vice president of Microchip’s frequency and time systems business unit at Microchip. “This innovation reflects Microchip’s commitment to delivering the most advanced and reliable timing solutions for the world’s most essential services.”
TimeProvider 4500 v3 is a key stepping stone towards support of the ITU-T G.8272.2 standard, whose amendment 2 (2024) defines a coherent network reference time clock (cnPRTC). A cnPRTC architecture ensures highly accurate, resilient, and robust timekeeping throughout a telecom network. This allows stable, network-wide ePRTC time accuracy, even during periods of regional or network-wide GNSS unavailability or other failures and interruptions.
Key features of the TimeProvider 4500 v3 series:
- Sub-nanosecond accuracy: Delivers 5 ns time delay over long distances up to 800 km
- Terrestrial alternative to GNSS: Enables critical infrastructure to operate with resilient synchronization mechanisms independent of GNSS
- Seamless integration: Standards-based terrestrial network for time transfer, easily integrated with off-the-shelf small form-factor pluggable and existing Ethernet and optical deployments
- Exclusive capability: Premium software features available only on the TP4500 v3, integrating Microchip’s PolarFire FPGA and Azurite synthesizer for unmatched precision
Optimized for telecom, utilities, transportation, government, and defense, the TP4500 grandmaster ensures precise and resilient timing where it matters most. This latest version provides operators with a scalable solution for secure and reliable time distribution over long distances.
Infineon adds SPICE-based model generation to IPOSIM platform for more accurate system-level simulation
The Infineon Power Simulation Platform (IPOSIM) from Infineon Technologies AG is widely used to calculate losses and thermal behavior of power modules, discrete devices, and disc devices. The platform now integrates a SPICE-based model generation tool that incorporates external circuitry and gate driver selection into system-level simulations. The tool delivers more accurate results for static, dynamic, and thermal performance, taking into consideration non-linear semiconductor physics of the devices. This enables advanced device comparison under a wide range of operating conditions and faster design decisions. Developers can also customize their application environment to reflect real-world operating conditions directly within the workflow. As a result, they can optimize the application performance, shorten time-to-market, and reduce costly design iterations. IPOSIM integrates SPICE to support a wide range of applications where switching power and thermal performance are critical, including electric vehicle (EV) charging, solar, motor drives, energy storage systems (ESS), and industrial power supplies.
In the global transition to a decarbonized future, power electronics are essential for enabling cleaner energy systems, sustainable transportation, and more efficient industrial processes. This transformation increases the demand for advanced simulation and validation tools that allow designers to innovate early in the development cycle. At the same time, they must deliver highly efficient, high-power-density designs such as EV chargers, solar inverters, motor drives, and industrial power supplies, while minimizing design iterations and reducing development costs. Switching losses and thermal performance are decisive factors in this process, yet traditional hardware testing remains time-consuming, costly, and limited in capturing real-world conditions.
With the integration of SPICE, IPOSIM brings the simulation of real switching behavior fully online and helps users optimize their designs at an early stage of the development process. By extending system simulation to real-world conditions, the models make it possible to factor in critical parameters such as stray inductance, gate voltage and dead time. The device characterization reflects the switching behavior under more realistic operating scenarios, taking the selected gate driver into account. The capability is fully integrated into IPOSIM’s multi-device comparison workflow, enabling users to select devices marked with the SPICE icon, configure application environments, and follow a guided simulation process. With its system-level accuracy and intuitive workflow, IPOSIM’s new SPICE-based models enable faster device selection and more reliable design decisions.
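As a rough illustration of why switching behavior dominates these trade-offs, a first-order textbook estimate of switching loss might look like the sketch below. This is a generic approximation, not Infineon's SPICE models, which capture non-linear device physics, gate-driver interaction, stray inductance, and dead time far beyond this; all numbers are invented for illustration.

```python
def switching_energy_j(v_bus: float, i_load: float,
                       t_rise_s: float, t_fall_s: float) -> float:
    """First-order per-cycle switching energy: E ≈ 0.5 * V * I * (tr + tf)."""
    return 0.5 * v_bus * i_load * (t_rise_s + t_fall_s)

def switching_loss_w(v_bus: float, i_load: float,
                     t_rise_s: float, t_fall_s: float, f_sw_hz: float) -> float:
    """Average switching loss at switching frequency f_sw."""
    return switching_energy_j(v_bus, i_load, t_rise_s, t_fall_s) * f_sw_hz

# Illustrative operating point: 800 V bus, 20 A, 20 ns edges, 50 kHz switching
print(switching_loss_w(800, 20, 20e-9, 20e-9, 50e3))  # 16.0 W
```

A SPICE-level model replaces the linear-edge assumption with measured device characteristics, which is exactly the gap the IPOSIM integration addresses.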
Top 10 Agentic AI Threats and How to Defend Against Them
Author: Saugat Sindhu, Global Head – Advisory Services, Cybersecurity & Risk Services, Wipro Limited
October is Cybersecurity Awareness Month, and this year, one emerging frontier demands urgent attention: Agentic AI.
India’s digital economy is booming — from UPI payments to Aadhaar-enabled services, from smart manufacturing to AI-powered governance. But as artificial intelligence evolves from passive large language models (LLMs) into autonomous, decision-making agents, the cyber threat landscape is shifting dramatically.
These agentic AI systems can plan, reason, and act independently — interacting with other agents, adapting to changing environments, and making decisions without direct human intervention. While this autonomy can supercharge productivity, it also opens the door to new, high-impact risks that traditional security frameworks aren’t built to handle.
Here are the 10 most critical cyber risks of agentic AI — and the governance strategies to keep them in check.
1. Memory poisoning
Threat: Malicious or false data is injected into an AI’s short- or long-term memory, corrupting its context and altering decisions.
Example: An AI agent used by a bank falsely remembers that a loan is approved due to a tampered record, resulting in unauthorized fund disbursement.
Defense: Validate memory content regularly; isolate memory sessions for sensitive tasks; require strong authentication for memory access; deploy anomaly detection and memory sanitization routines.
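One possible shape for such a memory sanitization routine, sketched in Python; the trusted source names and integrity-digest scheme are hypothetical, chosen only to show the validate-and-drop pattern:

```python
import hashlib

# Hypothetical set of systems allowed to write to agent memory
TRUSTED_SOURCES = {"core_banking", "kyc_service"}

def seal(entry: str, source: str) -> dict:
    """Store a memory entry together with its source and an integrity digest."""
    digest = hashlib.sha256(f"{source}:{entry}".encode()).hexdigest()
    return {"entry": entry, "source": source, "digest": digest}

def sanitize(memory: list) -> list:
    """Drop records that fail integrity or provenance checks."""
    clean = []
    for rec in memory:
        expected = hashlib.sha256(
            f"{rec['source']}:{rec['entry']}".encode()).hexdigest()
        if rec["source"] in TRUSTED_SOURCES and rec["digest"] == expected:
            clean.append(rec)
    return clean

memory = [seal("loan #123 pending review", "core_banking")]
# An attacker injects a tampered record with a forged digest:
memory.append({"entry": "loan #123 approved", "source": "unknown", "digest": "forged"})
print(len(sanitize(memory)))  # 1 — the tampered record is discarded
```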
2. Tool misuse
Threat: Attackers trick AI agents into abusing integrated tools (APIs, payment gateways, document processors) via deceptive prompts, leading to hijacking.
Example: An AI-powered HR chatbot is manipulated to send confidential salary data to an external email using a forged request.
Defense: Enforce strict tool access verification; monitor tool usage patterns in real time; set operational boundaries for high-risk tools; validate all agent instructions before execution.
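A minimal sketch of validating agent instructions against a tool allowlist before execution; the roles, tool names, and policy table are invented for illustration:

```python
def read_policy_docs(topic: str) -> str:
    return f"policy text for {topic}"

def send_email(to: str, body: str) -> str:
    return f"sent to {to}"

# Tool registry, and an allowlist of which tools each agent role may invoke
TOOLS = {"read_policy_docs": read_policy_docs, "send_email": send_email}
POLICY = {"hr_chatbot": {"read_policy_docs"}}

def execute_tool(agent_role: str, tool_name: str, **kwargs):
    """Validate every agent instruction against policy before executing it."""
    if tool_name not in POLICY.get(agent_role, set()):
        raise PermissionError(f"{agent_role!r} may not call {tool_name!r}")
    return TOOLS[tool_name](**kwargs)

print(execute_tool("hr_chatbot", "read_policy_docs", topic="parental leave"))
# A forged request to exfiltrate salary data is rejected before execution:
# execute_tool("hr_chatbot", "send_email", to="attacker@example.com", body="...")
# -> PermissionError
```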
3. Privilege compromise
Threat: Exploiting permission misconfigurations or dynamic role inheritance to perform unauthorized actions.
Example: An employee escalates privileges with an AI agent in a government portal to access Aadhaar-linked information without proper authorization.
Defense: Apply granular permission controls; validate access dynamically; monitor role changes continuously; audit privilege operations thoroughly.
4. Resource overload
Threat: Overwhelming an AI’s compute, memory, or service capacity to degrade performance or cause failures — especially dangerous in mission-critical systems like healthcare or transport.
Example: During festival season, an e-commerce AI agent gets flooded with thousands of simultaneous payment requests, causing transaction failures.
Defense: Implement resource management controls; use adaptive scaling and quotas; monitor system load in real time; apply AI rate-limiting policies.
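An AI rate-limiting policy can be as simple as a per-agent token bucket that sheds load beyond a sustained rate; a minimal sketch (the rate and burst parameters are illustrative):

```python
import time

class TokenBucket:
    """Per-agent rate limiter: serve a burst, then refuse excess requests."""
    def __init__(self, rate_per_s: float, burst: int):
        self.rate = rate_per_s
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_s=100, burst=10)
accepted = sum(bucket.allow() for _ in range(1000))
print(accepted)  # roughly the 10-request burst; the flood is shed
```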
5. Cascading hallucination attacks
Threat: AI-generated false but plausible information spreads through systems, disrupting decisions — from financial risk models to legal document generation.
Example: An AI agent in a stock trading platform generates a misleading market report, which is then used by other financial systems, amplifying the error.
Defense: Validate outputs with multiple trusted sources; apply behavioural constraints; use feedback loops for corrections; require secondary validation before critical decisions.
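Secondary validation against multiple trusted sources can be sketched as a simple quorum check: a claim is accepted only when enough independent sources agree. The feeds and claims below are invented for illustration:

```python
from collections import Counter

def validated(reports, quorum=2):
    """Accept a claim only when at least `quorum` independent sources agree."""
    value, count = Counter(reports).most_common(1)[0]
    return value if count >= quorum else None

# Three hypothetical market-data sources; one agent hallucinates.
print(validated(["AAPL down 2%", "AAPL down 2%", "AAPL up 40%"]))  # AAPL down 2%
print(validated(["AAPL up 40%"]))  # None — an unconfirmed report is held back
```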
6. Intent breaking and goal manipulation
Threat: Attackers alter an AI’s objectives or reasoning to redirect its actions.
Example: A procurement AI in a company is manipulated to always select a particular vendor, bypassing competitive bidding.
Defense: Validate planning processes; set boundaries for reflection and reasoning; protect goal alignment dynamically; audit AI behaviour for deviations.
7. Overwhelming human overseers
Threat: Flooding human reviewers with excessive AI output to exploit cognitive overload — a serious challenge in high-volume sectors like banking, insurance, and e-governance.
Example: An insurance company’s AI agent sends hundreds of claim alerts to staff, making it hard to spot genuine fraud cases.
Defense: Build advanced human-AI interaction frameworks; adjust oversight levels based on risk and confidence; use adaptive trust mechanisms.
8. Agent communication poisoning
Threat: Tampering with communication between AI agents to spread false data or disrupt workflows — especially risky in multi-agent systems used in logistics or defense.
Example: In a logistics company, two AI agents coordinating deliveries are fed false location data, sending shipments to the wrong city.
Defense: Use cryptographic message authentication; enforce communication validation policies; monitor inter-agent interactions; require multi-agent consensus for critical decisions.
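Cryptographic message authentication between agents can be sketched with a shared-key HMAC; the key and payload below are placeholders, and a production system would use provisioned per-pair keys or digital signatures:

```python
import hashlib
import hmac
import json

SHARED_KEY = b"replace-with-a-provisioned-per-pair-key"  # placeholder

def sign(message: dict) -> dict:
    """Wrap a message with an HMAC tag over its canonical serialization."""
    payload = json.dumps(message, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": message, "tag": tag}

def verify(envelope: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    payload = json.dumps(envelope["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["tag"])

msg = sign({"shipment": 42, "destination": "Pune"})
print(verify(msg))                         # True
msg["payload"]["destination"] = "Mumbai"   # in-transit tampering
print(verify(msg))                         # False — the poisoned message is rejected
```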
9. Rogue agents in multi-agent systems
Threat: Malicious or compromised AI agents operate outside monitoring boundaries, executing unauthorized actions or stealing data.
Example: In a smart factory, a compromised AI agent starts shutting down machines unexpectedly, disrupting production.
Defense: Restrict autonomy with policy constraints; continuously monitor agent behaviour; host agents in controlled environments; conduct regular AI red teaming exercises.
10. Privacy breaches
Threat: Excessive access to sensitive user data (emails, Aadhaar-linked services, financial accounts) increases exposure risk if compromised.
Example: An AI agent in a fintech app accesses users’ PAN, Aadhaar, and bank details, risking exposure if compromised.
Defense: Define clear data usage policies; implement robust consent mechanisms; maintain transparency in AI decision-making; allow user intervention to correct errors.
This list is not exhaustive — but it’s a strong starting point for securing the next generation of AI. For India, where digital public infrastructure and AI-driven innovation are becoming central to economic growth, agentic AI is both a massive opportunity and a potential liability.
Security, privacy, and ethical oversight must evolve as fast as the AI itself. The future of AI in India will be defined by the intelligence of our systems — and by the strength and responsibility with which we secure and deploy them.
AI is defining reality as we progress further
AI is now well integrated into almost every sector of the economy. It has not only driven efficiency but has also stimulated innovation. As AI assimilates into the electronics industry, new trends have sparked a wave of innovation for the coming year. The electronics industry will experience faster decision-making, improved efficiency, and greater sustainability as AI develops in 2026.
As research and development in the field of Artificial Intelligence grows, the trends for next year can be understood as follows:
- Agentic AI: Artificial Intelligence is already used extensively in the R&D sector, but AI can conclusively solve a key challenge in the electronics manufacturing sector too. With the development of Agentic AI, supply chain disruptions can be studied, allowing planning in advance. Agentic AI can identify alternate suppliers and dynamically reconfigure logistics in response to changing conditions. This reduced human intervention can cut production delays and allow more focus on R&D. It can also act as an always-on sales assistant, tracking customer requests, generating quotes, and even placing orders. This will bring a sustainable pace to the industry's business and reduce the role of middlemen, bringing a competitive edge. From predictive maintenance to autonomous marketing, this growing trend can unleash the full potential of the ESDM industry. Undoubtedly, businesses that integrate agentic AI early will have an edge over others in shaping the future of the B2B electronics industry.
Some existing providers of this technology are:
- IBM: IBM’s prebuilt watsonx AI agents are pre-designed systems that offer standard API and SDK support for open-source frameworks, allowing developers to use their preferred tools.
- Wizr AI: Wizr allows companies to build and deploy LLM-powered AI agents trained on company-specific data such as CRM logs, internal documents, and past customer interactions, providing a customized experience. It also provides enterprise-grade security and certifications such as SOC 2 Type 2 and ISO 27001 for highly regulated industries.
- TrueFoundry: This provider typically caters to data scientists, ML engineers, and IT professionals, integrating over 1,000 LLMs along with connectors for enterprise tools such as Slack, GitHub, and Datadog.
- Generative AI: Gen AI is expected to become the new normal in the coming years, not just for content creators but for the electronics manufacturing industry too. The industry's lack of advanced design capabilities can be comprehensively addressed by advances in Generative AI. From automating the creation of innovative designs, optimizing complex systems, and speeding up prototyping and iteration, to reducing development costs and democratizing design tools, Gen AI will be the new mastermind behind innovation in the manufacturing industry. This technology will allow engineers to explore new design spaces with quick validation and create more efficient and novel electronic components and systems, faster than any traditional methodology. It will eventually also address the skill shortage in the miniature production sector, increasing the efficiency of the industry.
With prominent players such as Synopsys.ai and Cadence Design Systems already providing comprehensive portfolios across the chip design workflow, other emerging providers include:
- Flux AI: Its AI-powered e-CAD (electronic Computer Aided Design) provides for designing and building PCBs, saving time as well as giving good results.
- Circuit Mind: This software takes high-level requirements and automatically produces optimized schematics and BOMs, creating reliable and error-free circuits.
- DeepPCB: This cloud-based tool uses AI to provide automated PCB routing.
- Cirkit Designer: Cirkit is an online platform providing circuit design, simulation, and collaboration.
- Zuken: Zuken is a major provider of Electronic Design Automation (EDA) tools, such as CR-8000 and E3.series, for precise results.
- Physical AI: The shortage of skilled labor in the miniature electronics industry is set to get a new solution through the adoption of AI-powered robotics and automated inspection to handle repetitive and complex tasks, along with the integration of augmented reality (AR) for training and real-time guidance of the workforce. This will allow the industry to deploy lower-skilled personnel for higher-level functions, improving efficiency, quality, and speed. Physical AI can retain the knowledge of retiring skilled professionals to keep operations in sync with production requirements, and reuse the same knowledge base to train new recruits at a reduced cost of human-resource development. Additionally, skilled personnel can be freed to focus on strategic, value-added activities that require creativity and decision-making.
Some of the key players providing this technology are:
- GrayMatter Robotics: They specialize in developing AI-powered robotic systems for automating manufacturing and industrial operations.
- Veo Robotics: Veo integrates 3D sensing and computer vision with AI to enable robots to work alongside humans. They are particularly efficient at handling delicate electronics assembly without the need for traditional caging.
- Sovereign AI: As the race to build newer AI systems gathers pace, it draws attention to data privacy in the AI landscape. Tomorrow is not just about any AI, but about safe and indigenous AI systems that keep sensitive data within national and regional boundaries. This has given rise to the budding trend of sovereign AI. Such systems allow businesses to build their own AI models that comply with local data protection laws and industry-specific regulations. Customized AI models can adhere to the specific needs of the business and reduce foreign dependence. A self-controlled AI system reduces the risk of cyber fraud and helps protect sensitive intellectual property (IP). Sovereign AI can also be used to study the impact of a predicted geopolitical event on supply chains, especially for import-dependent components in the industry.
Some of the service providers of Sovereign AI in India include:-
- EDB Postgres: They offer a platform allowing secure, on-premises or private-cloud Gen AI interfacing. It also ensures that data remains within the company’s control, essential for designers and manufacturers.
- Sarvam AI: It is considered India’s leading sovereign AI provider, selected by the Indian government to develop the country’s first homegrown large language model (LLM).
- Digital Twin + AI: A dynamic collaboration between digital twins and AI will unleash new energy in the electronics manufacturing industry. As the need for miniaturization grows, modelling a digital twin and using AI to subject it to real-time usage tests can improve the quality and efficiency of microscopic components like PCBs, silicon chips, and ICs. Sensors can be fed data from real user experiences, helping engineers design a more efficient and longer-lasting product. It minimizes physical damage, making R&D and testing more cost-effective.
Among the many digital twin providers, some of the best suited for the electronics industry are:
- Ansys: They specialize in simulation-based digital twins that use physics-based modelling along with AI integration to create highly accurate virtual prototypes of systems.
- PTC: Their ThingWorx platform integrates Industrial IoT, AR, and digital twin technologies, allowing manufacturers to monitor, analyze, and optimize operations in real time, benefiting product quality and predictive maintenance.
While the future of artificial intelligence in the electronics industry is bright, its integration into existing systems can prove to be a challenge. The initial costs may be daunting for a business; however, the productivity achieved in the long run will be worth it.
The post AI is defining reality as we progress further appeared first on ELE Times.
From Monoliths to Modules: A story of heterogeneous integration, chiplets, and the industry reshaping itself
For nearly four decades, the semiconductor narrative has revolved around Moore’s Law: shrinking transistors and packing more logic onto a single die. Now, however, the limitations of that approach are evident in reticle sizes, yields, rising costs, and the reality that not every function benefits from bleeding-edge lithography. The industry’s answer is to stop treating the system as “one big die” and instead treat it as a system of optimized pieces: chiplets and heterogeneous integration. What started as an engineering workaround is now a full-blown industrial shift. This article is a curated, human narrative of how the industry got here, what the leading players are doing, which key technologies are emerging, and how it is likely to play out in the years ahead.
The pivot: when economics beat scaling
The earliest chiplet experiments were pragmatic. Designers realized that a single large die amplifies risk: one defect ruins the whole chip, and reticle-scale chips are expensive to manufacture. Chiplet thinking flips that risk model: many smaller dies (chiplets) are cheaper to yield and can each be produced on the process node best suited to its function. AMD’s decision to “bet the company’s roadmap on chiplets” is perhaps the clearest strategic statement of this pivot; CEO Dr. Lisa Su has repeatedly framed chiplets as a transformational, multi-year bet that paid off by enabling modular, high-performance designs.
That economic logic attracted big players. When companies like AMD, Intel, NVIDIA, TSMC and major cloud providers all start designing around modular architectures, the idea moves from clever trick to industry standard. But making chiplets practical at scale required new packaging, new interconnect standards, and new supply-chain thinking.
The technical enabling stack: what changed?
Three packaging techniques and a set of interconnect innovations allowed chiplets to become real:
- 2.5D (silicon interposer / CoWoS family): A silicon interposer routes huge numbers of fine wires between side-by-side dies and HBM stacks. TSMC’s CoWoS family (Chip on Wafer on Substrate) is a productionized example used in AI accelerators and high-bandwidth systems; it provides the highest on-package bandwidth today.
- 3D stacking (Foveros, TSVs, hybrid bonding): Stacking dies face-to-face shortens interconnects, saves board area, and opens power/latency advantages. Intel’s Foveros showed how a system could be built vertically from optimized tiles. The real leap is hybrid (Cu–Cu) bonding, which enables ultra-dense, low-parasitic vertical interconnects and is rapidly becoming the preferred route for the highest-performance 3D stacks.
- EMIB (embedded bridge): A cost-effective middle ground: small high-density bridges route signals between adjacent dies on a package without needing a full interposer, balancing cost and performance.
On top of physical packaging, industry collaboration produced UCIe (Universal Chiplet Interconnect Express), a standard that defines die-to-die electrical and protocol layers so designers can mix chiplets from different vendors. UCIe’s goal is simple but radical: make chiplets plug-and-play the way IP blocks (or board components) are today, lowering integration friction and encouraging a multi-vendor marketplace. The consortium’s growth and the tone of its public messaging reflect broad industry support.
What the industry leaders are saying (high-level truth from the field)
Words matter because they reveal strategy. Lisa Su framed AMD’s move as an existential bet that enabled modular scaling and faster product cycles: not a tweak, but a new company playbook. Jensen Huang (NVIDIA) has discussed shifting packaging needs as designs evolve, stressing that advanced packaging remains a bottleneck even as capacity improves, a reminder that packaging is now a strategic choke point full of commercial leverage. And foundries and integrators (TSMC, Intel Foundry, Samsung) openly invest in CoWoS, Foveros and hybrid bonding capacity because advanced packaging is the next frontier after lithography.
The practical outcomes we’re seeing now
- Modular server CPUs and accelerators: AMD’s chiplet-based EPYC architecture splits cores and I/O into separate dies for yield and flexibility; major GPU vendors assemble compute tiles and HBM via CoWoS to reach enormous memory bandwidth.
- New supply-chain pressure: Advanced packaging capacity became a bottleneck in some cycles, forcing companies to book OSAT / CoWoS capacity years ahead. That’s why foundries and governments are investing in packaging fabs.
- Standardization momentum: UCIe and related initiatives reduce engineering friction and unlock third-party chiplet IP as a realistic business model.
The tensions and technical gaps
Heterogeneous integration isn’t a panacea. It introduces new engineering complexity: thermal hotspots in 3D stacks, multi-die power delivery, system-level verification across vendor boundaries, and supply-chain trust issues (who vouches for a third-party chiplet?). EDA flows are catching up but still need better automation for partitioning, packaging-aware floor planning, and co-validation. Packaging capacity, while expanding, remains a strategic scarce resource that shapes product roadmaps.
New technologies to watch
- Hybrid bonding at scale: enabling face-to-face stacks with very high I/O density; companies (TSMC, Samsung, Intel) are racing on patents and process maturity.
- UCIe ecosystem growth: as more vendors ship UCIe-compatible die interfaces, an open marketplace for physical chiplet IP becomes more viable.
- CoWoS-L / CoWoS-S differentiation and packaging variants: vendors are tailoring interposer variants to balance area, cost and performance for AI workloads.
How this story likely ends (judgement, not prophecy)
The industry is not replacing monolithic chips entirely; monoliths will remain where tight coupling, the lowest latency, or the cheapest bill-of-materials matter (e.g., mass-market SoCs). But for high-value, high-performance markets (AI, HPC, networking, high-end CPUs), heterogeneous integration becomes standard. Expect three converging trends:
- An ecosystem of chiplet vendors: IP providers sell actual physical chiplets (compute tiles, accelerators, analog front ends) that can be combined like components.
- Packaging as strategic infrastructure: fabs and OSATs that excel at hybrid bonding, interposers, and 3D stacking will hold new leverage; national strategies will include packaging capacity.
- Toolchains and standards that normalize integration: with UCIe-style standards and improved EDA flows, system architects will shift focus from transistor-level tricks to system partitioning and orchestration.
If executed well, the result is faster innovation, cheaper scaling for complex systems, and diversified supply chains. If poorly coordinated, the industry risks fragmentation, security and provenance problems, and bottlenecks centered on a few packaging suppliers.
Final thought
We have moved from a single-die worldview to a modular systems worldview.
That change is technical (new bonds, interposers, interfaces), economic (yield and cost models), and strategic (packaging capacity equals competitive advantage). The transition is messy and political in places, but it’s already rewriting roadmaps: chiplets and heterogeneous integration are not an academic curiosity; they are the architecture by which the next decade of compute will be built.
The post From Monoliths to Modules: A story of heterogeneous integration, chiplets, and the industry reshaping itself appeared first on ELE Times.
Chip Code, written by AI: Driving Lead Time Optimization and Supply Chain Resilience in Semiconductor Manufacturing
The semiconductor world is grappling with complex challenges: designing a modern chip involves billions of transistors, massive verification workloads, and global supply chains prone to disruption. One of the critical factors hindering innovation and market responsiveness is the extensive lead time, often exceeding 20 weeks. While procurement and supply chain managers are constantly coordinating wafer fabs, managing inventory, and dealing with rapidly changing markets, the industry’s core bottleneck is the design phase’s sheer complexity and iterative nature.
AI technologies, including Large Language Models (LLMs) and newer multi-agent generative systems, are fundamentally transforming Electronic Design Automation (EDA). These systems automate Register Transfer Level (RTL) generation, detect verification errors earlier, and help predict wafer fab schedules. Integrating AI with procurement teams and supply chain planners helps in dealing with industry volatility and resource allocation uncertainty. It is quietly reshaping the entire ecosystem, moving design from an art form reliant on small teams of gurus to a computationally optimized process.
AI’s Role in Chip Design Automation
RTL design, which defines a chip’s logic, was traditionally hand-crafted, taking engineers months to debug. Now, AI trained on large HDL datasets suggests RTL fragments, accelerates design exploration, and flags inconsistencies. Reinforcement learning makes the code progressively more accurate, often identifying optimal solutions humans miss.
This capability moves beyond mere efficiency; it reduces manufacturing risk. Fewer RTL mistakes mean fewer costly fab re-spins, making wafer scheduling predictable. Predictive analytics spot fab queue bottlenecks, allowing teams to optimize lithography usage before issues escalate. This foresight maintains consistent throughput.
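As a toy illustration of the predictive-analytics idea above, Little's law relates work-in-progress to throughput (average wait ≈ WIP / throughput), so a fab tool group can be flagged when its estimated queue time exceeds a planning limit. All function names and numbers below are invented for illustration; this is a sketch, not any vendor's scheduling tool.

```python
# Toy bottleneck check using Little's law: avg wait ~= WIP / throughput.
# Names and figures are hypothetical, for illustration only.

def queue_wait_days(wip_lots, throughput_lots_per_day):
    """Estimate average queue time (days) for one tool group."""
    if throughput_lots_per_day <= 0:
        raise ValueError("throughput must be positive")
    return wip_lots / throughput_lots_per_day

def flag_bottlenecks(tool_groups, limit_days):
    """Return tool groups whose estimated wait exceeds the planning limit."""
    return [name for name, (wip, tput) in tool_groups.items()
            if queue_wait_days(wip, tput) > limit_days]

# Hypothetical snapshot: (work-in-progress lots, throughput in lots/day)
groups = {"litho": (120, 10.0), "etch": (40, 20.0), "implant": (15, 15.0)}
print(flag_bottlenecks(groups, limit_days=6.0))  # -> ['litho']
```

Here lithography carries 120 lots at 10 lots/day, an estimated 12-day queue, so it is flagged before the backlog escalates; real fab analytics layer far richer models on top of this kind of baseline.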
Generative AI advances this using multiple specialized agents: one for synthesis tuning, one for logic checking, and a third for modelling power or timing. This distributed intelligence improves efficiency and provides procurement teams early risk warnings. By simulating designs, they can anticipate mask shortages, material spikes, or foundry capacity issues, effectively optimizing the physical supply chain.
“The ability to automate RTL generation and verification simultaneously is a game-changer. It shifts our engineering focus from tedious bug-hunting to true architectural innovation, accelerating our time-to-market by months.” — Dr. Lisa Su, CEO, AMD
Multi-Agent Generative AI for Verification: Operational Impact
Verification often consumes up to 70 percent of chip design time, scaling non-linearly with transistor count, making traditional methods unsustainable. The Multi-Agent Verification Framework (MAVF) uses multiple AI agents that collaborate: reading specifications, writing testbenches, and continuously refining the design. This division of labour operates at machine speed and scale.
Results are notable: human effort drops by 50 to 80 percent, with accuracy exceeding manual methods. While currently module-level, this hints at faster full verification loops, compressing the ‘time-to-known-good-design’ window. This means fewer wasted weeks on debugging and substantial savings on re-spins, protecting billions in costs.
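A minimal sketch of the division of labour described above: each specialized agent inspects a shared design record and appends any issues it finds, and the design counts as "known good" only when every agent passes. The agent roles, class names, and thresholds here are invented for illustration; published descriptions of such frameworks are higher-level than this.

```python
# Hypothetical multi-agent design-check loop (illustrative only; not a
# real EDA API). Each agent appends issues to a shared design record.
from dataclasses import dataclass, field

@dataclass
class Design:
    rtl: str
    max_power_mw: float
    issues: list = field(default_factory=list)

def lint_agent(d: Design):
    # Stand-in structural check: expect at least one continuous assignment.
    if "assign" not in d.rtl:
        d.issues.append("lint: no combinational logic found")

def power_agent(d: Design):
    # Stand-in power model: pretend estimate scales with design size.
    est = len(d.rtl) * 0.5
    if est > d.max_power_mw:
        d.issues.append(f"power: estimate {est} mW exceeds budget")

def run_agents(d: Design, agents) -> list:
    """Run every agent over the design; empty list means all checks pass."""
    d.issues.clear()
    for agent in agents:
        agent(d)
    return d.issues

design = Design(rtl="assign y = a & b;", max_power_mw=50.0)
print(run_agents(design, [lint_agent, power_agent]))  # -> [] (all pass)
```

Real frameworks add generation and refinement steps (agents rewriting testbenches or RTL between passes), but the shape is the same: independent specialists converging on a design that satisfies all of them.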
“We are seeing a 15% reduction in verification cycles across key IP blocks within a year. The key is the verifiable audit trail these new systems create, which builds trust for sign-off.” — Anirudh Devgan, CEO, Cadence Design Systems
Predictable verification helps procurement reduce lead-time buffers. Instead of hoarding stock or overbooking fab slots, teams plan using reliable design milestones. The ROI is twofold: engineers save effort, and procurement negotiates smarter contracts, boosting resilience and freeing up working capital.
Industry Insights and Strategic Implications
Research at Intel’s AI Lab shows that machine learning is powerful, but it works best when integrated with classical optimization techniques. For example, in floor planning or system-level scheduling, AI alone often struggles with hard constraints. However, hybrid approaches offer substantial improvements, combining the exploratory power of AI with the deterministic precision of conventional algorithms. The release of datasets like FloorSet demonstrates a strong commitment to benchmarking realistic chip design problems under real-world industrial constraints.
From a strategic perspective, AI-driven design efficiency provides procurement and supply chain teams with several key advantages:
- Agility: Design-to-tapeout cycles become faster, enabling companies to respond quickly when demand surges or falls, capturing market share faster than competitors.
- Resilience: More predictable verification milestones stabilize wafer fab scheduling and reduce exposure to market volatility.
- Negotiation Power: Procurement teams can better align contracts with foundries and suppliers to actual needs, helping reduce buffer costs. This shift moves contracts from being based on generalized risk to specific, design-validated schedules.
“For foundry operations, predictability is everything. AI-driven design provides a stable pipeline of GDSII files, allowing us to lock in capacity planning with much greater confidence, directly improving overall facility utilization.” — C. C. Wei, CEO, TSMC
This alignment reflects a careful integration of technical advances with operational priorities, ensuring that AI improvements translate into tangible, real-world impact across the entire value chain, from concept to silicon.
Future Outlook: AI, Market Dynamics, and Strategic Planning
The next big step is full-chip synthesis and automated debugging. LLM-powered assistants generate block-level RTL, while reinforcement learning agents iterate to resolve timing or power conflicts. This could significantly speed up tapeout cycles and give supply chain planners a clearer picture of what is coming, though challenges remain regarding the size and systemic integrity of full-chip designs.
Real challenges persist. AI models require large data, raising concerns about proprietary Intellectual Property (IP) and training biases. Even if output passes syntax checks, deeper semantic or safety issues may arise. Integrating these tools into existing EDA workflows requires careful validation, certification, and substantial computing resources. The explainability of AI-generated code is paramount for regulatory approval and risk mitigation.
Ways to manage risks include hybrid human-in-the-loop approaches, deploying modules first, and maintaining strict audit trails for correctness. For supply chain leaders, AI is a tool to reduce volatility buffers, not a magic solution eliminating all risks. Geopolitical and natural disaster risks remain, but AI minimizes internal, process-driven risks.
Conclusion
AI is gradually driving operational change in semiconductor design. Full-chip automation remains a long-term goal, but today’s advances in RTL generation, module-level verification, and predictive analytics already shorten design cycles and make wafer fab scheduling more predictable. For procurement leaders, supply chain managers, and strategists, this translates to greater agility, reduced risk, and stronger resilience in a rapidly changing market.
The takeaway is simple. Companies that thoughtfully integrate AI into design and supply chain operations will gain a clear competitive advantage. Tomorrow’s chips won’t just be faster or more efficient. Their code will be shaped by AI intelligence, providing engineers with insights previously almost impossible to achieve.
The post Chip Code, written by AI: Driving Lead Time Optimization and Supply Chain Resilience in Semiconductor Manufacturing appeared first on ELE Times.
Tech Diplomacy: India’s Strategic Power Play in the Global Arena
Given the escalating global tensions and the growing importance of technology as a strategic lever, it is imperative for India to effectively harness both its geopolitical flexibility and its technological aspirations in order to shape the forthcoming narrative, rather than merely adapting to it.
The upcoming power shifts will pivot on technological innovations rather than traditional trade agreements or territorial disputes. The future dynamics will be shaped by advancements in semiconductor chips, software code, cybersecurity measures, and more.
The convergence of the world’s two largest levers of influence, geopolitics and technology, is increasingly significant. The U.S.-China rivalry has evolved far beyond issues like tariffs or Taiwan; it now revolves around determining the dominant force in areas such as AI, semiconductors, quantum technology, and space exploration. Each action taken, whether it’s an export ban, a satellite launch, or the implementation of data regulations, serves as a strategic geopolitical statement in this high-stakes competition.
In the contemporary global landscape, there exists a new form of conflict often referred to as the digital cold war. A significant event that unfolded in 2024 was the United States’ decision to prohibit the use of Chinese connected car technology. This action led to a retaliatory move from China, where they openly released their artificial intelligence models to contest the prevailing Western supremacy in the field. It is crucial to note that the battlefield in this modern era of conflict is not defined by physical borders but by the intricate interplay of algorithms and technological advancements.
Tech is no longer just an industry; it has evolved into essential infrastructure that plays a critical role in shaping various aspects of our lives. India’s Digital Public Infrastructure (DPI), encompassing services ranging from Aadhaar to UPI, has transcended national boundaries to become a significant soft power export.
Despite India’s $10 billion incentive scheme successfully attracting major players in the semiconductor industry like Micron, AMD, and Tower Semiconductor, the establishment of fabs remains a time-intensive endeavor. However, India can strategically focus on dominating chip design, intellectual property (IP) creation, and nurturing talent.
Cybersecurity is a pressing concern in India. Emerging threats such as AI-powered malware, ransomware, and supply chain attacks are on the rise.
Geopolitics has evolved beyond traditional diplomacy into a realm where data plays a crucial role. India’s implementation of data localisation laws, its approach towards cross-border data flows, and its emphasis on developing indigenous cloud infrastructure demonstrate strategic moves with geopolitical implications. The underlying premise is clear: whoever controls the data also controls the narrative.
India’s presence in space is gaining momentum, evident through increasing commercial launches by the Indian Space Research Organisation (ISRO). India’s progression must advance from being a mere launchpad to assuming a leadership role in the space domain.
The field of quantum computing represents another frontier where India is making significant strides. The National Quantum Mission, initiated in 2023 with funding amounting to ₹6,000 crore, targets the development of 50–100 qubit systems by 2026.
India is currently making its point very clear. The pathway involves becoming a tech leader that no longer relies on Western platforms and Chinese hardware. India chooses to assert itself as a tech powerhouse by developing its own systems, influencing global standards, and sharing its expertise in digital governance.
Moreover, India is no longer a mere geopolitical pawn reacting to external changes; rather, it is emerging as a significant player that actively shapes international regulations. Two key tools are now at India’s disposal: geopolitics and technology.
Devendra Kumar
Editor
The post Tech Diplomacy: India’s Strategic Power Play in the Global Arena appeared first on ELE Times.
Microchip and AVIVA Links Achieve ASA-ML Interoperability, Accelerating Open Standards for Automotive Connectivity
The automotive industry is continuing its transition from proprietary automotive serializer/deserializer (SerDes) solutions to an interoperable ecosystem established by the Automotive SerDes Alliance and its first open-standard ASA Motion Link (ASA-ML). ASA-ML is now being implemented by OEMs and Tier 1 suppliers because it provides an asymmetric high-speed communications standard that connects the increasing number of cameras, sensors and displays used in In-Vehicle Networking. Microchip Technology announced a significant milestone with AVIVA Links, an automotive company delivering advanced multi-Gigabit vehicle infrastructure for ADAS and IVI systems, demonstrating that ASA-ML chipsets from multiple vendors can interoperate seamlessly to deliver scalable, high-speed connectivity. This interoperability between major semiconductor suppliers underscores the viability of the ASA-ML ecosystem and its growing role in the automotive industry.
The Automotive SerDes Alliance has more than 175 members, including OEMs such as BMW, Ford, GM, Hyundai Kia Motor Company, Nio, Renault/Ampere, Stellantis, Volvo and Xiaopeng Motors. The multi-vendor ecosystem is actively collaborating to bring ASA-ML enabled systems to the market, addressing the rapid growth of Advanced Driver Assistance Systems (ADAS) and In-Vehicle Infotainment (IVI) applications.
“Microchip is a market leader in automotive networking and connectivity, and achieving robust ASA-ML interoperability with AVIVA Links—who has announced a pending acquisition by NXP—is a pivotal moment for the Automotive SerDes Alliance and a clear signal to the market,” said Kevin So, vice president of Microchip’s communications business unit. “This collaboration highlights the benefits of a multi-source, open standards approach and gives automotive OEMs and Tier 1 suppliers the confidence to design their next-generation ADAS architectures around ASA-ML, knowing they have a scalable, robust and secure connectivity standard backed by leading semiconductor suppliers.”
The ASA-ML standard supports asymmetric high-speed video, control and data transmission up to 16 Gbps, offering a scalable and forward-looking solution. To achieve ADAS L2 and L2+ autonomous-level applications, an increasing number of cameras and sensors must be added into vehicles. These applications require the ASA-ML standard’s scalability, architectural flexibility and interoperability benefits, further driven by the availability of multi-vendor, high-bandwidth connectivity solutions that reduce reliance on proprietary solutions.
“AVIVA Links is focused on delivering advanced connectivity and enabling standards-based, interoperable solutions for the next generation of automotive systems,” said Kamal Dalmia, CEO of AVIVA Links. “Proving interoperability with Microchip’s ASA-ML SerDes chipset is an important milestone for the automotive industry, and together with our pending acquisition by NXP, will further drive confidence in ASA-ML adoption at OEMs and Tier 1s.”
The post Microchip and AVIVA Links Achieve ASA-ML Interoperability, Accelerating Open Standards for Automotive Connectivity appeared first on ELE Times.
Singapore’s largest industrial district cooling system begins operations to support ST’s decarbonization strategy
STMicroelectronics and SP Group (SP) have commenced operations for Singapore’s largest industrial district cooling system at STMicroelectronics’ (ST) Ang Mo Kio TechnoPark. The event was inaugurated by Ms. Low Yen Ling, Senior Minister of State, Ministry of Trade and Industry and Ministry of Culture, Community and Youth.
The system is expected to reduce carbon emissions by up to 120,000 tonnes per year and enable 20 per cent savings on cooling-related electricity consumption. It will also repurpose over half a million cubic metres of water each year by using reverse osmosis reject water, previously used in ST’s cooling towers, to support the new district cooling operations.
This marks ST’s first use of district cooling at a manufacturing facility and will strengthen ST’s commitment to be carbon neutral by 2027.
“The deployment of Singapore’s largest industrial district cooling system at our Ang Mo Kio TechnoPark demonstrates our commitment to pioneering energy-efficient solutions that reduce carbon emissions and conserve resources. This achievement strengthens our partnership with Singapore in advancing its national sustainability goals,” said Rajita D’Souza, President of Human Resources and Corporate Social Responsibility at STMicroelectronics. “By integrating advanced technologies like the district cooling system, we are driving a smarter, more sustainable future — showcasing how industry leadership and environmental stewardship align to create lasting value for our business, communities, and the planet.”
“SP Group’s strategic partnership with STMicroelectronics marks a pivotal milestone in our nation’s transition towards a low-carbon future. This project showcases how collaborative innovation can transform urban infrastructure to deliver sustainable, energy-efficient solutions. District cooling will continue to play a vital role in Singapore’s net-zero ambitions, enabling carbon emissions reduction and enhancing energy resilience across industrial and urban developments,” said Stanley Huang, SP’s Group Chief Executive Officer.
Technical details of the district cooling system
Designed, built, owned, and operated by a joint venture between SP and Daikin Airconditioning (Singapore), the system has an installed capacity of up to 36,000 refrigeration tonnes (RT). It delivers continuous chilled water to cool both manufacturing and office spaces via a centralized closed-loop pipe network replacing individual chillers in each building. The total area served by the system is approximately 90,000 square metres.
Chillers in a series counterflow configuration reduce the energy required to cool the water. This ensures efficient and reliable 24/7 operation, with remote monitoring capabilities augmenting the on-site operations team.
“This partnership with SP reflects Daikin’s commitment to delivering advanced, energy-efficient solutions that go beyond immediate operational needs. Our goal is to contribute to a more sustainable built environment, where technology plays a key role in enhancing resilience, reducing environmental impact, and supporting Singapore’s long-term climate ambitions,” said Chua Ban Hong, Managing Director at Daikin Airconditioning (Singapore).
Additionally, the new installations free up around 4,000 square meters of space at Ang Mo Kio TechnoPark, which will enable ST to install other equipment contributing to environmental impact mitigation. This includes perfluorocarbon (PFC) abatement equipment, with near-future plans for additional water reclamation systems and volatile organic compounds (VOC) abatement as part of its ongoing sustainability efforts.
The post Singapore’s largest industrial district cooling system begins operations to support ST’s decarbonization strategy appeared first on ELE Times.
Microchip Adds Integrated Single-Chip Wireless Platform for Connectivity, Touch, Motor Control
Bluetooth Low Energy, Thread, Matter and proprietary protocols come together in a secure, feature-rich platform for supporting evolving standards, interface needs and market demands
As connectivity standards and market needs evolve, upgradeability has become essential for extending device lifecycles, minimizing redesigns and enabling differentiated features. To solve this challenge, Microchip Technology has released the highly integrated PIC32-BZ6 MCU that serves as a common, single-chip platform to reduce development cost, complexity and time-to-market for multi-protocol products featuring advanced connectivity and scalability.
“The PIC32-BZ6 MCU stands out for its powerful blend of connectivity, integration and flexibility in a single-chip solution,” said Rishi Vasuki, vice president of Microchip’s wireless solutions business unit. “Few devices bring together this breadth of features in a single chip, and we’re already seeing strong early adopter activity. Customers are leveraging its multi-protocol wireless capabilities, advanced analog features and high I/O to develop smarter, more connected products with greater efficiency.”
RF design for smart devices has become increasingly complex, and wireless solutions typically require multiple chips to add new features or frequent redesigns to support evolving industry standards. The PIC32-BZ6 MCU replaces these multi-chip solutions and reduces the redesign burden with a single, highly integrated chip that removes the complexity of multi-protocol wired and wireless connectivity. The MCU also includes analog peripherals to simplify motor control development, along with touch and graphics capabilities for advanced user interfaces and enhanced memory to support complex applications, heavy workloads and Over the Air (OTA) firmware updates.
The PIC32-BZ6 MCU platform streamlines development of products for smart home, automotive connectivity, industrial automation and wireless motor control use cases. Key features include:
- High memory and scalable package choices to support demanding applications and OTA updates: The high-performance MCU includes 2 MB Flash memory and 512 KB RAM and is available in 132-pin ICs and modules with additional pin and package variants planned.
- Multi-protocol wireless networking: Qualified against Bluetooth Core Specification 6.0, the device also supports 802.15.4-based protocols such as Thread and Matter plus proprietary smart-home mesh networking protocols.
- Design flexibility that extends product options and scaling opportunities: Versatile and comprehensive selection of on-chip peripherals goes beyond wireless connectivity and OTA updates to support:
Wired connectivity: Multiple interfaces include two CAN-FD ports for automotive and industrial communication, a 10/100 Mbps Ethernet MAC for high-speed wired connectivity and a USB 2.0 full-speed transceiver for seamless data transfer and PC integration.
Touch and graphics: Incorporate peripherals that enable advanced user interfaces including Capacitive Voltage Divider (CVD)-based touch capabilities with up to 18 channels.
Motor control: Simplifies system development through advanced analog peripherals such as 12-bit ADCs, a 7-bit DAC, analog comparators, PWMs and a QEI for precise motor position and speed control.
- Security by design to protect applications and IP: Includes immutable secure boot in ROM and an advanced on-board hardware-based security engine supporting AES, SHA, ECC and TRNG functions.
- Reliability in harsh environments: The device is qualified to AEC-Q100 Grade 1 (125 °C) specifications for automotive and industrial environments.
The post Microchip Adds Integrated Single-Chip Wireless Platform for Connectivity, Touch, Motor Control appeared first on ELE Times.
Exclusive Insights: “With Spin Memristor, we’re bringing the brain’s analog intelligence to modern memory technology,” says TDK’s Gagan Bansal.
“AI is definitely one technology that is developing very fast. And power management is the second big area,” says Gagan Bansal, President-Sales & Marketing, TDK, in an exclusive conversation with ELE Times. As the Indian electronics manufacturing industry aims to reach a worth of $300 billion by the end of 2026, it must not only boost manufacturing but also innovate in manufacturing to cater to the growing demands.
He moved from describing TDK’s offerings across a wide spectrum of the electronics manufacturing market to its research and development of more innovative technologies to meet the demands of tomorrow, also offering a peek into TDK’s strategy for facing global uncertainties, along with promising technologies TDK is working on.
Moving from Traditional to Innovative Technology
Tracing TDK’s journey from creating the famous magnetic cassette tapes to batteries and now advanced technology with ADAS and XEV applications in radars and sensors, Gagan Bansal provides an introductory brief into TDK’s capabilities.
He further introduces TDK’s innovative product line, not only for the industrial sector but also for automotive and daily requirements, wherein he underlines TDK’s major mainstay in business as the market moves from ICE vehicles to EVs, where electronics account for nearly 60–70% of the vehicle.
Apart from the automotive industry, he also mentions a product developed by TDK for noise cancellation in a spatial atmosphere; simply put, noise cancellation without headphones. Here he specifically touches on the combination of a MEMS-based microphone, a PiezoListen speaker, and the digital signal processing involved in the product.
Incorporation of AI into TDK
“A lot of our sensor products are now related to AI software,” Gagan Bansal responded to the question on the integration of AI. Recognizing the growing role of AI, he revealed that TDK has taken multiple steps for its inclusion by providing mission-critical components, in the form of inductive components and electrolytic capacitors, for various AI server applications.
He also recalled that many of their innovative technologies are incorporated with AI, namely the AR-VR technology and smart sensors for spatial noise cancellation and detection of head movement. He further added that their global company, TDK Sensor EI (Edge Intelligence), overlays software for artificial intelligence at the edge of the device, above the existing sensing devices; a growing demand in upcoming technology.
TDK in India
“Electronics manufacturing in India, on a penetration level, is relatively low as compared to the developed part of the world,” he underlined, while recognizing India as a place full of scale and scope.
To give an idea of TDK’s extensive presence in India, he says, “So we have a localization drive and we are very proud that in our existing units, more than 50% of the inputs are sourced locally over a period of time.” With six operational plants in India, TDK has also invested in four early-stage deep-tech ventures in the fields of industrial IoT, agritech, EV charging, and EV bikes, making India a cornerstone of its global strategy.
TDK’s vision for the next half of the decade
Talking about the future of technology and innovation, he underlines the pace at which AI is developing, while also sharing concerns around its power consumption and the indispensable need for power-efficient alternatives. Stepping a bit further into this area, TDK has already begun developing an analogue memory product, called the Spin Memristor, inspired by the functioning of the human brain.
Comparing the human brain to digital memory, he highlighted how, despite being bigger in size, the human brain consumes far less energy than a digital memory system. Their new product, the Spin Memristor, based on electron spintronics technology, aims to store data in an analogue format, which can be used in AI servers to consume less energy.
Concluding his conversation, Gagan Bansal says, “India is a formidable force to reckon with. It is a part of our global strategy as part of TDK, and it will remain so.” He also pitches TDK as a technology-ready partner for businesses furthering their ambitions and products on a global stage.
Building Reliable 5G and 6G Networks Through Mobile Network Testing
The development of communication networks has entered a revolutionary phase. As 5G continues to mature and 6G research gains momentum, the world stands at the cusp of a hyper-connected era driven by real-time intelligence, automation, and pervasive connectivity.
From autonomous mobility and telemedicine to smart manufacturing and immersive AR/VR, the success of these innovations rests on one invisible foundation: trustworthy mobile network performance.
Behind this dependability lies mobile network testing, the unseen but critical layer ensuring that every connection performs seamlessly.

This diagram shows how data flows through a telecom network—from user devices to access technologies like Small Cells and Massive MIMO, through transport layers like fiber and edge data centers, into a cloud-based core network, and finally through testing layers using mmWave, cybersecurity, AI, and digital twins to ensure performance and reliability.
The Technology Behind Network Testing
- With mmWave, massive MIMO, network slicing, and Open RAN shaping 5G architecture, testing has become a complex science demanding unprecedented accuracy, flexibility, and speed.
- Today’s testing solutions are evolving with AI-powered analytics, cloud-based digital twins, and cybersecurity validation frameworks, enabling operators, equipment manufacturers, and researchers to ensure reliability across both physical and virtualized networks.
- As the world transitions toward 6G, with terahertz (THz) frequencies, AI-native architectures, and intelligent automation, network testing will remain the quiet enabler that keeps our connected future secure and scalable.
Innovations Powering Modern Network Testing
- Predictive Analytics Powered by AI
Machine learning is reshaping network validation by predicting faults before they affect service. AI models analyze vast datasets from live networks to predict congestion, optimize routing, and reduce downtime, turning testing from reactive to proactive.
- Cloud-Based Simulation & Digital Twins
Digital twins now simulate entire networks in the cloud, from traffic behavior to mobility and interference patterns. This reduces field-testing costs while improving accuracy, enabling virtual prototyping of real-world networks.
- Open RAN & Massive MIMO Validation
Open RAN drives multi-vendor interoperability but demands strict conformance testing for timing, synchronization, and RF performance between distributed units (DUs) and radio units (RUs). Testing tools ensure each vendor’s hardware performs harmoniously in shared environments.
- Cybersecurity & Resilience Testing
With 5G serving critical industries, penetration testing, vulnerability scanning, and cyberattack emulation are essential to preserve network integrity. The shift toward zero-trust architectures is also reshaping validation methodologies.
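To make the predictive-analytics idea above concrete, here is a minimal sketch of the kind of logic involved: an exponentially weighted moving average (EWMA) detector that flags a latency KPI sample deviating sharply from recent history, turning monitoring from reactive to proactive. The KPI values, thresholds, and function names are illustrative assumptions, not any vendor's product logic.

```python
# Toy EWMA-based anomaly detector over a network KPI stream.
# All parameters are hypothetical tuning values.

def ewma_detector(samples, alpha=0.3, k=3.0, min_std=1.0):
    """Flag sample indices deviating more than k standard deviations
    from an exponentially weighted moving average of the stream."""
    mean, var = samples[0], 0.0
    alerts = []
    for i, x in enumerate(samples[1:], start=1):
        dev = x - mean
        std = max(var ** 0.5, min_std)   # floor avoids false alarms early on
        if abs(dev) > k * std:
            alerts.append(i)             # anomaly: investigate before an outage
        # update EWMA mean and variance
        mean = (1 - alpha) * mean + alpha * x
        var = (1 - alpha) * (var + alpha * dev * dev)
    return alerts

# latency KPI in ms: stable traffic, then a congestion spike at index 7
latency = [10, 11, 10, 12, 11, 10, 11, 45, 12, 11]
print(ewma_detector(latency))  # [7]
```

Production systems replace this single-signal rule with learned models over many correlated KPIs, but the principle (baseline, deviation, early alert) is the same.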
Industry Insights:
Some of the most significant advancements in mobile network testing are being led by industry pioneers like Anritsu and Keysight Technologies. Their innovative tools and forward-thinking approaches are not only addressing current 5G challenges but also laying the groundwork for the 6G era.
Anritsu Insight:
“5G came with big promises and bigger testing challenges,” says Madhukar Tripathi, Associate Director – Marketing & Business Development, Anritsu India.
“Technologies like mmWave, URLLC, and network slicing push testing boundaries. At Anritsu, our platforms like the MT8000A Radio Communication Test Set, Shockline VNA, and Network Master Pro MT1000A are enabling operators and manufacturers to validate performance at every stage — from R&D to deployment.”
Synchronization and interoperability are key challenges in Open RAN, where timing precision determines network reliability. “Our MT1000A and MS2850A test solutions perform PTP/SyncE and RF conformance tests to ensure accurate timing across multi-vendor O-RAN environments,” he adds.
AI-powered analytics further help in predictive fault detection. “By integrating data-driven insights into our instruments, we make spectrum analysis more intelligent — transforming network testing from a reactive process to a predictive one.”
“Synchronization is key for Open RAN. Test platforms emulate and measure time errors with atomic clock precision, ensuring reliable multi-vendor timing.”
Anritsu is also advancing 6G research, collaborating globally on FR3 (7–24 GHz) and sub-THz frequencies. Tools like the VectorStar broadband VNA and Scenario Edit Environment Kit (SEEK) automate multi-domain testing, preparing networks for future intelligent connectivity.
— Madhukar Tripathi, Anritsu India
Keysight Insights:
“The transition from simulation to digital twinning is redefining how networks are designed and optimized,” says Mombasawala Mohmedsaeed, CTO, Keysight Technologies India.
With RaySim and EXata, Keysight provides an end-to-end digital twin ecosystem to model base stations, channels, and user equipment under realistic mobility and interference conditions. “Tools allow operators to virtually replicate entire cities or rural regions and optimize networks for energy efficiency and coverage before physical deployment.”
For Non-Terrestrial Networks (NTNs), tools like Propsim and UE Sim emulate satellite-based communications to ensure seamless coverage.
On the security front, Keysight’s CyPerf, BreakPoint, and Threat Simulator emulate real-world cyberattacks to test network resilience. The Software Bill of Materials (SBOM) and Riscure solutions add another layer by tracing vulnerabilities within semiconductor and IoT ecosystems.
“Performance validation alone isn’t enough anymore — continuous cybersecurity testing is critical to protect mission-critical networks,” Mohmedsaeed emphasizes.
“Digital twins and cybersecurity intelligence are twin pillars of modern network assurance.”
— Mombasawala Mohmedsaeed, Keysight Technologies India
The Road Ahead: Testing Beyond 5G

The future of network testing will revolve around network slicing, private 5G, and 6G prototyping. Ultra-low latency, massive device connectivity, and AI-native architectures will demand test automation, real-time data visualization, and cross-domain validation.
Furthermore, the convergence of terrestrial and satellite networks will require innovative methods to ensure reliability and performance even in remote and harsh environments.
As the digital horizon expands toward 6G, the invisible precision of network testing will be what keeps the hyper-connected world running flawlessly. Every autonomous car that drives safely, every remote surgery that succeeds, and every virtual experience that feels real will owe its reliability to the unseen rigor of testing.
In the race to the future, innovation may set the pace, but precision testing ensures the world never loses connection.
Beyond the Screen: envisioning a giant leap forward for smartphones from physical objects to immersive experiences
Author: STMicroelectronics
Smartphones have become some of the most ubiquitous devices in modern history. For most of us, the smartphone is an indispensable tool not only to communicate, but to manage our lives – work, personal relationships, travel, shopping, entertainment, photography, and video creation. In short, smartphones have become a hub for life.
The touchscreen was transformational in the smartphone’s adoption and use. But in the future, the smartphone is set to become a platform for immersive experiences. And when aligned to innovations that will extend battery life and even see smartphones harvesting their own energy, along with new ways to stay constantly connected, their usefulness will only increase.
A powerful processor in your pocket
Smartphones have become incredibly powerful processing devices. Indeed, in comparison to the most powerful supercomputers of the 1980s, today’s smartphones can process information more than 5,000 times faster.
In some ways, however, the way that we interact with our smartphones has progressed least since their arrival. For many people, the touchscreen remains the primary – if not only – way that they access and view the interactive services and rich experiences provided by their smartphone. The coming years will see that transformed and, with it, the idea of what a smartphone is.
A reduced reliance on the smartphone display as the principal way to interact with the device and receive information fundamentally changes the role of the smartphone. As a powerful computing device in its own right, but also connected to cloud-based computing resources, the smartphone potentially becomes a platform for delivering immersive experiences and valuable services to the user in numerous new ways.
New models for smartphone interaction
Voice assistants have become one of the first steps into a new world of accessing services via our smartphones. Whether issuing voice commands and queries directly into the device or having these relayed via connected headphones and earbuds, consumers are realising the convenience of voice and audio interaction. An additional benefit, of course, is that the smartphone itself can remain in a pocket or bag, out of harm’s way.
For hands-free visual information, eyeglasses featuring augmented reality (AR) display technology are an ideal solution. These can visually display directions in the user’s eyeline, while also overlaying other useful or interesting information. With more information and experiences layered over the real world, discovering a new city will be more rewarding than ever before, with less potential for a misstep along the way.
Artificial intelligence (AI) will also enable proactive and predictive services that help us manage our daily lives. For example, by understanding the current traffic conditions, AI might bring an alert for your next meeting across town 30 minutes earlier. With the alert appearing on your smartwatch, more efficient travel could be proposed, with directions to the closest public transport appearing in your eyeglasses’ AR display.
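The proactive-alert idea above boils down to simple scheduling arithmetic: shift the reminder earlier by however much the predicted travel time exceeds the usual one. A toy sketch, with all names and numbers purely illustrative:

```python
# Toy model of a traffic-aware meeting reminder.
# base_travel_min: usual travel time; predicted_travel_min: today's forecast.

def alert_minutes_before(base_travel_min, predicted_travel_min, buffer_min=10):
    """How many minutes before the meeting to fire the reminder,
    absorbing any predicted extra traffic delay plus a safety buffer."""
    delay = max(0, predicted_travel_min - base_travel_min)
    return base_travel_min + delay + buffer_min

# normally 20 min across town; the traffic model predicts 50 min today,
# so the alert fires 30 minutes earlier than usual
print(alert_minutes_before(20, 50))  # 60
print(alert_minutes_before(20, 15))  # 30 (no delay, just travel + buffer)
```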
Gesture recognition and haptic feedback
Gesture recognition is emerging as another way to interact with services provided by smartphones. Less obvious than either using a touchscreen or voice, subtle gestures to make or answer calls or respond to messages will be quick and convenient methods of interaction. Who knows, you might well respond to the latest message received with an actual thumbs up, rather than having to find and type the emoji itself.
We might be on the cusp of a whole new vocabulary of gestures as commands. Google is one company looking at how devices can be controlled by natural human gestures, many of which we use subconsciously. Other advances in hardware, such as the latest generation of Time-of-Flight (ToF) sensors, will support more accurate detection of gestures in and around smartphones.
Haptic feedback is the use of vibrations or sensations to enrich the experience of using a device. At a basic level, most of us already experience haptic feedback in our smartphone use. Vibrations rather than a ringtone to signify an incoming call is a simple example, but the nature and application of haptic feedback is rapidly evolving.
Imagine shopping online and being able to ‘feel’ different types of fabric through haptic feedback via your smartphone’s screen. Subtle vibrations from different parts of smart eyeglasses could be used to enrich visual experiences or help with directions. Research is even looking at ultrasound and “mid-air” haptics, where the sensation of physical touch is created in the air. Such haptic feedback could augment gesture control or enhance touchless interfaces.
The potential for neural interfaces
Though still in its early stages, the idea of interacting with devices merely by thinking is becoming more real. Various non-invasive neural interfaces are in development.
Electroencephalography (EEG) sensors placed on the head via headsets, or potentially even embedded in hats and headbands, are a direct way to tap into the brain’s activity. Neural wristbands detect signals from nerves connecting the brain to an individual’s hands, whereby just thinking about a gesture or action could act as a command.
So-called “silent speech” interfaces detect subtle changes in expression or movements in the vocal cords, where simply mouthing words would be detected as accurately as voice. Data from wearables such as smartwatches, rings, and earbuds could identify cognitive load and emotional state, triggering proactive alerts, suggestions, or experiences to help alleviate issues.
Projecting further into the future, neural interfaces and advanced haptic feedback could be combined to create a new world of deeply immersive experiences, all powered by the not-so-humble smartphone.
Always connected
Of course, this vision of the smartphone as a platform for new services and experiences relies on an almost constant connection to cloud-based computing resources. Fortunately, alongside the innovations in smartphone interface technologies, we’re seeing continued development of technologies that ensure we remain connected, wherever we are.
As we recently highlighted, the need to connect the world of increasingly intelligent “things” – not only smartphones, but billions of sensors, machines, and consumer products – is being supported by innovation in communications technology. This includes further evolution of established infrastructure, with 6G telecommunications networks arriving in the coming years, but also the significant expansion of satellite-based communications networks.
When the smartphone arrived, it was exactly that: a phone with additional capabilities. We can all appreciate how far it has moved beyond that simple description, and over a relatively short period of time. While we might need a new name for the device, we certainly need to change our understanding of what this powerful pocket processing device represents.
New ways to interact with our smartphones, innovation in the delivery of seamless immersive experiences, universal connection, and improved battery life and self-charging will see them become the primary digital platform for every aspect of our lives.
Microchip’s SkyWire Tech Enables Nanosecond-Level Clock Sync Across Locations
To protect critical infrastructure systems, SkyWire technology enables highly scalable and precise time traceability to metrology labs
Network clocks are the backbone of critical infrastructure operations, with the precise alignment of clocks becoming increasingly important for data centers, power utilities, wireless and wireline networks and financial institutions. For critical infrastructure operators to deploy timing architectures with reliability and resiliency, their clocks and timing references must be measured and verified to an authoritative time source such as Coordinated Universal Time (UTC). Microchip Technology announced its new SkyWire technology, a time measurement tool embedded in the BlueSky Firewall 2200, that is designed to measure, align and verify time to within nanoseconds even when clocks are long distances apart.
With the BlueSky GNSS Firewall 2200 and SkyWire technology, geographically dispersed timing systems can be compared to each other and compared to the time scale systems deployed at metrology labs within nanoseconds. Measurement of clock alignment and traceability to this level has typically only been done between metrology labs and scientific institutes. With Microchip’s solution, critical timing networks for air traffic control, transportation, public utilities and financial services can achieve alignment within nanoseconds between its clocks to protect their infrastructure no matter where the clocks are located.
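The comparison principle behind this kind of remote clock verification is worth spelling out: if two dispersed clocks are each measured against the same reference time scale, their mutual offset follows by simple subtraction, and the reference itself cancels out. A minimal sketch (the numbers are hypothetical and this is not Microchip's actual interface):

```python
# Common-reference clock comparison: each clock's offset is measured
# against the same time scale (e.g. UTC), so their relative offset is
# just the difference of the two measurements.

def relative_offset(offset_a_ns, offset_b_ns):
    """Offset of clock A relative to clock B, in nanoseconds, given
    each clock's measured offset from the same reference:
    (A - B) = (A - UTC) - (B - UTC)."""
    return offset_a_ns - offset_b_ns

# clock A measured 12 ns ahead of UTC, clock B 5 ns behind it
print(relative_offset(12.0, -5.0))  # 17.0 -> A leads B by 17 ns
```

The engineering difficulty, of course, lies in measuring each clock against the reference to nanosecond uncertainty over long distances, which is what the firewall hardware and the metrology-lab link provide.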
“To ensure timing systems are delivering to stringent accuracy requirements, it’s important to measure and verify in an independent manner relative to UTC as managed by national laboratories and traceable to the Bureau International des Poids et Mesures (BIPM),” said Randy Brudzinski, corporate vice president of Microchip’s frequency and timing systems business unit. “With the new SkyWire technology solution, we’re making UTC more widely accessible so that large deployments of clocks can be independently measured and verified against each other across long distances.”
The concept originated as an extension to the National Institute of Standards and Technology’s (NIST’s) pre-existing service called Time Measurement and Analysis Service (TMAS), which is utilized by entities that are required to maintain an accurate local time standard. The BlueSky GNSS Firewall 2200 with SkyWire technology provides a Commercial Off-The-Shelf (COTS) product to enable critical infrastructure operators to connect with the NIST TMAS Data Service for large-volume clock deployments.
“At NIST, our goal is to enable the most accurate time to support our country’s infrastructure,” said Andrew Novick, engineer at NIST. “Our TMAS Data Service, in conjunction with commercial hardware, provides a scalable solution for anyone who needs traceable and accurate timing.”
Nations around the globe can replicate this solution using Microchip’s SkyWire technology capabilities within its TimePictra software suite, which delivers similar features and functionality as that provided by the NIST TMAS Data Service. Metrology labs, government agencies and enterprises worldwide can deploy TimePictra software suite and the BlueSky GNSS Firewall 2200 with SkyWire technology and have their own end-to-end solution for traceable time measurement, alignment and verification.
Next Generation Hybrid Systems Transforming Vehicles
The global automotive industry is undergoing a fundamental transformation, moving from internal combustion engines (ICEs) to electric and hybrid vehicles that redefine mobility as sustainable, intelligent, and efficient.
This shift is not merely regulatory-driven; it’s fueled by a shared pursuit of carbon neutrality, cost-effectiveness, and consumer demand for cleaner mobility options.
Hybrid electric technology has proven to be the most practical bridge to date between traditional combustion and complete electrification. Providing the versatility of twin propulsion — electric motor and ICE — hybrid powertrains give the advantages of fuel economy, lower emissions, and a smoother transition for both consumers and manufacturers.
From regenerative braking that captures kinetic energy to AI-enabled energy management that optimizes power delivery, hybrids are a sophisticated union of software and engineering. As countries pledge net-zero targets and OEMs retool product strategies, hybrid technology is not merely a transition measure — it’s the strategic foundation of the auto decarbonization agenda.
Innovations Driving Hybrid Systems
- Solid-State & Next-Gen Lithium-Ion Batteries
Next-generation solid-state batteries hold the potential for greater energy density, quicker charging, and increased safety. Their ability to double energy storage capacity and halve charging time makes them a game-changer for hybrid and plug-in hybrid vehicles.
- Regenerative Braking & E-Axle Integration
Regenerative braking captures kinetic energy and converts it into electricity during braking, pumping it back into the battery. Coupled with e-axle technology, this integration optimizes drivetrain efficiency and performance.
- Lightweight Composites for Higher Efficiency
Advances in carbon-fiber-reinforced plastics and aluminum alloys allow automakers to shave weight, increase efficiency, and enhance range — all without sacrificing safety.
- AI-Powered Energy Management Systems
Artificial intelligence is now at the heart of hybrid optimization — learning driving habits, anticipating power needs, and controlling energy transfer between engine, motor, and battery for optimum efficiency.
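The energy-management logic described in the list above can be sketched as a simple rule-based mode selector, the kind of baseline policy that AI systems then learn to refine from driving data. Thresholds, mode names, and the function itself are illustrative assumptions, not any OEM's actual calibration:

```python
# Rule-based hybrid power-split sketch: pick a propulsion mode from
# requested power and battery state of charge (soc, 0..1).

def select_mode(power_demand_kw, soc, ice_min_kw=20.0, soc_min=0.2):
    """Return a propulsion mode label for one control step."""
    if power_demand_kw < 0:
        return "regen"          # braking: recover kinetic energy
    if soc <= soc_min:
        return "ice_charge"     # battery low: engine drives and recharges
    if power_demand_kw < ice_min_kw:
        return "electric"       # light load: motor alone is most efficient
    return "blended"            # heavy load: engine and motor together

print(select_mode(-10, 0.6))   # regen
print(select_mode(10, 0.6))    # electric
print(select_mode(50, 0.6))    # blended
print(select_mode(10, 0.1))    # ice_charge
```

A learned policy replaces the fixed thresholds with predictions over route, traffic, and driver behaviour, but the decision structure is the same.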
Industry Insights: Mercedes-Benz on Hybrid Innovation
Rahul Kumar Shah, Senior Engineer at Mercedes-Benz Research & Development, outlines the engineering philosophy behind the company’s next-gen hybrid powertrains.
“At Mercedes-Benz, we view hybridisation not as an interim solution, but as a masterclass in energy orchestration. Our focus is on creating a seamless dialogue between the combustion engine and electric motor — ensuring the right power source is deployed at the right time to deliver maximum efficiency and a signature Mercedes driving experience.”
Optimizing Hybrid Powertrain Architectures:
“We have advanced from conventional parallel systems to sophisticated P2 and P3 architectures. By placing high-torque electric motors strategically within the drivetrain, we eliminate turbo lag and allow smaller, thermally efficient combustion engines to deliver spirited performance. Combined with predictive AI energy management, our vehicles decide in real time whether to operate in electric mode, recharge, or blend both power sources for optimal efficiency.”

Figure (1)

Figure (2)
The P2 Hybrid (first diagram) places the electric motor between the engine and transmission, allowing it to drive the wheels through the transmission alongside the engine or independently. The P3 Hybrid (second diagram) places the electric motor on the transmission output shaft, coupling the electric drive more directly to the wheels but forgoing the transmission’s torque multiplication. Both are parallel PHEVs using a battery and inverter to manage power flow to the wheels.
Regenerative Braking:
“Our eDrive motors can decelerate the car up to 3 m/s² using purely regenerative energy. Coupled with our ESP HEV system, regenerative and mechanical braking are blended seamlessly to ensure vehicle stability and natural pedal feel. Integration with navigation and radar allows the vehicle to preemptively harvest energy — effectively ‘sailing on electricity.’”

During regenerative braking, the wheels’ kinetic energy is converted by the motor (in generator mode) into electrical energy, which charges the battery. This process is triggered by a control signal from the brake pedal and ECU.
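A back-of-the-envelope calculation shows the scale of the energy flow described above: the vehicle's kinetic energy is ½mv², and only a fraction survives the motor, inverter, and battery losses. The mass, speed, and round-trip efficiency below are illustrative assumptions, not measured figures:

```python
# Recoverable energy from one regenerative stop, in watt-hours.

def recoverable_energy_wh(mass_kg, speed_mps, efficiency=0.65):
    """Fraction of the vehicle's kinetic energy (1/2 m v^2) that
    reaches the battery, converted from joules to watt-hours."""
    kinetic_j = 0.5 * mass_kg * speed_mps ** 2
    return kinetic_j * efficiency / 3600.0

# 1,800 kg car braking from 100 km/h (~27.8 m/s) to rest
print(round(recoverable_energy_wh(1800, 27.8), 1))  # 125.6
```

Roughly 125 Wh per full stop from highway speed, which is why repeated urban braking adds up to a meaningful range contribution.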
Thermal Management:
“Managing heat across the combustion engine, electric motor, and battery is essential. We maintain optimal battery temperatures between 20–40°C and reuse waste heat for cabin and coolant heating, reducing overall energy draw and improving efficiency.”
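The temperature window quoted above (20–40 °C) lends itself to a simple hysteresis-style controller as a mental model. The action labels here are hypothetical placeholders, not Mercedes-Benz's actual thermal-management logic:

```python
# Sketch of battery thermal-window control: heat below the window,
# cool above it, otherwise hold. Band edges are the quoted 20-40 degC.

def thermal_action(temp_c, low=20.0, high=40.0):
    """Pick a thermal-management action to keep the battery
    inside its optimal temperature window."""
    if temp_c < low:
        return "heat"    # e.g. reuse engine/motor waste heat for the pack
    if temp_c > high:
        return "cool"    # run the active coolant loop
    return "hold"        # inside the optimal window: do nothing extra

print(thermal_action(15.0))  # heat
print(thermal_action(30.0))  # hold
print(thermal_action(45.0))  # cool
```

Real controllers modulate pump and valve duty continuously rather than switching discrete modes, but the windowed objective is the same.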
Solid-State Batteries:
“For hybrids, the real advantage lies in power density and durability. Solid-state cells can deliver and absorb charge much faster, enabling more efficient regenerative braking and smoother electric boosts. They are set to become the ultra-durable heart of next-generation hybrid powertrains.”
The Future of Hybrid Powertrains
The hybrid era is entering a smarter, cleaner, and more connected phase. Over the next decade, hybrid systems will evolve from being a bridge technology to a core pillar of sustainable mobility.
- Plug-in Hybrids Take Center Stage
Plug-in hybrids (PHEVs) will lead the transition, providing longer electric-only ranges and rapid charging. With bi-directional charging (V2G), they’ll also supply homes or return energy to the grid, making vehicles portable energy centers.
- Hydrogen Enters the Hybrid Mix
Hybrid powertrains assisted by hydrogen will appear, particularly in commercial and long-distance vehicles. Blending fuel cell stacks with electric drives, they will provide zero-emission mobility with rapid refueling and long range.
- Modular Electric Platforms
Automakers operating at scale are shifting to scalable modular architectures that integrate the battery, e-axle, and drive unit into flexible “skateboard” configurations. These platforms will reduce costs and enable software-defined performance updates through over-the-air upgrades.
- AI-Optimized Energy Management
Artificial intelligence will power real-time power delivery, anticipating traffic and terrain to balance efficiency with performance. Future hybrids will be able to learn, adjust, and self-optimize, combining intelligence with propulsion.
- Smart Materials & Circular Manufacturing
The hybrids of tomorrow will be lighter and cleaner — constructed from recycled composites, graphene-reinforced metals, and bio-based plastics. Closed-loop recycling will enable hybrid production to become more sustainable from start to finish.
Conclusion

Hybrid powertrains are no longer a bridge; they’re becoming the cornerstone of an intelligent, networked mobility ecosystem.
As electrification grows and carbon-neutral goals firm up, hybrids will become smarter, self-tuning systems that efficiently couple combustion with electric precision.
Next-generation hybrid platforms will talk to smart grids, learn from driver behavior, and self-regulate energy across varying propulsion sources, from batteries to hydrogen cells. AI-based optimization, predictive maintenance, and cloud-based analytics will transform how vehicles operate, charge, and interact with their surroundings.
Hybrids aren’t just a bridge—they’re the bold intersection where combustion and electrification unite to rewrite the future.
Tobii and STMicroelectronics enter mass production of breakthrough interior sensing technology
Tobii and STMicroelectronics announced the beginning of mass production of an advanced interior sensing system for a premium European carmaker. The system integrates a wide field-of-view camera capable of seeing in daylight and at night with next-level driver and occupant monitoring, pushing the boundaries of user experience and safety.
“We’re very proud to bring this groundbreaking system to life. This is more than just technology; it’s a vision,” said Adrian Capata, senior vice president of Tobii Autosense. “Image quality is critical, and thanks to our strong collaboration with ST, we’ve achieved a unique balance that allows a single-camera solution to meet rigorous safety standards, while also unlocking enhanced user experiences. By combining visible and IR sensing, we’re enabling intelligent in-cabin environments that truly understand human presence, behavior, and context.”
“As a result of close collaboration on development and integration with Tobii, we have created a new generation of interior sensing technology that is reliable, user-friendly, and ready for widespread adoption across the automotive industry,” said Alexandre Balmefrezol, Executive Vice President and General Manager of the Imaging Sub-Group at STMicroelectronics. “We are now rapidly expanding our production capacity to meet the anticipated demand and ensure a seamless transition to mass manufacturing.”
Technical information on the interior sensing system
Tobii’s and ST’s integrated approach allows automotive OEMs to install just one camera inside the cabin, providing the most mature, efficient, and cost-effective solution available on the market.
The system combines Tobii’s attention-computing technology with STMicroelectronics’ VD1940, an advanced image sensor designed primarily for automotive applications. This sensor features a single 5.1MP hybrid pixel design, sensitive to both RGB (colour, in daytime) and infrared (IR, at night) light. Its wide-angle field of view covers the entire cabin, delivering exceptional image quality. Tobii’s algorithms process dual video streams to support both the Driver Monitoring System (DMS) and Occupancy Monitoring System (OMS).
The VD1940 image sensor is part of SafeSense by ST, an advanced sensing technology platform designed by STMicroelectronics for DMS and OMS, which embeds functional safety and cybersecurity features and is dedicated to automotive safety applications. With this innovative product portfolio, ST is delivering reliable, high-quality, and cost-effective solutions tailored to the automotive industry. As an Integrated Device Manufacturer (IDM), STMicroelectronics masters the complete image sensor supply chain, with full control over both design and manufacturing processes. This ensures supply security through production of its imaging solutions in its European fabs, with these devices already in mass production and ready for integration by Tier 1s and OEMs.



