Applied Materials, CG Power, Lam Research, Larsen & Toubro, and Micron Join the IDTA
The India Deep Tech Alliance (IDTA) announced that Applied Materials, CG Power, Lam Research, Larsen & Toubro, and Micron Technology have joined the Alliance. These additions further strengthen IDTA’s cross-sector collaboration model, which brings together investors, corporates, and technology-enabling partners to mobilise capital, technical expertise, market access, and policy engagement for the advancement of Indian deep tech startups.
With the addition of these global and Indian industry leaders, IDTA now spans artificial intelligence, semiconductor equipment, memory, materials, infrastructure engineering, and power systems, creating an integrated platform to support investment, technology collaboration, talent development, and startup commercialisation. With the shared goal of accelerating the growth of India’s deep tech economy, IDTA Corporate Strategic Partners aim to leverage their expertise to provide strategic and technology counsel to other IDTA members and emerging startups. Strategic advisory and ecosystem collaborations may include:
- Manufacturing and scale-up guidance for lab-to-fab transitions and production readiness.
- Technical talks, training, and access to expert resources.
- Collaborative research discussions and ecosystem initiatives with startups, researchers, and industry.
- Private industry input to policy dialogues related to national priority technology sectors.
- Mentorship, network access, and co-development opportunities in concert with investors.
IDTA is an industry-led consortium formed to mobilise capital and company-building expertise to help India-domiciled deep tech startups scale globally. It was formed to expand private sector support for strategic technology sectors, complementing the Government of India’s Research, Development & Innovation (RDI) Scheme.
This latest membership expansion follows NVIDIA joining IDTA as a Founding Member and Strategic & Technical Advisor, underscoring the Alliance’s ambition to build globally relevant, India-anchored deep tech capabilities at scale. Since its founding in September 2025, IDTA members have committed more than $2.5 billion in venture funding to Indian deep tech startups over the next five years, including a dedicated $1 billion allocation for Indian AI startups to be invested over the next three years. IDTA venture capital members have deployed $110 million into 50+ companies over the past six months.
“The entry of Applied Materials, CG Power, Lam Research, L&T, and Micron marks a pivotal step in moving India’s deep tech ambition from intent to execution,” said Arun Kumar, India Managing Partner of Celesta Capital and Chair of IDTA. “Together with NVIDIA’s role as a founding member and strategic advisor, this coalition brings unmatched depth across semiconductors, advanced manufacturing, infrastructure, and systems engineering. IDTA is designed to align capital, technology, and policy so that India can emerge not just as a participant, but as a trusted global hub for next-generation technologies.”
Quotes from New IDTA Corporate Strategic Partners:
Om Nalamasu, CTO, Applied Materials, said, “Applied Materials has a long history of working across industry, startups, academia, and research institutions to advance foundational technologies. As a materials engineering leader, we believe long‑term progress comes from sustained, ecosystem‑level collaboration. Through this alliance, we look forward to contributing our deep technology expertise to help build resilient ecosystems for India and the world.”
Mr. Amar Kaul, Global CEO & Managing Director, CG Power, said, “India’s deep tech journey is entering a decisive phase, one where execution, industrial capability, and long-term partnerships will determine global relevance. CG Power’s participation in the India Deep Tech Alliance reflects our conviction that nation-building today requires strong, technology-led manufacturing ecosystems. Through IDTA, we look forward to contributing our expertise in industrial, power systems and semiconductors to create resilient and future-ready value chains that reinforce India’s position as a trusted global technology hub.”
Kevin Chen, Head of Lam Capital & Corporate Development, Lam Research, said: “Semiconductor manufacturing excellence depends on deep collaboration across equipment, materials, process technology, and talent. We look forward to engaging with IDTA to help Indian innovators navigate technology roadmaps, manufacturability, and global ecosystem linkages that accelerate from lab to fab.”
Prashant Chiranjive Jain, Head Corporate Centre, Larsen & Toubro, said: “The India Deep Tech Alliance represents a pivotal shift toward indigenous innovation. By synergising L&T’s engineering heritage with advanced capabilities in AI, design engineering, and quantum systems, we are committed to building a robust deep-tech ecosystem. We look forward to delivering cutting-edge solutions that position India as a global leader in the next generation of technology.”
Anand Ramamoorthy, Managing Director, Micron India, said: “Micron’s decision to join the India Deep Tech Alliance reflects our commitment to ecosystem-led collaboration to propel a vital economic engine for India. Micron’s technology and innovation expertise will play a vital role in helping advance globally competitive deep tech from India while aligning with IDTA’s support for the national RDI agenda and its focus on translating research into market impact.”
Manufacturing Breakthroughs in Chip Packaging Are Powering AI’s Future
Courtesy: Lam Research
With all the attention being given to AI, it’s easy to overlook some of the core technologies enabling its capabilities. Sure, a lot more people have now heard about NPUs, GPUs and the businesses that make them, but what about the companies that enable these cutting-edge AI accelerators to be manufactured?
The Complexity of Modern Chipmaking
While most people may not realise it, chip manufacturing is incredibly challenging and requires the level of scientific breakthroughs that have powered humanity’s most advanced achievements. I mean, we’re talking about bending the laws of physics to build components that are a thousand times smaller than a grain of sand. Oh, and doing so millions of times over at incredibly high levels of quality and consistency. Plus, with the extra demands that GenAI workloads are putting on today’s latest chips, the challenges are getting even tougher.
That’s why companies providing the equipment and technologies that enable the manufacturing of these advanced chips play an essential role in driving the advanced AI capabilities we are all starting to experience.
Without their work to overcome technical challenges (the need for exascale computing, the “memory wall” that can slow down AI accelerators, power efficiency, and other issues that must be addressed to maintain the Moore’s Law-like advances we’ve seen in these chips), the state of AI would not be where it is today. In particular, organisations like Lam Research, which build extremely complex, sophisticated machines that help process the raw silicon wafers that eventually become today’s most powerful semiconductor chips, play a big, though little-understood, part in big tech advancements like AI.
Building Next-Generation AI Chips Through Heterogeneous Integration
Lam Research makes a wide array of equipment that performs multiple tasks in the highly precise, extremely complex, and long (often 30 days or more) process of creating a modern chip. But in the era of AI accelerators, it turns out even the most sophisticated individual chip isn’t enough.
Instead, the latest GPUs and other advanced processors are being assembled through a process called heterogeneous integration, which combines multiple independent elements, known as “chiplets,” into even more sophisticated pseudo-SOCs, or Systems on Chip (advanced multi-chip packages that mimic some characteristics of an SOC). Commonly referred to as advanced packaging, the technology that enables the creation of these pseudo-SOCs requires extremely sophisticated semiconductor manufacturing.
Extraordinarily precise component stacking, chip-to-chip connections, and other key technologies allow these chips to integrate multiple independent processing elements, separate connectivity elements, memory, and more. The ultimate goal is to create the most powerful and capable multi-chip package they can in the most effective and efficient space and power envelopes possible.
Advanced Packaging Techniques
As with individual wafer processing, there are often multiple steps and multiple technologies (and approaches) involved with chip packaging. Some entail direct side-by-side connections between various chiplets and other elements, while others use various forms of stacking technology where different pieces sit on top of one another. In all cases, a critical part of the packaging process involves creating the paths through which the connections between the various elements are made. Sometimes those paths are created through film layers that act as a type of “glue” between the elements, while in other situations, it may involve creating millions of tiny holes that are filled with a metal-like material that provides something akin to a physical bridge between the layers.
In the case of Lam Research, the company has developed machines for each of those core packaging technologies. For physical bridging types—which are called through silicon vias or TSVs—Lam offers products in their Syndion, Striker ALD, and SABRE 3D lines. Each performs different parts of the process, including etching for creating the holes, deposition and filling for both lining and then injecting the new material into the holes, and then various cleaning processes along the way.
Semiconductor Manufacturing Innovations Enable AI Progress
Though little understood, the advancements in AI acceleration that have been achieved to date are strongly tied back to the manufacturing technologies that enabled them to be built. Integrating things like High Bandwidth Memory (HBM) directly beside GPU cores, for example, has had a huge impact on the performance, scale and efficiency of the latest AI accelerators, and that, in turn, is driving the impressive advancements we’ve seen in Large Language Models (LLMs) and other AI applications.
Looking forward, it’s going to be continued advancements in 3D packaging—along the lines of what Lam Research is doing with their new VECTOR TEOS 3D tool—that allow those advancements to continue. They may not be easy to see, understand, or appreciate, but semiconductor manufacturing technologies play an enormously important role in moving the tech industry and society forward.
Powering the Future: How High-Voltage MLCCs Drive Efficiency in Modern Electronics
Courtesy: Murata Electronics
Power electronics is undergoing a profound transformation. Devices are now expected to operate faster, become smaller, and achieve unprecedented levels of efficiency.
To meet these demands, wide-bandgap (WBG) semiconductors, such as silicon carbide (SiC) and gallium nitride (GaN), are increasingly adopted over silicon-based devices. These advanced materials enable significantly higher switching frequencies and increased voltage levels. This reduces system size and boosts power density.
Figure 1: The typical operating frequency and switching power of various semiconductor materials (Source: Yole Group)

At the same time, the rapid electrification of transport, industry, and energy infrastructure is driving an unprecedented expansion in power conversion applications. This evolution exposes designers to a far wider spectrum of operating conditions.
Critical Challenges in High-Voltage Systems
These evolving expectations place significant stress not only on active devices but also on the passive components integral to these systems. Higher switching speeds, for instance, lead to sharp voltage transients and electromagnetic interference (EMI). Increased voltages impose strict demands on insulation and overall reliability.
Multilayer ceramic capacitors (MLCCs) play a vital role in suppressing high-frequency noise, absorbing transient spikes, and protecting semiconductor devices from overvoltage stress. Therefore, the advancement of MLCCs must align with the increased performance standards required by WBG devices, necessitating enhancements in dielectric compositions and creative packaging approaches.
Taming Transient Spikes
Snubber capacitors are essential in power electronics, especially where high-speed switching induces voltage overshoot and ringing. This is particularly critical during the turn-off transitions of MOSFETs or IGBTs. This issue is heightened in SiC and GaN power semiconductors, which exhibit greater surge voltages compared to traditional silicon IGBTs.
Figure 2: SiC MOSFETs exhibit a higher surge voltage than traditional Si IGBTs (Source: Murata)

A well-matched snubber capacitor effectively absorbs transient energy, suppresses peak voltages, and damps oscillations. Murata’s metal-termination MLCCs, such as the KC3 and KC9 series, are optimised for use in SiC-based circuits.
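The article does not give a sizing procedure, but the classic two-step method for dimensioning such an RC snubber is easy to sketch. Assuming you can measure the switch-node ringing frequency before and after temporarily adding a known capacitor, the parasitic inductance and capacitance fall out directly; the values below are illustrative, not design guidance.

```python
import math

def size_rc_snubber(f_ring, f_ring_with_cadd, c_add):
    """Classic two-step snubber sizing.

    1. Measure the original ringing frequency f_ring at the switch node.
    2. Add a test capacitor c_add and measure the new, lower frequency
       f_ring_with_cadd (ideally ~f_ring / 2, implying c_add = 3 * C_parasitic).
    Returns (R_snub, C_snub) for a series RC snubber.
    """
    # f = 1 / (2*pi*sqrt(L*C))  ->  (f1/f2)^2 = (C0 + c_add) / C0
    k = (f_ring / f_ring_with_cadd) ** 2
    c_parasitic = c_add / (k - 1)
    # Parasitic inductance from the original ring frequency
    l_parasitic = 1.0 / ((2 * math.pi * f_ring) ** 2 * c_parasitic)
    # Snubber resistor matches the characteristic impedance of the tank
    r_snub = math.sqrt(l_parasitic / c_parasitic)
    # Snubber capacitor: a common rule of thumb is 3-4x the parasitic C
    c_snub = 3 * c_parasitic
    return r_snub, c_snub

# Example: an 80 MHz ring drops to 40 MHz after adding 330 pF across the switch
r, c = size_rc_snubber(80e6, 40e6, 330e-12)
print(f"R_snub ~ {r:.1f} ohm, C_snub ~ {c * 1e12:.0f} pF")
```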
Redefining the human experience with intelligent computing
Courtesy: Qualcomm
Enabling the devices, vehicles and machines that define tomorrow’s world.
What you should know:
- The next UI centres around you, with your AI agent seeing, hearing and acting on your behalf.
- We’re scaling AI to redefine the human experience—powering next-gen wearables and personal AI devices, and driving intelligence into robots, cars and smart homes.
- Our technologies enable extraordinary experiences that consumers and businesses depend on every day—bringing personal and physical AI everywhere.
Think of AI like coffee. You don’t walk into a café and ask for “a beverage brewed from roasted beans” — that’s assumed. You order the experience. Latte, half-pump vanilla, extra shot against a soundtrack of acoustic 90s alternative. The perfect mix to fuel your day, making you more productive, more creative, more you. AI works the same way. It’s a given, not a feature — the foundation of every experience, making each truly yours.
You are at the centre with your agent as your intelligent teammate. This is the next UI. Forget the seemingly endless scrolling and tedious tapping to complete one.single.thing only to do it again.and.again. Instead, your agent moves with you, learns from you and anticipates your needs. And thanks to AI processing on the device, it remains private, contextual and always-on. Like your favourite barista, who knows your order as soon as you walk in, including your (secret) treat every Friday.
We’re leading the charge toward the future of intelligent computing — reimagining possibilities for not only consumers, but also enterprises and industries worldwide. We’re scaling intelligence from edge to cloud, bringing AI everywhere. Our Snapdragon and Qualcomm Dragonwing platforms enable the devices, vehicles and machines that define tomorrow’s world — and redefine the human experience.
And again, I can’t say this enough, it’s all about you. Or more precisely, an “ecosystem of you” where your agent can see, hear and act on your behalf across an emerging category of AI-first intelligent wearables, along with smartphones and AI PCs.
The newest entrant in our Snapdragon X Series Compute Platforms, Snapdragon X2 Plus, delivers agentic AI experiences to aspiring creators, professionals and everyday users — broadening the already-growing Windows PC community.
Your home, too, is transforming into a responsive, intuitive environment. Understanding you and your family, your home adapts to your needs, routines and comforts. Lights, climate, security and entertainment are now intelligent with Dragonwing Q-7790 and Q-8750 processors. The backbone of these AI-enabled experiences and home automation? Connectivity, brought to you by Qualcomm, the leading wireless innovator.
But AI isn’t just personal. It’s also physical, acting alongside you.
Your car is transforming into an adaptive companion, driven by intelligence. Snapdragon is redefining automotive experiences, from enhancing safety and comfort to immersive entertainment. Private, contextual AI — sensing, processing, acting in real time — makes every drive smarter, more efficient and connected.
Advanced autonomous capabilities are also being used to power the next generation of personal service robots, all the way through to industrial full-size humanoids. Thanks to our full-stack robotics system, they will deliver intuitive and impactful assistance with precision, enhancing daily life and industry. And I’m sure they’ll learn how to make your coffee perfectly.
This is truly an exciting time in how technology is evolving around us and for us. Our innovations already power billions of devices, enabling the extraordinary experiences that consumers and businesses depend on every day. And we can’t wait to bring you more.
The Forest Listener: Where edge AI meets the wild
Courtesy: Micron
Let’s first discuss the power of enabling. Enabling a wide electronic ecosystem is essential for fostering innovation, scalability and resilience across industries. By supporting diverse hardware, software and connectivity standards, organisations can accelerate product development, reduce costs and enhance user experiences. A broad ecosystem encourages collaboration among manufacturers, developers and service providers, helping to drive interoperability. Enabling an ecosystem for your customers adds huge value to your product in any market, but for a market that spans many applications, it’s paramount to helping customers get to market quickly.

For broad application areas like microprocessors, Micron has a diverse set of ecosystem partners, including STMicroelectronics (STM). We have collaborated with STM for years, matching our memory solutions to their products. Ultimately, these partnerships empower our mutual businesses to deliver smarter, more connected solutions that meet the evolving needs of consumers and enterprises alike.
The platform and the kit
There’s something uniquely satisfying about peeling back the anti-static bag and revealing the STM32MP257F-DK dev board brimming with potential. As an embedded developer, I am excited when new silicon lands on my desk, especially when it promises to redefine what’s possible at the edge. The STM32MP257F-DK from STMicroelectronics is one of those launches that truly innovates. The STM32MP257F-DK Discovery Kit is a compact, developer-friendly platform designed to bring edge AI to life. And in my case, to the forest. It became the heart of one of my most exciting projects yet: the Forest Listener, a solar-powered, AI-enabled bird-watching companion that blends embedded engineering with natural exploration.
A new kind of birdwatcher
After a few weeks of development and testing, my daughter and I headed into the woods just after sunrise — as usual, binoculars around our necks, a thermos of tea in the backpack and a quiet excitement in the air. But this time, we brought along a new companion. The Forest Listener is a smart birdwatcher, an AI-powered system that sees and hears the forest just like we do. Using a lightweight model trained with STM32’s model zoo, it identifies bird species on the spot. No cloud, no latency, just real-time inference at the edge.

My daughter mounts the device on a tripod, connects the camera and powers it on. The screen lights up. It’s ready! Suddenly, a bird flutters into view. The camera captures the moment. Within milliseconds, the 1.35 TOPS neural processing unit (NPU) kicks in, optimised for object detection. The Cortex-A35 logs the sighting (image, species, timestamp), while the Cortex-M33 manages sensors and power. My daughter, watching on a connected tablet, lights up: “Look, Dad! It found another one!” A Eurasian jay, this time.
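The article doesn’t include the project’s code, but the detect-and-log loop on the Cortex-A35 side might look like the minimal sketch below. It assumes a TFLite-format model exported from the STM32 model zoo (simplified here to a classifier-style score output rather than full detection boxes), OpenCV for capture, and placeholder model and label names; NPU offload on the STM32MP2 goes through ST’s X-LINUX-AI stack, which this sketch glosses over.

```python
import csv
import time

import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

LABELS = ["eurasian_jay", "great_tit", "european_robin"]   # placeholder labels
interp = Interpreter(model_path="bird_classifier.tflite")  # placeholder model
interp.allocate_tensors()
inp = interp.get_input_details()[0]
out = interp.get_output_details()[0]

cam = cv2.VideoCapture(0)  # camera attached to the dev kit
with open("sightings.csv", "a", newline="") as log:
    writer = csv.writer(log)
    while True:
        ok, frame = cam.read()
        if not ok:
            continue
        # Resize to the model's input shape (assumed quantised uint8 input)
        h, w = int(inp["shape"][1]), int(inp["shape"][2])
        x = cv2.resize(frame, (w, h))[np.newaxis].astype(np.uint8)
        interp.set_tensor(inp["index"], x)
        interp.invoke()                        # run inference
        scores = interp.get_tensor(out["index"])[0]
        best = int(np.argmax(scores))
        if scores[best] > 0.6:                 # confidence gate
            ts = time.strftime("%Y-%m-%dT%H:%M:%S")
            cv2.imwrite(f"{ts}_{LABELS[best]}.jpg", frame)             # image
            writer.writerow([ts, LABELS[best], float(scores[best])])  # species, time
        time.sleep(0.2)                        # modest duty cycle to save power
```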
Built for the edge … and the outdoors
Later, at home, we scroll through the logs saved on the memory card. The system can also upload sightings via Ethernet. She’s now learning names, songs and patterns. It’s a beautiful bridge between nature and curiosity.

At the core of this seamless experience is Micron LPDDR4 memory. It delivers the high bandwidth needed for AI inference and multimedia processing while maintaining ultra-low power consumption, critical for our solar-powered setup. Performance is only part of the story: what truly sets Micron LPDDR4 apart is its long-term reliability and support. Validated by STM for use with the STM32MP257F-DK, this memory is manufactured at Micron’s dedicated longevity fab, ensuring a more stable, multiyear supply chain. That’s a game-changer for developers building solutions that need to last — not just in home appliances, but in harsh field environments. Whether you’re deploying an AI app in remote forests, industrial plants or smart homes, you need components that are not only fast and efficient but also built to endure. Micron LPDDR4 is engineered to meet the stringent requirements of embedded and industrial markets, with a commitment to support and availability that gives manufacturers peace of mind.
Beyond bird-watching
The Forest Listener is just one example of what the STM32MP257F-DK and Micron LPDDR4 can enable. In factories, the same edge-AI capabilities can monitor machines, detect anomalies, and reduce downtime. In smart homes, they can power face recognition, voice control and energy monitoring — making homes more intelligent, responsive and private, all without relying on the cloud.
Solution Suite Concept: Software-Based Refrigerator
Courtesy: Renesas
Imagine a world without cold storage—medicines would spoil, food would perish, and supply chains would collapse. Refrigeration systems are vital to modern life, from pharmaceutical coolers to large-scale warehouses. These systems form the backbone of the cold chain, ensuring products meet strict storage requirements.
To make these operations durable and sustainable, embracing AI for performance optimisation is essential. Renesas has been continuously innovating in this space, delivering a range of AI-driven solutions tailored for the refrigeration industry—helping businesses enhance efficiency, reliability, and energy savings.
Renesas Enablement for AI-powered Refrigeration Solutions
Renesas’ solution concept introduces a suite of AI-driven applications designed to tackle everyday challenges in refrigeration systems. These proposals focus on enhancing operational efficiency and reliability in key areas such as predictive maintenance, energy optimisation, and asset management, among others.
Refrigerant Gas Leak Detection
Refrigerant gas is crucial for maintaining stable cold temperatures in cooling systems. Over time, however, leaks can develop due to wear and tear in pipes or refrigerant circuits. To address this challenge, Renesas AI leverages advanced algorithms to detect anomalies by analysing changes in electrical current and compressor vibrations. This proactive approach enables timely maintenance, prevents costly breakdowns, and ensures optimal cooling efficiency.
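Renesas hasn’t published the underlying algorithms, but the core idea — learning a “healthy” signature from current and vibration features and flagging drift — can be sketched with an off-the-shelf one-class model. The feature choices and numbers below are illustrative.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Feature vectors sampled during known-healthy operation:
# [compressor RMS current (A), dominant vibration amplitude (g)]
healthy = np.column_stack([
    rng.normal(4.2, 0.1, 500),    # stable current draw
    rng.normal(0.30, 0.02, 500),  # stable vibration signature
])

model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# A slow refrigerant leak shows up as rising current and vibration drift
samples = np.array([[4.25, 0.31],    # normal operating point
                    [4.90, 0.42]])   # drifted: flag for maintenance
print(model.predict(samples))  # 1 = normal, -1 = anomaly
```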
Compressor Overheat Detection
Compressor overheating can result from issues such as low refrigerant levels or clogged filters, leading to potential system damage. Renesas AI HVAC solutions go beyond leak detection by continuously monitoring filter conditions and measuring intake airflow in real time. By analysing this data, the system predicts risks before they escalate, ensuring the cooling system operates efficiently and reliably.
Cooling Capacity Monitoring
Renesas HVAC solutions integrate advanced AI features to predict fan imbalances that directly impact cooling efficiency. AI models trained on electrical-current data detect these anomalies, enabling seamless, sensorless monitoring. This proactive approach enhances system reliability and ensures optimal cooling performance.
Asset Monitoring: Availability and Expiry Tracking
AI-powered object detection and classification enable real-time monitoring of stock levels and refrigerated assets. The system can automatically suggest restocking needs and estimate lead times for each item. By reducing the need for manual inventory checks and minimising how often cooling units are opened, it cuts energy consumption and improves operational efficiency.
Person Identification
AI-powered person identification enhances security and user experience in refrigeration systems. Authorised access can be ensured through two approaches:
- Vision-based authentication using an object detection model combined with on-device learning for custom face recognition.
- Voice-based authentication using a speaker identification model with a microphone sensor.
These methods can be combined for two-step authentication, which is especially critical in medical or industrial refrigeration environments. For home appliances, person identification enables context-aware interactions, such as personalised dietary recommendations or recipe suggestions, creating a smarter and more intuitive user experience.
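As a minimal illustration of the two-step idea, the decision logic reduces to requiring both independent factors to pass their own thresholds. The scores and thresholds below are placeholders, not Renesas parameters.

```python
def two_step_access(face_score: float, voice_score: float,
                    face_thresh: float = 0.85, voice_thresh: float = 0.80) -> bool:
    """Grant access only if BOTH independent factors pass their thresholds.

    face_score:  similarity from the on-device face-recognition model (0..1)
    voice_score: similarity from the speaker-identification model (0..1)
    Thresholds are placeholders, to be tuned against false-accept targets.
    """
    return face_score >= face_thresh and voice_score >= voice_thresh

print(two_step_access(0.91, 0.83))  # True: both factors agree
print(two_step_access(0.91, 0.55))  # False: voice factor fails
```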
By integrating real-time monitoring, predictive maintenance, and intelligent object detection, these systems deliver optimal performance, energy efficiency, and operational reliability. From detecting refrigerant leaks and airflow issues to predicting fan imbalances and automating inventory management, the Renesas solution concept offers a comprehensive, end-to-end approach to cooling system optimisation.
This seamless and sensorless monitoring capability not only extends system longevity but also drives significant energy savings. As demand for smarter, more efficient refrigeration systems grows, Renesas is leading the way, pioneering the next generation of intelligent cooling solutions.
The V2X Revolution: Satellite Integration, Quantum Security, and 5G RedCap Reshape Automotive Connectivity
Imagine your car negotiating traffic with satellites, quantum-encrypted handshakes, and roadside infrastructure—all while you sip your morning coffee. This isn’t science fiction; it’s the V2X revolution unfolding in production vehicles by 2027. As the Vehicle-to-Everything market hurtles from $6.53 billion in 2024 toward a staggering $70.94 billion by 2032, three converging technologies are rewriting the rules of automotive connectivity: satellite-integrated networks that promise coverage where cell towers fear to tread, post-quantum cryptography racing against the quantum computing threat, and cost-optimised 5G RedCap systems making autonomous driving infrastructure economically viable. The question isn’t whether your next vehicle will be connected—it’s whether the ecosystem will be ready when it rolls off the line.
The Convergence Catalyst: When Satellites, Quantum, and RedCap Collide
The automotive industry has weathered countless transformations, but the V2X revolution of 2025-2026 represents something unprecedented: three disparate technologies (satellite connectivity, post-quantum cryptography, and 5G RedCap) converging into a single automotive imperative as the market accelerates toward $70.94 billion by 2032.
The strategic calculus facing OEMs isn’t simply about adopting V2X; it’s about choosing which technological bet defines their competitive position. Some prioritise satellite-integrated Non-Terrestrial Networks, banking on 5GAA’s 2025 demonstrations proving vehicles can maintain emergency connectivity where terrestrial infrastructure fails. Their roadmaps target 2027 commercial deployments, envisioning truly ubiquitous vehicle connectivity from urban centres to remote highways.
“Connectivity is becoming more and more important for vehicles. No connection is not an option. Satellite came to our attention 3 to 5 years ago, and then it was costly and proprietary with large terminals,” said Olaf Eckart, Senior Expert, Cooperations R&D / Engineering Lead, NTN, BMW.
Others race against the quantum threat timeline. With NIST finalising quantum-resistant standards including CRYSTALS-Kyber and CRYSTALS-Dilithium, these companies face an uncomfortable truth: today’s vehicle encryption could be obsolete within a decade. Their 18-24 month roadmaps aren’t about adding features; they’re about future-proofing against a cryptographic paradigm shift most consumers don’t yet understand.
The pragmatist camp focuses on 3GPP Release 17’s RedCap specifications entering mass production. These organisations see cost-effective 5G variants as critical enablers for vehicle-road-cloud integration architectures, making L2+ autonomous driving economically viable at scale.
What’s remarkable isn’t the diversity of approaches; it’s that all three are simultaneously correct. The V2X ecosystem emerging in 2026-2027 won’t be defined by a single winner but by seamless integration of all three domains. Vehicles rolling off 2027 production lines will need satellite backup for coverage gaps, quantum-resistant security for longevity, and RedCap efficiency for cost-effectiveness.
The question keeping executives awake isn’t which technology to choose; it’s whether their organisations can master all three fast enough to remain relevant.
Engineering Reality Check: Breaking Through the Technical Bottlenecks
Every breakthrough technology comes with footnotes written in engineering challenges, and V2X is no exception. The gap between demonstrations and production-ready systems is measured in thousands of testing hours and occasional failures that never reach press releases.
Consider satellite-integrated V2X’s deceptively simple promise: connectivity everywhere. Reality involves achieving seamless terrestrial-to-Non-Terrestrial Network handovers while maintaining sub-100ms latency that safety-critical applications demand. When vehicles at highway speeds switch from cellular towers to LEO satellite constellations, handovers must be invisible and instantaneous. Engineers are discovering that 3GPP Release 17/18 standards provide frameworks, but real-world implementation requires solving synchronisation challenges that textbooks barely address.
Post-quantum cryptography presents an even thornier dilemma. CRYSTALS-Kyber and CRYSTALS-Dilithium aren’t just longer keys—they’re fundamentally different mathematical operations consuming significantly more processing power than today’s RSA or ECC algorithms. Automotive-grade ECUs, designed with tight power budgets and cost constraints, weren’t built for quantum-resistant workloads. Development teams wrestle with a trilemma: maintain security standards, meet latency requirements, or stay within thermal envelopes. Pick two.
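For a concrete feel of the primitives in question, below is a minimal ML-KEM (CRYSTALS-Kyber) key-encapsulation round trip using the open-source liboqs Python bindings. This assumes the liboqs-python package is installed and the "Kyber768" algorithm name is enabled in the local build; it is a desktop-class sketch, not automotive-qualified code.

```python
import oqs  # liboqs-python bindings

# Receiver (e.g. vehicle side): generate a Kyber key pair
with oqs.KeyEncapsulation("Kyber768") as receiver:
    public_key = receiver.generate_keypair()

    # Sender side: encapsulate a fresh shared secret against the public key
    with oqs.KeyEncapsulation("Kyber768") as sender:
        ciphertext, secret_tx = sender.encap_secret(public_key)

    # Receiver recovers the same shared secret from the ciphertext
    secret_rx = receiver.decap_secret(ciphertext)
    assert secret_rx == secret_tx  # both sides now share a session key
```

On a workstation this runs in microseconds; the engineering challenge described above is doing the same within an ECU’s power, memory, and thermal budget.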
The integration paradox compounds complexity. Can existing vehicles receive V2X capabilities through OTA updates and modular hardware? Sometimes, a 2024 model with appropriate sensor suites might support RedCap upgrades via software. But satellite antenna arrays and quantum-capable security modules often require architectural changes that can’t be retrofitted; they need initial platform integration.
The coexistence problem adds another layer. Many vehicles must support multiple V2X standards simultaneously: legacy DSRC, C-V2X, and emerging satellite connectivity. Ensuring these systems don’t interfere while sharing antenna space and processing resources demands the kind of creative problem-solving that happens in testing facilities at 3 AM.
What separates vapourware from production-ready solutions isn’t the absence of challenges; it’s how engineering teams respond when elegant theory collides with messy reality.
NXP’s Post Quantum Cryptography chips use on-chip isolation so that “in the event of an identified attack, the technology doesn’t let the attack spread to other chips and controllers in the vehicle,” said Marius Rotaru (Software Architect) and Joppe Bos (Senior Principal Cryptographer) of the NXP Semiconductors engineering team.
Beyond the Vehicle: Infrastructure, Ecosystems, and the Path to Scale
The most sophisticated V2X technology becomes an expensive paperweight without supporting ecosystems. This truth is reshaping automotive development models, forcing OEMs beyond the vehicle into infrastructure challenges they’ve historically ignored.
The infrastructure gap is staggering. Satellite-integrated V2X requires ground stations for orbit tracking and handover coordination. Post-quantum security needs certificate authorities upgraded with quantum-resistant algorithms across entire PKI hierarchies. RedCap-enabled vehicle-road-cloud architectures demand roadside units at sufficient density, plus edge computing infrastructure processing terabytes of sensor data with minimal latency.
No single company can build this alone, spawning partnership models that stretch from traditional supplier relationships into complex consortia. Automotive OEMs partner with telecom operators on spectrum allocation, with governments on roadside infrastructure and regulatory frameworks, and with satellite operators, cloud providers, and cybersecurity firms, often simultaneously, sometimes competitively.
Regulatory landscapes add complexity. V2X touches spectrum allocation, data privacy, cybersecurity standards, and safety certification, each governed by different agencies with different timelines. Europe swung toward C-V2X after years of DSRC mandates. China receives state-backed vehicle-road-cloud infrastructure investment. In the United States, approaches vary by state, creating a fragmented deployment landscape that complicates nationwide rollouts.
When does this reach mainstream production? RedCap systems enter vehicles now, in 2025-2026, leveraging existing cellular infrastructure. Satellite integration likely reaches commercial deployment by 2027-2028 for premium vehicles and emergency services. Post-quantum security faces longer timelines; threats aren’t imminent enough to justify computational overhead across fleets.
Three factors will accelerate or delay timelines: infrastructure deployment speed, regulatory harmonisation, and killer applications making V2X tangible to consumers. V2X reaches mainstream adoption when it solves problems people actually have.
Looking toward 2027-2030, the competitive landscape splits between integrated mobility providers mastering the full vehicle-infrastructure-cloud stack and specialised component suppliers. Winners will be organisations that built ecosystems delivering end-to-end experiences. In the V2X era, the vehicle is just the beginning.
By Shreya Bansal, Sub-Editor
20 Years of EEPROM: Why It Matters, Why It’s Needed, and Its Future
ST has been the leading manufacturer of EEPROM for the 20th consecutive year. As we celebrate this milestone, we wanted to reflect on why the electrically erasable programmable read-only memory market remains strong, the problems it solves, why it still plays a critical role in many designs, and where we go from here. Indeed, despite the rise in popularity of Flash, SRAM, and other new memory types, EEPROM continues to meet the needs of engineers seeking a compact, reliable memory. In fact, over the last 20 years, we have seen ST customers try to migrate away from EEPROM only to return to it with even greater fervour.
Why do companies choose EEPROM today?
Granularity
Understanding EEPROM
One of the main advantages of electrically erasable programmable read-only memory is its byte-level granularity. Whereas writing to other memory types, like flash, means erasing an entire sector, which can range from many bytes to hundreds of kilobytes, depending on the model, an EEPROM is writable byte by byte. This is tremendously beneficial when writing logs, sensor data, settings, and more, as it saves time, energy, and reduces complexity, since the writing operation requires fewer steps and no buffer. For instance, using an EEPROM can save significant resources and speed up manufacturing when updating a calibration table on the assembly line.
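A toy model (not real driver code) makes the time and wear difference concrete: changing one byte in flash drags the whole sector through a read-modify-erase-rewrite cycle, while an EEPROM programs just that byte. The sector size and addresses below are illustrative.

```python
# Toy model contrasting byte-addressable EEPROM writes with flash,
# where changing one byte forces a read-modify-erase-rewrite of a sector.

class Eeprom:
    def __init__(self, size):
        self.mem = bytearray([0xFF] * size)
        self.bytes_programmed = 0

    def write(self, addr, value):
        self.mem[addr] = value           # one byte, in place
        self.bytes_programmed += 1

class Flash:
    SECTOR = 4096                        # typical small NOR sector

    def __init__(self, size):
        self.mem = bytearray([0xFF] * size)
        self.bytes_programmed = 0

    def write(self, addr, value):
        base = (addr // self.SECTOR) * self.SECTOR
        sector = bytearray(self.mem[base:base + self.SECTOR])  # read
        sector[addr - base] = value                            # modify
        self.mem[base:base + self.SECTOR] = sector             # erase + rewrite
        self.bytes_programmed += self.SECTOR

eeprom, flash = Eeprom(8192), Flash(8192)
for a in (10, 700, 4100):                # e.g. updating calibration entries
    eeprom.write(a, 0x42)
    flash.write(a, 0x42)

print("EEPROM bytes programmed:", eeprom.bytes_programmed)  # 3
print("Flash bytes programmed: ", flash.bytes_programmed)   # 12288
```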
Performance
The very nature of EEPROM also gives it a significant endurance advantage. Whereas flash can only support read/write cycles in the hundreds of thousands, an EEPROM supports millions, and its data retention is in the hundreds of years, which is crucial when dealing with systems with a long lifespan. Similarly, its low peak current of a few milliamps and its fast boot time of 30 µs mean it can meet the most stringent low-power requirements. Additionally, it enables engineers to store and retrieve data outside the main storage. Hence, if teams are experiencing an issue with the microcontroller, they can extract information from the EEPROM, which provides additional layers of safety.
Convenience
These unique abilities explain why automotive, industrial, and many other applications just can’t give up EEPROM. For many, giving it up could break software implementations or require a significant redesign. Indeed, one of the main advantages of EEPROMs is that they fit into a small 8-pin package regardless of memory density (from 1 Kbit to 32 Mbit). Additionally, they tolerate high operating temperatures of up to 145 °C for serial EEPROM, making them easy to use in a wide range of environments. The middleware governing their operations is also significantly more straightforward to write and maintain, given their simpler operation.
Resilience
Since ST controls the entire manufacturing process, we can provide greater guarantees to customers facing supply chain uncertainties. Concretely, ST offers EEPROM customers a guarantee of supply availability through our longevity commitment program (10 years for industrial-grade products, 15 years for automotive-grade). This explains why, 40 years after EEPROM development began in 1985 and after two decades of leadership, some sectors continue to rely heavily on our EEPROMs. And why new customers seeking a stable long-term data storage solution are adopting it, bolstered by ST’s continuous innovations, enabling new use cases.
Why will the industry need EEPROM tomorrow?
More storage
EEPROM vs. Page EEPROM
Since its inception in the late 70s, EEPROM’s storage has always been relatively minimal. In many instances, it is a positive feature for engineers who want to reserve their EEPROM for small, specific operations and segregate it from the rest of their storage pool. However, as serial EEPROM reached 4 Mbit and 110 nm, the industry wondered whether the memory could continue to grow in capacity while shrinking process nodes. A paper published in 2004 initially concluded that traditional EEPROMs “scale poorly with technology”. Yet, ST recently released a Page EEPROM capable of storing 32 Mbit that fits inside a tiny 8-pin package.
The Page EEPROM adopts a hybrid architecture, meaning it uses 16-byte words and 512-byte pages while retaining the ability to write at the byte level. This offers customers the flexibility and robustness of traditional EEPROM but bypasses some of the physical limitations of serial EEPROM, thus increasing storage and continuing to serve designs that rely on this type of memory while still improving endurance. Indeed, a Page EEPROM supports a cumulative one billion cycles across its entire memory capacity. For many, Page EEPROMs represent a technological breakthrough by significantly expanding data storage without changing the 8-pin package size. That’s why we’ve seen them in asset tracking applications and other IoT applications that run on batteries.
New features
ST also recently released a Unique ID serial EEPROM, which uses the inherent capabilities of electrically erasable programmable read-only memory to store a unique ID or serial number to trace a product throughout its assembly and life cycle. Usually, this would require additional components to ensure that the serial number cannot be changed or erased. However, thanks to its byte-level granularity and read-only approach, the new Unique ID EEPROM can store this serial number while preventing any changes, thus offering the benefits of a secure element while significantly reducing the bill of materials. Put simply, the future of EEPROM takes the shape of growing storage and new features.
Modern Cars Will Contain 600 Million Lines of Code by 2027
Courtesy: Synopsys
The 1977 Oldsmobile Toronado was ahead of its time.
Featuring an electronic spark timing system that improved fuel economy, it was the first vehicle equipped with a microprocessor and embedded software. But it certainly wasn’t the last.
While the Toronado had a single electronic control unit (ECU) and thousands of lines of code (LOC), modern vehicles have many ECUs and 300 million LOC. What’s more, the pace and scale of innovation continue to accelerate at an exponential, almost unfathomable rate.
It took half a century for cars to reach 300 million LOC.
We predict the amount of software in vehicles will double in the next 12 months alone, reaching 600 million LOC or more by 2027.
The fusion of automotive hardware and software
Automotive design has historically been focused on structural and mechanical platforms — the chassis and engine. Discrete electronic and software components were introduced over time, first to replace existing functions (like manual window cranks) and later to add new features (like GPS navigation).
For decades, these electronics and the software that define them were designed and developed separately from the core vehicle architecture — standalone components added in the latter stages of manufacturing and assembly.
This approach is no longer viable.
Not with the increasing complexity and interdependence of automotive electronics. Not with the criticality of those electronics for vehicle operation, as well as driver and passenger safety. And not with a growing set of software-defined platforms — from advanced driver assistance (ADAS) and surround view camera systems to self-driving capabilities and even onboard agentic AI — poised to double the amount of LOC in vehicles over the next year.
From the chassis down to the code, tomorrow’s vehicles must be designed, developed, and tested as a single, tightly integrated, highly sophisticated system.
Shifting automotive business models
The rapid expansion of vehicular software isn’t just a technology trend — it’s rewriting the economics of the automotive industry. For more than a century, automakers competed on horsepower, handling, and mechanical innovation. Now, the battleground is shifting to software features, connectivity, and continuous improvement.
Instead of selling a static product, OEMs are adopting new, more dynamic business models where vehicles evolve long after they leave the showroom. Over-the-air updates can deliver new capabilities, performance enhancements, and safety improvements without a single trip to the dealer. And features that used to be locked behind trim levels can be offered as on-demand upgrades or subscription services.
This transition is already underway.
Some automakers are experimenting with monthly fees for heated seats or performance boosts. Others are building proprietary operating systems to replace third-party platforms, giving them control over the user experience — as well as the revenue stream. By the end of the decade, software subscriptions will become as common as extended warranties, generating billions in recurring revenue and fundamentally changing how consumers think about car ownership.
The engineering challenge behind OEM transformation
Delivering on the promise of software-defined vehicles (SDVs) that continuously evolve isn’t as simple as adding more code. It requires a fundamental rethinking of how cars are designed, engineered, and validated.
Hundreds of millions of lines of code must push data seamlessly across a variety of electronic components and systems. And those systems — responsible for sensing, safety, communication, and other functions — must work in concert with millisecond precision.
For years, vehicle architectures relied on dozens of discrete ECUs, each dedicated to a specific function. But as software complexity grows, automakers are shifting toward fewer, more powerful centralised compute platforms that can handle much larger workloads. This means more code is running on less hardware, with more functionality consolidated onto a handful of high-performance processors. As a result, the development challenge is shifting from traditional ECU integration — where each supplier delivered a boxed solution — to true software integration across a unified compute platform.
As such, hardware and software development practices can no longer be separate tracks that converge late in the process. They must be designed together and tailored for one another — well before any physical platform exists.
This is where electronics digital twins (eDTs) are becoming indispensable. By creating functionally accurate virtual models of vehicle electronics and systems, design teams can shift from late-stage integration to a model where software and hardware are co-developed from day one.
eDTs and virtual prototypes do more than enable earlier software development at the component level — they allow engineers to simulate, validate, and optimise the entire vehicle electronics. This means teams can see how data flows across subsystems, how critical components interact under real-world scenarios, and how emerging features might impact overall safety and performance. With eDTs, automakers can test billions — even trillions — of operating conditions and edge cases, many of which would be too costly, too time-consuming, or otherwise infeasible with physical prototypes.
By embracing eDTs, the industry is not just keeping pace with escalating complexity — it is re-engineering longstanding engineering processes, accelerating innovation, and improving the quality and safety of tomorrow’s vehicles.
The road ahead
Our prediction of cars containing 600 million lines of code by 2027 isn’t just a number. It signals a turning point for an industry that has operated in largely the same manner for more than a century.
Many automakers are reimagining their identity.
No longer just manufacturers, they’re becoming technology companies with business models that resemble those of cloud providers and app developers. They’re adopting agile, iterative practices, where updates roll out continuously rather than in multi-year product refreshes. And they’re learning how to design, develop, test, and evolve their products as a unified system — from chassis and engine to silicon and software — rather than a collection of pieces that are assembled on a production line.
Unlike the 1977 Oldsmobile Toronado, the car you buy in 2027 won’t be the same car you drive in 2030 — and that’s by design.
Advances in waveguides, not GPUs, will drive XR displays forward
Across emerging technology domains, a familiar narrative keeps repeating itself. In Extended Reality (XR), progress is often framed as a race toward ever more powerful GPUs. In wireless research, especially around 6G, attention gravitates toward faster transistors and higher carrier frequencies in the terahertz (THz) regime. In both cases, this framing is misleading. The real constraint is no longer raw compute or device-level performance. It is system integration. This is not a subtle distinction. It is the difference between impressive laboratory demonstrations and deployable, scalable products.
XR Has Outgrown the GPU Bottleneck
In XR, GPU capability has reached a point of diminishing returns as the primary limiter. Modern graphics pipelines, combined with foveated rendering, gaze prediction, reprojection, and cloud or edge offloading, can already deliver high-quality visual content within reasonable power envelopes. Compute efficiency continues to improve generation after generation. Yet XR has failed to transition from bulky headsets to lightweight, all-day wearable glasses. The reason lies elsewhere: optics, specifically waveguide-based near-eye displays.
Waveguides must inject, guide, and extract light with high efficiency while remaining thin, transparent, and manufacturable. They must preserve colour uniformity across wide fields of view, provide a sufficiently large eye-box, suppress stray light and ghosting, and operate at power levels compatible with eyewear-sized batteries. Today, no waveguide architecture, whether geometric (reflective), diffractive, holographic, or hybrid, solves all these constraints simultaneously. This reality leads to a clear conclusion: XR adoption will be determined by breakthroughs in waveguides, not GPUs. Rendering silicon is no longer the pacing factor; optical system maturity is.
The Same Structural Problem Appears in THz and 6G
A strikingly similar pattern is emerging in terahertz communication research for 6G. On paper, THz promises extreme bandwidths, ultra-high data rates, and the ability to merge communication and sensing on a single platform. Laboratory demonstrations routinely showcase impressive performance metrics. But translating these demonstrations into real-world systems has proven far harder than anticipated. The question is no longer whether transistors can operate at THz frequencies (they can) but whether entire systems can function reliably, efficiently, and repeatably at those frequencies.
According to Vijay Muktamath, Founder of Sensesemi Technologies, the fundamental bottleneck holding THz radios back from commercialisation is system integration. Thermal management becomes fragile, clock and local oscillator integration grows complex, interconnect losses escalate, and packaging parasitics dominate performance. Each individual block may work well in isolation, but assembling them into a stable system is disproportionately difficult. This mirrors the XR waveguide challenge almost exactly.
When Integration Becomes Harder Than Innovation
At THz frequencies, integration challenges overwhelm traditional design assumptions. Power amplifiers generate heat that cannot be dissipated easily at such small scales. Clock distribution becomes sensitive to layout and material choices. Even millimetre-scale interconnects behave as lossy electromagnetic structures rather than simple wires.
As a result, the question of what truly limits THz systems shifts away from transistor speed or raw output power. Instead, the constraint becomes whether designers can co-optimise devices, interconnects, packaging, antennas, and thermal paths as a single electromagnetic system. In many cases, packaging and interconnect losses now degrade performance more severely than the active devices themselves. This marks a broader transition in engineering philosophy. Both XR optics and THz radios have crossed into a regime where system-level failures dominate component-level excellence.
Materials Are Necessary, But Not Sufficient
This raises a critical issue for 6G hardware strategy: whether III–V semiconductor technologies such as InP and GaAs will remain mandatory for THz front ends. Today, their superior electron mobility and high-frequency performance make them indispensable for cutting-edge demonstrations.
However, relying exclusively on III–V technologies introduces challenges in cost, yield, and large-scale integration. CMOS and SiGe platforms, while inferior in peak device performance, offer advantages in integration density, manufacturability, and system-level scaling. Through architectural innovation, distributed amplification, and advanced packaging, these platforms are steadily pushing into higher frequency regimes. The most realistic future is not a single winner, but a heterogeneous architecture. III–V devices will remain essential where absolute performance is non-negotiable, while CMOS and SiGe handle integration-heavy functions such as beamforming, control, and signal processing. This mirrors how XR systems offload rendering, sensing, and perception tasks across specialised hardware blocks rather than relying on a single dominant processor.
Why THz Favours Point-to-Point, Not Cellular Coverage
Another misconception often attached to THz communication is its suitability for wide-area cellular access. While technically intriguing, this vision underestimates the physics involved. THz frequencies suffer from severe path loss, atmospheric absorption, and extreme sensitivity to blockage. Beam alignment overhead becomes significant, especially in mobile scenarios. As Mr Muktamath puts it, “THz is fundamentally happier in controlled environments. Point-to-point links, fixed geometries, short distances, that’s where it shines.”
THz excels in short-range, point-to-point links where geometry is controlled and alignment can be maintained. Fixed wireless backhaul, intra-data-centre communication, chip-to-chip links, and high-resolution sensing are far more realistic early applications. These use cases resemble the constrained environments where XR has found initial traction in enterprise, defence, and industrial deployments rather than mass consumer adoption.
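A back-of-envelope free-space path-loss calculation shows why. Even before atmospheric absorption and blockage, which hit THz hardest, the Friis spreading loss alone grows 20 dB per decade of frequency; the sketch below compares mid-band, mmWave, and THz carriers at fixed distances.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss: FSPL(dB) = 20*log10(4*pi*d*f / c).

    Ignores atmospheric absorption, which adds further tens of dB/km
    in several THz windows.
    """
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

for f, label in [(3.5e9, "3.5 GHz mid-band"),
                 (28e9, "28 GHz mmWave"),
                 (300e9, "300 GHz THz")]:
    print(f"{label:18s} 100 m: {fspl_db(100, f):6.1f} dB   "
          f"1 km: {fspl_db(1000, f):6.1f} dB")
```

At 300 GHz the spreading loss alone is roughly 39 dB worse than at 3.5 GHz, which is why controlled, short, aligned links are the natural fit.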
Packaging: The Silent Dominator
Perhaps the clearest parallel between XR waveguides and THz radios lies in packaging. In XR, the waveguide itself is the package: it dictates efficiency, form factor, and user comfort. In THz systems, packaging and interconnects increasingly dictate whether the system works at all. Losses introduced by packaging can erase transistor-level gains. Thermal resistance can limit continuous operation. Antenna integration becomes inseparable from the RF front-end. This has forced a shift from chip-centric design to electromagnetic system design, where silicon, package, antenna, and enclosure are co-designed from the outset.
Communication and Sensing: Convergence with Constraints
THz also revives the idea of joint communication and sensing on shared hardware. In theory, high frequencies offer exceptional spatial resolution, making simultaneous data transmission and environmental sensing attractive. In practice, coexistence introduces non-trivial trade-offs.
Waveform design, dynamic range, calibration, and interference management all become more complex when reliability and throughput must be preserved. The most viable path is not full hardware unification, but carefully partitioned coexistence, with shared elements where feasible and isolation where necessary. This echoes XR architectures, where sensing and rendering share infrastructure but remain logically separated to maintain performance.
A Single Lesson Across Two Domains
XR waveguides and THz radios operate in different markets, but they are constrained by the same fundamental truth: the era of component-led innovation is giving way to system-led engineering. Faster GPUs do not solve optical inefficiencies. Faster transistors do not solve packaging losses, thermal bottlenecks, or integration fragility.
As Mr. Muktamath aptly concludes, “The future belongs to teams that can make complex systems behave simply, not to those who build the most impressive individual blocks.” The next generation of technology leadership will belong to organisations that master cross-domain co-design across devices, packaging, optics, and software; that treat manufacturability and yield as first-order design constraints and thermal and power integrity as architectural drivers; and that value integration discipline over isolated optimisation. In both XR and THz, success will not come from building the fastest block, but from making the entire system work reliably, repeatedly, and at scale. That is the real frontier now.
AI-Enabled Failure Prediction in Power Electronics: EV Chargers, Inverters, and SMPS
Reliability is now a defining parameter for modern power electronic systems. As the world pushes harder toward electric mobility, renewable energy adoption, and high-efficiency digital infrastructure, key converters like EV chargers, solar inverters, and SMPS are operating in extremely demanding environments. High switching frequencies, aggressive power densities, wide-bandgap materials (such as SiC and GaN), and stringent uptime expectations have squeezed reliability margins to almost nothing. Traditional threshold-based alarms and basic periodic maintenance are no longer enough to guarantee stable operation.
This is exactly where AI-enabled failure prediction emerges as a breakthrough. By integrating real-time sensing, historical stress patterns, physics-based models, and deep learning, AI can detect early degradation, accurately estimate remaining useful life (RUL), and prevent catastrophic breakdowns long before they occur.
Jean‑Marc Chéry, CEO of STMicroelectronics, has emphasised that the practical value of AI in power electronics emerges at fleet and lifecycle scale rather than at individual-unit prediction level, particularly for SiC- and GaN-based SMPS.
Aggregated field data across large deployments is used to refine derating guidelines, validate device-level reliability models, and harden next-generation power technologies, instead of attempting deterministic failure prediction on a per-unit basis.
Limitations of Traditional Monitoring in Power Electronics
Conventional condition monitoring methods, such as simple temperature alarms, current protection limits, or basic event logs, operate reactively. They catch failures only after components have already drifted past the acceptable redline. Yet converter failures begin much earlier, with subtle, long-term changes such as:
- Gradual ESR (Equivalent Series Resistance) increase in electrolytic capacitors
- Bond wire fatigue and solder joint cracking inside IGBT/MOSFET modules
- Gate oxide degradation in newer SiC devices
- Magnetic core saturation and insulation ageing
- Switching waveform distortions caused by gate driver drift
AI Techniques Powering Predictive Failure Intelligence
AI-based diagnostics in power electronics rest on three complementary pillars:
- Deep Learning for Real-Time Telemetry
Converters produce rich telemetry: temperatures, currents, switching waveforms, harmonics, soft-switching behaviour, and acoustic profiles. Deep learning models find patterns in this data that no human could spot manually.
- CNNs (Convolutional Neural Networks): These analyse switching waveforms, spot irregularities in turn-on/turn-off cycles, identify diode recovery anomalies, and classify abnormal transient events instantly.
- LSTMs (Long Short-Term Memory Networks): These track long-term drift in junction temperature, capacitor ESR, cooling efficiency, and load-cycle behaviour over months.
- Autoencoders: These learn the “healthy signature” of a converter and identify deviations that signal emerging faults.
- Physics-Informed ML
Pure machine learning struggles with operating points it has not seen; physics-informed machine learning offers better generalisation. It integrates:
- Power cycle fatigue equations
- MOSFET/IGBT thermal models
- Magnetics core loss equations
- Capacitor degradation curves
- SiC/GaN stress lifetime relationships
Peter Herweck, former CEO of Schneider Electric, has underscored that long-life power conversion systems cannot rely on data-driven models alone.
In solar and industrial inverters, Schneider Electric’s analytics explicitly anchor AI models to thermal behaviour, power-cycling limits, and component ageing physics, enabling explainable and stable Remaining Useful Life estimation across wide operating conditions.
- Digital Twins & Edge AI
Digital twins act as virtual replicas of converters, simulating electrical, thermal, and switching behaviour in real time. AI continuously updates the twin using field data, enabling:
- Dynamic stress tracking
- Load-cycle-based lifetime modelling
- Real-time deviation analysis
- Autonomous derating or protective responses
Edge-AI processors integrated into chargers, inverters, or SMPS enable on-device inference even without cloud connectivity.
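To make the first pillar concrete, here is a minimal sketch of autoencoder-based anomaly detection on converter telemetry, written in Python with PyTorch. The four normalised features, the synthetic data, and the 99th-percentile threshold are illustrative assumptions, not a validated diagnostic model.

```python
# Minimal sketch: autoencoder anomaly detection on converter telemetry.
# Assumptions: PyTorch is available; the four normalised features and the
# 99th-percentile threshold are illustrative, not a validated diagnostic.
import torch
import torch.nn as nn

torch.manual_seed(0)
healthy = torch.randn(2048, 4) * 0.1          # synthetic "healthy" telemetry

model = nn.Sequential(
    nn.Linear(4, 8), nn.ReLU(),
    nn.Linear(8, 2), nn.ReLU(),               # bottleneck learns the healthy signature
    nn.Linear(2, 8), nn.ReLU(),
    nn.Linear(8, 4),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(200):                          # train to reconstruct healthy data only
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(healthy), healthy)
    loss.backward()
    opt.step()

with torch.no_grad():
    err = ((model(healthy) - healthy) ** 2).mean(dim=1)
    threshold = torch.quantile(err, 0.99)     # flag the worst 1% of healthy error

    # Simulated degradation: thermal and ESR-proxy channels drift upward.
    drifted = healthy + torch.tensor([0.5, 0.0, 0.0, 0.3])
    drift_err = ((model(drifted) - drifted) ** 2).mean(dim=1)
    print(f"fraction of drifted samples flagged: {(drift_err > threshold).float().mean():.2f}")
```

The idea carries over directly: because the network is trained only on healthy operation, any drift in ESR, temperature, or switching behaviour shows up as elevated reconstruction error.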
AI-Driven Failure Prediction in EV Chargers
EV fast chargers (50 kW–350 kW+) operate under harsh conditions with high thermal and electrical stress. Uptime dictates consumer satisfaction, making predictive maintenance critical.
Key components under AI surveillance
- SiC/Si MOSFETs and diodes
- Gate drivers and isolation circuitry
- DC-link electrolytic and film capacitors
- Liquid/air-cooling systems
- EMI filters, contactors, and magnetic components
Roland Busch, CEO of Siemens, has emphasised that reliability in power-electronic infrastructure depends on predictive condition insight rather than reactive protection.
In high-power EV chargers and grid-connected converters, Siemens’ AI-assisted monitoring focuses on detecting long-term degradation trends—thermal cycling stress, semiconductor wear-out, and DC-link capacitor ageing—well before protection thresholds are reached.
AI-enabled predictive insights
- Waveform analytics: CNNs detect micro-oscillations in switching transitions, indicating gate driver degradation.
- Thermal drift modelling: LSTMs predict MOSFET junction temperature rise under high-power cycling.
- Cooling system performance: Autoencoders identify airflow degradation, pump wear, or radiator clogging.
- Power-module stress estimation: Digital twins estimate cumulative thermal fatigue and RUL.
Charging network operators report 20–40% reductions in unexpected downtime after implementing AI-enabled diagnostics.
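To illustrate how a digital twin might convert predicted thermal cycling into an RUL figure, the sketch below combines a Coffin-Manson-style cycles-to-failure law with Miner's linear damage rule. The coefficients and the cycle histogram are placeholder values; real numbers come from a module vendor's power-cycling qualification data.

```python
# Sketch: cumulative thermal-fatigue damage for a power module, combining a
# Coffin-Manson-style cycles-to-failure law with Miner's linear damage rule.
# A and n are placeholders; real values come from power-cycling qualification data.
A, n = 3.0e14, 5.0

def cycles_to_failure(delta_tj_k: float) -> float:
    """Cycles to failure for a junction-temperature swing of delta_tj_k kelvin."""
    return A * delta_tj_k ** (-n)

# Hypothetical histogram of counted swings, e.g. from rainflow counting of a
# predicted junction-temperature trace: (swing in K, number of cycles seen).
observed = [(30.0, 120_000), (50.0, 8_000), (80.0, 150)]

damage = sum(count / cycles_to_failure(dt) for dt, count in observed)
print(f"consumed life (Miner's rule): {damage:.1%}")
print(f"headroom vs. history so far: {(1 - damage) / damage:.1f}x")
```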
Solar & Industrial Inverters: Long-Life Systems Under Environmental Stress
Solar inverters operate for 10–20 years in harsh outdoor conditions—dust, high humidity, temperature cycling, and fluctuating PV generation.
Common failure patterns identified by AI
- Bond-wire lift-off in IGBT modules due to repetitive thermal stress
- Capacitor ESR drift affecting DC-link stability
- Transformer insulation degradation
- MPPT (Maximum Power Point Tracking) anomalies due to sensor faults
- Resonance shifts in LCL filters
AI-powered diagnostic improvements
- Digital twin comparisons highlight deviations in thermal behaviour or DC-link ripple.
- LSTM RUL estimation predicts when capacitors or IGBTs are nearing end-of-life.
- Anomaly detection identifies non-obvious behaviour such as partial-shading impacts or harmonic anomalies.
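As a minimal illustration of the capacitor end-of-life projection mentioned above, the sketch below extrapolates a synthetic ESR trend to a common rule-of-thumb threshold of twice the initial ESR. A deployed system would replace the linear fit with an LSTM-based estimator fed by live measurements.

```python
# Sketch: projecting DC-link capacitor end-of-life from an ESR trend.
# Assumption (rule of thumb): EOL when ESR reaches 2x its initial value.
# Monthly ESR estimates are synthetic; an LSTM would replace the linear fit.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(24, dtype=float)                 # two years of estimates
esr = 0.050 * (1 + 0.015 * months)                  # ohms, slow upward drift
esr += rng.normal(0.0, 5e-4, months.size)           # measurement noise

slope, intercept = np.polyfit(months, esr, 1)       # fit the drift trend
eol = 2 * intercept                                 # 2x initial (fitted) ESR
months_to_eol = (eol - intercept) / slope

print(f"drift: {slope * 1e3:.2f} mΩ/month -> projected EOL at month {months_to_eol:.0f}")
```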
SMPS: High-Volume Applications Where Reliability Drives Cost Savings
SMPS units power everything from telecom towers to consumer electronics. With millions of units deployed, even a fractional improvement in reliability creates massive financial savings.
AI monitors key SMPS symptoms
- Switching frequency drift due to ageing components
- Hotspot formation on magnetics
- Acoustic signatures of transformer failures
- Leakage or gate-charge changes in GaN devices
- Capacitor health degradation trends
Manufacturers use aggregated fleet data to continuously refine design parameters, enhancing long-term reliability.
Cross-Industry Benefits of AI-Enabled Failure Prediction
Industries implementing AI-based diagnostics report:
- 30–50% reduction in catastrophic failures
- 25–35% longer equipment lifespan
- 20–30% decline in maintenance expenditure
- Higher uptime and service availability
Challenges and Research Directions
Even with significant progress, several challenges persist:
- Scarcity of real-world failure data: Failures occur infrequently; synthetic data and stress testing are used to enrich datasets.
- Model transferability limits: Variations in topology, gate drivers, and cooling systems hinder direct model reuse.
- Edge compute constraints: Deep models often require compression and pruning for deployment.
- Explainability requirements: Engineers need interpretable insights, not just anomaly flags.
Research in XAI, transfer learning, and physics-guided datasets is rapidly addressing these concerns.
The Future: Power Electronics Designed with Built-In Intelligence
In the coming decade, AI will not merely monitor power electronic systems—it will actively participate in their operation:
- AI-adaptive gate drivers adjusting switching profiles in real time
- Autonomous derating strategies extending lifespan during high-stress events
- Self-healing converters recalibrating to minimise thermal hotspots
- Cloud-connected fleet dashboards providing RUL estimates for entire EV charging or inverter networks
- WBG-specific failure prediction models tailored for SiC/GaN devices
Conclusion
AI-enabled failure prediction is transforming the reliability of EV chargers, solar inverters, and SMPS. By integrating sensor intelligence, deep learning, physics-based models, and digital twin technology, engineers can spot early degradation, forecast failures accurately, and extend equipment lifespan.
This predictive ecosystem does more than cut operational cost: it improves system safety, availability, and overall performance. As electrification accelerates, AI-driven reliability will become a core foundation of next-generation power electronic design, making systems smarter, more resilient, and future-ready.
The post AI-Enabled Failure Prediction in Power Electronics: EV Chargers, Inverters, and SMPS appeared first on ELE Times.
Powering AI: How Power Pulsation Buffers are transforming data center power architecture
Courtesy: Infineon Technologies
Microsoft, OpenAI, Google, Amazon, and NVIDIA are racing against each other to build massive data centres backed by billions of dollars in investment, and for good reason.
Imagine a data centre humming with thousands of AI GPUs, each demanding bursts of power like a Formula 1 car accelerating out of a corner. Now imagine trying to feed that power without blowing out the grid.
That is the challenge modern AI server racks face, and Infineon’s Power Pulsation Buffer (PPB) might just be the pit crew solution you need.
Why AI server power supply needs a rethink
As artificial intelligence continues to scale, so does the power appetite of data centres. Tech giants are building AI clusters that push rack power levels beyond 1 MW. These AI PSUs (power supply units) are not just hungry. They are unpredictable, with GPUs demanding sudden spikes in power that traditional grid infrastructure struggles to handle.
These spikes, or peak power events, can cause serious stress on the grid, especially when multiple GPUs fire up simultaneously. The result? Voltage drops, current overshoots, and a grid scrambling to keep up.
Figure 1: Example peak power profile demanded by AI GPUs
Rethinking PSU architecture for AI racks
To tackle this, next-gen server racks are evolving. Enter the power sidecar, a dedicated module housing PSUs, battery backup units (BBUs), and capacitor backup units (CBUs). This setup separates power components from IT components, allowing racks to scale up to 1.3 MW.
But CBUs, while effective, come with trade-offs:
- Require extra shelf space
- Need communication with PSU shelves
- Add complexity to the rack design
This is where PPBs come in.
What is a Power Pulsation Buffer?
Think of the PPB as a smart energy sponge. It sits between the PFC (power factor correction) stage and the DC-DC converter inside the PSU, absorbing energy during idle periods and releasing it during peak loads. This smooths out power demands and keeps the grid happy.
PPBs can be integrated directly into single-phase or three-phase PSUs, eliminating the need for bulky CBUs. They use SiC bridge circuits rated up to 1200 V and can be configured in 2-level or 3-level designs, either in series or parallel.
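A rough discrete-time sketch of that buffering behaviour is shown below. The power levels, buffer size, and recharge policy are illustrative assumptions rather than Infineon specifications; the point is simply that the grid-facing stage stays clamped while the buffer rides through the burst.

```python
# Rough sketch: peak shaving by an energy buffer between the grid-facing stage
# and the load. Power levels, buffer size, and the recharge policy are
# illustrative assumptions, not Infineon specifications.
RATED_W  = 5_500.0            # PSU rated output power
GRID_CAP = 1.10 * RATED_W     # grid draw clamped to 110% of rating
BUFFER_J = 40.0               # usable buffer energy (capacitor bank), joules
DT       = 1e-3               # 1 ms simulation step

# Load: idle, a 10 ms GPU burst at 150% of rating, then idle again.
load = [0.6 * RATED_W] * 200 + [1.5 * RATED_W] * 10 + [0.6 * RATED_W] * 200

stored, peak_grid = BUFFER_J, 0.0
for p_load in load:
    p_grid = min(p_load, GRID_CAP)                  # grid never exceeds the cap
    p_buf = p_load - p_grid                         # buffer covers the excess...
    if p_buf <= 0 and stored < BUFFER_J:            # ...and recharges when idle
        p_buf = -min(GRID_CAP - p_load, 0.2 * RATED_W)
        p_grid = p_load - p_buf
    stored = min(max(stored - p_buf * DT, 0.0), BUFFER_J)
    peak_grid = max(peak_grid, p_grid)

print(f"peak grid draw: {peak_grid / RATED_W:.0%} of rating; buffer left: {stored:.1f} J")
```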
PPB vs. traditional PSU
In simulations comparing traditional PSUs with PPB-enhanced designs, the difference is striking. Without PPB, the grid sees a sharp current overshoot during peak load. With PPB, the PSU handles the surge internally, keeping grid power limited to just 110% of rated capacity.
This means:
- Reduced grid stress
- Stable input/output voltages
- Better energy utilisation from PSU bulk capacitors
Figure 3: Simulation of peak load event: Without PPB (left) and with PPB (right) in 3-ph HVDC PSU
PPB operation modes
PPBs operate in two modes, on-demand and continuous. Each is suited to different rack designs and power profiles.
- On-demand operation: Activates only during peak events, making it ideal for short bursts. It minimises energy loss and avoids unnecessary grid frequency cancellation.
- Continuous operation: By contrast, always keeps the PPB active. This supports steady-state load jumps and enables DCX with fixed frequency, which is especially beneficial for 1-phase PSUs.
Choosing the right mode depends on the specific power dynamics of your setup.
Why PPB is a game-changer for AI infrastructure
PPBs are transforming AI server power supply design. They manage peak power without grid overload and integrate compactly into existing PSU architectures.
By enhancing energy buffer circuit performance and optimising bulk capacitor utilisation, PPBs enable scalable designs for high-voltage DC and 3-phase PSU setups.
Whether you are building hyperscale data centres or edge AI clusters, PPBs offer a smarter, grid-friendly solution for modern power demands.
The post Powering AI: How Power Pulsation Buffers are transforming data center power architecture appeared first on ELE Times.
From Insight to Impact: Architecting AI Infrastructure for Agentic Systems
Courtesy: AMD
The next frontier of AI is not just intelligent – it’s agentic. As enterprises move toward systems capable of autonomous action and real-time decision-making, the demands on infrastructure are intensifying.
In this IDC-authored blog, Madhumitha Sathish, Research Manager, High Performance Computing, examines how organisations can prepare for this shift with flexible, secure, and cost-effective AI infrastructure strategies. Drawing on IDC’s latest research, the piece highlights where enterprises stand today and what it will take to turn agentic AI potential into measurable business impact.
Agentic AI Is Reshaping Enterprise Strategy
Artificial intelligence has become foundational to enterprise transformation. In 2025, the rise of agentic AI, systems capable of autonomous decision-making and dynamic task execution, is redefining how organisations approach infrastructure, governance, and business value. These intelligent systems don’t just analyse data; they act on it, adapting in real time across datacenter, cloud, and edge environments.
Agentic AI can reallocate compute resources to meet SLAs, orchestrate cloud deployments based on latency and compliance, and respond instantly to sensor failures in smart manufacturing or logistics. But as IDC’s July 2025 survey of 410 IT and AI infrastructure decision-makers reveals, most enterprises are still figuring out how to harness this potential.
IDC Insight: 75% Lack Clarity on Agentic AI Use Cases
According to IDC, more than 75% of enterprises report uncertainty around agentic AI use cases. This lack of clarity poses real risks where initiatives may stall, misalign with business goals, or introduce compliance challenges. Autonomous systems require robust oversight, and without well-defined use cases, organisations risk deploying models that behave unpredictably or violate internal policies.
Scaling AI: Fewer Than 10 Use Cases at a Time
IDC found that 83% of enterprises launch fewer than 10 AI use cases simultaneously. This cautious approach reflects fragmented strategies and limited scalability. Only 21.7% of organisations conduct full ROI analyses for proposed AI initiatives, and just 22.2% ensure alignment with strategic objectives. The rest rely on assumptions or basic assessments, which can lead to inefficiencies and missed opportunities.
Governance and Security: A Growing Priority
As generative and agentic AI models gain traction, governance and security are becoming central to enterprise readiness. IDC’s data shows that organisations are adopting multilayered data governance strategies, including:
- Restricting access to sensitive data
- Anonymising personally identifiable information
- Applying lifecycle management policies
- Minimising data collection for model development
Security testing is also evolving. Enterprises are simulating adversarial attacks, testing for data pollution, and manipulating prompts to expose vulnerabilities. Input sanitisation and access control checks are now standard practice, reflecting a growing awareness that AI security must be embedded throughout the development pipeline.
Cost Clarity: Infrastructure Tops the List
AI initiatives often falter due to unclear cost structures. IDC reports that nearly two-thirds of GenAI projects begin with comprehensive cost assessments covering infrastructure, licensing, labor, and scalability. Among the most critical cost factors:
- Specialised infrastructure for training (60.7%)
- Infrastructure for inferencing (54.5%)
- Licensing fees for LLMs and proprietary tools
- Cloud compute and storage pricing
- Salaries and overhead for AI engineers and DevOps teams
- Compliance safeguards and governance frameworks
Strategic planning must account for scalability, integration, and long-term feasibility.
Infrastructure Choices: Flexibility Is Essential
IDC’s survey shows that enterprises are split between building in-house systems, purchasing turnkey solutions, and working with systems integrators. For training, GPUs, high-speed interconnects, and cluster-level orchestration are top priorities. For inferencing, low-latency performance across datacenter, cloud, and edge environments is essential.
Notably, 77% of respondents say it’s very important that servers, laptops, and edge devices operate on consistent hardware and software platforms. This standardisation simplifies deployment, ensures performance predictability, and supports model portability.
Strategic Deployment: Datacenter, Cloud, and Edge
Inferencing workloads are increasingly distributed. IDC found that 63.9% of organisations deploy AI inference workloads in public cloud environments, while 50.7% continue to leverage their own datacenters. Edge servers are gaining traction for latency-sensitive applications, especially in sectors like manufacturing and logistics. Inferencing on end-user devices remains limited, reflecting a strategic focus on reliability and infrastructure consistency.
Looking Ahead: Agility, Resilience, and Cost-Efficient Infrastructure
As enterprises prepare for the next wave of AI innovation, infrastructure agility and governance sophistication will be paramount. Agentic AI will demand real-time responsiveness, energy-efficient compute, and resilient supply chains. IDC anticipates that strategic infrastructure planning can help in lowering operational costs while improving performance density by optimizing power and cooling demands. Enterprises can also avoid unnecessary spending through workload-aware provisioning and early ROI modelling across AI environments. Sustainability will become central to infrastructure planning, and semiconductor availability will be a strategic priority.
The future of AI isn’t just about smarter models; it’s about smarter infrastructure. Enterprises that align strategy with business value, governance, and operational flexibility will be best positioned to lead in the age of agentic intelligence.
The post From Insight to Impact: Architecting AI Infrastructure for Agentic Systems appeared first on ELE Times.
IIIT Hyderabad’s customised chip design and millimetre-wave circuits for privacy-preserving sensing and intelligent healthcare systems
In an age where governance, healthcare and mobility increasingly rely on data, how that data is sensed, processed and protected matters deeply. Visual dashboards, spatial maps and intelligent systems have become essential tools for decision-making, but behind every such system lies something less visible and far more fundamental: electronics.
Silicon-To-System Philosophy
At IIIT Hyderabad, the Integrated Circuits Inspired by Wireless and Biomedical Systems (IC-WiBES) research group, led by Prof. Abhishek Srivastava, is rethinking how electronic systems are designed: not as isolated chips, but as end-to-end technologies that move seamlessly from silicon to real-world deployment. The group follows a simple but powerful philosophy: vertical integration from chip design to system-level applications.
Rather than treating integrated circuits, signal processing and applications as separate silos, the group works across all three layers simultaneously. This “dual-track” approach allows researchers to design custom chips while also building complete systems around them, ensuring that electronics are shaped by real-world needs rather than abstract specifications.
Why Custom Chips Still Matter
In many modern systems, off-the-shelf electronics are sufficient. But for strategic applications such as healthcare monitoring, privacy-preserving sensing, space missions, or national infrastructure, generic hardware often becomes a bottleneck. The IIIT-H team focuses on designing application-specific integrated circuits (ASICs) that offer greater flexibility, precision and energy efficiency than commercial alternatives. These chips are not built in isolation; they evolve continuously based on feedback from real deployments, ensuring that circuit-level decisions directly improve system performance.
Millimetre Wave Electronics
One of the lab’s most impactful research areas is millimetre-wave (mmWave) radar sensing, a technology increasingly used in automotive safety but still underexplored for civic and healthcare applications. Unlike cameras, mmWave radar can operate in low light, fog, rain and dust – all while preserving privacy. By transmitting and receiving high-frequency signals, these systems can detect motion, distance and even minute vibrations, such as the movement of a human chest during breathing.
Contactless Healthcare Monitoring
This capability has opened up new possibilities in non-contact health monitoring. The team has developed systems that can measure heart rate and respiration without wearables or cameras, which is particularly useful in infectious disease wards, elderly care, and post-operative monitoring. These systems combine custom electronics, signal processing and edge AI to extract vital signs from extremely subtle radar reflections. Clinical trials are already underway, with deployments planned in hospital settings to evaluate real-world performance.
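To give a flavour of how vital signs are recovered from such subtle reflections, here is a sketch that estimates respiration rate from the phase of a single radar range bin. The 60 GHz carrier, the 2 mm chest displacement, and the noise level are illustrative assumptions; real pipelines add clutter removal, range-bin tracking, and heart-rate separation.

```python
# Sketch: respiration rate from the phase of one mmWave radar range bin.
# The 60 GHz carrier, 2 mm chest displacement at 0.25 Hz, and noise level are
# illustrative; real pipelines add clutter removal and range-bin tracking.
import numpy as np

fs = 20.0                                    # slow-time sampling rate, Hz
t = np.arange(0, 40, 1 / fs)                 # 40 s observation window
wavelength = 3e8 / 60e9                      # ~5 mm at 60 GHz

chest = 2e-3 * np.sin(2 * np.pi * 0.25 * t)  # 0.25 Hz = 15 breaths/min
phase = 4 * np.pi * chest / wavelength       # two-way path -> phase modulation
rng = np.random.default_rng(1)
iq = np.exp(1j * phase) + 0.05 * (rng.standard_normal(t.size)
                                  + 1j * rng.standard_normal(t.size))

signal = np.unwrap(np.angle(iq))             # recover displacement-driven phase
signal -= signal.mean()

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 0.1) & (freqs < 0.6)         # plausible breathing band
rate = freqs[band][np.argmax(spectrum[band])]
print(f"estimated respiration rate: {rate * 60:.1f} breaths/min")
```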
Privacy-First Sensing For Roads
The same radar technology is being applied to road safety and urban monitoring. In poor visibility conditions, such as heavy rain or fog, traditional camera-based systems struggle. Radar-based sensing, however, continues to function reliably. The researchers have demonstrated systems that can detect and classify vehicles, pedestrians and cyclists with high accuracy and low latency, even in challenging environments. Such systems could inform traffic planning, accident analysis and smart city governance, without raising surveillance concerns.
Systems Shaping Chips
A defining feature of the lab’s work is the feedback loop between systems and circuits. When limitations emerge during field testing, such as signal interference or noise, the insights directly inform the next generation of chip designs. This has led to innovations such as programmable frequency-modulated radar generators, low-noise oscillators and high-linearity receiver circuits, all tailored to the demands of real applications rather than textbook benchmarks.
Building Rare Electronics Infrastructure
Supporting this research is a rare, high-frequency electronics setup at IIIT Hyderabad, capable of measurements up to 44 GHz – facilities available at only a handful of institutions nationwide. The lab has also led landmark milestones, including the institute’s first fully in-house chip tape-out and participation in international semiconductor design programs that provide broad access to advanced electronic design automation tools.
Training Full Stack Engineers
Beyond research outputs, the group is shaping a new generation of engineers fluent across the entire electronics stack- from transistor-level design to algorithms and applications. “Our students learn how circuit-level constraints shape system intelligence – a rare but increasingly critical skill,” remarks Prof. Srivastava. This cross-disciplinary training equips students for roles in national missions, deep-tech startups, academia and advanced semiconductor industries, where understanding how hardware constraints affect system intelligence is increasingly critical.
Academic Research to National Relevance
With sustained funding from multiple agencies, dozens of top-tier publications, patents in progress and early-stage technology transfers underway, the lab’s work reflects a broader shift in Indian research – one that is towards application-driven electronics innovation.
Emphasising that progress in deep-tech research isn’t linear, Prof. Srivastava remarks that at IC-WiBES, circuits, systems, and algorithms mature together. “Sometimes hardware leads. Sometimes applications expose flaws. The key is patience, persistence, and constant feedback. The lab isn’t trying to replace every component with custom silicon. Instead, we are focused on strategic intervention – designing custom chips where they matter most.”
The post IIIT Hyderabad’s customised chip design and millimetre-wave circuits for privacy-preserving sensing and intelligent healthcare systems appeared first on ELE Times.
Can the SDV Revolution Happen Without SoC Standardization?
Speaking at the Auto EV Tech Vision Summit 2025, Yogesh Devangere, who heads the Technical Center at Marelli India, turned attention to a layer of the Software-Defined Vehicle (SDV) revolution that often escapes the spotlight: the silicon itself. The transition from distributed electronic control units (ECUs) to centralized computing is not just a software story—it is a System-on-Chip (SoC) story.
While much of the industry conversation revolves around features, over-the-air updates, AI assistants, and digital cockpits, Devangere argued that none of it is possible without a fundamental architectural shift inside the vehicle. If SDVs represent the future of mobility, then SoCs are the engines quietly driving that future.
From 16-Bit Controllers to Heterogeneous Superchips
Automotive electronics have evolved dramatically over the past two decades. What began as simple 16-bit microcontrollers has now transformed into complex, heterogeneous SoCs integrating multiple CPU cores, GPUs, neural processing units, digital signal processors, hardware security modules, and high-speed connectivity interfaces—all within a single chipset.
“These SoCs are what enable the SDV journey,” Devangere explained, describing them as high-performance computing platforms that can consolidate multiple vehicle domains into centralized architectures. Unlike traditional ECUs designed for single-purpose tasks, modern SoCs are built to manage diverse functions simultaneously—from ADAS image processing and AI model deployment to infotainment rendering, telematics, powertrain control, and network management. This manifests a structural shift in the automotive industry.
Centralized Computing Is the Real Transformation
The move toward SDVs, in a way, is a move toward centralized computing. Simply stated, instead of dozens of independent ECUs scattered across the vehicle, OEMs are increasingly experimenting with domain controller architectures or centralized controllers combined with zonal controllers. In both cases, the SoC becomes the computational heart of the system, and this consolidation enables:
- Higher processing power
- Cross-domain feature integration
- Over-the-air (OTA) updates
- AI-driven functionality
- Flexible software deployment across operating systems such as Linux, Android, and QNX
A key enabler in this architecture is the hypervisor layer, which abstracts hardware from software and allows multiple operating systems to run independently on shared silicon. This flexibility is essential in a transition era where AUTOSAR (AUTomotive Open System ARchitecture) and non-AUTOSAR stacks coexist. AUTOSAR is a global software standard for automotive electronic control units (ECUs). It defines how automotive software should be structured, organized, and communicated, so that different suppliers and OEMs can build compatible systems.
But while the architectural promise is compelling, Devangere made it clear that implementation is far from straightforward.
The Architecture Is Not Standardized
One of the most critical challenges he highlighted is the absence of hardware-level standardization. “Every OEM is implementing SDV architecture in their own way,” he noted. Some opt for multiple domain controllers; others experiment with centralized controllers and zonal approaches. The result is a fragmented ecosystem.
Unlike the smartphone world—where Android runs on broadly standardized hardware platforms—automotive SoCs lack a unified framework. There is currently no hardware consortium defining a common architecture. While open-source software efforts such as Eclipse aim to harmonize parts of the software stack, the hardware layer remains highly individualized. The consequence is complexity. Tier-1 suppliers cannot rely on long lifecycle platforms, as SoCs evolve rapidly. What might be viable today could become obsolete within a few years.
In an industry accustomed to decade-long product cycles, that volatility is disruptive.
Complexity vs. Time-to-Market
If architectural fragmentation were not enough, development timelines are simultaneously shrinking. Designing with SoCs is inherently complex. A single SoC program often involves coordination among five to nine suppliers. Hardware validation must account for electromagnetic compatibility, thermal performance, and interface stability across multiple cores and peripherals. Software integration spans multi-core configurations, multiple operating systems, and intricate stack dependencies.
Yet market expectations continue to demand faster launches. “You cannot go back to longer development cycles,” Devangere observed. The pressure to innovate collides with the technical realities of high-complexity chip integration.
Power, Heat, and the Hidden Engineering Burden
Beyond software flexibility and AI capability lies a more fundamental engineering constraint: energy. High-performance SoCs generate significant heat and demand careful power management—critical in electric vehicles where battery efficiency is paramount. Many current architectures still rely on companion microcontrollers for power and network management, while the SoC handles high-compute workloads.
Balancing performance with energy efficiency, ensuring timing determinism across multiple simultaneous functions, and maintaining safety compliance remain non-trivial challenges. As vehicles consolidate ADAS, infotainment, telematics, and control systems onto shared silicon, resource management becomes as important as raw processing capability.
Partnerships Over Isolation
Given the scale of complexity, Devangere emphasized collaboration as the only viable path forward. SoC development and integration are rarely the work of a single organization. Semiconductor suppliers, Tier-1 system integrators, software stack providers, and OEMs must align early in the architecture phase.
Some level of standardization—particularly at the hardware architecture level—could significantly accelerate development cycles. Without it, the industry risks “multiple horses running in different directions,” as one audience member aptly put it during the discussion.
For now, that standardization remains aspirational.
The Real Work of the SDV Era
The excitement surrounding software-defined vehicles often focuses on user-facing features—AI assistants, personalized interfaces, downloadable upgrades. Devangere’s message was more grounded. Behind every seamless update, every AI-enabled feature, and every connected service lies a dense web of silicon complexity. Multi-core processing, heterogeneous architectures, thermal constraints, validation cycles, and fragmented standards form the invisible scaffolding of the SDV transformation.
The car may be becoming a computer on wheels. But building that computer—robust, safe, efficient, and scalable—remains one of the most demanding engineering challenges the automotive industry has ever faced.
And at the center of it all is the SoC.
The post Can the SDV Revolution Happen Without SoC Standardization? appeared first on ELE Times.
ElevateX 2026, Marking a New Chapter in Human Centric and Intelligent Automation
Teradyne Robotics today hosted ElevateX 2026 in Bengaluru – its flagship industry forum bringing together Universal Robots (UR) and Mobile Industrial Robots (MiR) to spotlight the next phase of human‑centric, collaborative, and intelligent automation shaping India’s manufacturing and intralogistics landscape.
Designed as a high‑impact platform for industry leadership and ecosystem engagement, ElevateX 2026 convened 25+ CEO/CXO leaders, technology experts, startups, and media, reinforcing how Indian enterprises are progressing from isolated automation pilots to scalable, business‑critical deployments.
Teradyne Robotics emphasized the rapidly expanding role of flexible and intelligent automation in enabling enterprises to scale confidently and safely. With industrial collaborative robots (cobots) and autonomous mobile robots (AMRs) becoming mainstream across sectors, the company underlined its commitment to driving advanced automation, skill development, and stronger industry‑partner ecosystems in India.
The event showcased several real‑world automation applications featuring cobots and AMRs across key sectors, including Automotive, F&B, FMCG, Education, and Logistics. These demos highlighted the ability of Universal Robots and MiR to help organizations scale quickly, redeploy easily, and improve throughput and workforce efficiency.
Showcasing high‑demand applications from palletizing and welding to material transport, machine tending, and training, the demonstrations reflected how Teradyne Robotics enables faster ROI, simpler deployment, and safe automation across high‑mix and high‑volume operations.
Speaking at the event, James Davidson, Chief Artificial Intelligence Officer, Teradyne Robotics, said, “Automation is entering a defining era – one where intelligence, flexibility, and human-centric design are no longer optional, but fundamental to how businesses innovate, scale, and compete. AI is transforming robots from tools that simply execute tasks into intelligent collaborators that can perceive, learn, and adapt in dynamic environments. In India, we are witnessing a decisive shift from experimentation to enterprise-wide adoption, and ElevateX 2026 reflects this momentum – bringing the ecosystem together to explore how collaborative and intelligent automation can become a strategic growth engine for both established enterprises and the next generation of startups.”
Poi Toong Tang, Vice President of Sales, Asia Pacific, Teradyne Robotics, added, “India is rapidly emerging as one of the most important and dynamic automation markets in Asia Pacific. Organizations today are not just looking to automate – they are looking to build operations that are flexible, resilient, and future-ready. The demand is for modular automation that delivers faster ROI and can evolve alongside business needs. Through Universal Robots and MiR, we are enabling end-to-end automation across production and intralogistics, helping Indian companies scale with confidence and compete on a global stage.”
Sougandh K.M., Business Director – South Asia, Teradyne Robotics, said, “India’s automation journey will be defined by collaboration across its ecosystem — by partners, system integrators, startups, and skilled talent working together to turn technology into real impact. At Teradyne Robotics, our belief is simple: automation should be for anyone and anywhere, and robots should enable people to do better work, not work like robots. Our focus is on automating tasks that are dull, dirty, and dangerous, while helping organizations improve productivity, safety, and quality. ElevateX 2026 is about lowering barriers to adoption and building long-term capability in India, making automation practical, scalable, and accessible, and positioning Teradyne Robotics as a trusted partner in every stage of that growth journey.”
Customer Case Story
A key highlight of ElevateX 2026 was the spotlight on customer success, and Origin stood out. As a fast‑growing U.S. construction tech startup, they shared how partnering with Universal Robots is driving measurable impact through improved productivity, stronger safety, and consistently high‑quality project outcomes powered by collaborative automation.
Yogesh Ghaturle, the Co-founder and CEO of Origin, said, “Our goal is to bring true autonomy to the construction site, transforming how the world builds. Executing this at scale requires a technology stack where every component operates with absolute predictability. Universal Robots provides the robust, operational backbone we need. With their cobots handling the mechanical precision, we are free to focus on deploying our intelligent systems in the real world.”
The post ElevateX 2026, Marking a New Chapter in Human Centric and Intelligent Automation appeared first on ELE Times.
The Architecture of Edge Computing Hardware: Why Latency, Power and Data Movement Decide Everything
Courtesy: Ambient Scientific
Most explanations of edge computing hardware talk about devices instead of architecture. They list sensors, gateways, servers and maybe a chipset or two. That’s useful for beginners, but it does nothing for someone trying to understand how edge systems actually work or why certain designs succeed while others bottleneck instantly.
If you want the real story, you have to treat edge hardware as a layered system shaped by constraints: latency, power, operating environment and data movement. Once you look at it through that lens, the category stops feeling abstract and starts behaving like a real engineering discipline.
Let’s break it down properly.
What edge hardware really is when you strip away the buzzwords
Edge computing hardware is the set of physical computing components that execute workloads near the source of data. This includes sensors, microcontrollers, SoCs, accelerators, memory subsystems, communication interfaces and local storage. It is fundamentally different from cloud hardware because it is built around constraints rather than abundance.
Edge hardware is designed to do three things well:
- Ingest data from sensors with minimal delay
- Process that data locally to make fast decisions
- Operate within tight limits for power, bandwidth, thermal capacity and physical space
If those constraints do not matter, you are not doing edge computing. You are doing distributed cloud.
This is the part most explanations skip. They treat hardware as a list of devices rather than a system shaped by physics and environment.
The layers that actually exist inside edge machines
The edge stack has four practical layers. Ignore any description that does not acknowledge these.
- Sensor layer: Where raw signals are produced. This layer cares about sampling rate, noise, precision, analogue front ends and environmental conditions.
- Local compute layer: Usually MCUs, DSP blocks, NPUs, embedded SoCs or low-power accelerators. This is where signal processing, feature extraction and machine learning inference happen.
- Edge aggregation layer: Gateways or industrial nodes that handle larger workloads, integrate multiple endpoints or coordinate local networks.
- Backhaul layer: Not cloud. Just whatever communication fabric moves selective data upward when needed.
These layers exist because edge workloads follow a predictable flow: sense, process, decide, transmit. The architecture of the hardware reflects that flow, not the other way around.
Why latency is the first thing that breaks and the hardest thing to fix
Cloud hardware optimises for throughput. Edge hardware optimises for reaction time.
Latency in an edge system comes from:
- Sensor sampling delays
- Front-end processing
- Memory fetches
- Compute execution
- Writeback steps
- Communication overhead
- Any DRAM round-trip
- Any operating system scheduling jitter
If you want low latency, you design hardware that avoids round-trips to slow memory, minimises driver overhead, keeps compute close to the sensor path, and treats the model as a streaming operator rather than a batch job.
This is why general-purpose CPUs almost always fail at the edge. Their strengths do not map to the constraints that matter.
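As a toy illustration of the streaming-operator idea, the sketch below keeps all state in a small bounded buffer and produces a decision on every sample, so reaction latency is measured in samples rather than batches. The moving-average "model" is a hypothetical stand-in for a real inference kernel.

```python
# Toy sketch: a streaming operator keeps bounded state and yields a decision on
# every sample; the moving-average "model" is a stand-in for a real kernel.
from collections import deque

class StreamingDetector:
    """Fixed-size window: bounded memory footprint, work on every sample."""

    def __init__(self, window: int = 32, threshold: float = 0.8):
        self.buf = deque(maxlen=window)        # no DRAM-sized batches
        self.threshold = threshold

    def push(self, sample: float) -> bool:
        self.buf.append(sample)
        level = sum(self.buf) / len(self.buf)  # stand-in inference kernel
        return level > self.threshold          # decision per sample, not per batch

det = StreamingDetector()
samples = [0.1] * 100 + [1.0] * 40             # quiet signal, then an event
for i, s in enumerate(samples):
    if det.push(s):
        print(f"event flagged at sample {i}")  # latency measured in samples
        break
```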
Power budgets at the edge are not suggestions; they are physics
Cloud hardware runs at hundreds of watts. Edge hardware often gets a few milliwatts, sometimes even microwatts.
Power is consumed by:
- Sensor activation
- Memory access
- Data movement
- Compute operations
- Radio transmissions
Here is a simple table with the numbers that actually matter.
| Operation | Approx. Energy Cost |
|---|---|
| One 32-bit memory access from DRAM | High tens to hundreds of pJ |
| One 32-bit memory access from SRAM | Low single-digit pJ |
| One analogue in-memory MAC | Under 1 pJ effective |
| One radio transmission | Orders of magnitude higher than compute |
These numbers already explain why hardware design for the edge is more about architecture than brute force performance. If most of your power budget disappears into memory fetches, no accelerator can save you.
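Plugging the table's orders of magnitude into a first-order budget makes the point quantitative. All constants below are illustrative assumptions.

```python
# First-order energy budget per inference using the table's orders of magnitude.
# Every constant is an illustrative assumption.
PJ = 1e-12
E_DRAM = 100 * PJ     # "high tens to hundreds of pJ" per 32-bit DRAM access
E_SRAM = 2 * PJ       # "low single-digit pJ" per 32-bit SRAM access
E_MAC  = 0.5 * PJ     # "under 1 pJ effective" per near/in-memory MAC

def inference_energy(macs: int, words_moved: int, from_dram: bool) -> float:
    e_mem = (E_DRAM if from_dram else E_SRAM) * words_moved
    return macs * E_MAC + e_mem

macs, words = 2_000_000, 500_000              # hypothetical small CNN workload
for from_dram in (True, False):
    e = inference_energy(macs, words, from_dram)
    print(f"{'DRAM' if from_dram else 'SRAM'}-bound: {e * 1e6:.1f} µJ per inference")
```

Under these assumptions, the identical workload costs roughly 25 times more energy when operands spill to DRAM, which is the whole argument for SRAM-centric and near-memory designs.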
Data movement: the quiet bottleneck that ruins most designs
Everyone talks about computing. Almost no one talks about the cost of moving data through a system.
In an edge device, the actual compute is cheap. Moving data to the compute is expensive.
Data movement kills performance in three ways:
- It introduces latency
- It drains power
- It reduces compute utilisation
Many AI accelerators underperform at the edge because they rely heavily on DRAM. Every trip to external memory cancels out the efficiency gains of parallel compute units. When edge deployments fail, this is usually the root cause.
This is why edge hardware architecture must prioritise:
- Locality of reference
- Memory hierarchy tuning
- Low-latency paths
- SRAM-centric design
- Streaming operation
- Compute in memory or near memory
You cannot hide a bad memory architecture under a large TOPS number.
Architectural illustration: why locality changes everything
To make this less abstract, it helps to look at a concrete architectural pattern that is already being applied in real edge-focused silicon. This is not a universal blueprint for edge hardware, and it is not meant to suggest a single “right” way to build edge systems. Rather, it illustrates how some architectures, including those developed by companies like Ambient Scientific, reorganise computation around locality by keeping operands and weights close to where processing happens. The common goal across these designs is to reduce repeated memory transfers, which directly improves latency, power efficiency, and determinism under edge constraints.
Figure: Example of a memory-centric compute architecture, similar to approaches used in modern edge-focused AI processors, where operands and weights are kept local to reduce data movement and meet tight latency and power constraints.
How real edge pipelines behave, instead of how diagrams pretend they behave
Edge hardware architecture exists to serve the data pipeline, not the other way around. Most workloads at the edge look like this:
- The sensor produces raw data
- Front end converts signals (ADC, filters, transforms)
- Feature extraction or lightweight DSP
- Neural inference or rule-based decision
- Local output or higher-level aggregation
If your hardware does not align with this flow, you will fight the system forever. Cloud hardware is optimised for batch inputs. Edge hardware is optimised for streaming signals. Those are different worlds.
This is why classification, detection and anomaly models behave differently on edge systems compared to cloud accelerators.
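The sketch below mirrors that flow as a chain of streaming stages, from raw samples through filtering and feature extraction to a per-window decision. Every coefficient, window size, and threshold is an illustrative stand-in; the structural point is that each stage holds only small, bounded state.

```python
# Sketch of the five-stage flow as chained streaming stages. The filter, RMS
# feature, threshold "model", and the simulated fault are all illustrative.
import math
import random

def sensor():                                   # 1. raw signal source
    random.seed(2)
    for k in range(2000):                       # 50 Hz tone; amplitude step at
        amp = 1.0 if k < 1000 else 2.0          #    k=1000 simulates a fault
        yield amp * math.sin(2 * math.pi * 50 * k / 1000) + random.gauss(0, 0.2)

def lowpass(samples, alpha=0.1):                # 2. front-end filtering (1-pole IIR)
    y = 0.0
    for x in samples:
        y += alpha * (x - y)
        yield y

def rms_windows(samples, n=100):                # 3. feature extraction
    acc, count = 0.0, 0
    for x in samples:
        acc, count = acc + x * x, count + 1
        if count == n:
            yield math.sqrt(acc / n)
            acc, count = 0.0, 0

def decide(features, limit=0.35):               # 4. rule-based "inference"
    for i, f in enumerate(features):
        yield i, f, f > limit

for i, f, alarm in decide(rms_windows(lowpass(sensor()))):
    if alarm:                                   # 5. local output / selective uplink
        print(f"window {i}: RMS={f:.2f} -> transmit alert upstream")
```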
The trade-offs nobody escapes, no matter how good the hardware looks on paper
Every edge system must balance four things:
- Compute throughput
- Memory bandwidth and locality
- I/O latency
- Power envelope
There is no perfect hardware. Only hardware that is tuned to the workload.
Examples:
- A vibration monitoring node needs sustained streaming performance and sub-millisecond reaction windows
- A smart camera needs ISP pipelines, dedicated vision blocks and sustained processing under thermal pressure
- A bio-signal monitor needs always-on operation within strict microamp budgets
- A smart city air node needs moderate computing but high reliability in unpredictable conditions
None of these requirements match the hardware philosophy of cloud chips.
Where modern edge architectures are headed, whether vendors like it or not
Modern edge workloads increasingly depend on local intelligence rather than cloud inference. That shifts the architecture of edge hardware toward designs that bring compute closer to the sensor and reduce memory movement.
Compute-in-memory approaches, mixed-signal compute blocks, and tightly integrated SoCs are emerging because they solve edge constraints more effectively than scaled-down cloud accelerators.
You don’t have to name products to make the point. The architecture speaks for itself.
How to evaluate edge hardware like an engineer, not like a brochure reader
Forget the marketing lines. Focus on these questions:
- How many memory copies does a single inference require?
- Does the model fit entirely in local memory?
- What is the worst-case latency under continuous load?
- How deterministic is the timing under real sensor input?
- How often does the device need to activate the radio?
- How much of the power budget goes to moving data?
- Can the hardware operate at environmental extremes?
- Does the hardware pipeline align with the sensor topology?
These questions filter out 90 per cent of devices that call themselves edge-capable.
The bottom line: if you don’t understand latency, power and data movement, you don’t understand edge hardware
Edge computing hardware is built under pressure. It does not have the luxury of unlimited power, infinite memory or cool air. It has to deliver real-time computation in the physical world where timing, reliability and efficiency matter more than large compute numbers.
If you understand latency, power and data movement, you understand edge hardware. Everything else is an implementation detail.
The post The Architecture of Edge Computing Hardware: Why Latency, Power and Data Movement Decide Everything appeared first on ELE Times.
Govt Bets Big on Chips: India Semiconductor Mission 2.0 Gets ₹1,000 Crore Funding
In a significant push for the nation’s tech ambitions, the Government of India has earmarked Rs. 1,000 crores for the India Semiconductor Mission (ISM) 2.0 in the Union Budget 2026-27.
The new funding aims to supercharge domestic production, with investments slated for semiconductor manufacturing equipment, local IP development, and supply chain fortification both within India and on the international stage.
This upgraded version of the ISM will focus on industry-driven research and the refinement of training centres to enhance technology advancement, thereby fostering a skilled workforce for the future growth of the industry.
With India aiming for self-reliance through boosting domestic manufacturing in multiple sectors, the need for semiconductor manufacturing has exponentially increased.
Recently, Qualcomm taped out its most advanced 2nm chips, led by Indian engineering teams. This is a major boost to Indian semiconductor aspirations.
The first phase of the ISM was supported by a Rs. 76,000 crore incentive scheme, with ten projects worth Rs. 1.60 lakh crore approved by December 2025, covering the entire manufacturing spectrum from fabrication units to assembly, packaging, and testing infrastructure.
By: Shreya Bansal, Sub-editor
The post Govt Bets Big on Chips: India Semiconductor Mission 2.0 Gets ₹1,000 Crore Funding appeared first on ELE Times.
Microchip and Hyundai Collaborate, Exploring 10BASE-T1S SPE for Future Automotive Connectivity
The post Microchip and Hyundai Collaborate, Exploring 10BASE-T1S SPE for Future Automotive Connectivity appeared first on ELE Times.
Microchip Extends its Edge AI Solutions for Development of Production-ready Applications using its MCUs & MPUs
A major next step for artificial intelligence (AI) and machine learning (ML) innovation is moving ML models from the cloud to the edge for real-time inferencing and decision-making applications in today’s industrial, automotive, data center and consumer Internet of Things (IoT) networks. Microchip Technology has extended its edge AI offering with full-stack solutions that streamline the development of production-ready applications using its microcontrollers (MCUs) and microprocessors (MPUs) – the devices that are located closest to the many sensors at the edge that gather sensor data, control motors, trigger alarms and actuators, and more.
Microchip’s products are long-time embedded-design workhorses, and the new solutions turn its MCUs and MPUs into complete platforms for bringing secure, efficient and scalable intelligence to the edge. The company has rapidly built and expanded its growing, full-stack portfolio of silicon, software and tools that solve edge AI performance, power consumption and security challenges while simplifying implementation.
“AI at the edge is no longer experimental—it’s expected, because of its many advantages over cloud implementations,” said Mark Reiten, corporate vice president of Microchip’s Edge AI business unit. “We created our Edge AI business unit to combine our MCUs, MPUs and FPGAs with optimised ML models plus model acceleration and robust development tools. Now, the addition of the first in our planned family of application solutions accelerates the design of secure and efficient intelligent systems that are ready to deploy in demanding markets.”
Microchip’s new full-stack application solutions for its MCUs and MPUs encompass pre-trained and deployable models as well as application code that can be modified, enhanced and applied to different environments. This can be done either through Microchip’s embedded software and ML development tools or those from Microchip partners. The new solutions include:
- Detection and classification of dangerous electrical arc faults using AI-based signal analysis
- Condition monitoring and equipment health assessment for predictive maintenance
- Facial recognition with liveness detection supporting secure, on-device identity verification
- Keyword spotting for consumer, industrial and automotive command-and-control interfaces
Development Tools for AI at the Edge
Engineers can leverage familiar Microchip development platforms to rapidly prototype and deploy AI models, reducing complexity and accelerating design cycles. The company’s MPLAB X Integrated Development Environment (IDE) with its MPLAB Harmony software framework and MPLAB ML Development Suite plug-in provides a unified and scalable approach for supporting embedded AI model integration through optimised libraries. Developers can, for example, start with simple proof-of-concept tasks on 8-bit MCUs and move them to production-ready high-performance applications on Microchip’s 16- or 32-bit MCUs.
For its FPGAs, Microchip’s VectorBlox Accelerator SDK 2.0 AI/ML inference platform accelerates vision, Human-Machine Interface (HMI), sensor analytics and other computationally intensive workloads at the edge while also enabling training, simulation and model optimisation within a consistent workflow.
Other support includes training and enablement tools like the company’s motor control reference design featuring its dsPIC DSCs for data extraction in a real-time edge AI data pipeline, and others for load disaggregation in smart e-metering, object detection and counting, and motion surveillance. Microchip also helps solve edge AI challenges through complementary components that are required for product design and development. These include PCIe® devices that connect embedded compute at the edge and high-density power modules that enable edge AI in industrial automation and data centre applications.
The analyst firm IoT Analytics stated in its October 2025 market report that embedding edge AI capabilities directly into MCUs is among the top four industry trends, enabling AI-driven applications “…that reduce latency, enhance data privacy, and lower dependency on cloud infrastructure.” Microchip’s AI initiative reinforces this trend with its MCU and MPU platforms, as well as its FPGAs. Edge AI ecosystems increasingly require support for both software AI accelerators and integrated hardware acceleration on multiple devices across a range of memory configurations.
The post Microchip Extends its Edge AI Solutions for Development of Production-ready Applications using its MCUs & MPUs appeared first on ELE Times.