ELE Times


Open World Foundation Models Generate Synthetic Worlds for Physical AI Development

3 hours 15 min ago

Courtesy: Nvidia

Physical AI models, which power robots, autonomous vehicles, and other intelligent machines, must be safe, generalize to dynamic scenarios, and be capable of perceiving, reasoning, and operating in real time. Unlike large language models that can be trained on massive datasets from the internet, physical AI models must learn from data grounded in the real world.

However, collecting sufficient data that covers this wide variety of scenarios in the real world is incredibly difficult and, in some cases, dangerous. Physically based synthetic data generation offers a key way to address this gap.

NVIDIA recently released updates to NVIDIA Cosmos open-world foundation models (WFMs) to accelerate data generation for testing and validating physical AI models. Using NVIDIA Omniverse libraries and Cosmos, developers can generate physically based synthetic data at incredible scale.

Cosmos Predict 2.5 now unifies three separate models — Text2World, Image2World, and Video2World — into a single lightweight architecture that generates consistent, controllable multicamera video worlds from a single image, video, or prompt.

Cosmos Transfer 2.5 enables high-fidelity, spatially controlled world-to-world style transfer to amplify data variation. Developers can add new weather, lighting and terrain conditions to their simulated environments across multiple cameras. Cosmos Transfer 2.5 is 3.5x smaller than its predecessor, delivering faster performance with improved prompt alignment and physics accuracy.

These WFMs can be integrated into synthetic data pipelines running in the NVIDIA Isaac Sim open-source robotics simulation framework, built on the NVIDIA Omniverse platform, to generate photorealistic videos that reduce the simulation-to-real gap. Developers can reference a four-part pipeline for synthetic data generation:

  • NVIDIA Omniverse NuRec neural reconstruction libraries for reconstructing a digital twin of a real-world environment in OpenUSD, starting with just a smartphone.
  • SimReady assets to populate a digital twin with physically accurate 3D models.
  • The MobilityGen workflow in Isaac Sim to generate synthetic data.
  • NVIDIA Cosmos for augmenting generated data.
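
As a purely illustrative outline, the sketch below shows how those four stages might be orchestrated in a single data-generation script. Every function name here (reconstruct_scene, populate_simready, run_mobilitygen, augment_with_cosmos) is a hypothetical placeholder, not an actual Omniverse, Isaac Sim, or Cosmos API; a real pipeline would call into each product's own SDK.

```python
# Conceptual sketch of the four-part synthetic data pipeline described above.
# All helpers are hypothetical stand-ins for the NuRec, SimReady, MobilityGen,
# and Cosmos tooling — they only mimic the shape of the workflow.

def reconstruct_scene(smartphone_capture_dir: str) -> str:
    """Hypothetical NuRec step: rebuild a real environment as an OpenUSD stage."""
    return "warehouse_twin.usd"

def populate_simready(usd_stage: str, asset_catalog: list[str]) -> str:
    """Hypothetical step: place physically accurate SimReady assets in the twin."""
    return usd_stage

def run_mobilitygen(usd_stage: str, num_episodes: int) -> list[dict]:
    """Hypothetical MobilityGen step: roll out robot trajectories and record
    RGB, depth, and segmentation frames with annotations."""
    return [{"episode": i, "frames": []} for i in range(num_episodes)]

def augment_with_cosmos(episodes: list[dict], variations: list[str]) -> list[dict]:
    """Hypothetical Cosmos Transfer step: re-render each episode under new
    weather, lighting, and terrain conditions to amplify data variation."""
    return [dict(ep, variation=v) for ep in episodes for v in variations]

if __name__ == "__main__":
    stage = reconstruct_scene("captures/warehouse_walkthrough/")
    stage = populate_simready(stage, ["pallet", "forklift", "shelving_unit"])
    episodes = run_mobilitygen(stage, num_episodes=100)
    dataset = augment_with_cosmos(episodes, ["rain", "night", "fog"])
    print(f"Generated {len(dataset)} augmented episodes")
```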

From Simulation to the Real World

Leading robotics and AI companies are already using these technologies to accelerate physical AI development.

Skild AI, which builds general-purpose robot brains, is using Cosmos Transfer to augment existing data with new variations for testing and validating robotics policies trained in NVIDIA Isaac Lab.

Skild AI uses Isaac Lab to create scalable simulation environments where its robots can train across embodiments and applications. By combining Isaac Lab robotics simulation capabilities with Cosmos’ synthetic data generation, Skild AI can train robot brains across diverse conditions without the time and cost constraints of real-world data collection.

Serve Robotics uses synthetic data generated from thousands of simulated scenarios in NVIDIA Isaac Sim. The synthetic data is then used in conjunction with real data to train physical AI models. The company has built one of the largest autonomous robot fleets operating in public spaces and has completed over 100,000 last-mile meal deliveries across urban areas. Serve’s robots collect 1 million miles of data monthly, including nearly 170 billion image-lidar samples, which are used in simulation to further improve robot models.

See How Developers Are Using Synthetic Data

Lightwheel, a simulation-first robotics solution provider, is helping companies bridge the simulation-to-real gap with SimReady assets and large-scale synthetic datasets. With high-quality synthetic data and simulation environments built on OpenUSD, Lightwheel’s approach helps ensure robots trained in simulation perform effectively in real-world scenarios, from factory floors to homes.

Data scientist and Omniverse community member Santiago Villa is using synthetic data with Omniverse libraries and Blender software to improve mining operations by identifying large boulders that halt operations.

Undetected boulders entering crushers can cause delays of seven minutes or more per incident, costing mines up to $650,000 annually in lost production. Using Omniverse to generate thousands of automatically annotated synthetic images across varied lighting and weather conditions dramatically reduces training costs while enabling mining companies to improve boulder detection systems and avoid equipment downtime.

FS Studio partnered with a global logistics leader to improve AI-driven package detection by creating thousands of photorealistic package variations in different lighting conditions using Omniverse libraries like Replicator. The synthetic dataset dramatically improved object detection accuracy and reduced false positives, delivering measurable gains in throughput speed and system performance across the customer’s logistics network.

Robots for Humanity built a full simulation environment in Isaac Sim for an oil and gas client using Omniverse libraries to generate synthetic data, including depth, segmentation and RGB images, while collecting joint and motion data from the Unitree G1 robot through teleoperation.

Omniverse Ambassador Scott Dempsey is developing a synthetic data generator that builds various cables from real-world manufacturer specifications, using Isaac Sim to generate synthetic data augmented with Cosmos Transfer to create photorealistic training datasets for applications that detect and handle cables.

Conclusion

As physical AI systems continue to move from controlled labs into the complexity of the real world, the need for vast, diverse, and accurate training data has never been greater. Physically based synthetic worlds—driven by open-world foundation models and high-fidelity simulation platforms like Omniverse—offer a powerful solution to this challenge. They allow developers to safely explore edge cases, scale data generation to unprecedented levels, and accelerate the validation of robots and autonomous machines destined for dynamic, unpredictable environments.

The examples from industry leaders show that this shift is already well underway. Synthetic data is strengthening robotics policies, improving perception systems, and drastically reducing the gap between simulation and real-world performance. As tools like Cosmos, Isaac Sim, and OpenUSD-driven pipelines mature, the creation of rich virtual worlds will become as essential to physical AI development as datasets and GPUs have been for digital AI.

In many ways, we are witnessing the emergence of a new engineering paradigm—one where intelligent machines learn first in virtual environments grounded in real physics, and only then step confidently into the physical world. The Omniverse is not just a place to simulate; it is becoming the training ground for the next generation of autonomous systems.

The post Open World Foundation Models Generate Synthetic Worlds for Physical AI Development appeared first on ELE Times.

How Well Will the Automotive Industry Adopt the Use of AI for its Manufacturing Process

7 hours 4 min ago

Gartner believes that by 2029, only 5% of automakers will maintain strong AI investment growth, a decline from over 95% today.

“The automotive sector is currently experiencing a period of AI euphoria, where many companies want to achieve disruptive value even before building strong AI foundations,” said Pedro Pacheco, VP Analyst at Gartner. “This euphoria will eventually turn into disappointment as these organizations are not able to achieve the ambitious goals they set for AI.”

Gartner predicts that only a handful of automotive companies will maintain ambitious AI initiatives after the next five years. Organizations with strong software foundations, tech-savvy leadership, and a consistent, long-term focus on AI will pull ahead of the rest, creating a competitive AI divide.

“Software and data are the cornerstones of AI,” said Pacheco. “Companies with advanced maturity in these areas have a natural head start. In addition, automotive companies led by execs with strong tech know-how are more likely to make AI their top priority instead of sticking to the traditional priorities of an automotive company.”

Fully Automated Vehicle Assembly Predicted by 2030

The automotive industry is also heading for radical operational efficiency. As automakers rapidly integrate advanced robotics into their assembly lines, Gartner predicts that by 2030, at least one automaker will achieve fully automated vehicle assembly, marking a historic shift in the automotive sector.

“The race toward full automation is accelerating, with nearly half of the world’s top automakers (12 out of 25) already piloting advanced robotics in their factories,” said Marco Sandrone, VP Analyst at Gartner. “Automated vehicle assembly helps automakers reduce labor costs, improve quality, and shorten production cycle times. For consumers, this means better vehicles at potentially lower prices.”

While it may reduce the direct need for human labor in vehicle assembly, new roles in AI oversight, robotics maintenance and software development could offset losses if reskilling programs are prioritized.

The post How Well Will the Automotive Industry Adopt the Use of AI for its Manufacturing Process appeared first on ELE Times.

Electronics manufacturing and exports grow manifold in the last 11 years

7 hours 30 min ago

Central government-led schemes, including PLI for large-scale electronics manufacturing (LSEM) and PLI for IT hardware, have boosted both manufacturing and exports in the broader electronics category and the mobile phone segment.

Mobile manufacturing in India has risen tremendously. In the last 11 years, the total number of mobile manufacturing units has increased from 2 to more than 300. Since the launch of the PLI for LSEM, mobile phone manufacturing has grown from ₹2.2 lakh crore in 2020-21 to ₹5.5 lakh crore.

Minister of State for Electronics and Information Technology Jitin Prasada, in reply to a question, informed the Rajya Sabha on Friday that, as a result of policy efforts, electronics manufacturing has grown almost six times in the last 11 years, from ₹1.9 lakh crore in 2014-15 to ₹11.32 lakh crore in 2024-25.

The booming industry has generated employment for approximately 25 lakh people, and electronics exports have grown eightfold since 2014-15.

According to information submitted by Union Minister for Electronics and Information Technology Shri Ashwini Vaishnaw in the Rajya Sabha, the Government is providing the latest design tools to 394 universities and start-ups to encourage India’s young engineers. Chip designers from more than 46 universities have used these tools to design chips and fabricate them at Semiconductor Labs, Mohali.

Also, all major semiconductor design companies have set up design centers in India. The most advanced chips, such as 2 nm chips, are now being designed in India by Indian designers.

The post Electronics manufacturing and exports grow manifold in the last 11 years appeared first on ELE Times.

Taiwanese company to invest 1,000 crore in Karnataka for new electronics & semiconductor park

9 hours 16 min ago

Allegiance Group signed a Memorandum of Understanding (MoU) with Karnataka to develop an India-Taiwan Industrial Park, creating a dedicated hub for advanced electronics and semiconductor manufacturing.

Karnataka has secured a major foreign investment with Taiwan-based Allegiance Group signing an MoU to establish a ₹1,000-crore India-Taiwan Industrial Park (ITIP) focused on electronics and semiconductor manufacturing. The agreement was signed by IT/BT director Rahul Sharanappa Sankanur and Allegiance Group vice president Lawrence Chen.

Chief Minister Siddaramaiah welcomed the investment, stating that the project will bring cutting-edge technologies to the state and enhance India’s role in the global electronics value chain.

The proposed ITIP will be developed as a dedicated zone for Taiwanese companies specialising in advanced manufacturing, R&D, and innovation. According to the state government, the park is expected to generate 800 direct jobs over the next five years and will strengthen Karnataka’s position in high-value manufacturing.

This investment comes as Karnataka intensifies efforts to expand its manufacturing footprint. Just last week, American firm Praxair India announced plans to invest ₹200 crore in its operations in the state over the next three years.

The Allegiance Group, which recently committed ₹2,400 crore for similar industrial facilities in Andhra Pradesh and Telangana, said the Bengaluru project would act as a strong catalyst for Taiwanese companies entering the Indian market. “The ITIP will help Taiwanese firms scale in India and support the growth of the semiconductor and electronics ecosystem,” said Lawrence Chen.

IT/BT minister Priyank Kharge stated that the facility will deepen India-Taiwan business ties and strengthen collaboration in emerging technologies. The project aims to build a full supply chain ecosystem, including components, PCBs, and chip design, while also encouraging technology transfer and global best practices.

Industries minister M.B. Patil noted that Karnataka has signed over 115 MoUs worth ₹6.57 lakh crore in the last two years, and continues to attract leading global manufacturers such as Tata Motors, HAL, and Bosch.

The post Taiwanese company to invest 1,000 crore in Karnataka for new electronics & semiconductor park appeared first on ELE Times.

The 2025 MRAM Global Innovation Forum will Showcase MRAM Technology Innovations, Advances, & Research from Industry Experts

9 hours 58 min ago

The MRAM Global Innovation Forum is the industry’s premier platform for Magnetoresistive Random Access Memory (MRAM) technology, bringing together leading magnetics experts and researchers from industry and academia to share the latest MRAM advancements. Now in its 13th year, the annual one-day conference will be held the day after the IEEE International Electron Devices Meeting (IEDM) on December 11, 2025 from 8:45am to 6pm at the Hilton San Francisco Union Square Hotel’s Imperial Ballroom A/B.

The 2025 MRAM technical program includes 12 invited presentations from leading global MRAM experts, as well as an evening panel. The program will shed light on technology development, product development, tooling, and other exploratory topics.

MRAM technology, a type of non-volatile memory, is known for its high speed, endurance, scalability, low power consumption, and radiation hardness. In contrast to conventional memory technologies, data in MRAM devices is stored by magnetic storage elements instead of an electric charge. MRAM technology is increasingly used in embedded memory applications for automotive microcontrollers, edge AI devices, data centers, sensors, aerospace, and wearable devices.

“The STT-MRAM market is growing rapidly now, especially with use of embedded STT-MRAM in next-generation automotive microcontroller units,” said Kevin Garello, MRAM Forum co-chair (since 2021) and senior researcher engineer at SPINTEC. “I expect edge AI applications to be the next big market for STT-MRAM.”

“I am pleased to see that over the years, the MRAM Forum series has grown into a landmark event within the MRAM industrial ecosystem,” said Bernard Dieny, former MRAM Forum co-chair (2017–2023), and director of research at SPINTEC. “We are witnessing a steady increase in the adoption of this technology across the microelectronics industry, and the initial concerns associated with this new technology are steadily fading away.”

The post The 2025 MRAM Global Innovation Forum will Showcase MRAM Technology Innovations, Advances, & Research from Industry Experts appeared first on ELE Times.

The Era of Engineering Physical AI

Mon, 12/08/2025 - 14:27

Courtesy: Synopsys

Despite algorithmic wizardry and unprecedented scale, the engineering behind AI has been relatively straightforward. More data. More processing.

But that’s changing.

With an explosion of investment and innovation in robots, drones, and autonomous vehicles, “physical AI” is making the leap from science fiction to everyday reality. And the engineering behind this leap is anything but straightforward.

No longer confined within the orderly, climate-controlled walls of data centers, physical AI must be engineered — from silicon to software to system — to navigate countless new variables.

Sudden weather shifts. A cacophony of signals and noise. And the ever-changing patterns of human behavior.

Bringing physical AI into these dynamic settings demands far more than sophisticated algorithms. It requires the intricate fusion of advanced electronics, sensors, and the principles of multiphysics — all working together to help intelligent machines perceive, interpret, and respond to the complexities of the physical world.

The next frontier for AI: physics

We have taught AI our languages and imparted it with our collective knowledge. We’ve trained it to understand our desires and respond to our requests.

But the physical world presents a host of new challenges. If you ask AI about potholes, it will tell you how they’re formed and how to repair them. But what happens when AI encounters a large pothole in foggy, low-light conditions during the middle of rush hour?

Our environment is highly dynamic. But the one, unbending constant? Physics. And that’s why physics-based simulation is foundational to the development of physical AI.

For AI to function effectively in the real world, it needs finely tuned sensors — such as cameras, radar, and LiDAR — that deliver correlated environmental data, allowing physical AI systems to accurately perceive and interpret their surroundings.

Physics-based simulation allows engineers to design, test, and optimize these sensors — and the systems they support — digitally, which is significantly less expensive than physical prototypes. Answers to critical “what-if” questions can be attained, such as how varying weather conditions or material reflectivity impact performance. Through simulation, engineers can gather comprehensive and predictive insights on how their systems will respond to countless operating scenarios.

Equally important to being able to “see” our world is how well physical AI is trained to “think.” In many cases, we lack the vast, diverse datasets required to properly train nascent physical AI systems on the variables they will encounter. The rapid emergence of synthetic data increasingly helps innovators bridge the gap, but accuracy has been a concern.

Exciting progress has been made on this front. Powerful development platforms — such as NVIDIA’s Omniverse — can be used to create robust virtual worlds. When integrated with precise simulation tools, these platforms enable developers to import high-fidelity physics into their scenario to generate reliable synthetic data.

Re-engineering engineering from silicon to systems

Design and engineering methodologies have traditionally been siloed and linear, with a set of hardware and software components being developed or purchased separately prior to assembly, test, and production.

These methodologies are no longer viable — for physical AI or other silicon-powered, software-defined products.

Consider a drone. To fly autonomously, avoid other objects, and respond to operator inputs, many things must work in concert. Advanced software, mechanical parts, sensors, custom silicon, and much more.

This level of precision, within imprecise environments, can’t be achieved with traditional methodologies. Nor can it be delivered within the timelines the market now demands.

Digitally enhanced products must be designed and developed as highly complex, multi-domain systems. Electrical engineers, mechanical engineers, software developers, and others must work in lockstep from concept to final product. And their work must accelerate to meet shrinking development cycles.

Ansys electromagnetic simulation software within a rendering of downtown San Jose in NVIDIA Omniverse with 5 cm resolution

The complexity of today’s intelligent systems demands solutions with a deeper integration of electronics and physics. Engineering solution providers are moving fast to meet this need.

Conclusion

Physical AI is pushing engineering into uncharted territory—far beyond the comfort of controlled data centers and into the unpredictable, physics-governed world we live in. Delivering machines that can truly see, think, and act in real time requires more than clever algorithms; it demands a new model of engineering rooted in high-fidelity simulation, cross-domain collaboration, and deeply integrated electronics and software.

As sensors, computing, and simulation technologies converge, engineers are gaining the tools to design intelligent systems that can anticipate challenges, adapt to dynamic conditions, and operate safely in complex environments. The leap from digital AI to physical AI is not just an evolution—it’s a reinvention of how we build technology itself. And with the accelerating progress in multiphysics modeling, synthetic data generation, and unified development platforms, the industry is rapidly assembling the foundation for the next era of autonomous machines.

Physical AI is no longer a distant vision. It is becoming real, and the engineering innovations taking shape today will define how seamlessly—and how safely—intelligent systems fit into the world of tomorrow.

The post The Era of Engineering Physical AI appeared first on ELE Times.

SFO Technologies plans to invest Rs. 2,270 crore for a PCB manufacturing plant in Tamil Nadu

Mon, 12/08/2025 - 14:13

SFO Technologies plans to set up a plant in Theni, Tamil Nadu, with an investment of Rs. 2,270 crore for manufacturing printed circuit boards (PCBs) and other components for the electronics industry, a senior executive at the Kochi-based company said.

An unnamed senior-level source revealed that the company, the flagship of the NeST Group, is expected to sign a memorandum of understanding on the project with the Tamil Nadu government at the TN Rising Conclave in Madurai on Sunday.

PCBs are used in nearly all modern consumer electronic devices and accessories, including phones, tablets, smartwatches, wireless chargers, and power supplies. At the proposed plant in Theni, the company is also considering manufacturing components like composites, connectors, relays and optical transceivers.

SFO has requested a 60-acre parcel of land and intends to begin manufacturing at the unit within the next two years, scaling it to full capacity over the following six years.

The source said the company is considering Theni for the project partly because of its wind energy potential. The plant could possibly meet its power demand through renewable sources of energy, he said.

The company’s plan is to start with PCBs in Theni, the executive said. “As part of our proposal, we have also requested some land towards Krishnagiri, for a plant that will be intended for connectors,” he said.

The post SFO Technologies plans to invest Rs. 2,270 crore for a PCB manufacturing plant in Tamil Nadu appeared first on ELE Times.

Gartner Forecasts Having 116 Million EVs on the Road in 2026

Mon, 12/08/2025 - 12:36

Gartner, Inc., a business and technology insights company, forecasts that 116 million electric vehicles (EVs), including cars, buses, vans, and heavy trucks, will be on the road in 2026.

According to the company’s research, battery electric vehicles (BEVs) are forecast to continue to account for well over half of the EV installed base, but an increasing proportion of customers are choosing plug-in hybrid electric vehicles (PHEVs) (see Table 1).

Table 1. Electric Vehicle Installed Base by Vehicle Type, Worldwide, 2025-2026 (Single Units)

Vehicle Type                                2025 Installed Base    2026 Installed Base
Battery Electric Vehicles (BEV)                      59,480,370             76,344,452
Plug-in Hybrid Electric Vehicles (PHEV)              30,074,582             39,835,111
Total                                                89,554,951            116,179,563

Source: Gartner (December 2025)

Expert Take:

“Despite the U.S. government introducing tariffs on vehicle imports and many governments removing the subsidies and incentives for purchasing EVs, the number of EVs on the road is forecast to increase 30% in 2026,” said Jonathan Davenport, Sr Director Analyst at Gartner. “In 2026, China is projected to account for 61% of total EV installed base, and global ownership of plug-in hybrid EVs (PHEVs) is expected to rise 32% year-over-year as customers value the reassurance of a back-up petrol engine for use, should they need it.”

The post Gartner Forecasts Having 116 Million EVs on the Road in 2026 appeared first on ELE Times.

Toradex Launches Two New Computer on Module Families for Ultra-Compact Industrial and IoT Applications

Mon, 12/08/2025 - 11:52

Toradex has launched two entirely new Computer on Module (CoM) families, OSM and Lino, expanding its embedded computing portfolio with four new modules powered by NXP i.MX 93 and i.MX 91 processors: OSM iMX93, OSM iMX91, Lino iMX93, and Lino iMX91.

The OSM and Lino families deliver cost-optimized, industrial-grade reliability, ultra-compact form factors, and long-term software support, and are designed for high-volume, space-constrained industrial IoT devices such as industrial controllers, gateways, smart sensors, and handheld systems. For AI-at-the-edge and industrial IoT applications, the NXP i.MX 93 offers a 0.5 TOPS NPU, enabling entry-level, hardware-accelerated on-device machine learning for smart sensing, analytics, and industrial intelligence. Designed for extreme temperatures from -40°C to +85°C, both the OSM and Lino families deliver industrial-grade reliability and availability through 2038, providing a future-proof foundation for next-generation IoT and edge devices.

“Both families deliver new compact, reliable, industrial Edge AI compute platforms,” said Samuel Imgrueth, CEO at Toradex. “While OSM adds a solderable standard form factor, Lino provides connector-based ease of use for rapid integration and serviceability. This empowers customers to design next generation, intelligent, space-constrained devices with confidence, scalability, and long-term support.”

OSM Family: Solderable, Ultra-Compact, Open Standard

The OSM family adheres to the Open Standard Module (OSM) Size-S specification, providing a 30 × 30mm solderable, connector-less design optimized for automated assembly, rugged operation, and cost-effective scaling. It’s an ideal choice for high-volume applications up to several hundred thousand devices a year.

Lino Family: Connector-Based Flexibility for High-Volume Devices

The Lino family provides a cost-optimized, connector-based entry point for space-constrained devices. Its easy-to-use connector interface simplifies integration, serviceability, and speeds up development, while rich connectivity options support a wide range of scalable industrial and IoT applications.

Toradex is also introducing the Verdin-Lino Adapter, allowing any Lino module to be mounted onto any Verdin-compatible carrier board. This gives customers immediate access to the powerful Verdin ecosystem and enables testing and validation using both the Verdin Development Board and existing Verdin-based custom designs.

All modules come with full Toradex software support, including a Yocto Reference Image and Torizon, a Yocto-based, long-term-supported Linux platform that provides secure OTA remote updates, device monitoring, remote access, and simplified EU CRA (Cyber Resilience Act) compliance. Its integration with Visual Studio Code and rich ecosystem accelerates development while ensuring production reliability and operational security. Torizon is also an ideal starting point for building your own Linux distribution.

The post Toradex Launches Two New Computer on Module Families for Ultra-Compact Industrial and IoT Applications appeared first on ELE Times.

The Great Leap: How AI is Reshaping Cybersecurity from Pilot Projects to Predictive Defense

Mon, 12/08/2025 - 09:44

Imagine your cybersecurity team as a group of highly-trained detectives. For decades, they’ve been running through digital crime scenes with magnifying glasses, reacting to the broken window or the missing safe after the fact. Now, suddenly, they have been handed a crystal ball—one that not only detects the threat but forecasts the modus operandi of the attacker before they even step onto the property. That crystal ball is Artificial Intelligence, and the transformation it’s bringing to cyber defense is less a technological upgrade and more a fundamental re-engineering of the entire security operation.

Palo Alto Networks, in partnership with the Data Security Council of India (DSCI), released the State of AI Adoption for Cybersecurity in India report. The report found that only 24% of CXOs consider their organizations fully prepared for AI-driven threats, underscoring a significant gap between adoption intent and operational readiness. The report sets a clear baseline for India Inc., examining where AI adoption stands, what organizations are investing in next, and how the threat landscape is changing. It also surfaces capability and talent gaps, outlines governance, and details preferred deployment models.

While the intent to leverage AI for enhanced cyber defense is almost universal, its operational reality is still maturing. The data reveals a clear gap between strategic ambition and deployed scale.

The report underscores the dual reality of AI: it is a potent defense mechanism but also a primary source of emerging threat vectors. Key findings include:

  • Adoption intent is high, maturity is low: 79% of organizations plan to integrate AI/ML into their cybersecurity operations, but 40% remain in the pilot stage. The main goal is operational speed, prioritizing the reduction of Mean Time to Detect and Respond (MTTD/MTTR).
  • Investments are Strategic: 64% of organizations are now proactively investing through multi-year risk-management roadmaps.
  • Threats are AI-Accelerated: 23% of the organizations are resetting priorities due to new AI-enabled attack paradigms. The top threats are coordinated multi-vector attacks and AI-poisoned supply chains.
  • Biggest Barriers: Financial overhead (19%) and the skill/talent deficit (17%) are the leading roadblocks to adoption.
  • Future Defense Model: 31% of organizations see Human-AI Hybrid Defense Teams as the approach through which AI will transform cybersecurity, and 33% of organizations require human approval for AI-enabled critical security decisions and actions.

“AI is at the heart of most serious security conversations in India, sometimes as the accelerator, sometimes as the adversary itself. This study, developed with DSCI, makes one thing clear: appetite and intent are high, but execution and operational discipline are lagging,” said Swapna Bapat, Vice President and Managing Director, India & SAARC, Palo Alto Networks. “Catching up means using AI to defend against AI, but success demands robustness. Given the dynamic nature of building and deploying AI apps, continuous red teaming of AI is an absolute must to achieve that robustness. It requires coherence: a platform that unifies signals across network, operations, and identity; Zero-Trust verification designed into every step; and humans in the loop for decisions that carry real risk. That’s how AI finally moves from shaky pilots to robust protection.”

Vinayak Godse, CEO, DSCI, said, “India is at a critical juncture where AI is reshaping both the scale of cyber threats and the sophistication of our defenses. AI enabled attacker capabilities are rapidly increasing in scale and sophistication. Simultaneously, AI adoption for cyber security can strengthen security preparedness to navigate risk, governance, and operational readiness to predict, detect, and respond to threats in real time. This AI adoption study, supported by Palo Alto Networks, reflects DSCI’s efforts to provide organizations with insights to navigate the challenges emerging out of AI enabled attacks for offense while leveraging AI for security defense.”

The report was based on a survey of 160+ organizations across BFSI, manufacturing, technology, government, education, and mid-market enterprises, covering CXOs, security leaders, business unit heads, and functional teams.

The post The Great Leap: How AI is Reshaping Cybersecurity from Pilot Projects to Predictive Defense appeared first on ELE Times.

Optimized analog front-end design for edge AI

Fri, 12/05/2025 - 13:12

Courtesy: Avnet

Key Takeaways:

01.   AI models see data differently: what makes sense to a digital processor may not be useful to an AI model, so avoid over-filtering and consider separate data paths

02.   Consider training needs: models trained at the edge will need labeled data (such as clean, noisy, good, faulty)

03.   Analog data is diverse: match the amplifier to the source, consider the bandwidth needs of the model, and the path’s signal-to-noise ratio

 

Machine learning (ML) and artificial intelligence (AI) have expanded the market for smart, low-power devices. Capturing and interpreting sensor data streams leads to novel applications. ML turns simple sensors into smart leak detectors by inferring why the pressure in a pipe has changed. AI can utilize microphones in audio equipment to detect events within the home, such as break-ins or an occupant falling.

For many applications that rely on real-world data, the analog front-end (AFE) is one of the most important design elements as it functions as a bridge to the digital world. At a high level, AFEs delivering data to a machine-learning back-end have broadly similar design needs to conventional data-acquisition and signal-processing systems.

But in some applications, particularly those in transition from IoT to AIoT, the data is doing double-duty. Sensors could be used for conventional data analysis by back-end systems and also as real-time input to AI models. There are trade-offs implied by this split, but it could also deliver greater freedom in the AFE architecture. Any design freedom must still address overall cost, power efficiency, and system reliability.

The importance of bandwidth and signal-to-noise ratio

Accuracy is often an imperative with analog signals. The signal path must deliver the bandwidth and signal-to-noise ratio required by the front-end’s digitizer. When using AI, designers must be more diligent about avoiding distortion, as spurious signals introduced into the training data could compromise the model.

The classic AFE may need to change to accommodate the sensor and digital processing sections, and the AI model’s needs which may be different. (Source: Avnet)

For signals with a wide dynamic range, it may make sense to employ automatic gain control (AGC) to ensure there is enough detail in the recorded signal under all practical conditions. The changes in amplification should also be passed to the digitizer and synchronized with the sensor data so they can be recorded as features during AI training or combined by a preprocessing step into higher-resolution samples. If not, the model may learn the wrong features during training.
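
As a minimal sketch of that idea, the snippet below logs the instantaneous AGC gain alongside each digitized sample so the two can be stored together as training features, or recombined into gain-normalized values during preprocessing. The toy AGC law, gain limits, and data layout are illustrative assumptions, not a specific AFE’s interface.

```python
import numpy as np

def capture_with_gain_log(samples: np.ndarray, gains: np.ndarray) -> np.ndarray:
    """Pair each ADC sample with the AGC gain that was active when it was taken.

    samples: raw ADC values after the programmable-gain amplifier
    gains:   linear gain applied by the AGC for each sample (same length)
    Returns an (N, 3) array of [raw_sample, gain, gain_normalized_sample], so a
    model can learn from the gain as a feature or train on the reconstructed
    absolute amplitude.
    """
    gain_normalized = samples / gains          # undo the AGC to recover amplitude
    return np.column_stack([samples, gains, gain_normalized])

# Illustrative use: a signal whose amplitude drops while the AGC steps its gain up.
t = np.linspace(0, 1, 1000)
true_signal = np.sin(2 * np.pi * 5 * t) * np.linspace(1.0, 0.1, t.size)
agc_gain = np.clip(0.5 / (np.abs(true_signal) + 1e-3), 1.0, 16.0)  # toy AGC law
adc_samples = true_signal * agc_gain
features = capture_with_gain_log(adc_samples, agc_gain)
print(features.shape)  # (1000, 3)
```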

Interfacing AI systems with multi-sensor designs

Multi-sensor designs introduce another consideration. Devices that process biological signals or industrial condition-monitoring systems often need to process multiple types of data together. Time-synchronized data will deliver the best results as changes in group delay caused by filtering or digitization pipelines of different depths can change the relationship between signals.

The use of AI may lead the designer to make choices they might not make for simpler systems. For example, aggressive low- and high-pass filtering might help deliver signals that are easier for traditional software to interpret. But this filtering may obscure signal features that are useful to the AI.

Design Tip – Analog Switches & Multiplexers

Analog switches and multiplexers perform an important role in AFEs where multiple sensors are used in the signal chain. Typically, devices are digitally addressed and controlled, switches selectively connect inputs to outputs, while multiplexers route a specific input to a common output. Design considerations include resistance, switching speed, bandwidth, and crosstalk.

 

For example, high-pass filtering can be useful for removing apparent signal drift but will also remove cues from low-frequency sources, such as long-term changes in pressure. Low-pass filtering may remove high-frequency signal components, such as transients, that are useful for AI-based interpretation. It may be better to perform the filtering digitally after conversion for other downstream uses of the data.  
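
A hedged sketch of that approach: capture a wideband stream, keep it untouched for the AI model, and derive the conventionally filtered copy in software for other downstream consumers. The sample rate and cutoff frequencies below are arbitrary illustrative values, not recommendations for a particular sensor.

```python
import numpy as np
from scipy import signal

fs = 10_000                                    # assumed sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
raw = (np.sin(2 * np.pi * 3 * t)               # slow drift / low-frequency content
       + 0.3 * np.sin(2 * np.pi * 800 * t)     # mid-band content
       + 0.05 * np.random.randn(t.size))       # broadband noise and transients

# Keep the raw, wideband capture available to the AI model...
ai_input = raw

# ...and derive a filtered copy for legacy downstream software, entirely digitally.
sos_hp = signal.butter(4, 20, btype="highpass", fs=fs, output="sos")
sos_lp = signal.butter(4, 1_000, btype="lowpass", fs=fs, output="sos")
legacy_view = signal.sosfiltfilt(sos_lp, signal.sosfiltfilt(sos_hp, raw))

print(ai_input.std(), legacy_view.std())
```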

Techniques for optimizing energy efficiency in AFEs

Programmable AFEs, or interchangeable AFE pipelines, can improve energy optimization. It is common for edge devices to operate in a low-energy “always on” mode, acquiring signals at a relatively low level of accuracy while the AI model is inactive. Once a signal passes a threshold, the system wakes the AI accelerator and moves into a high-accuracy acquisition mode.

That change can be accommodated in some cases by programming the preamplifiers and similar components in the AFE to switch between low-power and low-noise modes dynamically.

A different approach, often used in biomedical sensors, is to use changes in duty cycle to reduce overall energy. In the low-power state, the AFE may operate at a relatively low data rate and be powered down during quiet intervals. This raises the risk of the system missing important events. An alternative is to use a separate, low-accuracy AFE circuit that runs at nanowatt levels. This circuitry may be quite different to the main AFE signal path.

In audio sensing, one possibility is to use a frequency-detection circuit coupled with a comparator to listen for specific events captured by a microphone. A basic frequency detector, consisting of a simple bandpass filter and comparator, may wake the system or move the low-power AFE into a second, higher-power state, but not the full wakefulness mode that engages the back-end digital AI model.

In this state, a circuit such as a generalized impedance converter can be manipulated to sweep the frequency range and look for further peaks to see if the incoming signal meets the wakeup criteria. That multistage approach will limit the time during which the full AI model needs to be active.
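
The control flow of that multistage wake-up might look something like the sketch below. The thresholds, band limits, and stage criteria are illustrative assumptions, and in a real design the first stage would be the nanowatt analog detector rather than software.

```python
import numpy as np
from scipy import signal

FS = 16_000                      # assumed microphone sample rate, Hz
BAND = (900, 1100)               # assumed band of interest for the cheap detector
STAGE1_THRESHOLD = 0.05          # assumed comparator-style energy threshold
STAGE2_THRESHOLD = 0.15          # assumed stricter criterion before full wake

def band_energy(frame: np.ndarray, band: tuple[float, float]) -> float:
    """Cheap stand-in for the analog bandpass-filter-plus-comparator stage."""
    sos = signal.butter(2, band, btype="bandpass", fs=FS, output="sos")
    return float(np.mean(signal.sosfilt(sos, frame) ** 2))

def swept_peak_check(frame: np.ndarray) -> bool:
    """Stand-in for the second stage that sweeps the spectrum for further peaks."""
    freqs, psd = signal.periodogram(frame, fs=FS)
    peaks = psd[(freqs > 200) & (freqs < 6000)]
    return float(peaks.max()) > STAGE2_THRESHOLD * float(psd.sum())

def process_frame(frame: np.ndarray) -> str:
    if band_energy(frame, BAND) < STAGE1_THRESHOLD:
        return "stay_asleep"             # low-power AFE keeps dozing
    if not swept_peak_check(frame):
        return "intermediate_power"      # wake the AFE, but not the AI accelerator
    return "wake_ai_model"               # only now power up the full back end

if __name__ == "__main__":
    tone = np.sin(2 * np.pi * 1000 * np.arange(FS) / FS)    # in-band 1 kHz event
    print(process_frame(0.001 * np.random.randn(FS)))       # quiet background
    print(process_frame(tone))                               # should fully wake
```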

Breaking down analog front-ends for AI

Further advances in AI theory enable more sophisticated analog-domain processing before digitization. Some vendors have specialized in neural-network devices that combine on-chip memory with analog computation. Another possibility for AFE-based AI that results in a lower hardware overhead is reservoir computing. This uses concepts from the theory of recurrent neural networks. A signal fed back into a randomly connected network, known as the reservoir, can act as a discriminator used by an output layer that is trained to recognize certain output states as representing an event of interest. This provides the ability to train an AFE on trigger states that are more complex than simple threshold detectors.
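
A minimal echo state network illustrates the reservoir computing idea: a fixed, randomly connected recurrent layer expands the input into a rich state, and only a simple linear readout is trained to flag the event of interest. The reservoir size, spectral radius, ridge-regression readout, and toy burst-detection task below are generic choices, not a recommendation for any particular AFE.

```python
import numpy as np

rng = np.random.default_rng(0)
N_RES, SPECTRAL_RADIUS, RIDGE = 100, 0.9, 1e-3   # generic reservoir settings

# Fixed random reservoir: only the readout weights are ever trained.
W_in = rng.uniform(-0.5, 0.5, (N_RES, 1))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= SPECTRAL_RADIUS / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u: np.ndarray) -> np.ndarray:
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(N_RES)
    states = np.empty((len(u), N_RES))
    for t, ut in enumerate(u):
        x = np.tanh(W_in[:, 0] * ut + W @ x)
        states[t] = x
    return states

# Toy task: flag samples where a fast burst rides on top of a slow background.
t = np.arange(2000)
labels = ((t > 800) & (t < 900)).astype(float)        # "event of interest" window
u = 0.2 * np.sin(2 * np.pi * t / 200) + 0.8 * labels * np.sin(2 * np.pi * t / 10)

X = run_reservoir(u)
W_out = np.linalg.solve(X.T @ X + RIDGE * np.eye(N_RES), X.T @ labels)  # ridge readout
detection = X @ W_out
print("mean score inside event:", detection[800:900].mean(),
      "outside:", detection[:800].mean())
```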

Another method for trading off AFE signal quality against AI capability is compressive or compressed sensing. This uses known signal characteristics, such as sparsity, to lower the sample rate and, with it, power. Though this mainly affects the choice of sampling rate in the analog-to-digital converter, the AFE still needs to be designed to accommodate the signal’s full bandwidth. At the same time, the AFE may need to incorporate stronger front-end filtering to block interferers that may fold into the measurement frequency range.
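
To make the compressive-sensing trade-off concrete, the sketch below reconstructs a spectrally sparse signal from far fewer random measurements than Nyquist sampling would require, using orthogonal matching pursuit from scikit-learn. The signal model, measurement count, and sparsity level are illustrative assumptions rather than a design prescription.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
N, M, K = 512, 96, 4                 # signal length, measurements, nonzero tones

# Sparse signal: K tones in the DCT domain.
coeffs = np.zeros(N)
coeffs[rng.choice(N, K, replace=False)] = rng.uniform(1, 3, K)
x = idct(coeffs, norm="ortho")

# Compressive measurements: y = Phi @ x, with M << N random projections.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

# Recover the sparse coefficients through the combined dictionary Phi * IDCT.
A = Phi @ idct(np.eye(N), axis=0, norm="ortho")
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=K, fit_intercept=False).fit(A, y)
x_hat = idct(omp.coef_, norm="ortho")

print("relative reconstruction error:",
      np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```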

Optimizing AFE/AI trade-offs through experimentation

With so many choices, experimentation will be key to determining the best tradeoffs for the target system. Operating at higher bandwidth and resolution specifications is a good start. Data can be readily filtered and converted to the digital domain at lower resolutions to see how they affect AI model performance.

The results of those experiments can be used to determine the target AFE’s specifications in terms of gain, filtering, bandwidth, and the effective number of bits (ENOB) needed. Such experiments also provide opportunities to explore more novel AFE processing, such as reservoir computing and compressive sensing, to gauge how well they might enhance the final system.
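
One way to run those experiments, sketched under generic assumptions: start from a high-rate, high-resolution reference capture, then decimate and re-quantize it to emulate cheaper AFE operating points, and compare model accuracy at each one. The candidate rates, bit depths, and the random stand-in data below are placeholders for the project’s real recordings and evaluation metric.

```python
import numpy as np
from scipy import signal

def emulate_afe(capture: np.ndarray, fs: int, target_fs: int, bits: int) -> np.ndarray:
    """Emulate a lower-spec AFE from a reference capture.

    Decimates from fs to target_fs (with anti-alias filtering) and re-quantizes
    to the given bit depth, approximating a cheaper front end's output.
    """
    decimated = signal.decimate(capture, fs // target_fs, zero_phase=True)
    full_scale = float(np.max(np.abs(decimated))) or 1.0
    levels = 2 ** (bits - 1)
    return np.round(decimated / full_scale * levels) / levels * full_scale

fs = 48_000                              # reference capture rate (placeholder)
reference = np.random.randn(fs)          # stand-in for a real recorded dataset

for target_fs in (16_000, 8_000):
    for bits in (12, 10, 8):
        candidate = emulate_afe(reference, fs, target_fs, bits)
        # evaluate_model(candidate) would be the project's own accuracy check.
        print(target_fs, bits, candidate.shape)
```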

The post Optimized analog front-end design for edge AI appeared first on ELE Times.

Introducing Wi-Fi 8: The Next Boost for the Wireless AI Edge

Fri, 12/05/2025 - 11:33

Courtesy: Broadcom

Wi-Fi 8 has officially arrived—and it marks a major leap forward for next-generation connectivity.

Wi-Fi has come a long way. Earlier generations (Wi-Fi 1 through 5) focused mainly on delivering content to users: streaming video, online gaming, and video calls. But today’s digital world runs in both directions. We create as much data as we consume. We upload high-resolution content, collaborate in real time, and rely on on-device AI for everything from productivity to entertainment. That makes the “last hop” between devices and wireless networks more critical than ever.

Wi-Fi 8 is built for this new reality. Evolving from the advances of Wi-Fi 6 and 7, it offers reliable performance at scale, consistently low latency, and significantly stronger uplink capacity—precisely what modern, AI-driven applications need to run smoothly and responsively.

Why Wi-Fi 8 Matters

The internet has shifted from passive browsing to immersive, interactive, and personalized experiences. Devices now sense, analyze, and generate data constantly. By the end of 2025, hundreds of millions of terabytes will be created every day, much of it from IoT, enterprise telemetry, and video. A lot of that data never even makes it to the cloud—it’s handled locally. But guess what still carries it around? Wi-Fi.

Uplink matters

Traditional traffic patterns skewed roughly 90/10 downlink/uplink. Not anymore. AI apps, smart assistants, and continuous sync push networks toward a 50/50 split. Your Wi-Fi can’t just be fast going to you—it has to be equally fast, fair, and predictable going from you.

Real-time Wi-Fi

In the age of AI, Wi-Fi has to be far more real-time. Take, for example, agentic apps that work with voice inputs. We all know today’s assistants can feel clunky—they buffer, they miss interruptions. To get to your true agentic assistant with a “Jarvis-like” back-and-forth, networks need ultra-low latency, less jitter, and fewer drops.

With the right broadband and Wi-Fi 8, those thresholds become possible.

What’s New in Wi-Fi 8

Wi-Fi 8 delivers a system-wide upgrade across speed, capacity, reach, and reliability. Here’s what that means in practice:

Higher throughput—more of the time: Wi-Fi 8 achieves higher real-world speeds with smarter tools. Unequal Modulation ensures each stream runs at its best rate, so one fading link doesn’t drag the others down. Enhanced MCS smooths out the data-rate ladder to prevent sudden slowdowns, while Advanced LDPC codes hold strong even in noisy conditions. The result: faster, steadier performance across the network.

More network capacity in busy air: In crowded spaces, Wi-Fi 8 is built for cooperation. Inter-AP Coordination helps access points schedule transmissions instead of colliding. Non-Primary Channel Access (NPCA) taps into secondary channels when the primary is congested, while Dynamic Subband Operation (DSO) lets wide channels serve multiple narrower-band clients at once. Dynamic Bandwidth Expansion (DBE) then selectively opens wider pipes for Wi-Fi 8 devices without disrupting legacy clients—unlocking more usable capacity where it’s needed most. 

Longer reach and stronger coverage close to the edge: Connections go farther with Distributed Resource Units (dRU), which spread transmit energy for noticeable gains at the fringe. And with Enhanced Long Range (ELR), a special 20-MHz mode extends coverage up to 2× in line-of-sight and about 50% farther in non-LoS—keeping links alive even at the outer edge of the network.

Reliability and QoS that stick: Real-time apps get the consistency they need thanks to smarter quality-of-service features. Low-Latency Indication prioritizes AR/VR, gaming, and voice traffic, while Seamless Roaming keeps calls and streams intact during movement. QoS enhancements and Prioritized EDCA reduce latency and prevent bottlenecks across multiple streams. Plus, enhanced in-device coexistence coordinates Wi-Fi with Bluetooth, UWB, and more to avoid self-interference. Together, these features make the network feel smoother and more reliable.

Wi-Fi 8 Real-World Impact

So what does all this look like in everyday use? Imagine a busy apartment building, or an office full of devices roaming between access points. Signals aren’t perfect and interference is everywhere, but Wi-Fi 8 keeps things flowing. It does this by coordinating access points, smoothing out delays, and reducing radio clashes, so your most important traffic doesn’t get stuck in line.

Picture this: It’s a busy evening in a modern family home. One person is streaming a live sports match in 8K on the big screen, another is deep into an online game while streaming to friends, and a third is working on a project that involves real-time AI voice assistance.

Meanwhile, the smart doorbell detects someone approaching. But instead of just pinging a vague “motion alert,” the AI-powered camera recognizes whether it’s a delivery driver dropping off a package, a neighbor stopping by, or a family member arriving home. The alert is contextual and useful.

In older Wi-Fi environments, that mix of high-bandwidth streams, real-time gaming, and constant AI inference could lead to stutters, buffering, or dropped packets at the worst moments. With Wi-Fi 8, all of it just works. The 8K stream stays crisp. The gamer experiences smooth, low-latency play. The AI assistant responds instantly without awkward delays. And the doorbell notification comes through without competing for airtime—because the network can intelligently prioritize, coordinate, and balance all that traffic.

That’s the difference Wi-Fi 8 brings: a reliable home network, no matter how many devices or demands are piled on at once.

Why it works: Wi-Fi 8 increases throughput and, at the same time, greatly reduces the 99th percentile latency tail. By coordinating APs, elevating delayed packets, and reducing radio self-contention, it shortens queues, avoids collisions, and keeps critical traffic flowing—even when signals aren’t perfect.
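
For readers unfamiliar with the metric, the latency tail is simply a high percentile of the per-packet latency distribution. The toy calculation below, using invented numbers, shows how a small share of badly delayed packets dominates the 99th percentile even when the average looks fine.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented latency samples (ms): most packets are fast, a few are badly delayed.
latencies = np.concatenate([
    rng.normal(8, 2, 9_900),      # typical packets
    rng.normal(120, 30, 100),     # retries / contention victims
])

print("mean latency (ms):", round(float(latencies.mean()), 1))
print("p99 latency  (ms):", round(float(np.percentile(latencies, 99)), 1))
```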

Conclusion

Wi-Fi 8 represents far more than an incremental upgrade—it marks a fundamental shift in how wireless networks will power the AI-driven world. As our homes, offices, factories, and devices generate and process more data than ever, the need for reliable uplink performance, real-time responsiveness, and intelligent coordination becomes non-negotiable. Broadcom’s new ecosystem brings these capabilities to life, ensuring that next-generation applications—from immersive entertainment and autonomous IoT to true conversational AI—can operate smoothly, consistently, and securely.

With Wi-Fi 8, the wireless edge finally catches up to the ambitions of modern computing. It isn’t just faster Wi-Fi; it’s the foundation for the seamless, AI-enabled experiences we’ve been waiting for—and a major leap toward the connected future we’re building every day.

The post Introducing Wi-Fi 8: The Next Boost for the Wireless AI Edge appeared first on ELE Times.

Vehicle to Grid (V2G) Charging in EVs: Understanding the Basics

Fri, 12/05/2025 - 08:20

Much of the research around emerging technologies in electric vehicles looks within the EV system itself and lacks a comprehensive review of EV integration and its impact on power system planning and operation across transmission and distribution levels.

Shift Toward Bidirectional Energy Flows

Vehicle-to-grid (V2G) charging is a major step toward placing EVs on the energy landscape where they can contribute to grid stability. V2G integration marks a shift from unidirectional energy flow to bidirectional energy transfer between EVs and the grid. EVs essentially act as renewable energy storage units, facilitating load balancing, peak shaving, and frequency regulation within the grid.

Core Components of V2G Systems

  • EV as Portable Energy Storage: The V2G system primarily consists of an EV as a portable energy storage system fitted with a battery management system (BMS).
  • EVSE, Chargers and Communication Interface: Electric vehicle supply equipment (EVSE) is connected to the EV and includes a bidirectional charger and a communication interface that allows data flow between the EV, the grid, and the user.
  • Supporting Infrastructure: It also integrates a transformer to manage voltage levels, a smart meter for precise monitoring, and an aggregator platform to coordinate the combined energy resources of many EVs.

Variants of Bidirectional Charging

In addition to V2G, bidirectional energy transfer covers V2H (vehicle-to-home) and V2L (vehicle-to-load) charging.

Global Deployment Landscape

Adoption Examples: Virta Global, a European company headquartered in Helsinki, Finland, is a leader in providing V2G solutions. It has installed seven V2G chargers at its premises in Finland, and Virta has installed 20 chargers at a Nissan manufacturing plant in the UK.

Economic Considerations

Battery Cycling and Infrastructure Costs: There are, however, cost considerations with V2G charging. The batteries in a V2G setup are subject to wear and tear due to frequent charging and discharging cycles. Advanced chargers and communication systems add to the cost, even though part of it is offset by revenue opportunities gained from selling stored energy.

Impact on Power Networks

  • Grid Benefits and Load Management: The electrical network benefits greatly from V2G charging through load shifting, load building, power conservation, peak clipping, valley filling, and flexible loads. As more EVs are integrated into the system, power imbalances on the load side are bound to occur. The ancillary services provided by the V2G setup help alleviate congestion on the network.

Role of Aggregators and Control Models

Aggregation for Frequency Control: Aggregating numerous EVs into the grid through a V2G setup for primary frequency control is critical to larger-scale EV integration into the transportation system. Aggregators assist in providing services to individual EVs and present a larger, more suitable load to the utility.

Models Used in Research

Studies are ongoing on several aggregation models for large-scale EV integration. Some researchers use an independent, distributed vehicle-to-grid regulation arrangement, while others use a master-slave grid regulation technique for a microgrid (MG) in islanding mode.

Modified Droop Controller Approach

However, the modified droop controller method is considered better than the others: the reference signal is controlled and monitored continuously by a droop controller with a feedback mechanism. Such a controller is known as a modified droop controller.
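
A minimal sketch of frequency-droop charge control, under generic assumptions: the charger’s power setpoint is adjusted in proportion to the frequency deviation, and a simple smoothing feedback path on the reference stands in for the continuous monitoring described above. The constants are illustrative placeholders, not taken from any grid code or standard.

```python
NOMINAL_F = 50.0        # Hz, assumed grid frequency
DROOP_KW_PER_HZ = 20.0  # assumed droop gain
P_MAX = 10.0            # kW, assumed charger/discharger power limit
ALPHA = 0.2             # assumed smoothing factor for the monitored reference

def modified_droop_step(f_measured: float, p_ref_prev: float) -> float:
    """One control step: droop law plus continuous filtering of the reference."""
    # Droop law: over-frequency -> absorb power (charge, positive setpoint);
    # under-frequency -> inject power into the grid (discharge, negative setpoint).
    p_droop = DROOP_KW_PER_HZ * (f_measured - NOMINAL_F)
    p_droop = max(-P_MAX, min(P_MAX, p_droop))
    # Feedback path: smooth the reference so the setpoint tracks gradually.
    return (1 - ALPHA) * p_ref_prev + ALPHA * p_droop

# Illustrative run across a frequency dip and recovery.
p_ref = 0.0
for f in (50.00, 49.95, 49.85, 49.90, 50.02, 50.10):
    p_ref = modified_droop_step(f, p_ref)
    print(f"f={f:.2f} Hz -> EV power setpoint {p_ref:+.2f} kW")
```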

Technical Capabilities of V2G Systems

  • Active and Reactive Power Support: In a V2G system, the vehicle can provide active power regulation, current harmonic filtering, reactive power support, and tracking of adjustable renewable energy sources.
  • Ancillary Services: These capabilities facilitate ancillary services such as frequency and voltage control.

Microgrid-Level Power Balancing

Optimized Charging Schedules: Vehicle-to-grid systems can achieve a better balance of power in a microgrid. With intelligent charging schedules, the vehicle can discharge during peak hours and charge during off-peak hours, thereby improving the load curve.
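
As a toy illustration of such a schedule, the sketch below charges during assumed off-peak hours and discharges during assumed peak hours, subject to state-of-charge limits. The tariff windows, power levels, and battery size are placeholder values, not a deployable scheduler.

```python
BATTERY_KWH = 60.0                  # assumed usable pack capacity
CHARGE_KW, DISCHARGE_KW = 7.0, 5.0  # assumed charger ratings
OFF_PEAK = set(range(0, 6)) | {23}  # assumed off-peak hours (tariff-dependent)
PEAK = set(range(18, 22))           # assumed evening peak hours
SOC_MIN, SOC_MAX = 0.3, 0.9         # keep headroom for the owner's driving needs

def schedule_hour(hour: int, soc: float) -> tuple[str, float]:
    """Return the action for one hour and the resulting state of charge."""
    if hour in OFF_PEAK and soc < SOC_MAX:
        soc = min(SOC_MAX, soc + CHARGE_KW / BATTERY_KWH)
        return "charge (valley filling)", soc
    if hour in PEAK and soc > SOC_MIN:
        soc = max(SOC_MIN, soc - DISCHARGE_KW / BATTERY_KWH)
        return "discharge to grid (peak shaving)", soc
    return "idle", soc

soc = 0.5
for hour in range(24):
    action, soc = schedule_hour(hour, soc)
    print(f"{hour:02d}:00  {action:32s} SoC={soc:.2f}")
```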

Voltage and Frequency Regulation

Up and Down Regulation

Regulation of voltage and primary frequency is crucial for energy markets. When the supply from the grid is high, the EV battery is in a charging state, known as down-regulation. Conversely, when supply from the grid is low, the EV battery is in a discharging state, known as up-regulation. This may affect the frequency. The primary purpose of frequency control is to maintain equilibrium between generation and demand within a specific time duration. Regulation services can be provided within V2G systems to reduce pressure on the power grid.

Harmonic Filtration and Power Quality

Need for Harmonic Control: Maintaining the quality of power supplied back to the grid through harmonic filtration is essential for any emerging V2G technology.

Advances in Digital IIR Filters: Recently, real-time digital infinite impulse response (IIR) filters have been developed for this purpose. IIR filters generate reference signals at the power calculation stage. For the inverter output signals of V2G applications, digital filters offer various advantages over passive and active filters, such as real-time processing, adaptability, improved performance with better noise reduction, increased control, and lower cost.
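
A hedged example of the digital IIR idea: design a second-order notch filter for a low-order harmonic of the fundamental and apply it to an inverter-style current waveform. The fundamental frequency, harmonic order, and quality factor below are illustrative assumptions; a real V2G controller would run such a filter sample-by-sample, typically in fixed point.

```python
import numpy as np
from scipy import signal

FS = 10_000        # Hz, assumed controller sampling rate
F0 = 50.0          # Hz, assumed grid fundamental
HARMONIC = 5       # attenuate the 5th harmonic (250 Hz) as an example
Q = 30.0           # notch quality factor (assumption)

# Second-order IIR notch centred on the chosen harmonic.
b, a = signal.iirnotch(HARMONIC * F0, Q, fs=FS)

# Synthetic inverter current: fundamental plus a 5th-harmonic distortion term.
t = np.arange(0, 0.2, 1 / FS)
i_inv = np.sin(2 * np.pi * F0 * t) + 0.2 * np.sin(2 * np.pi * HARMONIC * F0 * t)

i_filtered = signal.lfilter(b, a, i_inv)   # sample-stream IIR filtering

# Rough FFT check of the harmonic content before and after filtering.
spectrum = lambda x: np.abs(np.fft.rfft(x)) / len(x)
freqs = np.fft.rfftfreq(len(t), 1 / FS)
idx = int(np.argmin(np.abs(freqs - HARMONIC * F0)))
print("5th-harmonic magnitude before:", round(float(spectrum(i_inv)[idx]), 4),
      "after:", round(float(spectrum(i_filtered)[idx]), 4))
```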

V2G Applications in Fuel-Cell EVs

IIR Filters in FCEVs: Digital IIR filters are mostly implemented in fuel cell electric vehicles (FCEVs) because they require less memory and have lower computational complexity.

Global Research Initiatives

The European Union’s Hydrogen Mobility Europe 2 (H2ME2) project is researching and developing V2G technology for FCEVs to demonstrate the technology in a real-world setting. Other companies, such as Toyota and Hyundai, have also announced plans to develop V2G technology for their FCEVs. Moreover, companies like Texas Instruments are researching the application of digital IIR filters.

Conclusion: EVs as Grid Assets

Electric vehicles can act both as a load and as a potential power source that can be integrated into the power system when required. Adequate studies of the stresses that a power distribution system can experience due to large-scale EV adoption are imperative, and it is equally imperative to develop alternatives like V2G technology that can overcome some of these challenges.

The post Vehicle to Grid (V2G) Charging in EVs: Understanding the Basics appeared first on ELE Times.

ROHM launches SiC MOSFETs in TOLL package that achieves both miniaturization and high-power capability

Thu, 12/04/2025 - 14:28

ROHM has begun mass production of the SCT40xxDLL series of SiC MOSFETs in TOLL (TO-Leadless) packages. Compared to conventional packages (TO-263-7L) with equivalent voltage ratings and on-resistance, the new packages offer approximately 39% better thermal performance. This enables high-power handling despite their compact size and low profile. They are ideal for industrial equipment such as server power supplies and ESS (energy storage systems), where power density is increasing and low-profile components are required to enable miniaturized product design.

In applications like AI servers and compact PV inverters, the trend toward higher power ratings is occurring simultaneously with the contradictory demand for miniaturization, requiring power MOSFETs to achieve higher power density. Particularly in totem pole PFC circuits for slim power supplies, often called “the pizza box type,” stringent requirements demand thicknesses of 4mm or less for discrete semiconductors.

ROHM’s new product addresses these needs by reducing component footprint by approximately 26% and achieving a low profile of 2.3mm thickness – roughly half that of conventional packaged products. Furthermore, while most standard TOLL package products are limited by a drain-source rated voltage of 650V, ROHM’s new products support up to 750V. This allows for lower gate resistance and increased safety margin for surge voltages, contributing to reduced switching losses.

The lineup consists of six models with on-resistance ranging from 13mΩ to 65mΩ, with mass production started in September 2025 (sample price: $37.0/unit, tax excluded). These products are available for purchase from online distributors such as DigiKey, MOUSER, and Farnell. Simulation models for all six new products are available on ROHM’s official website, supporting rapid circuit design evaluation.

Product Lineup

Application Examples
・Industrial equipment: Power supplies for AI servers and data centers, PV inverters, ESS (energy storage systems)
・Consumer equipment: General power supplies

The post ROHM launches SiC MOSFETs in TOLL package that achieves both miniaturization and high-power capability appeared first on ELE Times.

Asia-Pacific Takes the Lead in AI Adoption Across Manufacturing

Thu, 12/04/2025 - 13:58

Courtesy: Rockwell Automation

Manufacturing around the world has undergone a significant transformation with the emergence of artificial intelligence (AI) and machine learning. With dynamic market conditions and pressures to optimize operations across the supply chain, more businesses today are turning to technology to meet the demands of an increasingly competitive market.

To get a better understanding of the smart manufacturing landscape and how manufacturers are leveraging these new technologies, the latest edition of the State of Smart Manufacturing report (SOSM) surveyed more than 1,500 manufacturing decision makers across industries from 17 countries, including Asia Pacific nations Australia, China, India, Japan, New Zealand, and South Korea.

Now in its 10th edition, the report offers a global perspective on today’s challenges and tomorrow’s opportunities, highlighting how smart manufacturing and emerging technologies are fostering resilience and shaping the future.

How AI is Challenging the Status Quo in the Asia Pacific  

Rich in resources with the resilience to constantly adapt and innovate, Asia Pacific (APAC) is setting the pace for manufacturing and industrial growth around the world. There is strong digitization momentum across the region, as AI and smart manufacturing technologies are no longer buzzwords but have become mission-critical to drive quality, agility, and growth.

This year’s survey shows that the focus has shifted from experimentation to execution, with more manufacturers adopting a tech-first mindset. Nearly half of manufacturers are already scaling AI to address workforce gaps, cybersecurity risks, and evolving sustainability targets, and the share of APAC organizations investing in generative and causal AI increased 10% year-over-year.

The growing maturity in how businesses view AI is noticeable: it is increasingly treated as a strategic enabler rather than a supplementary tool. Trust in AI has deepened, with 41% of those surveyed in APAC planning to increase automation in the workplace to address workforce shortages and bridge the skills gap. Organizations are no longer using AI primarily for predictive maintenance; they are now leveraging these capabilities for more sophisticated, autonomous operations such as quality assurance and adaptive control, which help reduce human error and enhance real-time decision-making.

AI in Cybersecurity

With the rapid adoption of digital technologies, cybersecurity has become a growing concern across industries, including manufacturing. Globally, it now ranks as the second most significant external obstacle for manufacturers. In the APAC region, cybersecurity is top of the list, alongside inflation and rising energy costs.

In response, businesses are accelerating their use of AI, adopting smart manufacturing technologies to digitize operations, and upskilling existing talent to stay competitive and minimize cybersecurity risks. They are also hiring with new priorities, with cybersecurity skills and knowledge of relevant standards now an in-demand capability.

Understanding the need for secure-by-design architectures and real-time threat detection capabilities, Rockwell Automation has developed a series of threat intelligence and detection services, helping manufacturers to stay ahead of the evolving cybersecurity frameworks and industry standards.

Transforming the Workforce with AI

Alongside AI, the workforce, too, is evolving. Just as AI can support business needs, it requires a skilled workforce to adapt these technologies to deliver real business value. Manufacturers across APAC are looking to AI to increase automation and make workflows more efficient, while looking for employees with strong analytical thinking skills to take on more value-added tasks. While challenging, the SOSM report reveals that the need for more skilled workers is not a uniquely regional issue but a global concern, affecting industries in both developed and emerging markets.

On the upside, the skills gap in APAC has narrowed slightly from the previous year, with 29% of respondents in 2025 citing the skills gap as a challenge compared with 31% in 2024. This suggests that investments in talent development and education are beginning to pay off.

Delivering More Sustainable Business Outcomes for the Long Run

As an industry, manufacturing consumes a great deal of energy. As manufacturers across the region become more invested in their ESG goals, they are driven to improve business efficiencies in pursuit of sustainability and resource conservation. Over half (55%) stated that improving efficiencies is the top reason to pursue better sustainability, up from 39% last year. By improving workflow efficiencies through automation and technologies like AI, businesses are cutting costs while supporting better energy management.

As the 2025 State of Smart Manufacturing report shows, AI is no longer a distant promise for the Asia Pacific—it is a powerful catalyst actively reshaping how the region builds, protects, and grows. From strengthening cybersecurity and elevating workforce capabilities to enabling smarter energy use and more sustainable operations, APAC manufacturers are demonstrating what it means to move from digital ambition to digital action.

With technology adoption accelerating and confidence in AI deepening, the region is well-positioned to define the next era of global manufacturing. Those who continue to invest in talent, innovation, and secure, future-ready systems will not only overcome today’s challenges but also lead the transformation of industry for years to come.

The post Asia-Pacific Takes the Lead in AI Adoption Across Manufacturing appeared first on ELE Times.

Will AI Consume the World’s Electricity? Addressing AI Data Center Demands with Advanced Power Semiconductors

Thu, 12/04/2025 - 12:22

Courtesy: ROHM

AI’s unprecedented advancement is reshaping our world, but this transformation comes with the urgent challenge of sharply rising energy demands from data center infrastructure.

In response, Japan has launched an ambitious national strategy—the ‘Watt-Bit Initiative’—spearheaded by the Ministry of Economy, Trade, and Industry (METI). This comprehensive program aims to establish Japan as a global leader by developing ultra-efficient data centers strategically distributed across the nation. Through collaborative platforms like the ‘Watt-Bit Public-Private Council,’ METI is orchestrating a unified effort among key sectors—energy providers, telecommunications, data center operators, and semiconductor manufacturers—to turn this vision into reality.

Will AI Consume the World’s Electricity?

The explosive growth of generative AI technologies like ChatGPT has triggered an unprecedented surge in data center energy demands. Training and inference of complex AI models require enormous computational resources, supported by high-performance servers operating continuously around the clock.

This escalating demand for electricity not only places a significant strain on local environments but also raises concerns about the stability of the power supply. As AI continues to advance, the limitations of conventional power supply systems are becoming increasingly apparent.

Against this backdrop, three urgent challenges emerge: improving energy efficiency, expanding the use of renewable energy, and optimizing the regional distribution of data centers. Achieving a sustainable society requires moving away from fossil fuel dependency and embracing renewable sources such as solar and wind power.

Utilizing Renewable Energy in Data Centers

Data centers, now an indispensable part of modern infrastructure, are at a major turning point.

Traditionally, data centers have been concentrated in metropolitan hubs like Tokyo to ensure low-latency communication for services requiring high-speed data access, including finance, healthcare, and edge computing. However, the surge in power consumption driven by AI adoption, coupled with the need for robust business continuity planning (BCP) in the face of large-scale natural disasters, is accelerating the shift toward decentralizing data centers into suburban areas.

These new sites offer compelling advantages beyond just abundant available space. They enable seamless integration of renewable energy sources such as solar and wind power, benefit from surplus grid capacity for stable electricity, and leverage natural cooling from climate and water resources, dramatically reducing operational costs. As a result, suburban facilities are increasingly being adopted for modern workloads such as cloud hosting, backup, disaster recovery, and large-scale storage.

The Future of Server Rack Expansion

Urban data centers face severe land constraints, and even suburban data centers, where securing large plots is relatively easier, are approaching their limits in available space for server deployment.

To overcome this, server racks are evolving into high-density AI server racks designed to house a greater number of high-performance servers efficiently. Rather than expanding the total number of server racks, the industry is moving toward high-density configurations equipped with more CPUs, GPUs, and other functional boards, significantly boosting the computing power per rack to maximize performance within limited space.

While the external appearance of server racks remains largely unchanged, their internal storage capacity has increased several fold.

This leap in performance and density demands a fundamental transformation of power delivery systems. Conventional multi-stage power conversion introduces significant energy losses, making efficient supply increasingly difficult. As a result, innovations such as reducing conversion stages and adopting high-voltage direct current (HVDC) architectures are gaining momentum, driving the need for SiC and GaN power semiconductors. ROHM, together with other industry leaders, is advancing technologies that support this transformation, enabling both higher performance and greater energy efficiency across entire data centers.

  1. Are Today’s Power Systems Sufficient?

The sharp rise in power consumption of high-performance AI servers—particularly GPUs—is forcing a fundamental redesign of existing data center power architectures. Conventional multi-stage power conversion incurs significant conversion losses, making efficient power delivery increasingly difficult.

In today’s data centers, high-voltage AC is supplied and gradually stepped down through multiple transformers and rectifiers before finally being converted into the low-voltage DC required by servers. Each stage of this process incurs losses, ultimately reducing overall efficiency. To address these challenges, data centers are expected to undergo key transformations aimed at enhancing both power conversion efficiency and reliability.

  • Reducing Power Conversion Stages

A growing trend is the integration of multiple conversion processes—for example, converting high-voltage AC directly to DC, or stepping down high-voltage DC directly to the voltage used by servers. This approach significantly reduces the number of conversion steps, minimizing energy losses, enhancing overall system efficiency, and lowering the risk of failures.
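
The impact of removing conversion stages compounds multiplicatively, as the short illustrative calculation below shows; the per-stage efficiency figures are assumptions chosen only to make the arithmetic concrete, not measured data.

```python
# Illustrative arithmetic: overall efficiency is the product of the per-stage
# efficiencies, so removing stages compounds quickly. The 97%/98.5% per-stage
# figures are assumptions for illustration, not measured data.
from functools import reduce

def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    return reduce(lambda acc, eta: acc * eta, stages, 1.0)

legacy = [0.97, 0.97, 0.97, 0.97]     # e.g. transformer, rectifier, DC-DC, VRM
consolidated = [0.985, 0.985]         # fewer, higher-efficiency conversion steps

for name, stages in [("legacy 4-stage chain", legacy),
                     ("consolidated 2-stage chain", consolidated)]:
    eta = chain_efficiency(stages)
    print(f"{name}: {eta:.1%} end-to-end, {(1 - eta) * 100:.1f}% lost as heat")
```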

  • Supporting High-Voltage Input/High-Voltage Direct Current (HVDC) Power Supplies

Server rack input voltages are shifting from traditional low-voltage 12VDC and 48VDC to higher levels such as 400VDC, and even 800VDC (or ±400VDC). Operating at higher voltages reduces transmission current, enabling lighter busbar designs.

At the same time, the adoption of HVDC systems is gaining momentum. Unlike conventional AC-based architectures, HVDC delivers DC power directly to server racks, reducing the need for multiple AC/DC conversion stages. This approach enhances energy efficiency, enables more flexible power management and bidirectional transmission, and simplifies integration with renewable energy sources.
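
The benefit of higher bus voltages can be seen with simple Ohm’s-law arithmetic: for a fixed rack power, current falls in proportion to voltage, and resistive busbar loss falls with the square of the current. The rack power and busbar resistance in the sketch below are illustrative assumptions.

```python
# Illustrative sketch: delivering the same rack power at a higher bus voltage
# reduces current and, for a given conductor, the I^2*R distribution loss.
# The 30 kW rack power and 1 milliohm busbar resistance are assumptions.

RACK_POWER_W = 30_000          # assumed power drawn by one AI server rack
R_BUSBAR_OHM = 0.001           # assumed end-to-end busbar resistance

for v_bus in (48, 400, 800):
    i_bus = RACK_POWER_W / v_bus           # I = P / V
    p_loss = i_bus ** 2 * R_BUSBAR_OHM     # conduction loss in the busbar
    print(f"{v_bus:>3} VDC bus: {i_bus:7.1f} A, {p_loss:8.1f} W lost in the busbar")
```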

  • Increasing Adoption of SSTs (Solid State Transformers)

Transformer equipment is evolving from traditional designs to SSTs (Solid State Transformers) that leverage semiconductor technology. SSTs are expected to play a key role in significantly miniaturizing conventional equipment.

  • Growing Demand for SiC/GaN Power Semiconductors

Building high-efficiency, high-voltage power systems requires performance levels that exceed the capabilities of conventional silicon (Si) semiconductors. This has made SiC and GaN power semiconductors indispensable. These advanced devices enable low-loss, high-frequency, high-temperature operation under high-voltage input conditions, greatly contributing to both the miniaturization and efficiency of power systems.

Moreover, as these technologies advance, their benefits extend beyond power systems to individual devices within server racks, further improving overall energy efficiency.
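
The frequency dependence mentioned above is easy to quantify: switching loss is roughly the energy dissipated per switching transition multiplied by the switching frequency. The sketch below uses placeholder per-transition energies, not datasheet values, simply to show how a lower-loss device widens the usable frequency range.

```python
# Rough sketch of why lower switching energy matters as frequency rises:
# switching loss scales as P_sw = (E_on + E_off) * f_sw. The per-transition
# energies below are illustrative placeholders, not device datasheet values.

def switching_loss(e_on_uj, e_off_uj, f_sw_khz):
    """Switching loss in watts from per-transition energies (uJ) and frequency (kHz)."""
    return (e_on_uj + e_off_uj) * 1e-6 * f_sw_khz * 1e3

devices = {
    "Si device (assumed 400 uJ/cycle)": (250, 150),
    "SiC/GaN device (assumed 60 uJ/cycle)": (40, 20),
}

for f_khz in (50, 200, 500):
    for name, (e_on, e_off) in devices.items():
        print(f"{f_khz:>3} kHz, {name}: {switching_loss(e_on, e_off, f_khz):6.1f} W")
```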

ROHM is accelerating the development of solutions for next-generation servers. In addition to existing products such as SiC/GaN/Si IGBTs, isolated gate drivers, cooling fan drivers, SSD PMICs, and HDD combo motor drivers from the EcoSiC, EcoGaN, and EcoMOS series, we are also developing high-current LV MOS, isolated DC-DC converters, DC-DC converters for SoCs/GPUs, and eFuses.

Power Semiconductors Driving Next-Generation AI Data Centers 

  • SiC Devices Ideal for High Voltage, Large Current Applications

SiC devices are particularly well-suited to systems requiring high voltages and currents. As server rack input voltages continue to rise, conventional 54V rack power systems face increasing challenges, including space limitations, high copper usage, and significant power conversion losses.

By integrating ROHM’s SiC MOSFETs into next-generation data center power systems, superior performance can be achieved in high-voltage, high-power environments. These devices reduce both switching and conduction losses, improving overall efficiency while ensuring the high reliability demanded by compact, high-density systems.

This not only minimizes energy loss but also reduces copper usage and simplifies power conversion across the entire data center.

  • GaN Devices that Provide Greater Efficiency and Miniaturization

While SiC excels in high-voltage, high-current applications, GaN demonstrates outstanding performance in the 100V to 650V range, providing excellent breakdown strength, low on-resistance, and ultra-fast switching.

AI servers process far greater volumes of data than general-purpose servers, requiring high-performance GPUs, large memory capacity, and advanced software. This leads to higher power consumption, making efficient cooling and thermal management increasingly critical.

To address these challenges, GaN HEMTs – capable of high-speed switching (high-frequency operation) – are being integrated into power supply units to minimize power loss. This delivers major gains in power conversion efficiency, translating to substantial energy savings, lower operating costs, and reduced environmental footprint.

What’s more, GaN devices offer high current density, enabling a size reduction of approximately 30-50% compared to conventional silicon devices. This not only improves space efficiency in power supplies and chargers, but also simplifies thermal design.

By reducing unit size and making effective use of freed-up space, the load on cooling systems can be alleviated, supporting overall system miniaturization and improved reliability. In addition, GaN’s high durability and suitability for high-frequency applications make it an ideal choice for data centers.

ROHM has succeeded in shortening the pulse width to as little as 2ns utilizing proprietary Nano Pulse Control technology, further enhancing the switching performance of GaN devices. Through the EcoGaN series, ROHM is expanding its lineup to meet the needs of AI data centers demanding compact, highly efficient power systems. The portfolio includes 150V and 650V GaN HEMTs, gate drivers, and integrated solutions that combine these components.
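
One way to see why such short pulse widths matter is the on-time budget of a high-ratio buck stage: the duty cycle is roughly Vout/Vin, so at high switching frequencies the required on-time shrinks toward a few nanoseconds. The 48V-to-1V conversion and the frequencies in the sketch below are illustrative assumptions, not ROHM specifications.

```python
# Sketch of why a very short controllable on-time matters for high-frequency,
# large step-down conversion: in an ideal buck stage D = Vout/Vin and the
# required on-time is t_on = D / f_sw. The 48V -> 1V conversion and the
# switching frequencies below are illustrative assumptions.

V_IN, V_OUT = 48.0, 1.0
DUTY = V_OUT / V_IN                       # roughly 2.1% duty cycle

for f_sw_mhz in (1, 5, 20):
    t_on_ns = DUTY / (f_sw_mhz * 1e6) * 1e9
    feasible = "ok" if t_on_ns >= 2.0 else "below a 2 ns minimum pulse"
    print(f"{f_sw_mhz:>2} MHz: required on-time ~{t_on_ns:.1f} ns ({feasible})")
```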

Conclusion

The evolution of AI, which shows no signs of slowing, comes with an inevitable surge in power demand.

According to the International Energy Agency (IEA), global data center electricity consumption is expected to more than double in the next 5 years, reaching approximately 945 billion kWh. Around half of this demand is projected to be met by renewable energy sources such as solar and wind, signaling a major shift in how energy is generated and consumed in the power-hungry data center sector. Technologies like photovoltaics (PV) and energy storage systems (ESS) are gaining traction as essential components of this transformation.

ROHM is actively contributing to this transition with a broad portfolio of advanced power semiconductor technologies, including SiC and GaN devices. These solutions enable high-efficiency power systems tailored for next-generation AI data centers. ROHM is also accelerating the development of new products to meet evolving market needs, supporting a more sustainable and prosperous AI-driven future.

The post Will AI Consume the World’s Electricity? Addressing AI Data Center Demands with Advanced Power Semiconductors appeared first on ELE Times.

STMicroelectronics streamlines smart-home device integration with industry-first Matter NFC chip

Thu, 12/04/2025 - 11:43

STMicroelectronics has unveiled a secure NFC chip designed to make home networks faster and easier to install and scale, leveraging the latest Matter smart-home standard. ST’s ST25DA-C chip lets users add lighting, access control, security cameras, or any IoT device to their home network in one step by tapping their phone. The chip is the first commercial solution fulfilling newly published enhancements in Matter—the latest open-source standard now making smart home devices secure, reliable, and seamless to use.

“The integration of NFC-based onboarding in Matter 1.5 is a timely enhancement to the smart home experience. Our market-first ST25DA-C chip leverages this capability to simplify device commissioning through tap-to-pair functionality. This reduces setup complexity, especially for installations that are difficult to access, thanks to NFC-enabled battery-less connectivity. This aligns well with the broader momentum in the smart home market to serve consumers who increasingly prioritize ease of use, interoperability, and security. NFC-enabled Matter devices are positioned to play a key role in driving even greater adoption,” said David Richetto, Group VP, Division General Manager, Connected Security at STMicroelectronics.

“Matter is an important standard for the smart-home industry, enabling seamless communication across devices, mobile apps, and cloud services. Its primary benefit is simplifying technology for non-expert consumers, which could help accelerate adoption of connected devices. The new STMicroelectronics’ ST25DA-C secure NFC chip is one example of next generation chipset that supports this standard, providing device makers with tools to develop the next generation of smart-home products,” said Shobhit Srivastava, Senior Principal Analyst at Omdia.

Technical information 

Enhanced usability: ST’s new NFC Forum Type 4 chip significantly improves the user experience by leveraging the NFC technology present in most smartphones. NFC-enabled device commissioning is faster, more reliable, and more secure than conventional pairing using technologies such as Bluetooth® or QR codes, which are not always practical.

The ST25DA-C secure NFC tag can perform the cryptographic operations required for Matter device commissioning using energy harvested from the RF field. This mechanism allows users to jump-start adding unpowered devices to the smart home network. It also simplifies the installation of multiple accessories in parallel.
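
For orientation only, here is a hypothetical sketch of the phone-side handoff: read the tag’s NDEF payload, recognize a Matter onboarding payload by its “MT:” prefix, and pass it to the platform’s commissioner. The helper functions, record handling, and payload string are assumptions for illustration and do not represent the ST25DA-C interface or the Matter SDK API.

```python
# Hypothetical sketch of tap-to-pair commissioning on the phone side: read an
# NDEF record from the tag, recognize a Matter onboarding payload by its "MT:"
# prefix, and hand it to the platform's Matter commissioner. The read_ndef_text()
# and commission_device() helpers are placeholders, not real APIs.

MATTER_PAYLOAD_PREFIX = "MT:"

def read_ndef_text(tag_dump: bytes) -> str:
    """Placeholder for a real NFC stack call that returns the tag's NDEF text."""
    return tag_dump.decode("ascii", errors="ignore")

def commission_device(onboarding_payload: str) -> None:
    """Placeholder for handing the payload to a Matter commissioner."""
    print(f"commissioning with payload: {onboarding_payload}")

def on_tag_tapped(tag_dump: bytes) -> None:
    payload = read_ndef_text(tag_dump).strip()
    if payload.startswith(MATTER_PAYLOAD_PREFIX):
        commission_device(payload)          # one-tap onboarding path
    else:
        print("tag does not carry a Matter onboarding payload")

# Example with a dummy payload string (contents are not a valid Matter code).
on_tag_tapped(b"MT:EXAMPLE-ONBOARDING-PAYLOAD")
```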

Focused on security: The ST25DA-C brings strong security to smart homes, leveraging ST’s proven expertise in embedded secure elements for protecting assets with device authentication, secure storage for cryptographic keys, certificates, and network credentials.

Based on Common Criteria-certified hardware, the ST25DA-C also targets certification to the GlobalPlatform Security Evaluation Standard for IoT Platforms (SESIP level 3).

The post STMicroelectronics streamlines smart-home device integration with industry-first Matter NFC chip appeared first on ELE Times.

Mitsubishi Electric India to Showcase Breakthrough Power Semiconductor Technologies at PCIM India 2025

Thu, 12/04/2025 - 11:30

Mitsubishi Electric India is set to introduce its flagship, cutting-edge power semiconductor devices and technologies to the Indian market. MEI’s participation in PCIM India 2025 in New Delhi reinforces the company’s commitment to delivering high-efficiency semiconductor solutions that support India’s growing demand across home appliances, railways, xEVs, renewable energy, and industrial applications.

Visitors at PCIM India 2025 will experience the new DIPIPM platform that integrates inverter circuitry, gate-drive functions and protection features into a single module. These modules enable compact designs and improved system safety. Available in both IGBT and SiC-based versions, the latest Compact DIPIPM and SLIMDIP families are suited for applications such as room air conditioners, washing machines, commercial HVAC, solar pumping and light industrial drives.

Mitsubishi Electric India will also showcase a wider product portfolio, including high-voltage HVIGBT modules, LV100 and NX industrial power modules, and automotive-grade semiconductor platforms engineered for utility-scale solar inverters, wind converters, EV charging and powertrains, railway traction converters, HVDC transmission, and induction heating. Alongside the power modules, Mitsubishi Electric India will also display its latest bare-die SiC MOSFETs and RC-IGBT technology, which enables optimally structured, low-loss, highly reliable devices for xEV traction and charging applications.

Product Lines and Key Features

DIPIPM (Dual In-line Package Intelligent Power Module)
*Offers CSTBT & RC-IGBT chip technologies in a wide line-up
*Available in 600V and 1200V, 5A–100A
*Includes SiC-MOSFET variants and new Compact DIPIPM & SLIMDIP series

LV100 & NX Power Modules
*Industry-standard IGBT & SiC modules with 7th/8th gen CSTBT chipset and SLC packaging
*Voltage: 1200V/1700V/2000V; Current: 225A–1800A
*Includes new 8th gen LV100 & NX models

HVIGBT (High-Voltage IGBT)
*Modules for traction and power transmission
*Voltage options: 1700V, 3300V, 4500V, 6500V; Current: 400A–2400A
*High-voltage SiC up to 3300V/175A–800A
*Includes new XB Series

Power Modules for Automotive
*Designed with integrated cooling fins and DLB technology
*Line-up of 2-in-1 and 6-in-1 circuits with the latest SiC & RC-IGBT chip technologies
*Available in 750V/1300V, 350A–800A with on-chip current and temperature sensing
*Includes new J3 Series

 

Speaking on the participation, Mr. Hitesh Bhardwaj, General Manager/Business Head, Semiconductors & Devices, Mitsubishi Electric India said: “India is entering a decisive phase of Power Electronics across mobility, renewable energy infrastructure. With the introduction of latest Si and SiC semiconductor technologies to the domestic market, we aim to empower Indian manufacturers with smarter, more efficient and more reliable technologies. Our long-term vision is to support the country’s innovation ecosystem and contribute to sustainable growth across industry and society.”

With India’s manufacturing ecosystem evolving toward higher energy efficiency standards and smarter power architectures, Mitsubishi Electric India’s latest offering strengthens access to globally proven semiconductor innovation tailored for future-ready applications.

The post Mitsubishi Electric India to Showcase Breakthrough Power Semiconductor Technologies at PCIM India 2025 appeared first on ELE Times.

ASMPT Wins New Orders for Nineteen Chip-to-Substrate TCB Tools to Serve AI Chip Market

Thu, 12/04/2025 - 11:19

ASMPT announced it had won new orders for 19 Chip-to-Substrate (C2S) TCB tools from a major OSAT partner of the leading foundry serving the AI chip market.

ASMPT is the sole supplier and Process of Record (POR) of C2S TCB solutions for this customer, supporting their high-volume manufacturing requirements. These latest systems will enable their next-generation C2S bonding for logic applications as compound die sizes get larger. This demonstrates the customer’s continued confidence in ASMPT’s technological leadership and production-proven capabilities. Looking ahead, ASMPT is well-positioned to secure additional orders in the future.

This continued momentum for ASMPT’s flagship Thermo-Compression Bonding (TCB) solutions reinforces its position as the industry’s leading provider of advanced packaging solutions for artificial intelligence and high-performance computing applications.

“The TCB market is experiencing transformational growth driven by AI and HPC applications,” said Robin Ng, Group CEO, ASMPT. “Our comprehensive technology portfolio spanning chip-on-wafer, chip-on-substrate, and HBM applications positions ASMPT uniquely to support our customers’ most demanding advanced packaging roadmaps. This latest win validates our technology leadership and highlights the market’s recognition of our ability to deliver production-ready, scalable platforms.”

With the largest TCB installed base worldwide consisting of more than 500 tools, ASMPT is strategically positioned to capture between 35% to 40% of an expanded TCB market. ASMPT recently expressed confidence that the TCB Total Addressable Market (TAM) projection will exceed US$1 billion by 2027, bolstered by recent news about AI ecosystem investments.

The post ASMPT Wins New Orders for Nineteen Chip-to-Substrate TCB Tools to Serve AI Chip Market appeared first on ELE Times.

Microchip Halves the Power Required to Measure How Much Power Portable Devices Consume

Thu, 12/04/2025 - 10:53

Battery-operated devices and energy-restricted applications must track and monitor power consumption without wasting power in the process. To solve this challenge, Microchip Technology announced two digital power monitors that consume half the power of comparable solutions based on typical operating conditions at 1024 samples per second. The PAC1711 and PAC1811 power monitors achieve this efficiency milestone while also providing real-time system alerts for out-of-limit power events and a patent-pending step-alert function for identifying variations in long-running averages.

The 42V, 12-bit single-channel PAC1711 and 16-bit PAC1811 monitors are housed in 8- and 10-pin Very Thin Dual Flat, No-Lead (VDFN) packages, respectively, that are pin- and footprint-compatible with the popular Small Outline Transistor (SOT23)-8 package. This compatibility simplifies second-sourcing for developers, while streamlining upgrades and integration into existing systems.

“Until now, portable devices and a variety of energy-constrained applications have needed to burn a significant amount of valuable power to measure how much they are consuming,” said Keith Pazul, vice president of Microchip’s mixed-signal linear business unit. “Unlike many existing solutions, Microchip’s power monitors function as independent ‘watchdog’ peripherals, eliminating the need for the MCU to handle power monitoring tasks. These monitors allow the MCU or host processor to remain dormant until a significant power event occurs such as needing an LCD screen to power on.”

The PAC1711 and PAC1811 power monitors’ step-alert capability maintains a rolling average of voltage and current values; if a new sample deviates from that average by more than a user-defined amount, the device notifies the MCU so it can act. A slow-sample pin option is also available, which can reduce power-usage sampling to once every eight seconds to further conserve power.
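
To make that behavior concrete, here is a minimal host-side sketch of a step-alert style check: keep a rolling average of recent power samples and flag any new sample that departs from it by more than a threshold. The window size, threshold, and sample values are illustrative assumptions and do not reflect the PAC1711/PAC1811 register interface.

```python
# Minimal sketch of a step-alert style check: keep a rolling average of recent
# power samples and flag any new sample that departs from it by more than a
# user-defined threshold. Window size, threshold, and sample values are
# illustrative; this models the behavior described above, not the device's registers.
from collections import deque

class StepAlert:
    def __init__(self, window: int = 16, threshold_pct: float = 20.0):
        self.samples = deque(maxlen=window)   # rolling window of recent samples
        self.threshold_pct = threshold_pct

    def update(self, power_mw: float) -> bool:
        """Add a sample; return True if it steps away from the rolling average."""
        alert = False
        if self.samples:
            avg = sum(self.samples) / len(self.samples)
            if avg > 0 and abs(power_mw - avg) / avg * 100 > self.threshold_pct:
                alert = True                  # would raise the alert pin / interrupt
        self.samples.append(power_mw)
        return alert

monitor = StepAlert()
for sample in [120, 118, 121, 119, 122, 180, 181]:   # mW; a load switches on
    if monitor.update(sample):
        print(f"step alert: {sample} mW deviates from the rolling average")
```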

An accumulator register in the power monitor can be used to manage logistical items, track system battery aging or time to recharge, and provide the short-term historical data for long-term power usage that the MCU can be programmed to act on. Both current monitor integrated circuits sense bus voltages from 0 to 42 volts and can communicate over an I2C interface. They are well-suited for first- or second-source options in computing, networking, AI/ML and E-Mobility applications.

The post Microchip Halves the Power Required to Measure How Much Power Portable Devices Consume appeared first on ELE Times.
