ELE Times

Subscribe to the ELE Times feed
Address: https://www.eletimes.ai/
Updated: 3 hours 56 min ago

AI Glasses: Ushering in the Next Generation of Advanced Wearable Technology

Mon, 01/12/2026 - 08:42

Courtesy: NXP Semiconductors  

AI integration into wearable technology is growing explosively, covering application scenarios from portable assistants to health management. Ease of operation has become a highlight of AI glasses: users can access teleprompting, object recognition, real-time translation, navigation, health monitoring, and other functions without physically interacting with their phones. By seamlessly integrating the digital and real worlds, AI glasses offer a wealth of use cases and are powering the next emerging market.

The Power Challenge: Performance vs. Leakage

The main challenge for AI glasses is battery life. Constrained by the weight and size of the device, AI glasses are usually equipped with a battery of only 150–300 mAh. To support diverse application scenarios, the high-performance application processors involved mostly use advanced process nodes of 6 nm and below. Although chips on these nodes deliver excellent dynamic performance, they also bring serious leakage challenges: as process nodes shrink, silicon leakage current can increase by an order of magnitude. The tension between high leakage current and limited battery capacity significantly reduces the product's actual usage time and degrades the user experience.
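
To see why leakage matters so much at these battery capacities, a quick back-of-the-envelope calculation helps. The sketch below takes the battery size from the range above; the leakage currents are purely illustrative assumptions, not measured values for any specific chip:

```python
# Rough standby-time estimate for an AI-glasses battery, illustrating how
# leakage current dominates when the device idles. The leakage figures are
# illustrative assumptions only.

def standby_hours(battery_mah, leakage_ma):
    """Hours of standby if leakage were the only drain on the battery."""
    return battery_mah / leakage_ma

battery_mah = 200.0              # mid-range of the 150-300 mAh quoted above

mature_node_leakage_ma = 0.05    # assumed leakage on a mature process node
advanced_node_leakage_ma = 0.5   # assumed ~10x higher on an advanced node

print(standby_hours(battery_mah, mature_node_leakage_ma))    # 4000 hours
print(standby_hours(battery_mah, advanced_node_leakage_ma))  # 400 hours
```

Even at these rough numbers, a tenfold leakage increase cuts idle endurance tenfold, which is exactly the trade-off driving the dual-chip designs described below.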

Chip architects must therefore weigh the benefits of the various process nodes, keeping both active power and leakage in mind. To minimise energy usage, many designs adopt a dual-chip architecture: an advanced-node chip delivers low active power, while a chip on a more established node achieves long standby times through much lower leakage.

Solving the Power Problem: Two Mainstream Architectures

Currently, AI glasses solutions on the market mainly use two mainstream architectures:

“Application Processor + Coprocessor” Architecture

The “application processor + coprocessor” solution brings users the richest functional experience and maximises battery life. The application processors used in AI glasses are built on advanced processes and focus on high performance, typically supporting high-resolution cameras, video encoding, high-performance neural network processing, and Wi-Fi/Bluetooth connectivity. Coprocessors, in turn, use mature process technologies and run at lower frequencies to reduce both operating and quiescent power consumption. The combination of lower active and standby power enables always-on features such as microphone beamforming and noise reduction for voice wake-up, voice calls, and music playback.

“MCU-only” Architecture

The “MCU-only” solution opens the door to designs with longer battery life, lighter and smaller frames, giving OEMs an easier path towards user comfort. With weight being one of the most important factors in the user experience of glasses, the MCU-only architecture reduces the number of components as well as the size of the battery. The weight of the glasses can be brought down to within 30g.

The strategy of an MCU-only architecture puts more emphasis on the microcontroller’s features and capabilities. Many features of the AP-Coprocessor design are expected within the MCU design. It is therefore critical to include features such as NPU, DSP, and a high-performing CPU core.

NXP’s Solution: The i.MX RT Family as the Ideal Coprocessor

The i.MX RT500, i.MX RT600 and i.MX RT700 are the three chips in NXP's i.MX RT low-power product family. As coprocessors, they are already widely used in the latest AI eyewear designs by customers around the world. The i.MX RT500's Fusion F1 DSP supports the voice wake-up, music playback, and call functions of smart glasses. The i.MX RT600 is mainly used as an audio coprocessor, supporting most noise-reduction, beamforming, and wake-up algorithms. The i.MX RT700 features a dual-DSP (HiFi4/HiFi1) architecture and supports algorithms of varying complexity, while enabling greater power savings by separating the power and clock domains of its compute and sense subsystems.

How the i.MX RT700 Maximises Battery Life

As a coprocessor in AI glasses, the i.MX RT700 can flexibly configure its power-management and clock domains to switch roles across application scenarios: it can serve as an AI computing unit for high-performance multimedia processing, or as a voice-input sensor hub that processes data at ultra-low power.

AI glasses rely mainly on voice control for user interaction, so voice wake-up is the most common scenario and the key determinant of battery life. In mainstream use cases, the coprocessor remains active at the lowest possible core voltage, awaiting the user's voice commands and quickly switching to speech-recognition mode with noise reduction in noisy environments. For this scenario, the i.MX RT700 can be configured in sensor mode, in which only a few modules are active, such as the HiFi1 DSP, DMA, MICFIL, SRAM, and power control (PMC). The digital microphone interface (MICFIL) acquires the microphone signal, DMA moves that data, and the HiFi1 DSP runs the noise-reduction and wake-up algorithms, while the compute domain remains powered down.
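
The wake-up flow described above can be sketched as a simple state machine: the sense domain keeps listening while the compute domain stays off until a wake word fires. This is a minimal illustrative sketch in plain Python; the function names and threshold are invented, not the real i.MX RT700 driver API:

```python
# Sketch of the sensor-mode voice wake-up flow: MICFIL captures audio, DMA
# delivers frames, the HiFi1 DSP runs a wake-word check, and the compute
# domain is powered up only on detection. All names here are illustrative.

from enum import Enum

class Domain(Enum):
    POWER_DOWN = 0
    ACTIVE = 1

def wake_word_detected(frame, threshold=0.8):
    """Stand-in for the wake-word algorithm running on the HiFi1 DSP."""
    return max(frame) > threshold

def run_sensor_mode(mic_frames):
    """Process mic frames until a wake word fires, then wake the compute domain."""
    compute_domain = Domain.POWER_DOWN
    for frame in mic_frames:               # DMA delivers each captured frame
        if wake_word_detected(frame):      # HiFi1 runs the detector
            compute_domain = Domain.ACTIVE # power up for speech recognition
            break
    return compute_domain

frames = [[0.1, 0.2], [0.3, 0.1], [0.9, 0.7]]  # last frame contains "speech"
print(run_sensor_mode(frames))
```

The point of the structure is that everything before the `break` runs in the low-power sense domain; the expensive compute domain pays no leakage or dynamic cost until it is actually needed.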

Other low-power technologies in the RT700, such as a distortion-free FRO audio clock source, the microphone module's FIFO, hardware voice activity detection (hardware VAD), and DMA wake-up, keep system power consumption in the i.MX RT700 voice wake-up scenario under 2 mW, minimising power consumption while monitoring continuously.

The i.MX RT700 also powers MCU-only designs

For display-related user scenarios, the i.MX RT700 can be configured in high-performance mode, enabling the vector graphics accelerator (2.5D GPU), display controller (LCDIF), and display bus (MIPI DSI). Even at high performance, the compute domain takes advantage of low-power techniques such as MIPI ULPS (Ultra Low Power State) and dynamic voltage regulation with process-voltage-temperature (PVT) tuning.

With the continuous integration of intelligent hardware and artificial intelligence, choosing the right low-power high-performance chip has become the key to product innovation. With its deep technology accumulation, the i.MX RT series provides a solid foundation for cutting-edge applications such as AI glasses.

The post AI Glasses: Ushering in the Next Generation of Advanced Wearable Technology appeared first on ELE Times.

The semiconductor technology shaping the autonomous driving experience

Mon, 01/12/2026 - 08:10

Courtesy: Texas Instruments

Last summer in Italy, I held my breath as I prepared to drive down a narrow cobblestone road. It was pouring rain with no sign of stopping, and I could hardly see. Still, I pressed the gas pedal, my shoulders tense and my hands gripping the wheel.

This is just one example of a stressful driving experience. Whether it’s enduring a long road trip or crawling through bumper-to-bumper traffic, many people find driving to be nerve-wracking. Though we can spend weeks finding the perfect car, deliberating which seats will feel the most comfortable or which stereo system will sound the richest, it’s hard to enjoy the ride when you are constantly scanning for hazards, adjusting to changing weather conditions, or navigating unknown roadways.

But what if you could appreciate the experience of being in your vehicle while trusting your car to navigate the stressful drives for you?

We’re progressing toward that future, with worldwide investment in autonomous vehicles expected to grow by over US$700 million in 2028. But to understand the vehicle of the future, we must first understand how its architecture is evolving.

How software-defined vehicles (SDVs) are transforming automotive architecture 

I can’t discuss the vehicle of the future without starting with the transition to software-defined vehicles (SDVs). Because SDVs combine radar, lidar, and camera modules with updatable software, they are critical to a future where drivers have the latest automated driving features without having to purchase a new vehicle every few years.

For automotive designers, SDVs require separating software development from the hardware, fundamentally changing the way that they build a car. When carmakers consolidate software into fewer electronic control units (ECUs), they can make their vehicle platforms more scalable and streamline over-the-air updates. These ECUs can handle the control of specific autonomous functions in real time, such as automatic braking or self-steering modules.

How integrated sensor fusion enables higher levels of vehicle autonomy

When SDVs centralise software, they’re capable of integrating advanced driver assistance system technologies that enable increased levels of vehicle autonomy. On today’s roads, using the Society of Automotive Engineers’ Levels of Driving Automation, level 1 or 2 (which requires people to drive even when support features are engaged) is the most prevalent. But what about in the future?

I envision that one day, every car will have true level 3 or 4 autonomy, characterised by automated driving features that can operate a vehicle under specific conditions. The advances in technology happening now will enable drivers to trust those features as much as they trust cruise control today. Instead of being fully responsible for stressful driving tasks, we can trust the vehicle's system to take the lead. And at the heart of this evolution are semiconductors.

To achieve higher levels of vehicle autonomy, the ability to accurately detect and classify objects and respond in real time will require more advanced sensing technologies. The concept of combining data from multiple sensors to capture a comprehensive image of a vehicle’s surroundings is called sensor fusion. For example, if a radar sensor classifies an object as a tree, a second technology, such as lidar or camera, can confirm it in order to communicate to the driver that the tree is 50 feet ahead, enabling swift action.  
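
The confirmation step described here can be sketched as a small fusion rule: report a classification only when two sensors agree with sufficient confidence. The detection tuples, confidence values, and threshold below are invented for illustration and are not any production fusion algorithm:

```python
# Toy cross-sensor confirmation: an object class is reported only when a
# second sensor (e.g. camera confirming radar) agrees with enough
# confidence. All values here are illustrative.

def fuse(radar_det, camera_det, min_conf=0.6):
    """Return the object class if both sensors agree confidently, else None."""
    r_cls, r_conf = radar_det
    c_cls, c_conf = camera_det
    if r_cls == c_cls and min(r_conf, c_conf) >= min_conf:
        return r_cls
    return None  # no confident agreement -> defer to further processing

print(fuse(("tree", 0.9), ("tree", 0.8)))        # tree
print(fuse(("tree", 0.9), ("pedestrian", 0.8)))  # None
```

Real fusion stacks work on probability distributions and tracked objects rather than single labels, but the underlying idea is the same: one sensor's hypothesis is only acted on once another modality corroborates it.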

Why future vehicles need a high-speed, Ethernet-based data backbone

I like to say that tomorrow’s cars are like data centres on wheels, processing multiple large streams of high-speed data seamlessly.

The car’s computer, among other functions, coordinates things such as radar, audio, and data transfer in a high-speed communication network around the vehicle. While legacy communication interfaces for in-vehicle networking, such as Controller Area Network (CAN) and Local Interconnect Network (LIN), remain essential for controlling fundamental vehicle applications such as doors and windows, these interfaces must seamlessly integrate with emerging technologies. In order to accommodate the higher data processing needs of new vehicles, Ethernet will be the prevailing technology. Automotive Ethernet has emerged as a “digital backbone” to efficiently manage applications ranging from audio to standard radar.
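
A rough bandwidth comparison shows why Ethernet takes over from CAN as the backbone. Classic CAN tops out around 1 Mbit/s; the camera parameters below are assumed for illustration:

```python
# Back-of-the-envelope comparison motivating an Ethernet backbone: one
# uncompressed camera stream alone needs orders of magnitude more bandwidth
# than a classic CAN bus provides. Camera parameters are assumptions.

def camera_mbps(width, height, bits_per_pixel, fps):
    """Raw bit rate of an uncompressed video stream, in Mbit/s."""
    return width * height * bits_per_pixel * fps / 1e6

can_mbps = 1.0                         # classic (non-FD) CAN bus limit
cam = camera_mbps(1920, 1080, 16, 30)  # one 1080p camera, 16 bpp, 30 fps
print(cam)                             # 995.328 Mbit/s
print(cam / can_mbps)                  # roughly a thousand CAN buses
```

This is why CAN and LIN remain for low-rate body functions such as doors and windows, while high-rate sensor and video traffic moves onto a multi-gigabit Ethernet backbone.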

As vehicles become capable of higher levels of autonomy, automotive designers will need higher-bandwidth networks for applications including high-resolution video and streaming radar. At TI, our portfolio supports diverse functions with varying requirements, readying us for that network evolution. With technologies like FPD-Link, vehicles can stream uncompressed, high-bandwidth radar, camera, and lidar data to the central compute to respond to events in real time.

Design engineers must also have a powerful processor in the central computing system that can take data from multiple technologies, such as lidar, camera, and radar sensors, to complete a fast, real-time analysis and provide a 4D data breakdown to better perform object classification.

With expertise in radar, Ethernet, FPD-Link technology and central compute, TI works with automotive designers to help optimise solutions from end to end. Rather than designing devices that only perform one function, we look at how to best optimise our device ecosystem. For example, we design radar devices that easily interface with our Jacinto processors to achieve faster, more accurate decision-making.

What these advancements mean for the future driving experience

In the future, if I encounter the same road and rainy conditions in Italy as I did this summer, I might not drive. Instead, I might trust my car to safely get me to my destination, while I relax in my seat.

The vehicle of the future might not exist yet. But the technologies we’re developing today are making the vehicle of the future – and maybe even the next breakthrough of the future – real.


The Electronics Industry in 2026 and Beyond: A Strategic Crossroads

Mon, 01/12/2026 - 07:46

As we stand on the threshold of 2026, the global electronics industry is undergoing a profound transformation. It is now a linchpin of industrial, strategic, and geopolitical competition, with implications for economies, national security, and everyday life. In a world where electronic systems power everything from personal communication to national infrastructure, the industry’s trajectory through 2026 and beyond will be a trendsetter for economic competitiveness and technological leadership worldwide.

Worldwide, electronic systems and semiconductor markets have regained strong growth momentum following recent supply fluctuations and trade tensions. In major economies, consumer-facing electronics still matter – smart TVs, connected appliances and IoT devices feature prominently in growth forecasts – but industrial and strategic demand is shaping the industry’s future. AI acceleration, 5G/6G networks, edge computing and automated factories are expanding the role of electronics far beyond personal use into the backbone of tomorrow’s digital economy.

For emerging economies like India, 2026 marks a pivotal year. Once predominantly an assembly hub, India’s electronics landscape is evolving quickly toward manufacturing depth and export competitiveness. Under initiatives like Make in India and Production-Linked Incentive schemes, India is targeting an ambitious USD 300 billion in domestic electronics production by 2026.

Despite progress in finished products, the industry’s most strategic component – the semiconductor – remains the ultimate litmus test of technological sovereignty. Demand for advanced logic, memory and power chips continues to skyrocket as AI, data centres, autonomous systems and EVs proliferate. However, high-end semiconductor fabrication is concentrated in a few global hubs, creating political and economic frictions. Expansion efforts are underway; India aims to bring complex chip manufacturing and packaging closer to local markets.

Now the industry’s evolution will hinge on architectural and material innovation as much as volume growth. Emerging manufacturing techniques like 3D-printed electronics, wide-band-gap power devices (such as GaN and SiC), and advanced packaging are reshaping how electronic systems are built and what they can do.

Integration with AI and machine learning at the edge – beyond centralised cloud systems – is transforming everything from consumer devices to industrial controls. AI-powered industrial machines, smart wearables and edge computing systems are now central to innovation narratives that go far beyond smartphones and laptops.

Governments play a deciding role in semiconductor incentives, R&D investment, and skills ecosystem development. India’s push into electronics manufacturing underscores how policy can unlock domestic value addition and attract foreign direct investment.

A young workforce is being credited with driving innovation in design labs and new technology ventures. This demographic shift could help transcend low-value assembly toward high-value engineering and R&D.

By the end of the decade, the core electronics industry will be defined by reduced reliance on a few geographic hubs for chips and components, the proliferation of hardware designed for AI workloads, energy efficiency and green manufacturing as essential competitive factors, and new alliances and regional clusters that diversify global supply chains.

Now for a comprehensive, forward-looking overview of India's electronics industry: where it stands, the key forces shaping its future, and what lies ahead in the coming decade. India's electronics production rose from Rs. 1.9 lakh crore in 2014–15 to Rs. 11.3 lakh crore in FY 2024–25 – a six-fold jump in a decade. Exports surged eightfold over the same period.

Production Linked Incentive schemes significantly boost manufacturing across mobile phones, IT hardware, and components. The Electronics Components Manufacturing Scheme offers capital subsidies to build domestic production of PCBs and critical parts. The Scheme for Promotion of Manufacturing of Electronic Components and Semiconductors supports capital expenditure for high-value component plants. These policies aim to reduce dependence on imports, attract foreign investors, and expand high-value manufacturing. The global supply-chain shift, e.g., China+1 strategies, is prompting electronics makers to diversify production to India. States like Uttar Pradesh, Tamil Nadu, Karnataka, and Andhra Pradesh are becoming hubs for manufacturing and exports, bringing infrastructure and investment.

There are challenges India must overcome. The first is component import dependency: despite growth in assembly, 85–90% of electronics component value is still imported, especially from China, Korea, and Taiwan. Building domestic supply chains for PCBs, semiconductors, connectors, and precision parts remains a major hurdle. Bureaucratic delays in certifications are slowing production schedules and product launches. Production costs in India can be 10–20% higher than in other Asian hubs, and R&D infrastructure for high-end semiconductors is still limited. India needs deep innovation capacity – not just assembly, but the ability to move up the value chain.

India has set ambitious targets for the coming years, including up to USD 500 billion in electronics manufacturing output by 2030. Achieving this would require scaling capacity, improving infrastructure, and drawing more global players into deeper parts of the supply chain. India also needs to broaden its electronics ecosystem – automotive electronics, industrial IoT, wearables and AI devices, and telecom equipment – to expand domestic and export markets. EMS output is projected to grow rapidly, potentially capturing a larger share of the global EMS market. On semiconductor ecosystem development, policies are moving into a “scale-up phase” to build design, assembly, and, over time, manufacturing capabilities – crucial for tech sovereignty and global relevance. Global supply-chain diversification presents opportunities for India to attract investments that might otherwise be concentrated in China or Southeast Asia.

Geopolitical-economic dynamics are a significant stumbling block for India’s electronics industry, especially in relation to China and the United States – but it’s also both a challenge and an opportunity.

India’s electronics manufacturing growth has been strongly influenced by global tensions between China and the U.S. After the pandemic and during the U.S.-China trade/tech war, global supply chains began diversifying away from China – a “China +1” effect – and India benefited from this shift as multinational firms looked for alternatives for parts of their production.

Despite India’s assembly growth in mobile phones and other electronics, the industry remains heavily reliant on Chinese imports for key components and machinery. This dependency means that geopolitical friction with China can slow production, raise costs, and create supply bottlenecks for Indian electronics makers.

U.S.–India trade frictions are also impacting growth. The U.S. imposed tariffs of up to 50% on Indian goods, affecting overall trade dynamics and making it harder for Indian electronics producers to scale exports cost-effectively. India is thus caught in a complex geopolitical squeeze: China remains essential for many inputs but is a strategic rival, while the U.S. provides market and technology ties but has also used tariffs as leverage.

On the other hand, India’s electronics exports to the U.S. had raced ahead by leveraging trade tensions that kept Chinese goods less competitive. But the recent reduction of U.S.–China tariffs has reduced India’s cost edge by around 10 percentage points, threatening export growth and investment momentum in the sector. India’s industry competitiveness isn’t purely industrial – it’s shaped by geopolitical policy decisions in Washington and Beijing.

Nevertheless, India’s electronics industry is poised for one of the most transformative growth phases in its history. With supportive policy frameworks, rising global demand, and strategic investments in talent and infrastructure, India could evolve from a largely assembly-focused hub to a comprehensive electronics and semiconductor powerhouse over the next decade – if it successfully strengthens its component base, resolves regulatory bottlenecks, and nurtures innovation ecosystems.

Devendra Kumar
Editor


Keysight & Samsung: Industry-First NR-NTN S-Band & Satellite Mobility Success

Mon, 01/12/2026 - 07:20

Keysight Technologies announced a groundbreaking end-to-end live new radio non-terrestrial networks (NR-NTN) connection in band n252, as defined by the Third Generation Partnership Project (3GPP) under Release 19, achieved using Samsung Electronics’ next-generation modem chipset. The demonstration, taking place at CES 2026, includes live satellite-to-satellite (SAT-to-SAT) mobility using commercial-grade modem silicon and cross-vendor interoperability, marking an important milestone for the emerging direct-to-cell satellite market.

The achievement also represents the first public validation of n252 in an NTN system, a new band expected to be adopted by next-generation low Earth orbit (LEO) constellations.

Reliable global connectivity is a growing requirement for consumers, vehicles, IoT devices, and critical communications. As operators, device manufacturers, and satellite providers accelerate investment in NTN technologies, this achievement shows decisive progress toward direct-to-cell satellite coverage.

With the addition of n252 alongside earlier NTN demonstrations in n255 and n256, all major NR-NTN FR1 bands have now been validated end-to-end. This consolidation of band coverage is critical for enabling modem vendors, satellite operators, and device manufacturers to evaluate cross-band performance and mobility holistically as they prepare for commercial NTN services.

Keysight’s NTN Network Emulator Solutions recreate realistic multi-orbit LEO conditions, SAT-to-SAT mobility, and end-to-end routing while running live user applications over the NTN link. Together with Samsung’s chipset, the system validates user performance, interoperability, and standards conformance, providing a high-fidelity test environment that reduces risk, accelerates trials, and shortens time-to-market for NR-NTN solutions expected to scale in 2026.

The demonstration integrates Samsung’s next-generation modem chipset with Keysight’s NTN emulation portfolio to deliver real, standards-based NTN connectivity across a complete system. The setup validates end-to-end link performance, mobility between satellites, and multi-vendor interoperability, essential requirements for large-scale NTN deployments.

Peng Cao, Vice President and General Manager of Keysight’s Wireless Test Group, said: “Together with Samsung’s System LSI Business, we are demonstrating the live NTN connection in 3GPP band n252 using commercial-grade modem silicon with true SAT-to-SAT mobility. With n252, n255, and n256 now validated across NTN, the ecosystem is clearly accelerating toward bringing direct-to-cell satellite connectivity to mass-market devices. Keysight’s NTN emulation environment gives chipset and device makers a controlled way to prove multi-satellite mobility, interoperability, and user-level performance, helping the industry move from concept to commercialisation.”


Quantum Technology 2.0: Road to Transformation

Mon, 01/12/2026 - 07:12

Courtesy: Rohde & Schwarz

After more than 100 years of research, quantum technology is increasingly finding its way into everyday life. Examples include its use in cell phones, computers, medical imaging methods and automotive navigation systems. But that’s just the beginning. Over the next few years, investment will increase significantly, and lots of other applications will take the world by storm. While test & measurement equipment from Rohde & Schwarz and Zurich Instruments is helping develop these applications, the technology group’s encryption solutions are ensuring more secure communications based on quantum principles.

Expectations for quantum technology are greater than in almost any other field. That’s no surprise, given the financial implications associated with the technology. For example, consulting firm McKinsey & Company estimates the global quantum technology market could be worth 97 billion dollars by 2035. According to McKinsey, quantum computing alone could be worth 72 billion dollars, and quantum communications up to 15 billion.

Previous developments clearly show that the projected values are entirely realistic. Many quantum effects have become part of our everyday lives. Modern smartphones, for example, contain several billion transistors, predominantly in flash memory chips. Their function – controlling currents and voltages – is based on the quantum mechanical properties of semiconductors. Even the GPS signals used in navigation systems and the LEDs used in smartphone flashlights are based on findings from quantum research.

To celebrate these achievements, UNESCO declared 2025 the “International Year of Quantum Science and Technology” – exactly 100 years after German physicist Werner Heisenberg developed his quantum mechanics theory based on the research findings of the time. Quantum technology was also in the spotlight with the 2025 Nobel Prize in Physics, which was awarded to quantum researchers John Clarke, Michel Devoret, and John Martinis.

Quantum technology 2.0: what can we expect?

Quantum physics in secure communications: Whether personal or professional, beach holiday snapshots or development proposals for new products, our data and data transmission need to be protected. Companies today consistently name cyberattacks and the resulting consequences as the top risk to their business. Developments in quantum computing are revealing the limits of conventional encryption technologies. Innovations in quantum communications are the key to the future, as they enable reliable detection of unauthorised access. This means you can create a genuine high-security channel for sensitive data.

Upgrading supply chains: Global flows of goods reach every corner of the Earth, and everything is now just a click away: a new tablet for home use or giveaways for a company party. But behind the scenes lies a complex logistics network of manufacturers, service providers, suppliers, merchants, shipping companies, courier services, and much more. The slightest backlog at a container port or change in the price of purchased items means alternatives must be found – preferably in real time. But the complexity of this task is also beyond what conventional computers can handle.

Personalised medicine: Everyone is different, and so are our illnesses. Cancer cells, for example, differ from one person to the next and often change over time. These differences and changes are already well documented in analytical terms, which has created huge amounts of data. Big Data is the buzzword. But evaluating this data quickly and effectively, to develop personalised forms of treatment, is impossible for conventional computers.

Fast. Faster. Quantum computing. 

Our world is controlled by binary code. Conventional computers process data as sequences of ones and zeros, true or false, off or on. This applies to everything, from simple text processing to virtual reality in the metaverse. But the world we live and work in is becoming increasingly complex. The amount of data we need to process is growing rapidly. In 2024, global digital data traffic had more than quadrupled over the space of just five years to 173.4 zettabytes. By 2029, experts believe this number will reach 527.5 zettabytes, equivalent to 527.5 trillion gigabytes.

Conventional computers face two insurmountable obstacles as a result: time and complexity. The larger the volume of data, the more time you need to process that data sequentially. The more complex the problem, the lower the probability that a binary code, with only two states, will be able to efficiently calculate a solution. Quantum computers have the potential to overcome both obstacles using insights from modern physics.

Hand in hand instead of either-or

Like conventional bits, quantum bits (qubits) are memory units, but quantum mechanical ones. In addition to plain zeros and ones, they can also assume superposed, mixed states. This simultaneity represents a fundamental technological paradigm shift: calculations that conventional machines must run sequentially can be explored simultaneously, which is why a quantum computer can save so much time.
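
The superposition idea can be made concrete with a tiny state-vector calculation: a Hadamard gate takes the definite state |0> into an equal mix of |0> and |1>. This is a plain-Python illustration of textbook quantum mechanics, not any quantum library's API:

```python
# Minimal single-qubit state-vector sketch: a qubit is a pair of amplitudes
# (a, b) for the states |0> and |1>, and the Hadamard gate creates an equal
# superposition from |0>.

import math

def apply_hadamard(state):
    """Apply H = 1/sqrt(2) * [[1, 1], [1, -1]] to a single-qubit state."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

zero = (1.0, 0.0)             # the definite state |0>
plus = apply_hadamard(zero)   # (1/sqrt(2), 1/sqrt(2)): both values at once
probs = (plus[0] ** 2, plus[1] ** 2)
print(probs)                  # 50/50 odds of measuring 0 or 1
```

The squared amplitudes are the measurement probabilities, so the single superposed state carries both classical outcomes simultaneously until it is measured.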

But above all, the new quantum mechanical approach allows us to process new and much more complex questions. However, it’s not an either-or decision, either conventional processing power or quantum computing. Instead, what matters is integrating existing and quantum systems depending on the task.

Physics versus logic

In the quantum world, a particle can be in two places at the same time. Only when it is observed can you narrow down its location, for example, by measuring it. This unusual property is also why it is extremely unstable. Instead of using individual physical qubits, which can be very error-prone, multiple qubits are grouped into a logical qubit. However, the challenge here is that you need quantum systems with as many as one million logical qubits in order to answer practical questions, like protein folding. A logical qubit can contain up to 100 physical qubits, but the highest processing capacity is currently only 1,225 physical qubits.
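
Putting the figures quoted above side by side makes the scale of the challenge clear. This is a small illustrative calculation using only those numbers:

```python
# The error-correction gap in numbers: one million logical qubits at up to
# 100 physical qubits each, versus today's largest systems (the 1,225
# physical-qubit figure quoted above).

logical_needed = 1_000_000
physical_per_logical = 100           # upper bound quoted above
physical_needed = logical_needed * physical_per_logical

largest_today = 1_225                # physical qubits available today
print(physical_needed)               # 100,000,000 physical qubits
print(physical_needed // largest_today)  # tens of thousands of times today's capacity
```
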

Zurich Instruments has been part of the Rohde & Schwarz family since 2021. The T&M market for quantum computing holds enormous potential for both companies. Operating and maintaining quantum computers requires a wide range of specific T&M solutions because RF signals need to be generated and measured with extremely high precision to effectively create and record quantum states. Control systems for quantum computers are part of the company’s portfolio.

Secure. More secure. Quantum communications

Quantum computers have the potential to push the limits of processing efficiency. But this brings challenges, including secure communications – increasingly a priority in view of “Q-Day”, the point at which quantum computers will be able to crack classic encryption.

That is why alternative encryption methods are becoming increasingly important. There are essentially two main approaches. The first is post-quantum cryptography, which involves conventional encryption methods with one key difference: they can survive attacks from quantum computers unscathed. The algorithms used in this approach are based on theoretical assumptions for which no effective attacks are currently known using either quantum or conventional computers.

The other approach relates to quantum key distribution (QKD). The German Federal Office for Information Security (BSI) and the National Institute of Standards and Technology (NIST) are two of the main drivers of innovation in this area. In an increasingly digitalised world, private-sector customers, and government customers in particular, are dependent on trustworthy IT security solutions. Secure communications networks have become a critical infrastructure in advanced information societies.

These innovative solutions are shifting the focus of cryptology. Conventional methods, as well as more recent post-quantum methods, are based on mathematical assumptions, i.e. the idea that certain tasks cannot be calculated with sufficient efficiency. Quantum key distribution, by contrast, is based on physical principles. Rohde & Schwarz Cybersecurity is providing and leveraging its extensive expertise in security solutions, as well as its experience in building and implementing secure devices and systems, in a variety of research projects.

The post Quantum Technology 2.0: Road to Transformation appeared first on ELE Times.

Develop Highly Efficient X-in-1 Integrated Systems for EVs

Mon, 01/12/2026 - 07:03

Courtesy: Renesas

The recent tightening of CO2 emission regulations has accelerated the electrification of automobiles at an unprecedented pace. With the global shift from hybrid vehicles to electric vehicles (EVs), automakers are demanding more efficient, safe, and reliable systems. System integration, known as “X-in-1”, has become a focus of attention. This concept consolidates functions traditionally controlled by separate MCUs, such as inverters, onboard chargers (OBC), DC/DC converters, and battery management systems (BMS), onto a single microcontroller (MCU), achieving miniaturisation, cost reduction, and efficiency improvement at the same time. As electric vehicles evolve, demand grows for X-in-1 configurations that consolidate multiple applications onto a single MCU.

At the core of this X-in-1 approach is Renesas’ RH850/U2B MCUs. This next generation of MCUs delivers the advanced control, safety, and security required by EVs on a single chip. It features a high-performance CPU with up to six cores, operating at up to 400MHz, enabling both real-time control and parallel processing. It also offers comprehensive analogue and timer functions for inverter and power converter applications, enabling efficient control of the entire electrification system on a single chip. Furthermore, the RH850/U2B MCUs offer a wide memory lineup, allowing flexible implementation of the optimal X-in-1 system tailored to specific requirements.

Figure 1. Comparison of MCU Configuration Before and After X-in-1 Integration

The RH850/U2B MCU demonstrates overwhelming superiority in inverter control, maximising the driving performance of EVs. With dedicated hardware optimised for inverter control, including a resolver-to-digital converter (RDC), an analogue-to-digital converter (ADC), and timers for three-phase motors, the RH850/U2B MCU enables high-speed, high-precision control at the hardware level that software alone cannot achieve. The integrated RDC eliminates the need for external angle detection ICs, contributing to reduced component count and simplified board design. Furthermore, the embedded Renesas proprietary Enhanced Motor Control Unit (EMU) executes complex control calculations in the hardware, significantly reducing CPU load while achieving high-speed, high-precision motor control (EMU is only included in the RH850/U2B6).

Figure 2. Comparison of External RDC and Internal RDC

Next-generation power devices using silicon carbide (SiC) and gallium nitride (GaN) are increasingly being adopted in OBCs and DC/DC converters. These devices enable high efficiency and fast switching, directly contributing to shorter charging times and improved energy efficiency. To exploit these characteristics, the RH850/U2B MCU incorporates multifunctional timers (a generic timer module (GTM) and high-resolution PWM) capable of generating high-speed, high-resolution waveforms with a minimum resolution of 156.25ps. This facilitates control that leverages the high-speed switching characteristics of SiC and GaN. It also incorporates a 12-bit fast comparator for high-frequency switching control and protection operations.
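To see what a 156.25 ps timer resolution buys, the effective duty-cycle resolution at a given switching frequency can be estimated as the switching period divided by the timer resolution. A minimal sketch, where the switching frequencies below are illustrative examples rather than Renesas specifications:

```python
import math

RESOLUTION_S = 156.25e-12  # minimum timer resolution quoted above (156.25 ps)

def pwm_steps(f_sw_hz: float) -> tuple[int, float]:
    """Duty-cycle steps per PWM period and the equivalent resolution in bits."""
    steps = round((1.0 / f_sw_hz) / RESOLUTION_S)
    return steps, math.log2(steps)

# Example switching frequencies (illustrative only):
for f in (100e3, 500e3, 1e6):
    steps, bits = pwm_steps(f)
    print(f"{f/1e3:>6.0f} kHz -> {steps:>6} steps ({bits:.1f} bits)")
```

Even at the MHz-range switching frequencies typical of SiC/GaN converters, this still leaves more than 12 bits of effective duty-cycle resolution, which is why such fine timer granularity matters for fast-switching power stages.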

In addition to speed and energy efficiency, the RH850/U2B MCU also delivers outstanding performance in battery management systems, the heart of EVs. Monitoring and controlling the voltage and temperature of hundreds of cells demands high processing power. The RH850/U2B MCU features a multi-core CPU, allowing surplus resources to be allocated to BMS processing. This enables system miniaturisation and cost reduction without requiring additional MCUs.

As EVs proliferate, the importance of safety and security becomes critical. Compliant with ISO 26262 ASIL D, the RH850/U2B MCU ensures functional safety at the hardware level. It also incorporates security features compliant with EVITA Full, enabling the construction of highly secure systems even in X-in-1 configurations.

The evolution of EVs is moving towards faster, safer, and more efficient use of automobiles. Achieving this requires meeting new demands that conventional MCUs cannot fully address. The RH850/U2B MCU enables users to meet the needs of EVs with high-speed, high-precision inverter control via dedicated hardware; highly efficient switching control in OBCs and DC/DC converters using high-resolution, high-speed timers; multi-core utilisation in battery management systems; and comprehensive safety and security support.


Nuvoton and ITRI Join Forces to Accelerate Edge AI Adoption Across Industries with the Entry-Level M55M1 AI MCU

Fri, 01/09/2026 - 09:58

Nuvoton Technology, centred on its NuMicro M55M1 AI MCU, is partnering with the Industrial Technology Research Institute (ITRI) to promote integrated “hardware–software” edge AI solutions. These solutions support diverse application scenarios, including manufacturing, smart buildings, and healthcare, enabling industries across the board to adopt AI quickly in a “usable, manageable, and affordable” way, and bringing AI directly into frontline equipment and business processes.

Aligned with the National Science and Technology Council (NSTC) and the Ministry of Economic Affairs (MOEA) initiative to build the Taiwan Smart System Integration and Manufacturing Platform, Nuvoton follows ITRI’s three key pillars for AI development—data, computing power, and algorithms—together with a six-dimension AI readiness framework covering AI strategy, organizational culture, talent and skills, infrastructure, data governance, and risk management. Based on this framework, Nuvoton modularises its toolchains, AI models, and development board offerings, and works with ITRI’s Chip and System Integration Service Platform Program to establish a TinyML micro-computing platform. This platform enables small and medium-sized enterprises (SMEs) to complete proof-of-concept (PoC) projects with minimal entry barriers, progress toward pilot production, and scale through replication. At the same time, it promotes “dual-brain collaboration” between AI experts and domain specialists, increasing project success rates and supporting the government’s vision of building Taiwan into an “AI Island.”

As one of the few entry-level AI solutions on the market, the M55M1 integrates an Arm Cortex-M55 core (up to 220 MHz) with an Arm Ethos-U55 micro-NPU in a single chip, delivering around 110 GOPS of acceleration for mainstream CNN/DNN inference. The chip features up to 1.5 MB of on-chip SRAM and 2 MB of Flash. It can be expanded via HyperBus to support HyperRAM/HyperFlash, enabling real-time, offline, low-power AI inference and control directly at the edge. Together with Nuvoton’s in-house NuML Toolkit and a variety of readily available AI models (such as face recognition, object detection, speech/command recognition, and anomaly detection), developers can quickly get started using a standard MCU development flow, effectively lowering the barrier to AI adoption.
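The quoted throughput figure lines up with the Ethos-U55's largest configuration. Assuming 256 MACs per cycle (an assumption on our part about the M55M1's NPU build, since the Ethos-U55 is configurable from 32 to 256 MACs/cycle) and counting two operations per multiply-accumulate:

```python
# Rough peak-throughput estimate; the MAC configuration is assumed,
# not confirmed by the announcement.
MACS_PER_CYCLE = 256   # largest Ethos-U55 configuration
OPS_PER_MAC = 2        # one multiply + one accumulate
CLOCK_HZ = 220e6       # maximum clock quoted above

gops = MACS_PER_CYCLE * OPS_PER_MAC * CLOCK_HZ / 1e9
print(f"Peak throughput: {gops:.2f} GOPS")  # ~112.6, consistent with ~110 GOPS
```

This is a theoretical peak; sustained inference throughput depends on memory bandwidth and the network's layer mix.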

Nuvoton and ITRI will first focus on three key real-world application scenarios:

  • Edge inspection on manufacturing lines: Using CCAP for image pre-processing and U55 for inference to perform object detection or defect identification at the edge, supporting quality inspection as well as predictive analysis of equipment health.
  • People flow detection and energy-saving control in smart buildings: Leveraging lightweight sensing such as PIR, ToF, or low-resolution imaging, combined with time-based and zoned control strategies, to drive lighting/HVAC on/off and dimming/airflow adjustments, thereby improving energy efficiency.
  • Edge alerts for medical and long-term care: Performing posture and fall detection directly on end devices, uploading only events and key indicators to balance personal data protection with overall system availability.

Nuvoton and ITRI will continue to leverage Taiwan’s local supply chain and its strengths in hardware–software integration, using a systematic approach of “data × computing power × algorithms” to bring AI directly into real-world environments. With its single-chip capability to handle combined requirements in vision, audio, and control, the M55M1 enables small and medium-sized enterprises to embrace AI in an affordable and well-governed way.

Nuvoton is now collaborating with system integrators and field partners across scenarios such as manufacturing, buildings, healthcare, and public services, providing development boards, toolchains, and best-practice templates to help enterprises complete PoC and mass deployment in the shortest possible time. We welcome inquiries and partnership opportunities to jointly advance “AI in industries and industrialisation of AI,” accelerating AI transformation and value innovation across Taiwan’s many sectors.


Cadence to deliver pre-validated chiplet solutions to Accelerate Chiplet Time to Market

Fri, 01/09/2026 - 08:17
Cadence announced a Chiplet Spec-to-Packaged Parts ecosystem to reduce engineering complexity and accelerate time to market for customers developing chiplets targeting physical AI, data centre, and high-performance computing (HPC) applications. Initial IP partners joining Cadence include Arm, Arteris, eMemory, M31 Technology, Silicon Creations and Trilinear Technologies, as well as silicon analytics partner proteanTecs. To help reduce risk and streamline customer adoption, Cadence is collaborating with Samsung Foundry to build out a silicon prototype demonstration of the Cadence Physical AI chiplet platform, including pre-integrated partner IP on the Samsung Foundry SF5A process.
Extending their longstanding history of close collaboration, Cadence and Arm are working together to accelerate innovation across physical and infrastructure AI applications. Cadence will leverage the advanced Arm Zena Compute Subsystem (CSS) and other essential IP to enhance Cadence’s Physical AI chiplet platform and Chiplet Framework. The resulting new Cadence solutions accommodate the demanding next-generation edge AI processing requirements for automobiles, robotics and drones, as well as the needs of standards-based I/O and memory chiplets for data centre, cloud and HPC applications. The alliances reduce engineering complexities, offer customers a low-risk path to advanced chiplet adoption and pave the way for smarter, safer and more efficient systems.
“Cadence’s new chiplet ecosystem represents a significant milestone in chiplet enablement,” said David Glasco, vice president of the Compute Solutions Group at Cadence. “Multi-die and chiplet-based architectures are increasingly critical to achieving greater performance and cost efficiency amid growing design complexity. Cadence’s chiplet solutions optimise costs, provide customisation flexibility and enable configurability. By combining our extensive IP and SoC design expertise with pre-integrated and pre-validated IP from our robust partner ecosystem, Cadence is accelerating the development of chiplet-based solutions and helping customers mitigate risk to quickly realise their chiplet ambitions with greater confidence.”
Cadence has built spec-driven automation to generate chiplet framework architectures that combine Cadence IP and third-party partner IP with chiplet management, security, and safety features, all supported by advanced software. The generated EDA tool flow enables seamless simulation with the Cadence Xcelium Logic Simulator and emulation with the Cadence Palladium Z3 Enterprise Emulation Platform, while the physical design flow employs real-time feedback for efficient place-and-route cycles. The resulting chiplet architectures are standards-compliant to ensure broad interoperability across the chiplet ecosystem, including adherence to the Arm Chiplet System Architecture and future OCP Foundational Chiplet System Architecture. Cadence’s Universal Chiplet Interconnect Express (UCIe) IP provides industry-standard die-to-die connectivity, while a comprehensive protocol IP portfolio enables fast integration of leading-edge interfaces such as LPDDR6/5X, DDR5-MRDIMM, PCI Express (PCIe) 7.0, and HBM4.
An earlier prototype of the Cadence base system chiplet, which is part of the Cadence Physical AI chiplet platform and incorporates the Cadence chiplet framework, UCIe 32G, and LPDDR5X IP, has already been fully silicon validated.
Supporting Partner Quotes
“As compute demands surge across automotive, robotics, and other emerging applications, the industry needs scalable solutions that deliver higher performance, greater efficiency, and functional safety by design. By leveraging Arm Zena CSS, Cadence’s chiplet platform will meet the requirements of next-generation intelligent systems that will advance the physical AI landscape, accelerate chiplet adoption, and help customers reduce design complexity.”
Suraj Gajendra, Vice President of Products and Solutions, Physical AI Business Unit, Arm
“Arteris network-on-chip IP products, including Ncore and FlexNoC, are at the forefront of innovation, and we are pleased to support the Cadence Physical AI Chiplet Platform and Chiplet Framework. Together with Cadence, we are enabling customers to confidently adopt chiplet architectures with high-bandwidth, scalable, and production-proven interconnect technology for next-generation multi-die systems.”
Guillaume Boillet, Vice President of Strategic Marketing, Arteris
“eMemory’s enhanced OTP products complement Cadence’s Securyzr™ Root of Trust within the Cadence Chiplet Framework. As a leading provider of non-volatile memory technologies, the combination of eMemory technology and Cadence’s security subsystem results in a Physical AI Chiplet platform delivering secure storage and long-lifecycle key management, reinforcing the strong hardware foundation provided by Cadence for die-to-die security and safety in advanced chiplet designs.”
Charles Hsu, Chairman, eMemory
“M31 is proud to be a contributor to Cadence’s expanding chiplet ecosystem, continuously advancing interface IP on leading-edge process technologies and keeping pace with the latest MIPI standards. With proven automotive-grade IP and over a decade of experience supporting high-volume consumer applications, M31 delivers world-class MIPI PHY interface IP that enables customers to rapidly realise advanced chiplet solutions with flexible MIPI CSI and DSI integration.”
Scott Chang, CEO, M31 Technology
“We’re thrilled to partner with Cadence on its chiplet platform, embedding proteanTecs telemetry across all chiplet types. Together, we’re enabling safe, reliable and power-efficient physical AI for next-gen compute demands. It’s an amazing collaboration delivering real value for customers, building advanced SoCs and systems for automotive and autonomous applications.”
Ziv Paz, VP of Business Development, proteanTecs
 
“We’re pleased to collaborate with Cadence to demonstrate the competitiveness of Samsung’s SF5A technology. Through this trusted partnership, we look forward to the successful expansion of the Chiplet Spec-to-Packaged Parts ecosystem and helping customers accelerate reliable paths to cutting-edge silicon solutions for physical AI applications, including next-generation automotive designs.”
Taejoong Song, Vice President of Foundry Technology Planning, Samsung Electronics
 
“As a long-time Cadence partner, we’re pleased to deepen our collaboration through the Chiplet Spec-to-Parts initiative. Over the past 15 years, we’ve developed and delivered over 100 custom PLLs for Cadence across leading foundries. Throughout this partnership, we have provided high-performance low-jitter PLLs and specialised clocking solutions, and we’re excited to extend our collaboration to help accelerate next-generation chiplet-based designs.”
Pawel Banachowicz, PLL Product Line Development Director, Silicon Creations
 
“Trilinear Technologies is excited to provide our advanced DisplayPort IP as part of this innovative initiative. Collaborating with Cadence enables us to drive high-performance video connectivity and deliver flexible, future-ready display solutions for the chiplet ecosystem.”
Carl Ruggiero, CEO, Trilinear Technologies


Microchip Releases Custom Firmware For NVIDIA DGX Spark For Its MEC1723 Embedded Controllers

Fri, 01/09/2026 - 07:58
Microchip Technology announced the release of custom-designed firmware for its MEC1723 Embedded Controller (EC), tailored to support NVIDIA DGX Spark personal AI supercomputers. The firmware is designed to optimise the MEC1723 EC’s capabilities for system management of AI workloads on the NVIDIA DGX platform. By focusing on firmware innovation within its controllers, Microchip is helping to improve performance and security in demanding AI computing architectures.
Embedded controllers play an important role in managing power sequencing, alerts and system-level energy regulation. In this application, the MEC1723 EC goes a step further to also manage critical firmware operations:
  • Secure firmware authentication: firmware code is digitally signed and authenticated by NVIDIA, helping to maintain platform integrity.
  • Root of Trust for system boot: cryptographic verification of the firmware using Elliptic Curve Cryptography (ECC-P384) public key technology. This establishes the root of trust for the entire system, which is critical because the EC is the first device to power on and authorise secure system boot.
  • Advanced power management: handles battery charging, alerts and system power state transitions to optimise energy efficiency.
  • System control: oversees key scan and keypad operations for reliable user input.
  • New host interface support: implements packet command format processing unique to the NVIDIA DGX interface, advancing beyond traditional byte-level data transfers.
  • Value-added integration: incorporates Electromagnetic Interference (EMI) and Static Random-Access Memory (SRAM) interfaces to improve overall system performance.
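The boot-time authentication described above follows a common verified-boot pattern: each stage is authenticated before control is handed to it. The conceptual sketch below substitutes SHA-384 digest comparison for the real ECC-P384 signature verification (Python's standard library has no P-384 verifier), purely to illustrate the chain-of-trust flow; stage names and images are hypothetical:

```python
import hashlib

def digest(blob: bytes) -> bytes:
    """SHA-384 digest, standing in for real ECC-P384 signature checks."""
    return hashlib.sha384(blob).digest()

# Hypothetical boot stages, in power-on order (EC first).
boot_chain = [
    ("ec_firmware", b"EC firmware image"),
    ("system_bios", b"BIOS image"),
    ("os_loader",   b"OS loader image"),
]

# Reference digests; in a real root of trust these would be signed
# and stored immutably, not computed at runtime.
trusted = {name: digest(blob) for name, blob in boot_chain}

def verify_boot(chain, trusted_digests) -> bool:
    """Authenticate every stage before handing over control."""
    for name, blob in chain:
        if digest(blob) != trusted_digests[name]:
            return False  # halt boot on any mismatch
    return True

print(verify_boot(boot_chain, trusted))  # clean chain boots
tampered = [("ec_firmware", b"EC firmware image"),
            ("system_bios", b"malicious image"),
            ("os_loader",   b"OS loader image")]
print(verify_boot(tampered, trusted))    # tampered stage is rejected
```

Because the EC powers on first, a failed check at any link stops the chain before untrusted code can run, which is the property the ECC-P384 root of trust provides in the actual design.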
“The collaboration between Microchip and NVIDIA helps deliver secure, tailored firmware solutions that address the complex needs of modern computing platforms,” said Nuri Dagdeviren, corporate vice president of Microchip Technology’s secure computing group. “Our MEC1723 firmware is customised to provide reliable operation and advanced functionality for NVIDIA DGX architecture, supporting the evolving requirements of client computing.”
Microchip’s MEC embedded controllers are designed to support the next generation of notebook and desktop applications across industrial, data centre and consumer markets. These controllers provide advanced system management, security features and efficient power management, making them suitable for today’s high-performance computing needs.


Infineon and HL Klemove collaborate to advance innovation for SDVs

Fri, 01/09/2026 - 07:44

Infineon Technologies AG and HL Klemove have signed a memorandum of understanding (MoU) to strengthen their strategic collaboration in automotive technologies. The partnership combines Infineon’s semiconductor expertise and system understanding with HL Klemove’s capabilities in advanced autonomous driving systems to accelerate innovation in vehicle electronic architecture for the Software-Defined Vehicle (SDV) era and to advance autonomous driving technologies.

This collaboration reflects the shared commitment of both companies to delivering safe and efficient connected mobility solutions. By optimising resources and accelerating proof of concept development, the partners aim to bring innovative technologies to market faster. Together, they plan to build the foundation for future key projects with high-performance, highly reliable autonomous driving solutions that combine Infineon’s semiconductor expertise and HL Klemove’s system integration capabilities.

Under the MoU, the two companies will cooperate in key areas, including:

  • Next-generation Zonal Control Units: The companies will jointly develop zone controller applications using Infineon’s microcontrollers and power semiconductors. HL Klemove will lead application development, while Infineon provides semiconductor technology support. Through prototype development, the collaboration aims to strengthen competitiveness in SDV electronic architecture.
  • Next-generation Radar Technologies: HL Klemove will leverage Infineon’s radar semiconductor solutions to develop high-resolution and short-range satellite radar, preparing for commercialisation through proof of concept. Additionally, the companies will work on high-resolution imaging radar to achieve next-generation radar technologies capable of precise object recognition.
  • Vehicle Ethernet-based ADAS and Camera Solutions: The partners will cooperate on developing front camera modules and an ADAS parking control unit using Infineon’s Ethernet technology. HL Klemove will handle system and product development, while Infineon provides Ethernet semiconductor and networking technology to enable high-speed, highly reliable in-vehicle network solutions.

“Based on our holistic product portfolio, deep system understanding and application know-how, Infineon aims to empower the automotive industry to accelerate time-to-market of software-defined vehicles,” said Peter Schaefer, Executive Vice President and CSO Automotive of Infineon. “Our collaboration with HL Klemove combines Infineon’s technology leadership with HL Klemove’s system expertise to deliver safer and smarter mobility solutions.”

Yoon-Haeng Lee, CEO of HL Klemove, said, “This collaboration marks an important milestone in realising the next-generation electronic architecture required for the software-defined vehicle era. By combining HL Klemove’s system architecture and integration capabilities with Infineon’s semiconductor technology, we will accelerate innovation in key areas such as next-generation zonal controllers, vehicle Ethernet-based ADAS systems, and high-resolution radar.”


TSA to deploy Rohde & Schwarz QPS201 security scanners at airport checkpoints, ahead of Soccer World Cup, 2026

Fri, 01/09/2026 - 07:08

Rohde & Schwarz, a world leader in AI-based millimetre wave screening technology, announced today it has won a multi-million dollar award from TSA to supply its QPS201 AIT security scanners to passenger security screening checkpoints at selected Soccer World Cup 2026 host city airports.

“We are thrilled to receive this award to deliver QPS201’s high-volume and passenger-friendly on-person security screening technology to modernize checkpoints at the airports of cities hosting the matches,” said Frank Dunn, CEO of Rohde & Schwarz USA, Inc. “TSA’s continued investment in the QPS will also further expand Rohde & Schwarz’s economic impact as we grow and create jobs at our facilities in Maryland and Texas.”

“We are proud that TSA is investing in modernising security checkpoints at the Soccer World Cup 2026 host city airports with our high-performance QPS201 technology platform,” said Andreas Haegele, Vice President of Microwave Imaging. “Rohde & Schwarz is deeply committed to our partnership with TSA. We will continue to develop and deliver innovative and effective on-person screening solutions to make airport security more efficient and convenient in the upcoming mega decade of travel, including the Soccer World Cup, America’s 250th Anniversary and the Olympic Games.”

The QPS201 achieved TSA qualification in 2022, approving it for use at US passenger security screening checkpoints, and is certified to the highest TSA and European Civil Aviation Conference (ECAC) standards. More than 100 R&S QPS201 scanners are already deployed in US airports, and more than 2,000 systems are deployed in airports worldwide. The QPS201 uses safe millimetre wave radio frequency technology to rapidly and accurately screen passengers for concealed threats. The system requires only milliseconds per scan, and its open design and hands-down scan pose make security screening easy and accessible for travellers.


Fluentgrid Completes Wirepas Certified HES Integration, Joining The Growing Ecosystem For Smart Electricity Metering

Thu, 01/08/2026 - 12:21

Fluentgrid Ltd., a leading provider of utility digitalisation platforms and advanced grid management solutions, announced that it has joined the Wirepas ecosystem and completed full integration of its Head-End System (HES) with the Wirepas Certified platform.

This milestone allows utilities and AMI service providers to seamlessly deploy Wirepas-based networks using Fluentgrid’s proven HES, enabling scalable, multi-vendor smart electricity metering rollouts with assured data reliability and secure, standards-aligned performance. Fluentgrid has already initiated its first pilots on the integrated platform, with early results confirming strong interoperability and field readiness. The integration reinforces both companies’ commitment to supporting India’s RDSS program by ensuring solutions that directly address the needs of utilities and the realities of large-scale deployment.

“Fluentgrid has always been committed to providing utilities with open, flexible and future-proof digital infrastructure,” said Vipresh Gannamani, Director, Fluentgrid. “By integrating our Head-End System with the Wirepas Certified platform, we are expanding the choice and interoperability available to our customers. This collaboration ensures that utilities can adopt large-scale mesh deployments with confidence, supported by a robust, field-tested ecosystem, aligned with the national goal of enabling the RDSS vision.”

Wirepas CEO Teppo Hemiä commented:
“Fluentgrid’s integration brings tremendous value to the Wirepas ecosystem in India. A strong and interoperable Head-End System is essential for the scale the market demands. Their completed integration and ongoing pilots are proof of real progress towards open, multi-vendor smart metering architectures, and fully in line with our focus on supporting utilities and helping India achieve the ambitions of the RDSS program.”

The combined capabilities of Fluentgrid’s HES and the Wirepas Certified platform provide utilities, AMISPs and system integrators with a highly resilient, scalable solution that accelerates deployment timelines while maintaining full transparency and interoperability across the value chain.


Cadence Reinforces Long-Term R&D Commitment, Celebrating 20 years in Pune

Thu, 01/08/2026 - 11:24

Cadence, a global leader in electronic system design, celebrated 20 years in Pune as a core research and development hub, marking two decades of sustained investment and innovation in the region. The centre was established in 2006 by Tensilica, now part of Cadence, reflecting the company’s early belief in Pune’s technology and engineering ecosystem at a time when few multinational technology companies operated there.

Starting with a five-member team, Cadence now has over 300 employees in Pune and continues to scale its talent base. The Pune centre is a key part of the Silicon Solutions Group. Teams here develop highly complex digital signal processing (DSP) IP, AI accelerators, DDR, and mixed-signal IP for leading semiconductor and electronics companies worldwide. These technologies enable critical applications across consumer electronics, data centres, and automotive markets.

“As we celebrate 20 years in Pune, we take pride in the world-class IP teams here, who collaborate with our global teams to deliver products used by customers worldwide,” said Boyd Phelps, Senior Vice President and General Manager, Silicon Solutions Group at Cadence. “The continued growth of our Pune site emphasises Cadence’s confidence in the region’s talent and our ongoing commitment to investing in people, capabilities, and infrastructure across India.”

As it enters its third decade in Pune, the company remains dedicated to advancing cutting-edge silicon IP and nurturing local talent. Cadence actively partners with MeitY, AICTE, IITs, and over 400 universities to build a strong chip-design talent pipeline. It also supports startups through initiatives like Chips to Startup (C2S). Through advanced EDA tools and India-led innovations in AI-driven and chiplet-based design, Cadence is helping advance India’s semiconductor mission while accelerating global innovation.


Breakthrough in D-band Wireless: Anritsu and VTT Demonstrate World-Leading Transmitarray-Based High-Speed Connectivity

Thu, 01/08/2026 - 07:49

Anritsu and VTT Technical Research Centre of Finland have demonstrated a major advance in D-band wireless communications by validating a beam-steering transmit array antenna system using advanced test equipment. The achievement confirms the feasibility of stable, high-capacity wireless links for next-generation backhaul, industrial, defence and future 6G networks.

Using Anritsu’s precision test equipment and VTT’s steerable transmitarray antenna, the teams achieved high-speed wireless links across the 110–170 GHz D-band. Link performance and beam-steering behaviour were assessed under realistic over-the-air (OTA) conditions using wideband modulated signals up to 8 GHz bandwidth. This system-level characterisation, from signal generation to OTA performance, confirmed multi-gigabit data rates in the tens-of-Gbps range, including 20 Gbps over 1 m and reliable operation up to 7 m, setting a new benchmark for D-band connectivity.
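As a quick sanity check on these figures, 20 Gbps carried in 8 GHz of modulated bandwidth corresponds to a spectral efficiency of 2.5 bit/s/Hz, comfortably achievable with moderate-order modulation plus coding overhead:

```python
# Spectral-efficiency check using the figures reported above.
DATA_RATE_BPS = 20e9   # 20 Gbps over 1 m, as reported
BANDWIDTH_HZ = 8e9     # up to 8 GHz of modulated bandwidth

spectral_efficiency = DATA_RATE_BPS / BANDWIDTH_HZ
print(f"Spectral efficiency: {spectral_efficiency:.2f} bit/s/Hz")  # 2.50
```

The modest efficiency requirement shows the headline data rate comes primarily from the enormous D-band bandwidth rather than from aggressive modulation, which is what makes such links practical at 110-170 GHz.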

The demonstration features a lightweight, scalable transmitarray antenna developed by VTT, incorporating advanced phase-shifting elements and vector-modulator MMICs. Its electronically steerable design provides rapid, precise beam control without mechanical movement, maintaining signal strength under changing conditions. Supported by Anritsu’s state-of-the-art test equipment, the results reflect a proven, instrumentation-grade measurement approach that ensures reliability and scalability for future deployments.

“Anritsu is proud to collaborate with VTT to advance the practical use of D-band wireless technology. Together, we have validated performance levels that bring high-frequency wireless links closer to real-world deployment,” said Jonathan Borrill, CTO, Test & Measurement, Anritsu.

“This milestone shows how strategic partnerships turn deep tech into a competitive advantage. By combining VTT’s steerable transmitarray expertise with Anritsu’s precise instrumentation‑grade validation, we shorten adoption cycles and scale D‑band from the lab to live networks — creating growth opportunities across critical infrastructure, manufacturing, defence, 6G and beyond,” said Tauno Vähä‑Heikkilä, Director, Strategic Partnerships, VTT.

Anritsu and VTT will now engage with industry partners to evaluate use cases and prepare the technology for upcoming field trials and deployments, marking a landmark step toward realising the potential of D-band wireless for next-generation networks.

The post Breakthrough in D-band Wireless: Anritsu and VTT Demonstrate World-Leading Transmitarray-Based High-Speed Connectivity appeared first on ELE Times.

Redefining Edge Computing: How the STM32V8 18nm Node Outperforms Legacy 40nm MCUs

Wed, 01/07/2026 - 12:10

STMicroelectronics held a virtual media briefing, hosted by Patrick Aidoune, General Manager, General Purpose MCU Division at ST, on November 17, 2025. The briefing was held before their flagship event, the STM32 Summit, where they launched STM32V8, a new generation of STM32 microcontrollers.

STMicroelectronics recently introduced the STM32V8, a new-generation microcontroller in the STM32 family. Built on an innovative 18nm FD-SOI process with embedded phase-change memory (PCM), it is the first microcontroller in the world to combine FD-SOI and embedded PCM on a sub-20nm node.

FD-SOI Technology

FD-SOI is a silicon technology, co-developed by ST, that has driven innovation in aerospace and automotive applications. The 18nm process, co-developed with Samsung Foundry, provides a cost-competitive leap in both performance and power consumption.

FD-SOI technology provides strong robustness against ionising particles and reliable operation in harsh environments, making it particularly suitable for the intense radiation exposure of Earth-orbit systems. FD-SOI also reduces static power consumption and allows operation at a lower supply voltage, while withstanding harsh industrial environments as well.

Key Features

STM32V8’s Arm Cortex-M85 core, combined with the 18nm process, gives it a clock speed of up to 800MHz, making it the most powerful STM32 ever shipped. It also embeds up to 4 Mbytes of user memory in a dual-bank configuration, allowing bank swapping for seamless code updates.
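
The dual-bank update mechanism described above can be sketched as a small state model. The class and method names here are illustrative only, not ST's actual flash API: the key idea is that new firmware is programmed into the inactive bank while the application keeps running, then the banks are swapped atomically.

```python
# Conceptual model of dual-bank flash with bank swapping.
# Names are hypothetical, for illustration only (not ST's API).
class DualBankFlash:
    def __init__(self):
        self.banks = {"A": "fw_v1", "B": None}
        self.active = "A"          # bank mapped at the boot address

    def inactive(self) -> str:
        return "B" if self.active == "A" else "A"

    def program_inactive(self, image: str) -> None:
        # Application keeps running from the active bank while programming
        self.banks[self.inactive()] = image

    def swap_and_reset(self) -> str:
        # Atomically remap the banks; the next boot runs the new image
        self.active = self.inactive()
        return self.banks[self.active]

flash = DualBankFlash()
flash.program_inactive("fw_v2")
assert flash.swap_and_reset() == "fw_v2"
# The old image stays intact in the now-inactive bank, enabling rollback
assert flash.banks[flash.inactive()] == "fw_v1"
```

This A/B pattern is what makes field updates "seamless": if the new image fails to boot, the device can swap back to the proven bank.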

Designed with developers’ needs in mind, the STM32V8 provides more compute headroom, along with more security and improved efficiency. Compared with ST’s 40nm process node carrying the same features, the STM32V8 brings improved performance, higher density, and better power efficiency.

Industrial Applications

This new microcontroller is a multipurpose system to benefit several industries:

  • Factory Automation and Robotics
  • Audio Applications
  • Smart Cities and Buildings
  • Energy Management Systems
  • Healthcare and Biosensing
  • Transportation (ebikes)

Achievements

ST’s new microcontroller has been selected by SpaceX for its high-speed connectivity system in the Starlink Satellite System.

“The successful deployment of the Starlink mini laser system in space, which uses ST’s STM32V8 microcontroller, marks a significant milestone in advancing high-speed connectivity across the Starlink network. The STM32V8’s high computing performance and integration of large embedded memory and digital features were critical in meeting our demanding real-time processing requirements, while providing a higher level of reliability and robustness to the Low Earth Orbit environment, thanks to the 18nm FD-SOI technology. We look forward to integrating the STM32V8 into other products and leveraging its capabilities for next-generation advanced applications,” said Michael Nicolls, Vice President, Starlink Engineering at SpaceX.

STM32V8, like its predecessors, is expected to draw significant benefit from ST’s edge AI ecosystem, which continues to expand. The STM32V8 is currently in early access for selected customers, with availability to key OEMs in the first quarter of 2026 and broader availability to follow.

Apart from unveiling the new generation microcontroller, ST also announced the expansion of its STM32 AI Model Zoo, which is part of the comprehensive ST Edge AI Suite of tools. The STM32 AI Model Zoo has more than 140 models from 60 model families for vision, audio, and sensing AI applications at the edge, making it the largest MCU-optimised library of its kind.

The AI Model Zoo has been designed with the requirements of both data scientists and embedded systems engineers in mind: models accurate enough to be useful that also fit within their energy and memory constraints.

The STM32 AI Model Zoo is the richest in the industry: it offers not only a wide range of models, but also scripts to easily retrain models, evaluate accuracy, and deploy on boards. ST has also introduced native support for PyTorch models, complementing its existing support for the TensorFlow and Keras AI frameworks and the LiteRT and ONNX formats, and giving developers additional flexibility in their development workflow. ST is also introducing more than 30 new families of models that can use the same deployment pipeline. Many of these models have already been quantised and pruned, offering significant memory-size and inference-time optimisations while preserving accuracy.

Additionally, they also announced the release of STM32 Sidekick, their new AI agent on the ST Community, available 24/7. This new AI agent is trained on official STM32 documentation (datasheets, reference manuals, user manuals, application notes, wiki entries, and community knowledge base articles) to help users locate relevant technical data, obtain concise summaries of complex topics, and discover insights and documents. Alongside, they announced STM32WL3R, a version of their STM32WL3 tailored for remote control applications supporting the 315 MHz band. The STM32WL3R is a sub-GHz wireless microcontroller with an ultra-low-power radio.

~ Shreya Bansal, Sub-Editor

The post Redefining Edge Computing: How the STM32V8 18nm Node Outperforms Legacy 40nm MCUs appeared first on ELE Times.

“‘Bharat’ will become a major player in entire electronics stack…”, Predicts Union Minister, Ashwini Vaishnaw

Wed, 01/07/2026 - 11:17

Union Electronics and IT Minister Ashwini Vaishnaw predicted that ‘Bharat’ will become a major player in the entire electronics stack, in terms of design, manufacturing, operating system, applications, materials, and equipment.

In an X post, the Union Minister highlighted a major milestone for Prime Minister Narendra Modi’s ‘Make in India’ initiative and the drive to make India a major producer economy: Apple shipped $50 billion worth of mobile phones in 2025.

“Electronics production has increased six times in the last 11 years. And electronics exports have grown 8 times under PM Modi’s focused leadership. This progress has propelled electronics products among the top three exported items,” Vaishnaw noted.

He further noted that 46 component manufacturing projects, along with laptop, server, and hearable manufacturers, have been added to the ecosystem, making electronics manufacturing a major driver of the manufacturing economy.

“Four semiconductor plants will start commercial production this year. Total jobs in electronics manufacturing are now 25 lakh, with many factories employing more than 5,000 employees in a single location. Some plants employ as many as 40,000 employees in a single location,” the minister informed, adding that “this is just the beginning”.

Last week, the industry welcomed the approval of 22 new proposals under the third tranche of the Electronics Components Manufacturing Scheme (ECMS) by the government, saying that it marks a decisive inflexion point in India’s journey towards deep manufacturing and the creation of globally competitive Indian champions in electronics components.

With this, the total number of ECMS-approved projects rises to 46, taking cumulative approved investments to over Rs 54,500 crore. Earlier tranches saw seven projects worth Rs 5,532 crore approved on October 22 and 17 projects amounting to Rs 7,172 crore on November 17. The rapid scale-up across tranches underscores the strong industry response and the growing confidence in India’s components manufacturing vision.

According to the IT Ministry, the 22 projects approved in the third tranche are expected to generate production worth Rs 2,58,152 crore and create 33,791 direct jobs.

The post “‘Bharat’ will become a major player in entire electronics stack…”, Predicts Union Minister, Ashwini Vaishnaw appeared first on ELE Times.

NVIDIA’s Jetson T4000 for Lightweight & Stable Edge AI Unveiled by EDOM

Wed, 01/07/2026 - 08:51

EDOM Technology announced the introduction of the NVIDIA Jetson T4000 edge AI module, addressing the growing demand from system integrators, equipment manufacturers, and enterprise customers for balanced performance, power efficiency, and deployment flexibility. With powerful inference capability and a lightweight design, NVIDIA Jetson T4000 enables faster implementation of practical physical AI applications.

Powered by NVIDIA Blackwell architecture, NVIDIA Jetson T4000 supports Transformer Engine and Multi-Instance GPU (MIG) technologies. The module integrates a 12-core Arm Neoverse-V3AE CPU, three 25GbE network interfaces, and a wide range of I/O options, making it well-suited for low-latency, multi-sensor, and real-time computing requirements. In addition, Jetson T4000 features a third-generation programmable vision accelerator (PVA), dual encoders and decoders, and an optical flow accelerator. These dedicated hardware engines allow stable AI inference even under constrained compute and power budgets, making the platform particularly suitable for mid-range models and real-time edge applications.

For system integrators (SIs), the modular architecture of Jetson T4000, combined with NVIDIA’s mature software ecosystem, enables rapid integration of vision, sensing, and control systems. This significantly shortens development and validation cycles while improving project delivery efficiency, especially for multi-site and scalable edge AI deployments.

For equipment manufacturers, Jetson T4000’s compact form factor and low-power design allow flexible integration into a wide range of end devices, including advanced robotics, industrial equipment, smart terminals, machine vision systems, and edge controllers. These capabilities help manufacturers bring stable AI inference into products with limited space and power budgets, accelerating intelligent product upgrades.

Enterprise users can deploy Jetson T4000 across diverse scenarios such as smart factories, smart retail, security, and edge sensor data processing. By performing inference and data pre-processing at the edge, organisations can reduce system latency, lower cloud workloads, and improve overall operational efficiency—while maintaining system stability and deployment flexibility.

In robotics and automation applications, Jetson T4000 features low power consumption, high-speed I/O and a compact footprint, making it an ideal platform for small mobile robots, educational robots, and autonomous inspection systems, delivering efficient and reliable AI computing for a wide range of automation use cases.

NVIDIA Jetson product lineup spans from lightweight to high-performance modules, including Jetson T4000 and T5000, addressing diverse requirements ranging from compact edge devices and industrial control systems to higher-performance inference applications. With NVIDIA’s comprehensive AI development tools and SDKs, developers can rapidly port models, optimise inference performance, and seamlessly integrate AI capabilities into existing system architectures.

Beyond supplying Jetson T4000 modules, EDOM Technology leverages its extensive ecosystem of partners across chips, modules, system integration, and application development. Based on the specific development stages and requirements of system integrators, equipment manufacturers, and enterprise customers, EDOM provides end-to-end support—from early-stage planning and technical consulting to ecosystem enablement. By sharing ecosystem expertise and practical experience, EDOM helps both existing customers and new entrants to the edge AI domain quickly build application capabilities and deploy edge AI solutions tailored to real-world scenarios.

The post NVIDIA’s Jetson T4000 for Lightweight & Stable Edge AI Unveiled by EDOM appeared first on ELE Times.

Anritsu to Bring the Future of Electrification Testing at CES 2026

Wed, 01/07/2026 - 08:24

Anritsu Corporation will exhibit the RZ-X2-100K-HG Battery Cycler and Emulation Test System, planned for sale in the North American market as an evaluation solution for eMobility, at CES 2026 (Consumer Electronics Show), one of the world’s largest technology exhibitions, held in Las Vegas, USA, from January 6 to 9, 2026.

The launch of the RZ-X2-100K-HG in the North American market represents the first step in the global expansion efforts of TAKASAGO, LTD., which holds a significant share in the domestic EV development market, and it is an important measure looking ahead to future global market growth.

At CES 2026, a concept exhibition will showcase the Power HIL evaluation system combining the RZ-X2-100K-HG with dSPACE’s HIL simulator, demonstrating a new direction for the EV evaluation process.

Additionally, the power measurement solutions from DEWETRON, which joined the Anritsu Group in October 2025, will also be exhibited. Using a three-phase motor performance evaluation demonstration, we will present example applications.

About the RZ-X2-100K-HG

The RZ-X2-100K-HG is a test system developed by TAKASAGO, LTD. of the Anritsu Group, equipped with functions for charge-discharge testing and battery emulation that support high voltage and large current. It is a model based on the RZ-X2-100K-H, which has a proven track record in Japan, adapted to comply with the United States safety standards and input power specifications. This system is expected to be used for testing the performance, durability, and safety of automotive batteries and powertrain devices in North America.

About Power HIL

Power HIL (Power Hardware-in-the-Loop) is an extended simulation technology that combines virtual and real elements by adding a “real power supply function” to HIL (Hardware-in-the-Loop). Power HIL creates a virtual vehicle environment with real power, reproducing EV driving tests and charging tests compatible with multiple charging standards under conditions close to reality. This allows for high-precision and efficient evaluation of battery performance, safety, and charging compatibility without using an actual vehicle.
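
The closed loop described above can be sketched in a few lines of Python. The battery parameters and the simple OCV-minus-I·R model here are assumptions for illustration only, not TAKASAGO's implementation: a virtual drive cycle demands current, the emulator reproduces the terminal voltage a real pack would show, and the simulated state of charge is updated each step.

```python
# Toy power-HIL loop: virtual vehicle model + emulated battery terminal.
# All parameter values are illustrative assumptions.
def battery_emulator(soc: float, current_a: float,
                     ocv_v: float = 400.0, r_int_ohm: float = 0.05) -> float:
    """Terminal voltage the real power stage would reproduce (OCV - I*R)."""
    return ocv_v * (0.8 + 0.2 * soc) - current_a * r_int_ohm

def run_hil(drive_cycle_a, soc=1.0, capacity_ah=60.0, dt_s=1.0):
    log = []
    for i_a in drive_cycle_a:
        v = battery_emulator(soc, i_a)                 # command the power stage
        soc = max(0.0, soc - i_a * dt_s / 3600.0 / capacity_ah)  # coulomb count
        log.append((i_a, round(v, 1), round(soc, 4)))
    return log

trace = run_hil([100.0, 200.0, 50.0])  # current demand per 1 s step, in amps
```

In a real Power HIL system the `battery_emulator` setpoint would drive actual high-voltage hardware such as the RZ-X2-100K-HG, while the vehicle model runs on the HIL simulator.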

Terminology Explanation

[*] Battery Emulation Test System

A technology that simulates the behaviour of real batteries (voltage, current, internal resistance, etc.) using a power supply device to evaluate how in-vehicle equipment operates.

The post Anritsu to Bring the Future of Electrification Testing at CES 2026 appeared first on ELE Times.

Keysight’s Software Solution for Reliable AI Deployment in Safety-Critical Environments

Wed, 01/07/2026 - 08:02

Keysight Technologies, Inc. introduced Keysight AI Software Integrity Builder, a new software solution designed to transform how AI-enabled systems are validated and maintained to ensure trustworthiness. As regulatory scrutiny increases and AI development becomes increasingly complex, the solution delivers transparent, adaptable, and data-driven AI assurance for safety-critical environments such as automotive.

AI systems operate as complex, dynamic entities, yet their internal decision processes often remain opaque. This lack of transparency creates significant challenges for industries, such as automotive, that must demonstrate safety, reliability, and regulatory compliance. Developers struggle to diagnose dataset or model limitations, while emerging standards, such as ISO/PAS 8800 for automotive and the EU AI Act, mandate explainability and validation without prescribing clear methods. Fragmented toolchains further complicate engineering workflows and heighten the risk of conformance gaps.

Keysight AI Software Integrity Builder introduces a unified, lifecycle-based framework that answers the critical question: “What is happening inside the AI system, and how do I ensure it behaves safely in deployment?” The solution equips engineering teams with the evidence needed for regulatory conformance and enables continuous improvement of AI models. Unlike fragmented toolchains that address isolated aspects of AI testing, Keysight’s integrated approach spans dataset analysis, model validation, real-world inference testing, and continuous monitoring.

Core capabilities of Keysight AI Software Integrity Builder include:

  • Dataset Analysis: Analyses data quality using statistical methods to uncover biases, gaps, and inconsistencies that may affect model performance.
  • Model-Based Validation: Explains model decisions and uncovers hidden correlations, enabling developers to understand the patterns and limitations of an AI system.
  • Inference-Based Testing: Evaluates how models behave under real-world conditions, detects deviations from training behaviour, and recommends improvements for future iterations.
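
As a rough illustration of the statistical dataset screening described in the first bullet (this is a generic sketch, not Keysight's actual tooling), a minimal check for class imbalance and feature ranges on a labelled dataset might look like:

```python
# Minimal dataset screening: class balance and per-feature value ranges.
# Illustrative only; real AI-assurance tools go far beyond this.
from collections import Counter

def screen_dataset(samples):
    """samples: list of (feature_tuple, label) pairs."""
    labels = Counter(lbl for _, lbl in samples)
    total = sum(labels.values())
    n_features = len(samples[0][0])
    ranges = [(min(s[0][i] for s in samples),
               max(s[0][i] for s in samples)) for i in range(n_features)]
    return {"class_shares": {k: v / total for k, v in labels.items()},
            "majority_share": max(labels.values()) / total,
            "feature_ranges": ranges}

data = [((0.1, 5.0), "car"), ((0.2, 4.8), "car"),
        ((0.9, 1.2), "car"), ((0.3, 3.3), "pedestrian")]
report = screen_dataset(data)
# A majority share close to 1.0 flags an imbalance the model may silently learn
```

Checks like these catch the "biases, gaps, and inconsistencies" before they propagate into a trained model, which is exactly where lifecycle-based assurance starts.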

While open-source tools and vendor solutions typically address only isolated aspects of AI testing, Keysight closes the gap between training and deployment. The solution not only validates what a model has learned, but also how it performs in operational scenarios — an essential requirement for high-risk applications such as autonomous driving.

Thomas Goetzl, Vice President and General Manager of Keysight’s Automotive & Energy Solutions, said: “AI assurance and functional safety of AI in vehicles are becoming critical challenges. Standards and regulatory frameworks define the objectives, but not the path to achieving a reliable and trustworthy AI deployment. By combining our deep expertise in test and measurement with advanced AI validation capabilities, Keysight provides customers with the tools to build trustworthy AI systems backed by safety evidence and aligned with regulatory requirements.”

With AI Software Integrity Builder, Keysight empowers engineering teams to move from fragmented testing to a unified AI assurance strategy, enabling them to deploy AI systems that are not only performant but also transparent, auditable, and compliant by design.

The post Keysight’s Software Solution for Reliable AI Deployment in Safety-Critical Environments appeared first on ELE Times.

Molecular Beam Epitaxy (MBE) Growth of GaAs-Based Devices

Wed, 01/07/2026 - 06:45

Courtesy: Orbit & Skyline

In the semiconductor ecosystem, we are familiar with the chips that go into our devices. Of course, they do not start as chips but are made into the familiar form once the process is complete. It is easy to imagine how to arrive at that end in silicon-based technology, but things are far more interesting in the III-V world. Here, the III-V film must first be produced using a thin-film deposition method. This film forms the bedrock of the device, so its quality is critical: minimal defects, the highest possible mobility, and a growing list of demands from advancing technology have made this step extremely important today.

In this blog, we will cover how Molecular Beam Epitaxy (MBE) enables the growth of GaAs-based devices, its history, advantages, challenges, and the wide range of optoelectronic applications it supports. Looking to optimise thin-film growth or improve device yield? Explore our Semiconductor FAB Solutions for end-to-end support across Equipment, Process, and Material Supply.

What Is Molecular Beam Epitaxy (MBE)?

Molecular Beam Epitaxy (MBE) is a well-known thin-film growth technique developed in the 1960s. Using ultra-high vacuum (UHV) conditions, it grows high-purity thin films with atomic-level control over the thickness and doping concentration of the layers. This provides excellent control to tune device properties and, in the case of III–V films, bandgap engineering. Such sought-after features make MBE widely renowned for producing the best-quality films, which currently lead device performance in applications such as LEDs, solar cells, sensors, detectors, and power electronics.
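
To get a feel for the atomic-level control described above, a back-of-envelope calculation helps. Assuming a typical MBE growth rate of ~1 μm/h and taking one GaAs monolayer as half the 0.565 nm lattice constant (standard values, stated here as assumptions), each monolayer takes on the order of a second to deposit, which is slow enough for effusion-cell shutters to act between individual layers:

```python
# How long does one GaAs monolayer take at a typical MBE growth rate?
growth_rate_nm_s = 1000.0 / 3600.0   # ~1 um/h expressed in nm/s
monolayer_nm = 0.56533 / 2           # half the GaAs lattice constant
t_ml = monolayer_nm / growth_rate_nm_s
print(f"{t_ml:.2f} s per monolayer")
```

A deposition cadence of roughly one monolayer per second is what makes the atomic-level thickness and doping control of MBE practical.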

However, its major drawbacks include high costs and slow growth rates, limiting large-scale industry adoption. Need support with MBE tool installation, calibration, or fab floor setup? Our Global Field Engineering and Fab Facility Solutions teams can help.

A Brief History of MBE Technology

The concept of Molecular Beam Epitaxy was first introduced by K.G. Günther in a 1958 publication. Even though his films, deposited on glass, were not epitaxial, John Davey and Titus Pankey expanded his ideas and in 1968 demonstrated the now-familiar MBE process for depositing GaAs epitaxial films on single-crystal GaAs substrates.

The technology reached its mature form with Arthur and Cho in the late 1960s, who monitored the MBE growth process in situ using Reflection High-Energy Electron Diffraction (RHEED). If you work with legacy MBE platforms or require upgrade support, our Legacy Tool Management Services ensure continuity and extended tool life.

Why GaAs? The First Semiconductor Grown by MBE

The first semiconductor material to be grown using MBE, gallium arsenide, or GaAs for short, is one of the leading III-V semiconductors in high-performance optoelectronics such as solar cells, photodetectors, and lasers. Thanks to properties such as its direct band gap of 1.43 eV, high mobility, high absorption coefficient, and radiation hardness, it finds use in sophisticated applications such as space photovoltaics as well as infrared detectors and next-generation quantum devices.
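
The 1.43 eV band gap quoted above maps directly onto a near-infrared absorption/emission edge via λ = hc/E_g, which is part of why GaAs suits infrared detection and photovoltaics:

```python
# Convert the GaAs band gap to its optical absorption/emission edge.
h_evs = 4.135667696e-15    # Planck constant, eV*s
c_ms = 2.99792458e8        # speed of light, m/s
e_g_ev = 1.43              # GaAs direct band gap, eV
lam_nm = h_evs * c_ms / e_g_ev * 1e9
print(f"{lam_nm:.0f} nm")  # in the near-infrared, just beyond visible red
```

The result, roughly 870 nm, sits just past the visible range, matching GaAs's role in near-IR lasers, detectors, and high-efficiency solar cells.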

Since GaAs was the first material grown by MBE, it is far better understood, with decades of device research behind it. The efficiency of heterojunction solar cells grown on substrates such as Ge reached 15-20% in the 1980s. Although today's GaAs cell efficiencies lead the industry, using MBE to grow GaAs solar cells comes with its own set of challenges and advantages:

  • Throughput and cost: Commercially, it is not as viable as some of the other vapor phase growth techniques since it is a slow and expensive process. Growth rates of MBE films are usually in the range of ~1.0 μm/h, which are far behind the CVD achieved rates of up to ~200 μm/h.
  • Thickness and uniformity: Solar cell structures require absorber layers with thicknesses of the order of several microns. Maintaining uniformity over such a range is not trivial.
  • Defect management: Thin films are beset with a range of defects such as dislocations, antisite defects, point defects, and background impurities. Optoelectronic devices suffer heavily from defects, as carrier lifetimes drop and, consequently, so do open-circuit voltage and fill factor. Substrate quality, interface sharpness, and growth conditions must therefore all be carefully controlled.
  • Doping and alloy incorporation: MBE is one of the best techniques for doping and alloying, especially with III-V compounds. Band gap engineering to expand the available bandwidth for solar absorption is one of the most important advantages of using MBE. When making multi-junction or tandem cells, however, issues such as phase separation, strain, and exact control of each layer's composition remain challenging.
  • Surface and interface quality: Interfacial strain is one of the major causes of loss of carriers due to recombination. When making solar cell stacks, there are multiple layers where interfaces are required, such as window layers, tunnel junctions, and passivation layers. MBE is excellent at providing abrupt interfaces due to its fast shutter speed and ultra-high vacuum conditions, resulting in high-performance devices.
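
Using the rates quoted in the first bullet, the throughput gap is easy to quantify for a solar-cell absorber a few microns thick (the 3 μm figure is an illustrative assumption within the "several microns" range mentioned above):

```python
# Growth time for a 3-um absorber layer at the rates quoted above.
thickness_um = 3.0
t_mbe_h = thickness_um / 1.0      # MBE at ~1 um/h
t_cvd_h = thickness_um / 200.0    # CVD at up to ~200 um/h
print(f"MBE: {t_mbe_h:.1f} h vs CVD: {t_cvd_h * 60:.1f} min")
```

Hours per wafer for MBE versus under a minute for fast CVD is the core of the commercial-viability argument made above.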

Many of MBE's advantages are offset by these challenges when it comes to industrial applications. This has led industry to favour higher-throughput methods, such as MOVPE/MOCVD, alongside hybrid approaches that combine techniques to improve efficiency.

Other Optoelectronic Devices Grown Using MBE

In III-V materials and beyond, MBE has excelled in growing device-quality layers of several other types of optoelectronic structures:

  • LASERs and VCSELs: One of the stacks most frequently grown by MBE is the AlGaAs/GaAs heterostructure for quantum well lasers and vertical cavity surface emitting lasers (VCSELs). AlGaAs/GaAs multi-quantum well VCSELs with distributed Bragg reflectors (DBRs) have been successfully demonstrated with low threshold currents, continuous-wave operation at elevated temperatures, and GHz modulation speeds.
  • Quantum Cascade LASERs (QCLs): The same GaAs/AlGaAs heterostructures have been fabricated for application in mid-infrared QCLs using MBE. Its specialty in producing abrupt interfaces and controlled doping is used in growth methods to reduce interface roughness and improve performance.
  • Infrared Photodetectors: A leading IR photodetector currently is HgCdTe (MCT), which has been grown using MBE on GaAs substrates. GaSb-based nBn detectors are also grown using superlattices of InAs/GaSb, which reduces lattice mismatch due to buffer layers.
  • High-mobility 2D electron gas heterostructures: One of the most important discoveries of the last couple of decades has been the 2-dimensional electron gas (2DEG), which has led to applications such as the high-electron-mobility transistor (HEMT). AlGaAs/GaAs heterostructures support the formation of this 2DEG, where the purity of the source material is critical. MBE-grown films have shown mobilities as high as ~35 × 10⁶ cm²/V·s.

Conclusion

MBE is a complex, slow process that has traditionally been confined largely to R&D labs. However, the quality of the deposited layers is unparalleled and has helped improve existing devices and discover new ones. In the last decade or so, industry has partially adopted MBE for its ability to deliver cutting-edge device quality. Mass adoption remains unlikely, though, given the small number of wafers that can be grown at a time, so MBE's role stays centred on discovering the next generation of devices.

The post Molecular Beam Epitaxy (MBE) Growth of GaAs-Based Devices appeared first on ELE Times.
