ELE Times
Qualitas Semiconductor Picks Anritsu’s Vector Network Analyzer for High-Speed Interconnect Signal Integrity Verification
Qualitas Semiconductor Co., Ltd., a leading developer specialising in PHY IP solutions for high-speed interconnects, has adopted Anritsu’s ShockLine 4-Port Performance Vector Network Analyzer (VNA) MS46524B to enhance signal integrity verification for its high-speed interface IP development. Qualitas has significantly improved the quality and reliability of its IP solutions by establishing a verification environment that enables highly accurate, repeatable signal-integrity evaluations across the entire system, including PHY IP.
Qualitas develops high-speed interface IP solutions, including SerDes PHY IP, PCI Express® PHY IP, UCIe interconnect solutions, and Ethernet PHY IP, and it collaborates with global customers across advanced semiconductor markets in fields such as AI, data centres, automotive, and mobile systems.
As semiconductor interface technologies continue to increase data transmission speeds, system-level verification that includes the characteristics of the entire interconnect channel, such as the PCB, package, and socket, has become increasingly important, rather than just the performance of the chips. In high-speed signal environments, factors such as transmission loss, reflection, and crosstalk affect signal integrity, making precise measurement-based verification environments essential.
To address these requirements, Qualitas has adopted Anritsu’s ShockLine MS46524B to analyse the characteristics of high-speed interconnect channels and quantitatively verify signal integrity, based on differential S-parameter analysis and time-domain reflectometry (TDR) measurements.
The ShockLine MS46524B provides high-frequency measurement stability, support for mixed probe and coaxial cable environments, and high-resolution TDR measurement capabilities, enabling precise analysis of subtle impedance variations occurring in the package and PCB structures. Through this approach, Qualitas has established a verification environment that is close to the conditions of real systems, enabling it to provide the reliability required in the PHY IP development process.
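The post-processing behind such TDR measurements can be illustrated with a minimal sketch. The snippet below is illustrative only and not tied to Anritsu's instrument software; it assumes uniformly spaced S11 samples starting near DC and a 50 Ω reference impedance. It converts a reflection coefficient into an impedance and derives a low-pass step TDR profile:

```python
import numpy as np

def gamma_to_impedance(gamma, z0=50.0):
    """Convert a reflection coefficient to impedance (ohms)."""
    return z0 * (1 + gamma) / (1 - gamma)

def tdr_step_impedance(s11, z0=50.0):
    """Low-pass step TDR: inverse-FFT uniformly spaced S11 samples
    (from near DC) and integrate the impulse response over time."""
    # Hermitian extension so the time-domain response is real-valued
    spectrum = np.concatenate([s11, np.conj(s11[-2:0:-1])])
    impulse = np.fft.ifft(spectrum).real
    step = np.cumsum(impulse)  # step-response reflection vs. time
    return gamma_to_impedance(step, z0)
```

A flat 50 Ω line yields Γ = 0 everywhere, while Γ = 1/3 maps to 100 Ω; it is deviations of this kind that TDR uses to localise impedance discontinuities in packages and PCBs.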
Anritsu highlights the importance of signal integrity verification solutions and measurement technologies that are required in next-generation interface technology environments, and it plans to support semiconductor and high-speed interface development companies in building more efficient verification environments.
The post Qualitas Semiconductor Picks Anritsu’s Vector Network Analyzer for High-Speed Interconnect Signal Integrity Verification appeared first on ELE Times.
The Tomorrow for AI and India’s edge advantage
Courtesy: Qualcomm
Artificial intelligence is entering its next chapter, one that reshapes not only how computing works but how people experience technology in their daily lives. Intelligence is no longer just a feature; it is being built directly into devices and woven into systems and experiences so that it becomes ambient and always present.
In this next chapter, AI runs everywhere — across smartphones, PCs, wearables, cars, industrial machines, robots and connected infrastructure. These systems will understand context and the physical world around them and adjust in real time to our needs. Intelligence will operate quietly alongside us — working in the background, responding instantly, adapting continuously and ultimately expanding what’s possible in productivity, creativity and learning.
This marks a fundamental shift in how humans interact with technology. The interfaces we’ve relied on for decades — screens, apps, menus — will matter less as intelligence becomes more natural and intuitive. We won’t have to tell our devices what to do because they will understand our intent, anticipate what we want and act on our behalf. Some devices will increasingly see what we see, hear what we hear, understand what we read and write. In many cases, AI will feel less like a tool and more like a trusted assistant — always available, always learning and designed around us.
As agentic AI assistants become more common, they will become your personal companion in your home, the workplace and your car — everywhere you go. For example, in India, smart glasses are already being used to make digital payments using voice commands or by scanning a QR code. In your car, your AI assistant will not only help you find the fastest route but can also manage your errands, make recommendations or answer questions about places of interest.
In industries, edge AI boxes are being used to improve decision-making and operational efficiency, including monitoring and optimising production processes in a manufacturing facility or better managing inventory in a retail store.
Making these experiences real requires a new architecture — one where intelligence is distributed seamlessly across every computing device from cloud to edge. Training and deep reasoning will continue to scale in the cloud. At the same time, immediacy, perception and personalisation, as well as ambient and physical AI, will happen on devices — closer to people and things.
India’s size, diversity, economic growth and digital momentum make it one of the most important countries for AI’s next chapter. With hundreds of millions of connected users, a vibrant developer ecosystem, and deep expertise across engineering and software, India is not simply adopting AI — it is helping define how AI can work for the world.
In agriculture, AI can help enable precision farming and natural resource optimisation. Access to healthcare can be improved by on-device screening and diagnostics, which extend care into clinics, homes, and remote communities. AI will realise the vision of smart cities with intelligent traffic management, smart infrastructure, security, and more. And AI-enabled devices, such as PCs, smartphones, and wearables, will make education more personalised and support continuous, lifelong learning. These are not abstract ideas; they are practical pathways to broader participation in the AI economy.
To realise this future, democratising access to AI is essential. That requires competitive and efficient data centre technology, powerful on-device intelligence, and advanced connectivity working together. It also requires an ecosystem approach — bringing together industry, startups, academia, and policymakers to ensure innovation is trusted, accessible, and sustainable.
At Qualcomm, we’ve been building toward this future — advancing high-performance, power-efficient, and heterogeneous computing, AI, and wireless technologies that enable intelligence everywhere. But no single company can define AI’s next chapter alone. Progress will come through collaboration, from aligning technology with real-world needs, and from ensuring the benefits of AI extend beyond early adopters to entire societies.
With the right choices, India can help shape a future where intelligence empowers people, accelerates opportunity, and reaches every community — setting an example the world can follow.
Posifa Technologies Introduces PVC4001-C MEMS Pirani Vacuum Transducer for Wide-Range Vacuum Measurement
Posifa Technologies has introduced its new PVC4001-C MEMS Pirani vacuum transducer, the latest device in the company’s PVC4000 series. Designed for cost-effective OEM integration, the transducer combines a MEMS thermal conduction sensor, measurement electronics, a microprocessor, and an onboard barometric pressure sensor in an ultra-compact PCB assembly with a connector-terminated wire harness.
Based on Posifa’s second-generation MEMS thermal conduction chip, the PVC4001-C operates on the principle that the thermal conductivity of gases is proportional to vacuum pressure. Its electronics and microprocessor amplify and digitise the sensor signal and provide output via an I²C interface. For applications requiring calibrated output, users can enter up to 10 pairs of calibration points, which are used by a built-in piecewise linearization algorithm.
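The built-in piecewise linearization described above behaves like standard linear interpolation between user-supplied calibration pairs. A minimal sketch follows; the calibration table values are invented for illustration and are not PVC4001-C data:

```python
import numpy as np

# Hypothetical calibration table: (raw sensor reading, true pressure in Torr).
# The device accepts up to 10 such pairs; these values are illustrative only.
cal_raw = np.array([120.0, 400.0, 900.0, 1600.0, 2500.0])
cal_torr = np.array([0.001, 0.01, 0.1, 1.0, 10.0])

def calibrated_pressure(raw):
    """Piecewise-linear interpolation between calibration points,
    clamped to the first/last pair outside the calibrated range."""
    return float(np.interp(raw, cal_raw, cal_torr))
```

A reading halfway between two calibration points maps to the midpoint of the corresponding pressures, which is exactly the behaviour a piecewise-linear scheme guarantees.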
The PVC4001-C is designed to deliver stable performance across changing operating conditions. A built-in temperature sensor supports a temperature compensation algorithm to offset changes in thermal conductivity caused by ambient temperature variation. In addition, a pulsed excitation scheme — in which the sensor is heated for about 100 ms and then turned off for one second — helps minimise drift due to self-heating in high vacuum, while also reducing power consumption for battery-powered instruments.
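The power saving from the pulsed excitation scheme follows directly from its duty cycle. A back-of-the-envelope sketch using the timing quoted above, assuming a fixed heater power while on:

```python
heat_on_s, heat_off_s = 0.1, 1.0  # ~100 ms on, 1 s off, per the text

# Fraction of time the heater is actually driven (~9.1%)
duty_cycle = heat_on_s / (heat_on_s + heat_off_s)

def average_power_mw(on_power_mw):
    """Average heater power under pulsed excitation, relative to
    continuously driving the sensor at the same on-power."""
    return on_power_mw * duty_cycle
```

Roughly a 9% duty cycle means an order-of-magnitude reduction in average heater power, which is where the battery-life benefit comes from.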
The device provides a measurement range from 0.001 Torr to 900 Torr (1.3×10⁻⁴ kPa to 120 kPa) with a response time of less than 200 ms. Because Pirani vacuum sensors typically lose resolution above 10 Torr, the PVC4001-C adds an onboard barometric pressure sensor that supports measurement from 10 Torr to 760 Torr with 5 % accuracy across that extended range. This combination makes the device especially well-suited for portable digital vacuum gauges and for leak detection in closed systems maintained under primary vacuum, including vacuum-insulated panels.
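The Torr and kPa figures quoted above can be cross-checked with the standard conversion (1 atm = 760 Torr = 101.325 kPa):

```python
TORR_TO_KPA = 101.325 / 760.0  # ≈ 0.13332 kPa per Torr

def torr_to_kpa(p_torr):
    """Convert pressure from Torr to kilopascals."""
    return p_torr * TORR_TO_KPA
```

900 Torr works out to roughly 120 kPa and 0.001 Torr to roughly 1.3×10⁻⁴ kPa, matching the stated range.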
Additional features of the PVC4001-C include low power consumption, resistance to contamination, and an operating temperature range of -25 °C to +85 °C.
STMicroelectronics to support AI infrastructure demand with high-volume production of its industry-leading silicon photonics platform
STMicroelectronics is now entering high-volume production for its state-of-the-art silicon photonics-based PIC100 platform used by hyperscalers for optical interconnect for data centres and AI clusters. The 800G and 1.6T PIC100 transceivers enable higher bandwidth, lower latency, and greater energy efficiency as AI workloads surge.
“Following the announcement of its new silicon photonics technology in February 2025, ST is now entering high-volume production for leading hyperscalers. The combination of our technology platform and the superior scale of our 300 mm manufacturing lines gives us a unique competitive advantage to support the AI infrastructure super-cycle,” said Fabio Gualandris, President, Quality, Manufacturing & Technology, STMicroelectronics. “Looking ahead, we are planning and executing on capacity expansions to enable more than quadrupling of production by 2027. This fast expansion is fully underpinned by customers’ long-term capacity reservation commitments.”
“The data centre pluggable optics market continues to expand strongly, reaching $15.5 billion in 2025. We expect the market to grow at a compound annual growth rate (CAGR) of 17% from 2025 through 2030, surpassing $34 billion by the end of the forecast period. In addition, co-packaged optics (CPO) will emerge as a rapidly growing segment, contributing more than $9 billion in revenue by 2030. Over the same period, the share of transceivers incorporating silicon photonics modulators is projected to increase from 43% in 2025 to 76% by 2030,” said Dr. Vladimir Kozlov, CEO and Chief Analyst at LightCounting. “ST’s leading silicon photonics platform, coupled with its aggressive capacity expansion plan, illustrates its capabilities to provide hyperscalers with secure, long-term supply, predictable quality, and manufacturing resilience.”
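The LightCounting figures are internally consistent: applying the quoted 17% CAGR to the 2025 base reproduces the 2030 projection. A quick check, using only the values from the quote above:

```python
def cagr_projection(start_billions, cagr, years):
    """Compound a starting market size at a fixed annual growth rate."""
    return start_billions * (1 + cagr) ** years

# $15.5B in 2025 growing at 17% CAGR over five years ≈ $34B by 2030,
# matching the "surpassing $34 billion" figure in the quote.
```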
Upcoming PIC100 TSV Platform Technology
AI infrastructure is experiencing unprecedented scaling, with cloud-optical interconnect performance becoming a critical bottleneck. Drawing on years of silicon photonics innovation, ST’s PIC100 platform provides state-of-the-art optical performance, including best-in-class silicon and silicon nitride waveguide losses (respectively as low as 0.4 and 0.5 dB/cm), advanced modulator and photodiode performance, as well as an innovative edge coupling technology.
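To put the quoted waveguide-loss figures in context: optical loss in dB accumulates linearly with length, and every 3 dB roughly halves the optical power. A small sketch (the 5 cm routing length is an illustrative assumption, not an ST figure):

```python
def waveguide_loss_db(loss_db_per_cm, length_cm):
    """Total insertion loss of a waveguide of the given length."""
    return loss_db_per_cm * length_cm

def transmitted_fraction(loss_db):
    """Fraction of optical power surviving a given loss in dB."""
    return 10 ** (-loss_db / 10.0)

# At 0.4 dB/cm, a 5 cm silicon route loses 2 dB, i.e. ~63% of the
# light still arrives - the kind of budget that matters at 1.6T rates.
```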
In parallel with high-volume PIC100 production, ST is planning to introduce the next step in its silicon photonics technology roadmap: the PIC100 TSV, a new and unique platform that integrates through-silicon via (TSV) technology to further increase optical connectivity density, module integration, and system-level thermal efficiency. The PIC100 TSV platform is designed to support future generations of Near Packaged Optics (NPO) and co-packaged optics (CPO), aligning with hyperscalers’ long-term migration paths toward deeper optical–electronic integration for scale up.
Impact of AI on Computing and the Criticality of Testing
Courtesy: Teradyne
Artificial intelligence (AI) is transforming industries, enhancing our daily lives, and improving efficiency and decision-making, but its need for computing power is growing at an astonishing rate, doubling every three months (Figure 1). To maintain this pace, the semiconductor industry is moving beyond traditional chip development – it has entered the era of heterogeneous chiplets in advanced integrated packages.
(Figure 1: The Growth of Compute Requirements. Source: https://openai.com/index/ai-and-compute/)
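"Doubling every three months" compounds startlingly fast; the growth factor over any horizon follows directly from the doubling period:

```python
def compute_growth_factor(months, doubling_period_months=3.0):
    """Growth factor implied by a fixed doubling period
    (the three-month figure comes from the article)."""
    return 2.0 ** (months / doubling_period_months)

# One year of quarterly doubling is a 16x increase in compute demand.
```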
The Rise of Chiplets
Chip companies like NVIDIA and AMD are rewriting the rules, designing architectures that combine multiple CPUs and GPUs in a single advanced package along with high bandwidth memory (HBM). AI workloads require rapid access to vast amounts of data, made possible by integrating HBMs. This approach, combining two, four, or more processing cores with HBM stacks, requires a complex, advanced packaging technique developed by TSMC called CoWoS® – Chip-on-Wafer-on-Substrate, typically referred to as 2.5D/3D packaging (Figure 2). These packages can exceed 100 mm x 100 mm in size and will require wafer interposer probers that can handle large CoW modules/stacks and also meet significantly larger thermal dissipation requirements, as discussed below.
(Figure 2: 2.5D/3D packaging architecture, Source: Teradyne)
To maintain peak performance, these heterogeneously integrated advanced packaging devices need proprietary high-speed interfaces to communicate efficiently. All these requirements contribute to an increasingly complex semiconductor landscape.
Testing Becomes More Complex in Step with Chip Advancements
As package complexity increases, so does the need for more deliberate test strategies. In the transition from monolithic dies to chiplets, long-established test methods are not always directly transferable because test IP is now distributed across multiple dies and, in some cases, across different design teams or companies. This fragmentation requires a clearer definition of what must be tested at each stage—die, bridge, interposer, substrate, and stack—and which standards or techniques apply to each scope.
Packing multiple dies into a single chiplet-based system is a major advancement, but it raises a key challenge: verifying that every component functions correctly before final assembly. Multi-die packages require rigorous screening to avoid yield loss, and it is not enough to qualify only the dies. Interposers, substrates, bridges, and stacks also need to be validated, using test techniques appropriate to each layer. The industry is thus moving into “known-good-everything”, from known-good-die (KGD) to known-good-interposer (KGI), to known-good-CoW (KG-CoW), and so on. (Figure 3)
(Figure 3: Possible test insertions to ensure KGD and KG-CoW. Source: Teradyne)
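The economics behind "known-good-everything" are easy to see with a compound-yield sketch. Without pre-assembly screening, and assuming independent defects, package yield is simply the product of the individual die yields (the die counts and yields below are illustrative assumptions, not industry data):

```python
def assembled_yield(die_yields):
    """Probability that every die in a multi-die package is good,
    assuming independent defects and no pre-assembly screening."""
    y = 1.0
    for d in die_yields:
        y *= d
    return y

# Eight chiplets at 99% yield each leave only ~92% of assembled
# packages fully functional, which is why screening every layer
# (die, interposer, CoW stack) pays for itself.
```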
High-speed communication between chiplets introduces an additional layer of complexity. Dies must exchange data at extreme speeds – such as during GPU-to-HBM transfers – yet their physical and electrical interfaces vary by manufacturer. Open standards like Universal Chiplet Interconnect Express (UCIe) continue to evolve, but chiplet interfaces still differ widely. To support this diversity, test solutions increasingly need interface IP that behaves like the device’s native protocol to avoid electrical overstress or probe-related damage. Some suppliers now offer UCIe-compliant PHY and controller IP that device makers can integrate, enabling automated test equipment (ATE) platforms to test high-speed links safely and consistently.
(Figure 4: Chip-level bare cooling, Source: Teradyne)
Manufacturers and test operators must also pay close attention to thermal management. More processing power means more heat dissipation issues, requiring advanced cooling methods – perhaps even liquid cooling inside the package itself (Figure 4). More die in the package means more connections, and thus, more resources are needed in the tester. More transistors mean higher power supply current requirements, more power supply instruments, and an increased set of thermal challenges that demand innovative cooling solutions and advanced adaptive thermal control (ATC) strategies.
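Adaptive thermal control in a tester is, at its core, a feedback loop that modulates cooling against measured die temperature. A deliberately simplified proportional sketch follows; the setpoint and gain are invented for illustration, and production ATC loops are far more sophisticated:

```python
def atc_step(die_temp_c, drive, setpoint_c=85.0, gain=0.05):
    """One iteration of a toy proportional cooling loop: raise the
    normalised coolant drive (0..1) when the die runs above the
    setpoint, relax it when the die runs below."""
    error = die_temp_c - setpoint_c
    return min(1.0, max(0.0, drive + gain * error))
```

Called once per sampling interval, the loop nudges the cooling drive toward whatever level holds the die at the setpoint, saturating at full drive when a workload spike outruns the cooling headroom.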
Lastly, manufacturing test operations must consider the interposer, a physical interface layer that electrically connects a chip to a substrate or other active component. For example, a multilayer or 2.5D package includes multiple dies on an interposer assembled on top of a substrate. That interposer functions as a miniature silicon board, routing signals between the dies above it and the substrate below. It is critical that the interposer itself is verified as a known good interposer (KGI) to ensure adequate yields for advanced packages.
The Future of AI and Semiconductor Testing
There has been an uptick in industry recognition that semiconductor testing is an integral part of today’s chiplet and advanced packaging trend. As this unfolds, AI computing will continue its pace of unprecedented evolution, relying on semiconductor testing to fill a crucial role in ensuring quality devices get to market in the shortened timelines today’s market demands. Semiconductor test will remain the unsung hero of AI-driven computing, steadily enabling the next wave of technological breakthroughs.
Advanced Aerospace Materials: Driving Hypersonic, Stealth and High-Efficiency Defence Systems
By Shreya Bansal, Sub-Editor
The aerospace and defence sector is undergoing a materials revolution driven by escalating performance demands that conventional alloys and composites can no longer meet. As military and commercial aircraft push toward hypersonic speeds, extended operational ranges, and stealth capabilities, traditional materials face fundamental limitations in thermal tolerance, weight-to-strength ratios, and electromagnetic properties. This article examines the current landscape of advanced materials, including ceramic matrix composites, high-entropy alloys, graphene-enhanced structures, and metamaterials that are displacing legacy materials in critical applications.
The transition isn’t merely about incremental improvement; it represents a paradigm shift in how aerospace components are designed and manufactured. Engineers are abandoning materials that have dominated the industry for decades in favour of solutions that enable previously impossible capabilities: jet engines operating at temperatures that would melt nickel super alloys, airframe structures that self-repair micro-damage, and radar-absorbing surfaces engineered at the molecular level. Through expert interviews with materials scientists and aerospace engineers, this article explores why this transition is happening now, what technical and economic factors are driving adoption, and which materials are positioned to define the next generation of aerospace and defence systems.
The Materials Revolution Redefining Aerospace and Defence
The aerospace and defence sector is undergoing a profound materials transformation. As aircraft and defence systems push toward hypersonic velocities, extended mission endurance, lower radar signatures, and improved fuel efficiency, conventional materials such as aluminium alloys, titanium, and nickel-based superalloys are reaching their performance ceilings.
The next generation of aerospace capability is no longer driven solely by aerodynamics or propulsion; it is increasingly defined by materials science. From ceramic matrix composites (CMCs) that withstand temperatures exceeding traditional alloy limits to high-entropy alloys (HEAs) engineered at the atomic scale, advanced materials are enabling systems once considered technologically unattainable.
Vern Benson, Northrop Grumman Technical Fellow (2026):
“Previous generations of aircraft were mostly made of metal, while newer versions are approximately 50% composite materials, with an even higher percentage for military aircraft… In order to compete with metals, we have to be highly automated in our manufacturing process.”
For decades, aerospace engineering relied on aluminium alloys, titanium, and nickel-based superalloys as the backbone of structural and propulsion systems. These materials delivered an optimal balance of strength, weight, corrosion resistance, and manufacturability for subsonic and supersonic flight regimes. However, modern mission profiles, ranging from hypersonic vehicles and advanced fighter aircraft to long-endurance unmanned systems, have exposed their inherent limitations: aluminium alloys lose strength rapidly at elevated temperatures, titanium becomes cost-prohibitive at scale, and nickel superalloys are approaching their thermal operating thresholds in next-generation turbine engines. As propulsion systems push for higher combustion temperatures to improve efficiency, and as airframes demand lighter structures to extend range and payload capacity, traditional materials no longer provide sufficient performance margins. These realities have shifted material selection from a secondary design consideration to a primary performance enabler in aerospace and defence programmes.
Baba Kalyani, Chairman & MD, Bharat Forge (2025):
“At a time of heightened geopolitical and supply-chain uncertainty… the emphasis is firmly on upgrading platforms, systems and technologies… we must emerge as an ‘Innovation driven Economy’ that manufactures and exports best-in-class products.”
Hypersonic systems and next-generation combat aircraft demand materials that can endure sustained extreme heat while maintaining structural integrity. The tipping point for abandoning traditional materials often occurs during thermal modelling or fatigue simulations, where safety margins erode under mission stress scenarios.
Engineers are increasingly identifying material limitations during early-stage digital twin simulations, triggering a search for advanced alternatives before physical prototyping even begins.
Ceramic Matrix Composites and High-Entropy Alloys: From Research to Reality
Advanced materials such as ceramic matrix composites (CMCs) and high-entropy alloys (HEAs) are redefining what aerospace components can endure. CMCs, for example, can withstand temperatures hundreds of degrees higher than conventional superalloys while weighing significantly less. Their integration into turbine shrouds and combustor liners has enabled engines developed by companies such as GE Aerospace to operate at higher efficiencies while reducing cooling requirements. Similarly, HEAs, engineered with multiple principal elements rather than one dominant base metal, exhibit exceptional strength, oxidation resistance, and thermal stability. These materials are transitioning from laboratory experimentation to production-grade deployment, driven by advances in additive manufacturing and precision powder metallurgy. What was once considered experimental is now becoming operational, marking a fundamental shift in aerospace material qualification and deployment timelines.
Five years ago, advanced materials comprised a niche percentage of aerospace structural systems. Today, in next-generation engines and airframes, they are becoming integral rather than experimental.
Lifecycle Economics: Justifying Advanced Material Investment
While advanced materials often command higher upfront costs, their value becomes evident when evaluated over the full lifecycle of an aerospace platform. Lighter airframes reduce fuel consumption, directly lowering operational expenditures and emissions. Higher-temperature engine materials increase thermodynamic efficiency and reduce maintenance frequency. Extended component durability minimises downtime and replacement cycles, factors critical for both commercial airlines and military operators. Leaders at organisations such as RTX Corporation have emphasised that procurement decisions are increasingly based on total ownership cost rather than acquisition price alone. Moreover, as production volumes scale and supply chains mature, the cost gap between conventional and advanced materials continues to narrow. When assessed across decades of operational service, advanced materials frequently offer compelling economic advantages despite their initial premium.
Fuel savings, extended maintenance intervals, and enhanced durability can offset initial procurement premiums.
Yet supply chain maturity remains a gating factor. Dependency on rare earth elements, specialised ceramic fibres, or advanced powder metallurgy inputs introduces geopolitical and availability risks.
Government R&D partnerships, particularly in the U.S., Europe, and Asia, have played a critical role in de-risking early-stage adoption by subsidising demonstration programs and pilot manufacturing lines.
Manufacturing Evolution: Scaling Advanced Materials Safely
The adoption of advanced materials requires a parallel transformation in manufacturing methodologies and quality assurance frameworks. Unlike traditional metal alloys that rely on well-established forging and casting processes, materials such as CMCs and ultra-high-temperature ceramics demand specialised fabrication techniques, including fibre weaving, matrix infiltration, and controlled sintering. Additive manufacturing has further expanded possibilities, enabling complex geometries that were previously impossible to machine. However, these innovations introduce new challenges in inspection, certification, and scalability. Companies like Boeing are investing heavily in digital twins, advanced non-destructive evaluation methods, and automated production systems to ensure reliability and repeatability. Successfully integrating advanced materials into flight-ready systems requires not only material innovation but also synchronised advances in manufacturing science, workforce training, and regulatory certification standards.
Dr. G. Satheesh Reddy, Former Chairman, DRDO (2025):
“While precision manufacturing of the stealth airframe is critical, the speciality materials, including RAM (Radar Absorbent Material), have already been developed… Manufacturing the aircraft itself won’t be a challenge if we work entirely from the provided designs and material lists with disciplined scaling.”
CMC production, for instance, requires precision fibre weaving and matrix infiltration processes not used in conventional metallurgy. Quality assurance shifts from visual inspection and ultrasonic testing to advanced microstructural evaluation.
Organisations must also retrain engineers and technicians in new fabrication techniques, creating workforce transition challenges alongside technological ones.
Strategic and Geopolitical Implications of Materials Leadership
In aerospace and defence, materials innovation is no longer purely a technical matter; it is a strategic asset. Advanced composites enable stealth characteristics through radar absorption and electromagnetic manipulation. Ultra-high-temperature ceramics support hypersonic weapon systems capable of withstanding extreme aerodynamic heating. Lightweight structures extend operational reach and payload flexibility, enhancing mission effectiveness. As global competition intensifies, control over critical materials, precursor chemicals, and advanced manufacturing capabilities has become a national security priority. Organisations such as Airbus and major U.S. defence contractors increasingly view materials science as a pillar of competitive differentiation. Nations investing heavily in advanced material research are positioning themselves to define the performance boundaries of next-generation aircraft, spacecraft, and defence platforms.
From a defence perspective, material science leadership is increasingly viewed as a domain of geopolitical competition similar to semiconductors and AI. Control over precursor materials, advanced manufacturing capabilities, and a proprietary composite formulation carries strategic weight.
Dependence on foreign-sourced rare materials or advanced fibres presents vulnerabilities that governments are actively addressing through domestic production incentives.
Looking Ahead – The Next Frontier in Aerospace Materials
Looking ahead, the next frontier in aerospace materials will be shaped by computational design, artificial intelligence, and sustainability imperatives. AI-driven materials discovery platforms can simulate atomic interactions and predict performance characteristics before physical prototypes are fabricated, dramatically accelerating development cycles. At the same time, environmental considerations are becoming central to material selection. Recyclable composites, bio-derived resins, and low-emission manufacturing processes are gaining traction as aerospace companies commit to long-term decarbonisation goals. Future “smart materials” may combine self-healing properties, adaptive electromagnetic behaviour, and extreme thermal resistance within a single multifunctional structure. As aerospace systems grow more complex and mission demands intensify, the integration of digital engineering and advanced materials science will define the trajectory of innovation for decades to come.
Future “dream materials” would combine:
- Extreme thermal resistance
- Structural strength
- Lightweight properties
- Electromagnetic tunability
- Sustainability
Such materials would enable sustained hypersonic travel, near-zero-emission propulsion systems, and adaptive stealth platforms.
Disruptions from Wide Bandgap Continue Turbulence
Courtesy: Avnet
When we experience major shifts in the technology landscape, we should expect disruption and turbulence. OEMs, suppliers and distributors are coming together to navigate rough waters.
The shift from conventional silicon toward wide bandgap (WBG) substrates for power applications, predominantly Silicon Carbide (SiC) and Gallium Nitride (GaN), isn’t without its challenges. The supply chain for WBG solutions is now maturing, which means we’re seeing consolidation, acquisition and even some attrition.
This turbulence, while a cause for concern, is predictable and surmountable. Confidence in WBG technology remains high, but the market conditions and geopolitical risks must be acknowledged as contributing factors.
New applications are exhibiting polarised conditions. Demand for electric vehicles (EVs) has plateaued in some regions, while the thirst for AI data centres seems unquenchable. Both are served by WBG devices. The technological benefits of WBG are clear, but challenges around its use can still be a barrier. Recently, we asked SiC market leader onsemi for its thoughts and recommendations.
With a fully vertically integrated supply chain, from crystal growth to final packaging, onsemi is addressing scalability, quality control and cost efficiencies in-house. While it sees yield and the transition from 150mm to 200mm wafers as ongoing challenges, demand volatility can cause inventory fluctuations. This is a key contributor to the turbulence we’re seeing, but it’s part of the natural evolution of new technology.
Partnerships with companies including Vitesco and Magna, and long-term agreements are part of onsemi’s strategy to secure supply and fund capacity expansion. Its recent acquisition of Qorvo’s SiC JFET technology and the United Silicon Carbide subsidiary further strengthened its position.
The impact of increased demand
As demand increases, higher volumes will drive down per-unit pricing, which justifies the investments being made in larger wafer sizes and new fabs. Onsemi currently operates fabs and packaging facilities in multiple regions, helping to avoid supply chain disruptions related to geopolitical tensions and export controls.
As capacity expands due to demand, suppliers will see higher returns on their investments. This will support the industry’s virtuous cycle of investment in capacity. As the technology matures, production volumes and yields improve, leading to greater process stability. onsemi also contributes to JEDEC (Joint Electron Device Engineering Council), the Automotive Electronics Council (AEC) and the European Centre for Power Electronics’ Working Group “Automotive Qualification Guidelines” (AQG) to define SiC standards, which promote stability and interoperability between suppliers.
Turbulence will give way to stability
WBG is a transformative technology. It brings benefits to critical applications, including EVs, AI data centers and renewable energy. The strategic investments being made by suppliers, such as onsemi, demonstrate the industry’s commitment.
Pairing the right technology with key applications is crucial. For example, data centre power and circuit protection are applications where SiC JFETs are differentiated by their low on-resistance and high switching frequencies, surpassing GaN and even SiC MOSFETs.
Market conditions and the challenges of adopting new technology are contributing to the turbulence we’re seeing in the market. But demand continues to grow, paving the way to stability.
Securing Humanoid Robotics with TPM-Anchored FPGAs
Courtesy: Lattice Semiconductor
The humanoid robotics market is rapidly transitioning from experimental prototypes to early commercial deployments. What once belonged in research labs is now appearing in factories and controlled service environments, driven by advances in sensing, actuation, and edge intelligence.
Humanoids represent what many describe as the ultimate expression of physical AI. “The market is still early, but it is moving quickly,” says Eric Sivertson, VP of the Security Business at Lattice. “We are seeing humanoid robotics transition from research and pilot stages into early commercial deployments. Humanoids represent the ultimate instantiation of ‘physical AI’, but adoption is not yet widespread — although the momentum is real.”
With companies such as Tesla, Boston Dynamics, Figure AI, and Agility Robotics pushing forward, analysts anticipate a significant inflexion point around 2026–2027.
But scaling humanoids to production-grade systems demands more than innovation. It requires trust.
Reliability Before Scale
Industrial buyers expect 99.99% uptime, seamless integration into human environments, and safe 24/7 operation. Yet developers continue to face challenges in reliability, dexterity, battery life, and real-world autonomy.
Sivertson notes that many prototypes still fall short of industrial expectations: “Utility is one of the most common concerns. Because the technology is still early, many prototypes and pilots fall short of industrial-grade expectations such as 99.99 per cent uptime, continuous 24/7 operation, and safe integration into human environments.”
Among the most persistent technical gaps are dexterity, failure-free operation, and energy efficiency. These challenges push developers toward hardware architectures capable of deterministic, low-latency control.
Determinism at the Hardware Layer
Humanoid systems require dense sensor fusion and sub-microsecond motor control loops. Variable latency is not acceptable when stabilising balance or controlling fine manipulation.
“Unlike instruction-based processors constrained by pipelines, FPGAs implement functionality directly in hardware. That enables critical operations to execute predictably within a single clock cycle,” explains Sivertson. This deterministic execution becomes foundational when motors, joints, and actuators must respond instantly and predictably under all conditions — including fault scenarios.
Security Is Now a First-Order Requirement
As humanoids move into human-shared spaces, cybersecurity becomes inseparable from physical safety.
Sivertson is unequivocal: “With humanoids, it’s impossible to separate safety and security.” A compromised humanoid is not simply a system failure — it can cause physical harm, exfiltrate enterprise data, violate privacy, or coordinate attacks across shared vulnerabilities.
He warns against applying legacy models: “It’s very easy to fall into a square peg in a round hole design fallacy. Humanoids are not traditional IT, industrial robotics, or consumer IoT — even though they incorporate elements of all three.”
Perhaps the most dangerous mindset is postponing security. “Security cannot be bolted on at the end. It must be considered throughout the design process and across the full lifecycle. The idea of ‘functionality first, harden later’ usually introduces more risk than intended.”
TPM-Anchored FPGAs and Hardware Root of Trust
To establish trust at scale, developers are increasingly adopting TPM-anchored FPGA architectures aligned with Trusted Computing Group specifications.
These architectures provide:
- Authenticated boot
- Per-node cryptographic identity
- Secure firmware updates
- Runtime attestation
- Hardware Root of Trust (HRoT)
Sivertson emphasises that TPM alone is not enough in dynamic humanoid systems: “In static systems, TPM-based attestation can sometimes be sufficient. In humanoids, it is only the beginning of an attestation-to-cyber-resilience chain. Active, real-time monitoring and immediate mitigation are also required.”
By combining TPM-based identity with FPGA-enforced deterministic control, developers can embed strong protections at the robot’s most critical physical interfaces. Lock-step redundancy, parallel fail-safe mechanisms, and real-time validation of attack surfaces further reduce cascading risks.
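The attestation chain described above rests on the TPM’s PCR-extend primitive: each boot stage measures the next component and folds that measurement into a platform configuration register before handing off control. A minimal Python sketch of the hash chain, assuming SHA-256 PCRs (the stage names are hypothetical, and a real TPM performs the extend in hardware):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new PCR = SHA-256(old PCR || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# Measured boot: each stage hashes the next image before executing it.
pcr = bytes(32)  # PCRs start at all zeros after reset
stages = [b"bootloader-v1.2", b"firmware-v3.4", b"control-app-v0.9"]
for stage in stages:
    pcr = pcr_extend(pcr, hashlib.sha256(stage).digest())

# A remote verifier that knows the expected images recomputes the chain;
# a tampered stage changes every subsequent PCR value, so the comparison fails.
expected = bytes(32)
for stage in stages:
    expected = pcr_extend(expected, hashlib.sha256(stage).digest())
assert pcr == expected
```

Because the extend operation is one-way and order-sensitive, the final PCR value commits to the entire boot sequence, which is what makes quote-based attestation meaningful.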
The Safety–Security Tension
One of the more subtle engineering challenges lies in the philosophical difference between safety and security systems.
“In a safety system, you monitor malfunctions and maintain a controlled course of action. In a secure system, if a breach occurs, the response is often to shut down or deny. While the monitoring mechanisms may be similar, the prescribed responses can be fundamentally opposed.” Designing humanoids requires setting clear precedence between these responses — without compromising either domain.
Building Trustworthy Physical AI
As humanoids evolve from pilots to scaled deployments, the competitive advantage will belong to platforms built on trusted foundations. Lattice Semiconductor positions its low-power, Root-of-Trust-enabled FPGAs at this intersection of determinism and embedded security — enabling developers to advance without sacrificing reliability or safety.
The potential of humanoids is enormous. But as Sivertson suggests, the responsibility is equally significant. The future of physical AI will not be defined solely by dexterity or autonomy — but by whether these machines can be trusted to operate safely in the real world.
Keysight Expands Digital‑Layer Error Performance Validation for High‑Speed 1.6T Interconnects in AI Data Centres
Keysight Technologies, Inc. introduced the Functional Interconnect Test Solutions (FITS) portfolio and FITS-8CH, the suite’s first product. FITS-8CH delivers digital-layer bit error ratio (BER) and forward error correction (FEC) performance validation for high-speed optical and copper interconnects used in network equipment and production network infrastructures.
As interconnect speeds increase and designs grow more complex, manufacturers of chips, optical and copper interconnects, and network equipment face mounting pressure to ensure reliability before products reach mass production and throughout the manufacturing process. Traditional physical-layer test tools play a vital role in validating electrical lanes against industry specifications, establishing a strong compliance baseline. Building on this foundation, system-level validation helps extend insight into the performance of fully integrated interconnects and operational sub-assemblies, including error behaviour in realistic environments.
Accurate assessment of real‑world system conditions is only possible when all of an interconnect’s electrical or optical lanes undergo high-speed error-performance validation. Without this testing, the risk of production delays or costly failures in the field increases. This includes validating error performance for high‑speed PAM4 electrical lanes operating at 53 Gb/s, 106 Gb/s, and 212 Gb/s, which underpin today’s 400GE, 800GE, and 1.6T Ethernet network architectures.
FITS-8CH addresses this system-level error performance gap by providing multiple-lane error performance validation at the digital layer, supporting PAM4 error performance assessment across all relevant electrical lane speeds and extending beyond physical-layer measurements. This enables reliable validation throughout the design, development, and manufacturing of high-speed interconnects for high-volume deployment in large-scale networks. The chassis also integrates with Keysight’s physical layer test solutions, expanding the number of applications and topologies it supports.
Built for reliability, scale, and manufacturing readiness, FITS‑8CH supports today’s network-testing demands, where even marginal error performance can impact large-scale deployments. Key benefits include:
- Multiple-lane BER and FEC Validation: Enables simultaneous, bi‑directional real-time testing on all eight transmit and eight receive channels, supporting PAM4 signalling speeds from 53 Gb/s to 212.5 Gb/s. Validating system‑level error performance using BER and FEC enables testing of complete optical and copper interconnect assemblies rather than isolated measurements at critical stages, including R&D, product development, in‑process manufacturing, end‑of‑line testing, and system‑level qualification. Using this approach, manufacturers can confidently release verified pre‑production designs to mass production and benchmark reliability under real‑world operating conditions.
- Flexible Channel Architecture: Two complementary channel groups — high‑drive outputs and chip‑to‑module (C2M) interfaces — support a broader range of electrical fixtures and interconnect topologies. This architecture gives teams greater flexibility to support more configurations of electrical fixtures, Ethernet interconnects, active cables, and silicon topologies without redesigning test setups or compromising signal fidelity.
- High‑Quality Signal Generation: IEEE P802.3dj‑compliant signal generation and excellent signal integrity performance, even under difficult conditions, provide clean, well‑controlled transmit signals required for accurate BER and FEC measurements at all supported channel speeds. By delivering signals that meet defined requirements, teams can evaluate error performance based on the true behaviour of the device or interconnect under test, rather than limitations introduced by the test environment. This is especially important in high‑speed, multiple-lane designs, where small signal variations can lead to borderline or misleading results.
- Automated Lane Tuning: Optimises PAM4 signal output performance with lane‑by‑lane tuning that automatically adjusts transmit tap settings and opens the electrical eye of the PAM4 signal for each lane. This improves measurement consistency and repeatability, reducing the risk of passing assemblies with marginal or borderline error performance.
- Early Detection of Manufacturing and Configuration Issues: Identifies problems such as mechanical misalignment, thermal failures, and non-optimised or incorrect digital signal processor (DSP) tap settings during in‑process or end‑of‑line testing—reducing the costly impact and likelihood of defective products reaching customers.
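The per-lane speeds above map onto the Ethernet rates by simple multiplication, and they also determine how long statistically meaningful BER testing takes per lane. A back-of-the-envelope sketch (the 1e-12 BER target and the rule-of-3 confidence bound are illustrative assumptions, not Keysight figures):

```python
# Nominal PAM4 per-lane rates; these include FEC/encoding overhead,
# which is why 8 lanes of 53.125 Gb/s carry a 400GE payload.
lane_rates_gbps = {"400GE": 53.125, "800GE": 106.25, "1.6T": 212.5}
lanes = 8
for name, rate in lane_rates_gbps.items():
    print(f"{name}: {lanes} lanes x {rate} Gb/s = {lanes * rate} Gb/s line rate")

# Demonstrating a BER below 1e-12 with ~95% confidence and zero observed
# errors requires about 3/BER transmitted bits (the "rule of 3").
target_ber = 1e-12
bits_needed = 3 / target_ber
seconds_per_lane = bits_needed / 212.5e9
print(f"~{seconds_per_lane:.1f} s of traffic per 212.5 Gb/s lane")
```

Multiplied across eight lanes and many assemblies, this is why simultaneous all-lane testing, rather than sequential single-lane measurement, matters for manufacturing throughput.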
Kenji Liao, High‑Speed Interconnect PM Director, UDE Corporation, said: “With FITS‑8CH, Keysight provides the digital‑layer error performance analysis we need to verify 1.6T AEC BER‑per‑lane requirements under realistic operating conditions. The ability to characterise lane‑level error behaviour across complete interconnect assemblies helps us identify margin issues earlier and maintain consistency as we transition designs into volume production. Integrating this solution into our development and manufacturing workflow strengthens our confidence that UDE’s high‑speed interconnects will meet the stringent performance targets our customers expect. The partnership between UDE and Keysight allows us to use this new solution to support error performance validation across development and manufacturing.”
Ram Periakaruppan, Vice President and General Manager, Network Test & Security Solutions, Keysight, said: “As validation requirements move up the stack from the physical layer, our customers increasingly need solutions that scale across development, manufacturing, and deployment. FITS‑8CH represents Keysight’s expansion into digital‑layer interconnect validation, combining years of deep measurement expertise with the global reach, field support, and portfolio continuity customers rely on for production environments, including AI data centres.
This is the first offering in our FITS portfolio, a new series of solutions designed to support error performance validation across the entire product lifecycle.”
CEA-Leti and NcodiN Collaborate on 300 mm Silicon Photonics for Bandwidth-Consuming AI Interconnects
CEA-Leti and NcodiN, a French deep-tech startup pioneering nanolaser-enabled photonic interconnects, announced a strategic collaboration to industrialise NcodiN’s optical interposer technology on a 300 mm integrated photonics process.
NcodiN, which received €16 million in seed financing last November, is developing optical interconnects designed to relieve a critical data-movement bottleneck limiting performance in next-generation semiconductors. The collaboration will accelerate the company’s proof-of-concept work into industrial-grade 300 mm processes—moving beyond copper interconnects and marking a major step toward scalable, in-package, long-reach optical links for future computing architectures and artificial intelligence (AI) chips.
As AI systems demand orders of magnitude increases in bandwidth and energy efficiency, the industry is shifting from copper to optical interconnects.
‘World’s Smallest Laser on Silicon’
NcodiN is building NConnect, the integrated optical interconnect platform powered by the world’s smallest laser on silicon—500× smaller than today’s industry-standard devices. The company’s nanolaser-enabled photonic interposers pave the way to ultra-dense integration (>5,000 nanolasers/mm²) and record-low energy operation (~0.1 pJ/bit). Building on CEA-Leti’s advanced photonics integration expertise, NcodiN is transitioning its nanolaser to a 300 mm silicon photonics platform. This is a foundational step toward scalable, wafer-level optical interconnects for high-end computing and AI applications.
“NcodiN’s nanolaser-enabled photonic interconnects overcome the long-standing bottleneck of bulky, inefficient photonic components that have prevented large-scale adoption,” said Francesco Manegatti, co-founder and CEO of NcodiN. “Our collaboration with CEA-Leti aims to demonstrate NConnect’s compatibility with 300 mm wafers, which is essential for commercial-scale production and cost-effective adoption in AI-centric processors and high-bandwidth computing systems.”
‘Turning Point for Optical Interconnects’
Sébastien Dauvé, CEO of CEA-Leti, said the partnership underscores the two parties’ shared commitment to enabling scalable photonic infrastructure capable of meeting tomorrow’s computing demands.
“Transitioning photonics to a 300 mm CMOS-compatible process is a turning point for optical interconnects that can finally be produced at the scale, cost, and reliability the AI industry requires,” he said. “This collaboration with NcodiN highlights a key part of CEA-Leti’s mission: transferring advanced semiconductor and microelectronics technologies to industry, where they serve a range of vital markets.”
How good are ultra-low bitrate speech codecs?
Courtesy: Rohde & Schwarz
Quality Evaluation of Speech Coding Technologies
A comprehensive quality test was conducted to evaluate the perceived quality of various speech coding technologies under realistic conditions. The study compared current mobile network codecs with traditional low-bitrate codecs and emerging AI-based ultra-low bitrate speech coding solutions.
In the test, a set of German speech samples spoken by various speakers was processed through each codec type. A controlled listening experiment was applied to assess overall speech quality with respect to the naturalness of reproduced speech, combined with typical transmission impairments such as packet loss and bandwidth constraints. The evaluation aimed to reflect real-world usage scenarios, including mobile calls, popular IP-based voice services, and speech transmission over satellite links.
To achieve statistically meaningful results, a formal listening test was conducted in a standardised acoustic environment following the ITU-T P.800 methodology using the Absolute Category Rating (ACR) approach. A total of 32 participants – men and women from various age groups – were invited to rate the speech samples. The test ensured balanced demographic representation and controlled conditions to obtain reliable subjective quality scores. Participants evaluated multiple samples per codec type, and the results were statistically analysed to identify significant differences in perceived quality.
Key categories included:
- Modern Mobile Codecs: Including EVS and AMR-WB, which are widely deployed in LTE and 5G networks. Additionally, OPUS (used in WhatsApp) and Satin (used in MS Teams) were considered under real transmission conditions. These codecs offer high fidelity and robustness, especially under variable network conditions.
- Legacy Low-Bitrate Codecs: Such as MELP and LPC-10, and the amateur radio codec Codec2, representing earlier generations of strong speech compression. These codecs were originally designed for extremely bandwidth-constrained environments and are still used in specialised applications.
- Ultra-Low Bitrate AI-Based Codecs: Leveraging deep learning models for end-to-end speech representation and reconstruction. The tested codecs operate in the bitrate range of approximately 600 bit/s to 3 kbit/s. For comparison, 600 bit/s is only one hundredth of the well-known ISDN transmission rate (64 kbit/s) and just one fortieth of the bitrate typically used in VoLTE (24 kbit/s).
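The bitrate comparisons above are easy to verify with plain arithmetic (rates taken directly from the text):

```python
isdn_bps = 64_000    # ISDN B-channel rate
volte_bps = 24_000   # typical VoLTE rate cited in the text
ai_codec_bps = 600   # lower end of the AI codec range

print(isdn_bps / ai_codec_bps)   # ~106.7, i.e. roughly one hundredth
print(volte_bps / ai_codec_bps)  # 40.0, exactly one fortieth
```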
Ultra-low bitrate codecs are of particular interest for use in satellite-based communication systems (e.g., Non-Terrestrial Networks, NTN) in Direct-to-Cell or Direct-to-Device mode (smartphones receive signals directly from satellites), where bandwidth is highly constrained, and latency is critical. They are also relevant in military and tactical communication scenarios, where efficient spectrum usage and resilience to transmission errors are essential.
Performance of AI-Based Codecs
The new AI-based codecs support 8 kHz wideband and 12 kHz super-wideband audio and demonstrate a significant leap in perceived speech quality and naturalness compared to classical low-bitrate codecs. Some AI-based solutions approached the performance level of high-quality codecs such as AMR-WB and EVS, making them promising candidates for future communication systems under strong bitrate constraints or high network load situations. The computational complexity of these codecs was not investigated in this study; however, some implementations introduce only a short delay that is acceptable for use in real-time communication.
These codecs unquestionably deliver speech that sounds natural and pleasant to the listener. However, they do not always reproduce all speaker-specific characteristics with full accuracy. For example, pitch and intonation may be slightly altered, and in some cases, initial phonemes or consonants may be replaced or smoothed. While this may be acceptable for everyday conversation, it can limit their applicability in scenarios requiring speaker identification, authentication, or mission-critical communication.
The following table shows some representative results of the listening experiment; the Mean Opinion Score (MOS) rates the subjectively perceived quality on a scale from 1 (bad) to 5 (excellent):

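For context, the MOS used in ITU-T P.800 ACR testing is simply the arithmetic mean of the category ratings, typically reported with a confidence interval. A sketch with invented ratings (the study’s actual scores appear in its results table):

```python
import statistics

# Hypothetical ACR ratings (1-5) from listeners for one codec condition;
# illustrative values only, not results from the study.
ratings = [4, 3, 4, 5, 3, 4, 4, 3, 5, 4, 4, 3]

mos = statistics.mean(ratings)
sd = statistics.stdev(ratings)
# 95% confidence interval for the mean (normal approximation)
ci95 = 1.96 * sd / (len(ratings) ** 0.5)
print(f"MOS = {mos:.2f} +/- {ci95:.2f}")
```

With 32 listeners rating multiple samples per codec, intervals like this are what allow the study to call differences between codecs statistically significant.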
The detailed results of this evaluation, including statistical analysis, codec performance rankings, and listener feedback, are presented at the ITU-T SG12 meeting in September 2025. These insights are expected to contribute to ongoing discussions around codec standardisation, the definition of “quality,” and its automated prediction, particularly in the context of future mobile and satellite communication systems.
NXP CoreRide Puts Automakers on Fast Path to 48 V Scalable Zonal Architectures
NXP Semiconductors introduced its NXP CoreRide Z248 zonal reference system – the semiconductor industry’s first pre-validated, design-ready zonal foundation that combines advanced 48 V energy distribution, deterministic data handling, functional safety, and real-time responsiveness. The hardware-software foundation is designed to optimise system performance, reduce system integration effort, shorten development cycles, and allow OEMs and Tier 1s to focus investment where it matters most. It sets a new benchmark for accelerating the journey from zonal architecture concepts to production‑ready implementations.
Built on NXP’s S32K5 microcontroller series, its integrated advanced MRAM technology unlocks ultra-fast, ultra-frequent over-the-air updates throughout the entire vehicle lifecycle. At the software level, the Z248 integrates a comprehensive pre-validated software stack that streamlines complex development of smart data energy network (SDEN) functionalities such as impedance, power and protection monitoring, intelligent data routing, AI‑enabled virtual sensing, diagnostics, and audio.
With its built-in, validated remote control protocol (RCP) stack, it supports the up-integration of end node functions and ECU consolidation to enable new cost-optimised vehicle architectures. It also addresses key challenges of 48 V zonal systems by managing energy conversion, distribution, and protection within a single, integrated architecture.
The Z248 is rigorously validated through thousands of system-level tests demonstrating outstanding low-power modes, fast boot and fast wake-up response. It is supported by a modern, collaborative continuous integration, continuous testing and continuous delivery (CI/CT/CD) development environment that allows significantly faster test loops with OEMs and tier 1s, shortening validation cycles.
Why it matters: Automakers are being asked to move faster, scale broader, and spend smarter – even as safe zonal consolidation, hybrid power systems, and AI-enabled features dramatically increase architecture complexity. NXP’s new CoreRide zonal reference system brings scalability to this rising architectural complexity. It reduces risk by helping OEMs and tier 1s accelerate development into production, and it eases the switch from legacy platforms while lowering total cost of ownership – freeing them from complex integration to put them on a path to production.
“As new E/E architectures redefine vehicle design, our focus is simple: give the automotive ecosystem the foundation to move faster and differentiate with confidence,” said Sébastien Clamagirand, SVP and General Manager, Automotive Systems & Platforms at NXP Semiconductors. “The NXP CoreRide zonal reference system Z248 delivers a performance-optimised, scalable 48 V foundation that intelligently fuses power, data and software, while dramatically simplifying system integration, reducing time to market, and enabling OEMs to focus on vehicle differentiation and long‑term value creation.”
More details: The Z248 zonal reference system is delivered with a complete Board Support Package (BSP) with pre-integrated software from the NXP CoreRide partner ecosystem, including GLIWA’s performance monitoring suite, Green Hills’ software compiler and Vector’s embedded software and tools. The full package undergoes extensive validation to help ensure optimised performance, while continuously improving processing efficiency and power consumption based on the primary use cases of a zonal ECU.
It’s a scalable, safe and secure hardware-software stack that adapts easily to different variants of SDV E/E architectures and integrates naturally with NXP’s broader system offering. It leverages technologies across computing, networking, power management and 48 V energy distribution, including NXP’s S32K566 zonal microcontroller featuring on-chip MRAM that significantly accelerates ECU programming times, both in factory settings and during over-the-air (OTA) updates.
The reference system also integrates 48‑volt‑capable power components such as eFuse, PMIC and DC‑DC converters, robust in‑vehicle networking through Ethernet PHY and CAN transceivers, and built-in audio support. In addition, it introduces a new concept for zonal I/O extension. Designed for broad applicability with housing and a wiring loom, the new NXP CoreRide Z248 zonal reference system can be deployed across ICE, hybrid and BEV platforms, supporting the industry’s move toward zonal processing and ECU consolidation.
Ecosystem Voices
Peter Gliwa, CEO and Founder of GLIWA
“NXP understood that the eco-system, the tooling around a new platform, is essential for its success. With our Analysis Suite T1 built into the NXP CoreRide Z248 zonal reference system, high efficiency, proper timing analysis and timing verification are very well addressed.”
Dan Mender, Vice President of Business Development at Green Hills Software
“Green Hills is proud to play a central role in NXP’s transformative reference solution strategy, which simplifies and accelerates production-focused automotive ECU development through pre-integrated hardware and software optimised for zonal automotive architectures. By leveraging Green Hills’ integrated software solutions, customers can develop high-quality, safety-critical applications with a minimal footprint and optimal performance, while significantly reducing time to deployment.”
Sam Yeh, Chairman of Inventec
“In response to the automotive E/E architecture trend toward zonal and centralised designs, Inventec is collaborating with NXP Semiconductors to support the advancement of next-generation zonal architectures. Through this collaboration, Inventec can provide hardware design and JDM support to OEMs as part of NXP’s zonal E/E architecture initiatives.”
Jochen Rein, Senior Vice President, Business Unit Software Platform at Vector
“The combination of the NXP CoreRide platform and Vector’s software foundation provides a robust basis for next‑generation zonal architectures. We enable our joint customers to reduce their time-to-market due to a pre-integrated and highly optimised software stack.” Vector contributes as an NXP CoreRide partner, providing pre‑integrated software and tools that help streamline development and ensure smooth integration within the zonal ECU architecture.
Microchip Helps Manufacturers Meet Cybersecurity Regulations, Expands Security Services in the Trust Platform
Everspin Launches New Generation of Unified Memory for Embedded Systems
Everspin Technologies, a leading developer and manufacturer of magnetoresistive random access memory (MRAM) persistent memory solutions, today announced the UNISYST MRAM family, a new generation of unified memory designed to fundamentally change how embedded systems store and access code and data.
“System designers are running into the physical and performance limits of NOR flash, especially as process nodes move below 40 nanometers and workloads become more demanding,” said Sanjeev Aggarwal, president and CEO of Everspin Technologies. “With UNISYST, we are extending our MRAM roadmap to higher densities while giving customers a practical way to start with PERSYST today and migrate to a code-and-data MRAM architecture as soon as it is available.”
UNISYST is a unified code-and-data MRAM architecture that bridges traditional configuration memory and higher-density persistent storage, extending MRAM into traditional NOR flash applications where superior performance, endurance and reliability are valued. Built as a natural extension of Everspin’s existing PERSYST MRAM platform, UNISYST gives customers a practical, simple migration path from today’s serial MRAM devices to higher-density unified memory without requiring changes to system architecture or software.
Everspin will initially offer the UNISYST family in densities ranging from 128 megabits to 2 gigabits, using a standard xSPI interface operating up to octal SPI at 200MHz. The devices are planned to feature AEC-Q100 Grade 1 qualification and minimum 10-year data retention at extreme temperatures, supporting demanding environments across automotive, aerospace, industrial and edge AI applications.
“As generative AI models move from the cloud to embedded systems, we’re suddenly dealing with assets that are tens or even hundreds of megabytes in size,” said Kwabena W. Agyeman, President and Co-founder of OpenMV. “Storing those models is only part of the challenge — updating them quickly during development and deployment is equally important. High-speed, non-volatile Everspin UNISYST MRAM changes what’s practical for edge AI systems by removing the write bottlenecks associated with traditional flash.”
UNISYST delivers high-bandwidth read and write speeds in a non-volatile memory device, enabling fast boot, rapid updates and predictable performance without the tradeoffs of traditional flash-based designs. By combining high-speed access with persistent storage, UNISYST supports software-defined systems that require frequent reconfiguration while maintaining data integrity across power cycles.
Everspin MRAM has been deployed in mission-critical storage applications for nearly two decades, valued for its endurance and reliability. UNISYST builds on Everspin’s proven MRAM foundation with capabilities designed to support more complex, software-defined systems:
- Code-and-data MRAM architecture designed as a next-generation alternative to other non-volatile memory
- Standard xSPI interface operating up to octal SPI at 200MHz
- Read bandwidth of up to 400 MB/s and write bandwidth of approximately 90 MB/s, over 400 times faster than NOR flash
- Write endurance up to 10 times higher than typical NOR
- AEC-Q100 Grade 1 qualification and minimum 10-year data retention for high-reliability designs
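The quoted 400 MB/s read figure lines up with the raw bus limit of an octal xSPI interface moving one byte per clock edge at 200 MHz in double data rate (DDR) mode. DDR operation is an assumption here, since the announcement does not state it; a quick sanity check:

```python
def xspi_peak_mbps(bus_width_bits: int, clock_mhz: int, ddr: bool = True) -> float:
    """Peak xSPI transfer rate in MB/s, ignoring command/address overhead."""
    edges_per_cycle = 2 if ddr else 1
    bits_per_second = bus_width_bits * edges_per_cycle * clock_mhz * 1_000_000
    return bits_per_second / 8 / 1_000_000

# Octal (8-bit) bus at 200 MHz in DDR mode: one byte per edge, two edges per cycle
peak = xspi_peak_mbps(8, 200)  # 400.0 MB/s
```

Sustained throughput is lower once command, address and dummy cycles are accounted for, which is why read bandwidth is quoted as “up to” the bus limit.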
UNISYST is aimed at applications where non-volatile memory must combine high bandwidth, high endurance and predictable behaviour over temperature and time. Target use cases include:
- AI at the edge: Fast AI weight updates, critical storage at the edge, local code-and-data storage for workloads that need fast boot, rapid reconfiguration and non-volatile operation close to the sensor, with the ability to execute in place, removing the need for multiple system memories
- Military and aerospace: Field-programmable gate array (FPGA) configuration and code storage for mission-critical systems, including low-Earth orbit satellites and other platforms that require frequent over-the-air updates
- Automotive: Control, logging and configuration memory in systems that must meet Grade 1 temperature requirements and long-term data retention
- Industrial and casino gaming: High-traffic data logging and configuration in environments that demand fast writes, long endurance and persistent storage
The launch of UNISYST represents a platform-level expansion of Everspin’s MRAM portfolio, extending the company’s role from a niche memory supplier to a mainstream memory player serving a multibillion-dollar market. By unifying code storage and data memory, Everspin is addressing the growing demands of software-defined systems that require faster boot times, frequent updates and predictable behaviour over long operating lifetimes.
The post Everspin Launches New Generation of Unified Memory for Embedded Systems appeared first on ELE Times.
TI’s microcontroller portfolio and software ecosystem expanded to enable edge AI in every device
Texas Instruments (TI) introduced two new microcontroller (MCU) families with edge artificial intelligence (AI) capabilities, supporting the company’s commitment to enabling edge AI across its entire embedded processing portfolio. The MSPM0G5187 and AM13Ex MCUs integrate TI’s TinyEngine neural processing unit (NPU), a dedicated hardware accelerator for MCUs that optimises deep learning inference operations to reduce latency and improve energy efficiency when processing at the edge.
TI’s embedded processing portfolio is supported by a comprehensive development ecosystem, including the CCStudio integrated development environment (IDE). Its generative AI features allow engineers to use simple language to accelerate code development, system configuration and debugging through industry-standard agents and models paired with TI data. Altogether, TI is accelerating the adoption of edge AI across electronic devices, from real-time monitoring in wearable health monitors and home circuit breakers to physical AI in humanoid robots. These end-to-end innovations are featured in TI’s booth at embedded world 2026, March 10-12, in Nuremberg, Germany.
“TI invented the digital signal processor almost 50 years ago, laying the groundwork for today’s edge AI processing,” said Amichai Ron, senior vice president, Embedded Processing and DLP® Products at TI. “Now TI is leading the next phase of innovation by integrating the TinyEngine NPU across our entire microcontroller portfolio, including general-purpose and high-performance, real-time MCUs. By enabling AI across our software, tools, devices and ecosystem, we are making edge AI accessible and easy to use for every customer and every application.”
“While much of the world has been focused on AI acceleration and NPUs in bigger SoCs, it turns out some of the more interesting and far-reaching applications of AI can be enabled inside smaller chips like microcontrollers,” said Bob O’Donnell, President and Chief Analyst at TECHnalysis Research. “Edge-based applications of AI acceleration can make consumer devices more intelligent and industrial devices more efficient. Plus, if you can combine these chips with software development tools that themselves leverage AI to help build AI features, you bring the power of AI acceleration to a significantly wider audience of engineers and device designers.”
Advanced intelligence at your fingertips
Consumers are always looking for everyday technology to be more intelligent, from fitness wearables to home appliances and electrical systems. However, many engineers believe that AI capabilities are limited to higher-end applications due to high costs, power demands, and coding requirements. TI’s new MSPM0G5187 Arm Cortex-M0+ MSPM0 MCU represents a fundamental shift for embedded designers, who can now bring edge AI to a wide range of simpler, smaller and more cost-effective applications.
The TinyEngine NPU executes the computations required by neural networks locally, in parallel with the primary CPU running application code. Compared to similar MCUs without an accelerator, this hardware acceleration:
- Minimises the flash memory footprint.
- Lowers latency by up to 90 times per AI inference.
- Reduces energy utilisation by more than 120 times per AI inference.
Such levels of efficiency allow resource-constrained devices – including portable, battery-powered products – to process AI workloads. At under US$1 in 1,000-unit quantities, the MSPM0G5187 MCU reduces system and operating costs by offering an affordable alternative to other MCU or processor architectures.
Real-time control plus AI acceleration for multimotor systems
Motor control applications in appliances, robotics and industrial systems increasingly call for intelligent features such as adaptive control and predictive maintenance, but implementing these capabilities has historically required complex, multi-chip designs. Building on over two decades of motor control leadership through the C2000 real-time MCU portfolio, TI’s new AM13Ex MCUs are the industry’s first to integrate a high-performance Arm Cortex-M33 core, TinyEngine NPU and advanced real-time control architecture into a single chip.
This degree of integration enables designers to implement sophisticated motor control and AI features simultaneously without external components, lowering bill-of-materials costs by up to 30%. Key enhancements include:
- The ability to maintain precise real-time control loops for up to four motors while the TinyEngine NPU runs adaptive control algorithms for load sensing and energy optimisation.
- An integrated trigonometric math accelerator that performs calculations 10 times faster than coordinate rotation digital computer (CORDIC) implementations, delivering more precise, responsive motor-control performance.
Easily train, optimise and deploy AI models
Both MCU families are supported by TI’s CCStudio Edge AI Studio, a free development environment that simplifies model selection, training and deployment across TI’s embedded processing portfolio. This edge AI toolchain gives engineers full flexibility to run AI models on TI MCUs through either hardware or software implementations. Today, there are more than 60 models and application examples available in the tool to help developers start deploying edge AI in any device, with additional tasks and models planned in the future.
R&S to showcase future-proof EMC testing solutions at EMV 2026
Rohde & Schwarz will participate in EMV 2026, Europe’s premier trade fair and congress dedicated to electromagnetic compatibility, held from March 24-26 in Cologne. At the event, which serves as a crucial platform for industry professionals, the company will show its latest advancements in test & measurement equipment to address the evolving challenges within the EMC landscape.
Rohde & Schwarz will demonstrate a broad portfolio of solutions designed to streamline and optimise EMC testing across diverse sectors, including power electronics, consumer, industrial, automotive, Satcom, military, and wireless communications. EMC testing is evolving to meet the demands of emerging technologies and a crowded radio frequency spectrum. Innovations like AI, 6G, and quantum computing present new challenges for ensuring reliable performance, while widespread electrification and increased bandwidth requirements necessitate testing at higher frequencies. To address these shifts, Rohde & Schwarz is developing scalable and modular test solutions focused on repeatable, reliable measurements – streamlining the path from initial assessment to final certification. A further focus is on bridging the gap between real-world field performance and laboratory testing.
At the show, Rohde & Schwarz will showcase a versatile and adaptable solution for conducted and radiated emission testing with the EMI test receivers R&S EPL1001 and R&S EPL1007 with frequency ranges up to 1 GHz and 7.125 GHz. These receivers provide a scalable approach to EMC testing, allowing users to select the optimal configuration for their needs, whether for efficient pre-compliance measurements or fully CISPR 16-1-1 compliant testing for certification.
Rohde & Schwarz is showcasing a speed-optimised EMI test setup built around its industry-leading R&S ESW test receiver, with the ESW-B1000R 970 MHz bandwidth extension, and the automated R&S ELEKTRA software. A live demonstration highlights the system’s capabilities for rapid and detailed device characterisation, with 3D emission plots generated by R&S ELEKTRA for a typical commercial EMI test. Complementing this is the R&S HF1444G14 high-gain antenna, extending testing capabilities up to 44 GHz for standards like MIL-STD and FCC.
Rohde & Schwarz will also be expanding its R&S BBA300 family of broadband amplifiers with its new dual-band amplifier series R&S BBA300-CDE/FG for 380 MHz to 13 or 18 GHz and the R&S BBA300-DE1000 with an output power of up to 1000 W in the 1 GHz to 6 GHz range. With high linearity, continuous and very wide frequency bands, and innovative protection concepts for high availability, the R&S BBA300 family meets the requirements for EMC immunity testing today and tomorrow.
Rohde & Schwarz will also show its full vehicle antenna test (FVAT) capabilities at the show. Modern vehicles increasingly rely on multiple antennas – for GNSS, Wi-Fi, cellular services like C-V2X, and more to enable safety, convenience and infotainment features – requiring comprehensive full-vehicle antenna testing. This testing enables vehicle manufacturers and their suppliers to characterise radiation performance, verify RF robustness, ensure co-existence of different wireless technologies and ultimately validate the functions and services enabled by wireless connectivity.
For in-depth signal analysis, Rohde & Schwarz will feature the R&S MXO 3 Series oscilloscope, boasting an unmatched acquisition rate exceeding 4.5 million waveforms per second and featuring up to 8 channels. This advanced oscilloscope also includes powerful standard functions such as a very fast FFT and zone trigger capabilities that empower engineers to quickly and precisely understand complex circuit behaviour, essential for effective EMI troubleshooting and design optimisation.
Rohde & Schwarz will also actively contribute to the congress with technical sessions, workshops and demos focusing on EMI test speed optimisation, EMC for medical products and closed-loop Reverb chamber testing. Attendees can also join a panel discussion exploring the impact of Artificial Intelligence on the EMC landscape, covering its current benefits and potential future challenges. Besides others, a Rohde & Schwarz expert will discuss AI’s role in areas like testing and development, and address concerns about new vulnerabilities.
Infineon extends leadership position in global microcontroller market
Infineon Technologies further extends its number one position in the global microcontroller market. According to the latest research by Omdia [1], the company increased its total microcontroller market share to 23.2 per cent in 2025 (2024: 21.4 per cent), achieving a year-on-year gain of 1.8 percentage points – the largest increase among its competitors. Notably, this market share gain was achieved against the backdrop of a slightly declining microcontroller market (-0.3 per cent).
“This great market result reflects our relentless commitment to accelerating innovation for customer value, outstanding system solutions, and strong customer relations,” said Andreas Urschitz, Chief Marketing Officer and Member of the Management Board at Infineon. “With our superior product portfolio, reliable software, and easy-to-use development tools, we help our customers create value and address the global challenges of decarbonization and digitalisation. Outgrowing the market is a direct outcome of our continued investment in technology and our close collaboration with our partners worldwide.”
Ethernet to enhance microcontroller business for software-defined vehicles
Infineon climbed to the top spot in the global microcontroller market for the first time in 2024, after becoming the number one in the specific market for automotive microcontrollers already one year earlier. The company’s leading market position will be further strengthened by the successful acquisition of Marvell’s Automotive Ethernet business, a milestone transaction completed in August 2025. This move expands Infineon’s cutting‑edge connectivity portfolio, enhancing the company’s system capabilities for central compute architectures in software-defined vehicles (SDV). Integrating the industry-leading BRIGHTLANE automotive Ethernet portfolio with Infineon’s AURIX, PSOC and TRAVEO automotive microcontroller families creates an unmatched system offering for SDVs, enabling features such as autonomous driving, advanced driver‑assistance systems, and secured over‑the‑air updates.
Infineon microcontrollers empower physical AI, such as humanoid robots
Furthermore, the acquisition opens additional growth opportunities in emerging IoT fields and physical AI, such as humanoid robotics. AURIX, PSOC and MOTIX microcontrollers from Infineon empower humanoid robots to safely perceive, think, and interact with their environment in real-time, facilitating advanced computing, smart actuation and motor control, connectivity, and intelligent edge functions.
Infineon enables the key functional blocks in humanoid robots, supporting customers from concept to mass production across industrial, service, and home applications. With its PSOC portfolio, Infineon continues to expand its presence in industrial and consumer markets, offering scalable, secure, and power‑efficient microcontroller solutions widely used in smart home systems, industrial control equipment and connected IoT devices.
Cybersecurity features for future requirements are already implemented today
From IoT devices to connected vehicles, industrial infrastructure, AI‑driven applications, and robotics, cybersecurity is essential. Therefore, Infineon microcontrollers are engineered with future-proof security in mind to protect data, identities and systems from the start and across the entire lifecycle. This includes, for example, complying with international security standards such as ISO/SAE 21434 (automotive security) for the latest generation AURIX and TRAVEO MCUs. Furthermore, Infineon engineers architectures that meet future requirements, such as from the EU Cyber Resilience Act or for post-quantum cryptography, already today – for example, in the latest PSOC products for industrial and consumer applications, as well as AURIX and TRAVEO automotive MCUs.
Infineon at embedded world 2026: Showcasing future-ready innovations
From 10 to 12 March 2026, at embedded world in Nuremberg, Germany, Infineon is presenting its comprehensive portfolio of industrial, consumer and automotive microcontrollers, with a strong focus on innovation for secured, connected, and intelligent systems. Visitors can experience this at Infineon’s booth (Hall 4A, Booth 138) and through a series of presentations and live demos.
[1] Based on or includes research from Omdia: Annual 2001-2025 Semiconductor Market Share Competitive Landscaping Tool – 4Q25. March 2026. Results are not an endorsement of Infineon Technologies AG. Any reliance on these results is at the third party’s own risk.
Traction Inverter: Keys to understanding the inverter, the traction, and why X-in-1 solutions are increasingly popular
Courtesy: STMicroelectronics
Traction inverters are at the heart of electric vehicles, meaning that they are one of the modules with the most significant impact on overall efficiency, range, and performance. According to the US Department of Energy, the electric drive system is responsible for some of the most significant losses in an EV, totalling about 18%. Moreover, a report by McKinsey & Company explains that the “top reasons” for consumers to avoid EVs are costs, charging concerns, and range anxiety, two of which are directly affected by the traction inverter’s performance. Optimising the electric drivetrain is thus the quickest and surest way to make an EV more compelling, which is why ST recently published a white paper on traction inverters.
Why are traction inverters challenging?
The role of a traction inverter
In a nutshell, the traction inverter takes the DC electrical energy from the battery, converts it into properly commutated three-phase alternating current, and sends it to a traction motor, which then converts it into kinetic energy. The traction inverter is also responsible for modulating the AC sent to the motors to adjust for things like torque and speed. Similarly, regenerative braking, which converts mechanical energy into DC power to recharge the battery, also depends on the traction inverter. Hence, the responsiveness drivers love in their EVs, and the driving features that can extend overall range, depend in large part on the performance of the traction inverter.
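The DC-to-AC conversion described above is typically done by switching the DC link at high frequency, with per-phase PWM duty cycles tracing three sinusoids offset by 120 electrical degrees; varying the modulation index and the rate at which the angle advances is what adjusts voltage, torque and speed. A minimal sketch of plain sinusoidal PWM (production inverters generally layer space-vector modulation and field-oriented control on top of this idea):

```python
import math

def three_phase_duties(theta: float, m: float):
    """Per-phase PWM duty cycles (0..1) for sinusoidal modulation.

    theta: electrical angle in radians
    m: modulation index, 0..1 (sets output voltage amplitude)
    """
    return tuple(
        0.5 + 0.5 * m * math.sin(theta - k * 2.0 * math.pi / 3.0)
        for k in range(3)
    )

# At theta = 0 with full modulation, phase A sits at 50% duty while
# phases B and C sit symmetrically below and above it.
da, db, dc = three_phase_duties(0.0, 1.0)
```

In a real inverter these duty values would be loaded into hardware PWM compare registers every switching period, with theta supplied by the rotor position sensing the article discusses next.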
The challenges behind the traction and the inversion
While most two-wheel-drive vehicles will have one or two inverters, an all-wheel drive may have up to one inverter per traction motor and one traction motor per wheel. It all depends on how car makers want to address the car’s overall performance. Hence, it’s easy to see some of the challenges that engineers must solve when designing a traction inverter that must not only convert electrical energy but also sense phase current, monitor motor position, and even manage control loops. While many engineers focus on the “inverter”, “traction” comes with a unique set of challenges, such as determining a rotor’s position with precision; get this wrong, and the whole traction inverter will be grossly inefficient.
Moreover, as EVs increasingly support high-power DC charging, they come with higher DC-link voltages, which means the traction inverter must adapt to reduce losses while enabling traction motors to draw more power. It’s a great example of how modern car modules are highly interdependent and how changing one aspect of the vehicle has ripple effects on many other systems and modules. As the white paper shows (see Table 3), there’s a strong “correlation between motor power, battery size, and DC link voltage.” Put simply, engineers can’t design traction inverters in isolation but must take a more global approach or risk seriously hampering performance due to a poorly suited system.
How to find solutions and design great traction inverters?
Choosing the right gate drivers
To answer these challenges, the white paper aims to provide key concepts and solutions engineers can apply to their designs. For instance, it looks at how to use gate drivers and power transistors to modulate the current in stator windings. Too often, teams treat these devices as commodities and miss the critical impact they may have on their traction inverters. However, a mismatch between the transistors and gate drivers will result in significantly higher losses, among other things. It’s why a galvanically isolated driver for IGBT and SiC MOSFETs, like the STGAP4S, can make a tremendous difference. ST even offers an evaluation board, the EVALSTGAP4S, which significantly hastens the development of a proof of concept.
Finding the right microcontroller
Another challenge is the ability to control the traction motors with enough precision and speed to improve the EV’s performance. Such a feat is directly tied to the microcontroller that will house the PWM timers and the logic responsible for calculating the field-oriented control mechanisms, among other functions. Using the wrong device will not only hinder performance but also create critical problems that cannot be fixed easily unless the platform supports things like over-the-air updates, the highest levels of functional safety, and more. ST is already offering MCUs tailored for EV applications, like the new Stellar E series and evaluation boards like the SR5E1-EVBE5000P.
Adopting the X-in-1 trend
The white paper contains many more solutions, tips, and pieces of expert advice. As ST offers a unique and wide-ranging portfolio of devices that can directly improve traction inverters, the paper also helps engineers anticipate a new trend: X-in-1. Increasingly, we see makers coming up with integrated systems that include the on-board charger, DC-DC converter, and traction inverter. Since these systems impact one another, integrating them helps create a more meaningful and intentional design. However, that means engineers must widen their expertise and rely on a portfolio that includes a broader range of devices.
5 Upcoming AIoT Trends to Look Out for in 2026
Courtesy: Hikvision
As we enter 2026, the convergence of artificial intelligence (AI) and IoT infrastructure is reshaping industries, unlocking unprecedented opportunities to optimise operations, enhance security, and improve sustainability. Yet with great technological power comes great responsibility, and the AIoT industry is increasingly focused on ensuring AI develops in ways that are safe, ethical, and beneficial to all. Here are the five key trends shaping the AIoT landscape in 2026.
Scenario-based AIoT solutions are rapidly unlocking new business value
Thanks to AIoT, we are witnessing a profound digital shift moving beyond basic IT informatisation to deep integration with Operational Technology (OT). In this transition, business value is no longer created by fragmented data collection, but increasingly by harvesting insights naturally and continuously from daily operations. By embedding perception capabilities into specific real-world scenarios, AIoT is enabling organisations to move from manual management to much more agile, automated control.
This is creating operational capabilities that were once impossible, enabling real-time decision-making, which can rapidly deliver new business value. In the field of industrial safety, for example, we see workshops shifting from reactive response to proactive prevention. Hazardous manual inspections are being replaced by advanced spectral technologies such as TDLAS, which remotely detect natural gas leaks in seconds. The result is a dramatic reduction in response times to emergency situations.
It’s a similar story with quality control. Food manufacturers, for example, are now leveraging AI-driven X-ray systems to instantly identify foreign objects like stones, glass, and bone that were once invisible.
Or consider inventory management, where mining and feed plants are now utilising 3D millimetre-wave radar to automatically scan silos. This is yet another application of AIoT that, in this case, is creating a new level of precision in volumetric data, eliminating human error, and enabling fully automated, real-time control.
Large-scale AI models are evolving into new capabilities for “AI+”
Large-scale AI models are empowering the core analysis and processing flow through “AI+” integration. While large language models have revolutionised human-digital interaction, industry-specific models are now reshaping how IoT data interacts with the physical world.
We can already see that by embedding AI into data analysis and signal processing, these models significantly enhance precision and efficiency. For example, traffic and perimeter security models, trained on massive datasets, are pushing the limits of perception. By processing complex data, they minimise false alarm rates for incidents and intrusions. Meanwhile, in audio sensing, “AI+ signal processing” is redefining audio capture by filtering background static and isolating human voices in noisy environments. This technology improves the signal-to-noise ratio, ensuring clear sound pickup even in challenging conditions.

Deeply anchored in this multi-modal understanding, AI Agents are now bridging the gap between perception and human intent. Powered by large language models, these agents enable users to communicate naturally using everyday language. Commands like “Find the person wearing purple clothes who parked a blue SUV this morning” are processed by intelligent security systems to automatically retrieve relevant video segments. Such capabilities are transforming AIoT systems from specialised tools that require professional training into intelligent assistants that are accessible to everyone.
Edge AI is transforming devices from data collectors to intelligent analysers
Another shift we are seeing is towards edge computing. Increasingly, the “Cloud + AI” model is no longer the only option for enterprise digitalisation. By moving AI functions from the cloud to the edge, organisations can achieve millisecond-level response times, operate seamlessly offline, and maintain on-premises privacy. It’s an architectural shift that eliminates bandwidth dependency and significantly reduces infrastructure overhead.
Because devices process raw data directly, this localised architecture extends its value by greatly optimising storage efficiency. This is particularly significant for complex video analysis, powered by visual AI models. Here, edge devices can now precisely identify key targets such as people or vehicles at the source. Based on this accurate segmentation, the system applies differentiated encoding—preserving critical foreground details, while compressing background areas that contribute little investigative value.
This AI-driven approach drastically reduces storage requirements without sacrificing visual clarity. For organisations deploying thousands of cameras across multiple sites, this naturally translates into substantial savings on storage infrastructure, lower ongoing costs, and simplified data management, making large-scale AIoT deployments economically viable.
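A back-of-the-envelope model shows why foreground-aware encoding matters at fleet scale. The function and every number below are illustrative assumptions for the sake of arithmetic, not vendor figures:

```python
def stored_gb_per_day(raw_gb: float, fg_fraction: float,
                      bg_extra_compression: float) -> float:
    """Storage after region-aware encoding: foreground kept at baseline
    quality, background compressed by an additional factor."""
    foreground = raw_gb * fg_fraction
    background = raw_gb * (1.0 - fg_fraction) / bg_extra_compression
    return foreground + background

# If 20% of each scene is foreground and the background is compressed
# 8x harder, 100 GB/day of baseline footage shrinks to 30 GB/day.
stored = stored_gb_per_day(100.0, 0.2, 8.0)
```

Multiplied across thousands of cameras and months of retention, even a 2-3x reduction of this kind dominates the storage line item.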
Responsible AI is embedding ethics into every stage of innovation
AI is transforming our lives, work, and business at an unprecedented pace. Yet, this revolution brings a critical responsibility: to ensure innovation unfolds safely, ethically, transparently, and beneficially for all. Responsible AI is no longer optional—it is both a moral imperative and a strategic necessity that builds trust, mitigates risk, and drives long-term innovation. As public awareness and regulatory oversight intensify globally, from Europe’s regulatory pioneering to regional initiatives worldwide, international collaboration becomes essential to harnessing AI’s potential while, at the same time, promoting security, prosperity, and human well-being.

Responsible AI practices, then, must permeate the entire AI lifecycle—from research and development to deployment and real-world application.
This includes establishing guiding principles and governance frameworks, adopting responsible approaches throughout development, and ensuring safety, accountability, and transparency in products and solutions. It is a systematic endeavour requiring industry-wide coordination and collective action across sectors and borders, involving policymakers, industry partners, researchers, and other stakeholders. Only through sustained commitment and open collaboration can we shape an AI future that truly serves humanity.
AIoT is expanding technology’s role from business to society and the environment
Another key trend that we are seeing is the rapid expansion of application areas for AIoT. In addition to the traditional business solutions, AIoT is now being widely adopted for broader social and environmental applications, demonstrating how intelligent systems can serve humanity and nature.

In ecological protection, for example, specialised AIoT devices are revolutionising conservation efforts, from wildlife monitoring to vegetation health tracking. Indeed, crop growth monitoring systems that leverage AIoT technologies for large-scale, real-time analysis of crop health are becoming increasingly widespread in agriculture. This capability addresses the inefficiencies of manual inspections, enabling precise management and optimising yields through digitisation.
AIoT is also being used to improve public safety. AI-driven drowning prevention systems, for example, are being deployed in areas known to be high risk. They utilise real-time video analytics to detect hazardous conditions, automatically identifying when an individual enters a dangerous area. When this happens, the technology triggers an immediate alert, transforming passive monitoring (or no monitoring at all) into a highly effective and proactive solution that can save lives.
Looking ahead: the future of AIoT
For organisations accelerating their digital transformation journeys, these trends offer both guidance and inspiration. The future of AIoT, after all, is about creating real value for businesses, enhancing experiences for people, and building a more sustainable world for everyone. And that future is arriving now.
Space internet is coming, and satellite networks could bypass app stores and telcos entirely
Low Earth Orbit (LEO) satellite constellations are entering a new phase of telecom relevance. What began as fixed satellite broadband for remote homes has evolved into direct-to-device connectivity integrated within 3GPP Non-Terrestrial Network standards. Modern satellites are no longer simple bent-pipe relays. They incorporate regenerative payloads, digital beamforming arrays, onboard processing, and inter-satellite optical links that allow orbital mesh routing. The engineering sophistication is undeniable.
However, for telecom professionals and network architects, the key discussion is not about technological capability. It is about architectural positioning: can satellite networks scale to rival terrestrial radio access networks (RAN)? Can they bypass traditional telecom operators? And do they meaningfully challenge app-store ecosystems? The answers require a grounded understanding of spectrum physics, link budgets, and capacity density.
Spectrum Architecture: IMT and Non-IMT Realities
Direct-to-device satellite systems operate either in traditional satellite allocations (non-IMT bands such as L-band or S-band) or within IMT spectrum harmonised under 3GPP NTN specifications.
In non-IMT bands, scalability faces structural limits. Propagation at these frequencies is highly dependent on near line-of-sight conditions. Building penetration loss, urban canyon multipath fading, and foliage attenuation reduce reliability. Unlike terrestrial networks that can densify through small cells and sectorisation, satellites illuminate wide geographic footprints. They cannot dynamically increase cell density in obstructed urban terrain.
This makes non-IMT direct-to-handset connectivity better suited to open environments such as rural regions, highways, maritime routes, and disaster zones than to dense urban centres.
IMT integration under NTN introduces greater harmonisation. Release 17 and beyond specify extended timing advance calibration, Doppler shift compensation, modified Hybrid Automatic Repeat Request (HARQ) timing, and satellite-aware mobility management. Devices can theoretically switch between terrestrial LTE/5G and orbital access with protocol continuity.
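The extended timing advance mentioned above exists because a satellite link's propagation delay dwarfs that of any terrestrial cell. A rough back-of-the-envelope calculation makes the scale concrete; the 550 km altitude and elevation angles below are illustrative assumptions, not figures from any specification.

```python
# Rough one-way propagation delay for a LEO link at several elevation
# angles. Altitude and elevation values are illustrative assumptions.
import math

C = 299_792_458.0   # speed of light, m/s
R_EARTH = 6_371e3   # mean Earth radius, m

def slant_range(altitude_m: float, elevation_deg: float) -> float:
    """Ground-terminal-to-satellite distance for a given elevation angle."""
    e = math.radians(elevation_deg)
    r, h = R_EARTH, altitude_m
    # Law-of-cosines solution for the slant range through the geometry
    return math.sqrt((r + h) ** 2 - (r * math.cos(e)) ** 2) - r * math.sin(e)

def one_way_delay_ms(altitude_m: float, elevation_deg: float) -> float:
    return slant_range(altitude_m, elevation_deg) / C * 1e3

if __name__ == "__main__":
    for elev in (90, 30, 10):
        d_km = slant_range(550e3, elev) / 1e3
        print(f"elevation {elev:2d} deg: slant range {d_km:7.1f} km, "
              f"one-way delay {one_way_delay_ms(550e3, elev):.2f} ms")
```

Even at zenith the one-way delay is roughly 1.8 ms, and it grows to several milliseconds at low elevation, versus microseconds for a terrestrial macro cell, which is why the standard timing advance range had to be extended for NTN.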
Yet the operational model remains conditional. Satellite access is typically triggered when terrestrial RSRP or SINR drops below defined thresholds. The modem evaluates signal quality and only activates NTN mode when necessary. This ensures satellite resources are preserved, and terrestrial networks handle high-density traffic loads.
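The threshold-triggered fallback described above can be sketched as a simple selection function. The RSRP and SINR floor values here are hypothetical placeholders, not figures from any 3GPP specification.

```python
# Illustrative multi-RAT selection: prefer terrestrial RAN and activate
# NTN mode only when terrestrial signal quality fails. Threshold values
# are hypothetical assumptions for the sketch.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LinkQuality:
    rsrp_dbm: float   # reference signal received power
    sinr_db: float    # signal-to-interference-plus-noise ratio

RSRP_FLOOR_DBM = -120.0   # assumed fallback threshold
SINR_FLOOR_DB = -5.0      # assumed fallback threshold

def select_access(terrestrial: Optional[LinkQuality], ntn_available: bool) -> str:
    """Prefer terrestrial RAN; fall back to NTN only in a coverage gap."""
    if terrestrial is not None and (
        terrestrial.rsrp_dbm >= RSRP_FLOOR_DBM
        and terrestrial.sinr_db >= SINR_FLOOR_DB
    ):
        return "terrestrial"
    return "ntn" if ntn_available else "no-service"

# A healthy urban cell stays terrestrial; a dead zone falls back to orbit.
print(select_access(LinkQuality(-95.0, 12.0), ntn_available=True))   # terrestrial
print(select_access(LinkQuality(-128.0, -8.0), ntn_available=True))  # ntn
```

This ordering is what preserves scarce satellite capacity: the orbital layer is a coverage backstop, not the default serving network.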
Elon Musk, CEO of SpaceX, captured the strategic goal succinctly:
“There should be no dead zones anywhere in the world for your cell phone.”
The emphasis is on coverage ubiquity, not urban capacity replacement.
Capacity Density: The Defining Constraint
The most decisive technical limitation is capacity density. Terrestrial operators achieve massive throughput through:
- Massive MIMO spatial multiplexing
- Dense macro-cell grids
- Small-cell layering in high-traffic zones
- Fibre-backed backhaul
- Millimetre-wave overlays
- Aggressive frequency reuse patterns
Satellite beams, even with advanced spot-beam architectures and frequency reuse, cover substantially larger areas. The spectral efficiency per square kilometre cannot match dense terrestrial deployments. Additionally, handheld devices operate under strict uplink power constraints, limiting achievable modulation and coding schemes for satellite links.
From a Shannon capacity standpoint, satellite systems are optimised for wide-area coverage, not high-density concurrency. In densely populated markets, even a mid-sized terrestrial operator can deliver greater aggregate throughput than an orbital beam serving the same footprint. This reality defines satellite’s optimal roles:
- Extending connectivity to underserved geographies
- Providing redundancy during disasters
- Supporting maritime and aviation mobility
- Enabling IoT in sparse environments
- Enhancing national connectivity resilience
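The capacity-density argument can be made concrete with the Shannon bound C = B log2(1 + SNR), comparing a single wide satellite beam against a grid of small terrestrial cells that all reuse the same spectrum over the same area. Every number below (beam size, bandwidths, SNRs, cell density) is an illustrative round figure, chosen only to show why frequency reuse dominates aggregate throughput.

```python
# Illustrative capacity-density comparison using the Shannon bound
# C = B * log2(1 + SNR). All figures are hypothetical round numbers.
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# One satellite spot beam: 250 MHz over a ~50 km diameter footprint.
beam_area_km2 = math.pi * 25.0 ** 2
beam_bps = shannon_capacity_bps(250e6, snr_linear=10.0)   # ~10 dB SNR

# Terrestrial grid over the same area: 100 MHz per cell, one cell per
# square kilometre, full frequency reuse across cells.
cells = int(beam_area_km2)
grid_bps = cells * shannon_capacity_bps(100e6, snr_linear=100.0)  # ~20 dB SNR

print(f"satellite beam  : {beam_bps / beam_area_km2 / 1e6:8.1f} Mbps/km^2")
print(f"terrestrial grid: {grid_bps / beam_area_km2 / 1e6:8.1f} Mbps/km^2")
```

Under these assumptions the terrestrial grid delivers roughly three orders of magnitude more capacity per square kilometre, because every cell reuses the same spectrum while the beam's bandwidth is shared across its entire footprint.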
Gwynne Shotwell, President of SpaceX, has consistently emphasised connectivity as foundational infrastructure. Reliable global access enables economic participation in regions where terrestrial networks are economically infeasible. The engineering model aligns with that vision.
Inter-Satellite Routing and Cloud-Native Architecture
Modern LEO constellations differentiate themselves through inter-satellite optical links (ISLs). Instead of routing traffic exclusively through ground gateways, data can hop between satellites before downlinking closer to its destination. This reduces dependence on terrestrial fibre choke points and can optimise long-haul routing paths.
Software-defined payloads further allow dynamic beam shaping, adaptive spectrum allocation, and load balancing. Combined with cloud-native packet cores and virtualised network functions, satellite systems increasingly resemble distributed edge clouds in orbit.
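At the routing level, the orbital mesh enabled by ISLs reduces to a shortest-path problem over a time-varying graph; each satellite is a node and each optical link an edge weighted by latency. A static-snapshot Dijkstra sketch over a small hypothetical constellation (invented node names and latencies) illustrates the idea, ignoring the constant topology churn a real constellation must handle.

```python
# Lowest-latency routing over a snapshot of a hypothetical ISL mesh.
# Nodes are satellites plus two ground gateways; edge weights are
# one-way link latencies in milliseconds (illustrative values).
import heapq

def dijkstra(graph, src, dst):
    """Return (total latency, path) for the lowest-latency route."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

isl_mesh = {
    "gw-london": {"sat-1": 3.0},
    "sat-1":     {"sat-2": 8.0, "sat-4": 9.0, "gw-london": 3.0},
    "sat-2":     {"sat-3": 8.0, "sat-1": 8.0},
    "sat-3":     {"gw-nyc": 3.0, "sat-2": 8.0},
    "sat-4":     {"sat-3": 12.0, "sat-1": 9.0},
    "gw-nyc":    {"sat-3": 3.0},
}

latency, path = dijkstra(isl_mesh, "gw-london", "gw-nyc")
print(f"{' -> '.join(path)}  ({latency:.1f} ms)")
```

A real constellation recomputes such routes continuously as satellites move, which is exactly the load-balancing role the software-defined payloads described above take on.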
However, engineering challenges persist:
- Beam handover must be predictive to prevent session drops.
- Doppler shift compensation requires continuous frequency correction.
- Latency variability introduces jitter that must be absorbed at the transport layer.
- Congestion control algorithms, often QUIC-based, must adapt dynamically.
These are solvable challenges, but they reinforce the reality that satellite networks are engineered for resilience and reach rather than metro throughput supremacy.
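The Doppler correction in the list above follows from the relative radial velocity between handset and satellite: f_d = (v_r / c) * f_c. A quick sketch with assumed round numbers (a typical ~7.5 km/s LEO orbital speed, a 2 GHz S-band carrier) shows why the receiver must track tens of kilohertz of drift.

```python
# Worst-case Doppler shift f_d = (v_r / c) * f_c for a LEO pass.
# Orbital velocity and carrier frequency are illustrative assumptions.
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(radial_velocity_ms: float, carrier_hz: float) -> float:
    return radial_velocity_ms / C * carrier_hz

v_orbit = 7_500.0   # m/s, typical LEO orbital speed (assumed)
f_c = 2.0e9         # Hz, S-band carrier (assumed)

# Worst case: satellite moving directly along the line of sight.
shift = doppler_shift_hz(v_orbit, f_c)
print(f"max Doppler shift ~ {shift / 1e3:.1f} kHz")
```

The sign of the shift also flips as the satellite passes overhead, so the correction must be continuous, not a one-time offset.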
Application Distribution and App-Store Dynamics
The notion that satellite networks could bypass app stores often conflates connectivity with runtime control. Satellite networks can facilitate cloud-streamed applications, Progressive Web Apps leveraging WebAssembly, multicast firmware updates, and enterprise-managed OTA deployments. However, runtime enforcement remains device-governed. Operating systems from Apple and Google maintain secure boot chains, code-signing validation, and hardware root-of-trust mechanisms independent of the access network.
Thus, while connectivity may be decentralised, execution control remains centralised within device ecosystems. App-store displacement at mass consumer scale remains unlikely in the near term. Satellite-enabled distribution is most viable in enterprise, industrial, defence, and controlled-device environments where policy governance is internally managed.
Global Regulatory Architecture
Satellite beams inherently traverse national borders. This introduces complex regulatory questions regarding lawful intercept, spectrum harmonisation, emergency service prioritisation, and data sovereignty. Unlike terrestrial towers confined within licensed areas, orbital coverage footprints overlap multiple jurisdictions simultaneously.
Regulators worldwide are converging toward coexistence frameworks where satellite operators must comply with local licensing, security audits, and traffic monitoring obligations. Encryption policies, gateway localisation requirements, and national security clearances are increasingly embedded within approval processes.
Indian Regulatory Perspective
In India, satellite internet operates within a structured licensing regime under the Department of Telecommunications. Operators must obtain a Global Mobile Personal Communication by Satellite (GMPCS) license to provide satellite communication services. Spectrum allocation is subject to administrative assignment or auction-based frameworks, depending on policy direction. Gateway earth stations require approval from national authorities, and security compliance is mandatory. Traffic monitoring capabilities must be provisioned in accordance with lawful intercept regulations. Data localisation considerations, especially under emerging digital governance frameworks, may require traffic breakout within Indian jurisdiction rather than pure inter-satellite routing for domestic data flows.
Additionally, satellite services must align with spectrum coordination under the Wireless Planning & Coordination (WPC) Wing. Coexistence with terrestrial IMT networks requires careful interference management and harmonisation. Regulatory approvals also involve security vetting of network elements and equipment supply chains.
India’s regulatory approach emphasises sovereign oversight while encouraging innovation through hybrid terrestrial-satellite integration models. Partnerships between satellite operators and domestic telecom providers are often preferred to ensure compliance with national security and licensing frameworks.
Industry Alignment: Complement, Not Replace
Sunil Bharti Mittal, Chairman of Bharti Airtel, has emphasised cooperation between satellite and terrestrial operators. In dense markets, terrestrial RAN grids remain unmatched in spectral reuse efficiency and urban throughput.
The long-term architecture, therefore, becomes hybrid:
- Terrestrial networks manage dense capacity loads.
- Satellite networks eliminate coverage gaps.
- Multi-RAT device logic dynamically orchestrates between both.
This convergence is not theoretical. It is already embedded within modem firmware design, NTN standardisation, and regulatory frameworks.
Engineering Takeaways
Telecom engineers and policymakers should focus on:
- Intelligent multi-RAT orchestration between terrestrial and NTN layers
- Adaptive transport protocols for variable-latency satellite links
- Robust cryptographic identity frameworks for secure OTA distribution
- Spectrum coexistence planning in IMT-integrated NTN deployments
- Regulatory compliance mechanisms for cross-border satellite beams
Conclusion
Space internet is a meaningful technological evolution. Advanced beamforming, regenerative payloads, inter-satellite optical routing, and NTN standardisation represent major engineering progress. But spectrum reuse laws and capacity density constraints remain decisive. Satellite networks excel in reach, resilience, and redundancy. Terrestrial networks dominate high-density throughput and urban spectral efficiency. The future of connectivity is not orbital disruption of telecom operators or wholesale bypass of app ecosystems. It is a structured convergence: a layered architecture in which Earth and orbit operate in coordinated harmony.
Engineers who design seamless integration across these layers will define the next decade of global communications.
The post Space internet is coming, and satellite networks could bypass app stores and telcos entirely appeared first on ELE Times.