Inside the Hardware Lab: How Modern Electronic Devices Are Engineered
The engineering of contemporary electronic devices reflects a convergence of system thinking, material maturity, multidisciplinary collaboration, and accelerated development cycles. In laboratories across the world, each new product emerges from a structured, iterative workflow that integrates architecture, hardware, firmware, testing, and manufacturing considerations into a cohesive design process. As electronic systems become more compact, intelligent, and operationally demanding, the pathway from concept to certified production device requires a high level of methodological discipline.
This article outlines how modern electronics are engineered, focusing on workflows, design considerations, and the interdependencies that define professional hardware development today.
Requirements Engineering: Establishing the Foundation
The design of any electronic device begins with a comprehensive articulation of requirements. These requirements typically combine functional objectives, performance targets, environmental constraints, safety expectations, and compliance obligations.
Functional objectives determine what the system must achieve, whether sensing, processing, communication, actuation, or power conversion. Performance parameters such as accuracy, latency, bandwidth, power consumption, and operating lifetime define the measurable boundaries of the design. Environmental expectations—temperature range, ingress protection, shock and vibration tolerance, electromagnetic exposure, and mechanical stresses—shape the system’s robustness profile.
Regulatory frameworks, including standards such as IEC, UL, BIS, FCC, CE, and sector-specific certifications (automotive, medical, aerospace), contribute additional constraints. The initial requirement set forms the reference against which all subsequent design decisions are evaluated, creating traceability between intent and implementation.
System Architecture: Translating Requirements into Structure
System architecture bridges conceptual requirements and concrete engineering design. The process involves defining functional blocks and selecting computational, sensing, power, and communication strategies capable of fulfilling the previously established criteria.
The architecture phase typically identifies the processing platform—ranging from microcontrollers to SoCs, MPUs, or FPGAs—based on computational load, determinism, power availability, and peripheral integration. Communication subsystems are established at this stage, covering interfaces such as I²C, SPI, UART, USB, CAN, Ethernet, or wireless protocols.
The power architecture also takes shape here, mapping energy sources, conversion stages, regulation mechanisms, and protection pathways. Considerations such as thermal distribution, signal isolation, noise-sensitive regions, and preliminary enclosure constraints influence the structural arrangement. The architectural framework becomes the guiding reference for schematic and PCB development.
Component Selection: Balancing Performance, Reliability, and Lifecycle
Modern device design is deeply influenced by semiconductor availability, lifecycle predictability, and performance consistency. Component selection involves more than identifying electrically suitable parts; it requires an understanding of long-term supply chain stability, tolerance behaviour, temperature performance, reliability data, and compatibility with manufacturing processes.
Processors, sensors, regulators, discretes, passives, communication modules, and protection components are evaluated not only for electrical characteristics but also for de-rating behaviours, thermal performance, and package-level constraints. Temperature coefficients, impedance profiles, safe-operating-area characteristics, clock stability, and signal integrity parameters become central evaluation factors.
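As an illustration of how de-rating criteria can be folded into automated selection checks, the sketch below linearly derates a resistor's rated power above a knee temperature, the shape typical of resistor datasheet derating curves. The knee and maximum temperatures here are illustrative defaults, not taken from any specific part.

```python
def derated_power_limit(p_rated_w, t_ambient_c, t_knee_c=70.0, t_max_c=155.0):
    """Linearly derate rated power above the knee temperature.

    The knee (70 degC) and maximum (155 degC) temperatures are illustrative;
    real values come from the component datasheet's derating curve.
    """
    if t_ambient_c <= t_knee_c:
        return p_rated_w          # full rating below the knee
    if t_ambient_c >= t_max_c:
        return 0.0                # no dissipation allowed at the limit
    # Linear interpolation between the knee and the maximum temperature
    return p_rated_w * (t_max_c - t_ambient_c) / (t_max_c - t_knee_c)

# A 0.25 W resistor at 100 degC ambient retains only part of its rating.
limit = derated_power_limit(0.25, 100.0)
```

A check like this, run over the whole bill of materials against the worst-case ambient from the requirements, flags parts that need a larger package or a cooler placement.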
The resulting bill of materials represents an intersection of engineering decisions and procurement realities, ensuring the device can be produced reliably throughout its intended lifespan.
Schematic Design: The Logical Core of the Device
Schematic design formalizes the architectural plan into detailed electrical connectivity. This stage defines logical relationships, reference paths, power distribution, signal conditioning, timing sequences, and safety structures.
Circuit blocks—analog conditioning, digital logic, power conversion, RF front-ends, sensor interfaces, and display or communication elements—are designed with full consideration of parasitic behaviour, noise propagation, and functional dependencies. Power distribution requires careful sequencing, decoupling strategies, transient response consideration, and ripple management. Signal interfaces require appropriate level shifting, impedance alignment, and termination strategies.
Test points, programming headers, measurement references, and diagnostic interfaces are defined at this stage to ensure observability during validation. The schematic ultimately serves as the authoritative source for layout and firmware integration.
PCB Layout: Integrating Electrical, Mechanical, and Thermal Realities
PCB layout transforms the schematic into a physical system where electrical performance, manufacturability, and thermal behaviour converge. The arrangement of components, routing topology, layer stack-up, ground referencing, and shielding determines the system’s electromagnetic and thermal characteristics.
High-speed interfaces require controlled impedance routing, differential pair tuning, length matching, and clear return paths. Power networks demand minimized loop areas, appropriate copper thickness, and distribution paths that maintain voltage stability under load. Sensitive analog signals are routed away from high-noise digital or switching-power regions. Thermal dissipation—achieved through copper pours, thermal vias, and heat-spreading strategies—ensures the system can sustain continuous operation.
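For controlled-impedance routing, a common first-pass estimate is the IPC-2141 closed-form approximation for surface microstrip. The sketch below uses that formula with an illustrative stack-up; final values should be confirmed with a field solver or the fabricator's impedance calculator.

```python
import math

def microstrip_z0_ohm(h_mm, w_mm, t_mm, er):
    """IPC-2141 closed-form estimate of surface-microstrip impedance:

        Z0 = 87 / sqrt(er + 1.41) * ln(5.98 * h / (0.8 * w + t))

    h: dielectric height, w: trace width, t: trace thickness (same units),
    er: relative permittivity. A rough guide for first-pass stack-up
    planning only.
    """
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

# Illustrative stack-up: 0.2 mm dielectric, 0.35 mm trace, 35 um copper, FR-4
z0 = microstrip_z0_ohm(0.2, 0.35, 0.035, 4.3)
```

Narrowing the trace raises the impedance, which is why layout tools tune trace width against the stack-up to hit a 50 ohm (or 90/100 ohm differential) target.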
Mechanical constraints, such as enclosure geometry, connector placement, mounting-hole patterns, and assembly tolerances, influence layout decisions. The PCB thus becomes a synthesized embodiment of electrical intent and mechanical feasibility.
Prototyping and Hardware Bring-Up: Validating the Physical Implementation
Once fabricated, the prototype enters hardware bring-up, a methodical verification process in which the design is examined against its expected behavior. Validation typically begins with continuity and power integrity checks, ensuring that supply rails meet voltage, ripple, and transient requirements.
System initialization follows, involving processor boot-up, peripheral activation, clock stability verification, and interface-level communication checks. Subsystems are evaluated individually—power domains, sensor blocks, RF modules, analog interfaces, digital buses, and storage components.
Observations from oscilloscopes, logic analyzers, current probes, and thermal imagers contribute to a detailed understanding of the device’s operational profile. Any deviations from expected behavior guide iterative optimization in subsequent revisions.
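Rail checks of this kind lend themselves to scripted pass/fail evaluation during bring-up. The sketch below is a minimal illustration; the rail names and tolerance limits are hypothetical and would come from the design's power budget in practice.

```python
# Hypothetical rail limits; real values come from the design's power budget.
RAIL_LIMITS = {
    "3V3":   (3.3, 0.05),  # nominal volts, allowed fractional deviation
    "1V8":   (1.8, 0.03),
    "VCORE": (1.0, 0.02),
}

def check_rails(measurements):
    """Return the rails whose measured voltage falls outside tolerance."""
    failures = {}
    for rail, measured in measurements.items():
        nominal, tol = RAIL_LIMITS[rail]
        if abs(measured - nominal) > nominal * tol:
            failures[rail] = measured
    return failures

# Example bring-up sweep: the 1.8 V rail is sagging past its 3% window.
bad = check_rails({"3V3": 3.28, "1V8": 1.74, "VCORE": 1.01})
```

Automating the comparison keeps bring-up results repeatable across board revisions and operators.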
Firmware Integration: Achieving Functional Cohesion
Firmware integration establishes coordination between hardware capabilities and system functionality. Board-support packages, peripheral drivers, middleware stacks, and application logic are aligned with the hardware’s timing, power, and performance characteristics.
Real-time constraints influence the choice of scheduling structures—whether bare-metal loops, cooperative architectures, or real-time operating systems. Communication stacks, sensor acquisition pipelines, memory management, and power-state transitions are implemented and tested on the physical hardware.
Interaction between firmware and hardware exposes edge cases in timing, voltage stability, electromagnetic sensitivity, or analog behavior, which often inform refinements in both domains.
Validation and Testing: Confirming Performance, Robustness, and Compliance
Comprehensive testing examines a device’s functionality under nominal and boundary conditions. Functional validation assesses sensing accuracy, communication stability, user-interface behavior, control logic execution, and subsystem interoperability. Reliability evaluation includes thermal cycling, vibration exposure, mechanical stress tests, humidity conditioning, and operational aging.
Electromagnetic compatibility testing examines emissions and immunity, including radiated and conducted profiles, ESD susceptibility, fast transients, and surge resilience. Pre-compliance evaluation during early prototypes reduces the probability of redesign during final certification stages.
Data collected during validation ensures that the system behaves predictably throughout its expected operating envelope.
Manufacturing Readiness: Transitioning from Prototype to Production
Production readiness involves synchronizing design intent with assembly processes, quality frameworks, and cost structures. Design-for-manufacturing and design-for-assembly considerations ensure that the device can be fabricated consistently across multiple production cycles.
Manufacturing documentation—including fabrication drawings, Gerber files, pick-and-place data, test specifications, and assembly notes—forms the reference package for contract manufacturers. Automated test equipment, in-circuit test fixtures, and functional test jigs are developed to verify each assembled unit.
Bill-of-materials optimization, yield analysis, and component sourcing strategies ensure long-term production stability.
Compliance and Certification: Meeting Regulatory Obligations
Final certification ensures that the device adheres to the safety, electromagnetic, and environmental requirements of the markets in which it will be deployed. Testing laboratories evaluate the system against regulatory standards, verifying electrical safety, electromagnetic behaviour, environmental resilience, and user-level protections.
The certification phase formalizes the device’s readiness for commercial deployment, requiring complete technical documentation, traceability data, and repeatable test results.
Lifecycle Management: Sustaining the Design Beyond Release
After the product reaches the market, lifecycle management ensures its sustained usability and manufacturability. Engineering change processes address component obsolescence, firmware enhancements, mechanical refinements, or field-observed anomalies.
Long-term reliability data, manufacturing feedback, and supplier updates contribute to ongoing revisions. In connected systems, firmware updates may be deployed over the air, extending functionality and addressing vulnerabilities.
Lifecycle management closes the loop between deployment and continuous improvement.
Conclusion
The design of a modern electronic device is a coordinated engineering endeavour that integrates requirements analysis, architectural planning, hardware design, firmware development, validation, manufacturing readiness, and lifecycle stewardship. Each stage influences the next, forming a continuous chain of interdependent decisions.
As technological expectations expand, the engineering methodologies supporting electronic design continue to mature. The result is a disciplined, multi-phase workflow that enables the creation of devices that are reliable, certifiable, scalable, and aligned with the complex operational demands of contemporary applications.
The post Inside the Hardware Lab: How Modern Electronic Devices Are Engineered appeared first on ELE Times.
EDOM Seminar Explores the Next Generation of Physical AI Robots Powered by NVIDIA Jetson Thor
The wave of innovation driven by generative AI is sweeping the globe, and AI’s capabilities are gradually extending from language understanding and visual recognition to action intelligence closer to real-world applications. This change makes physical AI, which integrates “perception, reasoning, and action,” the next important threshold for robotics and smart manufacturing. To help Taiwanese industries grasp this multimodal trend, EDOM Technology will hold the “AI × Multimodal Robotics: New Era of Industrial Intelligence Seminar” on December 3, showcasing NVIDIA Jetson Thor, the ultimate platform for physical AI and robotics, and featuring insights from ecosystem partners who will share innovative applications spanning smart manufacturing, autonomous machines, and education.
As AI technology rapidly advances, robotics is shifting from the traditional perception-and-response model to a new stage in which machines can autonomously understand and participate in complex tasks. The rise of multimodal AI enables machines to integrate image, voice, semantic, and spatial information simultaneously, allowing more precise judgments and actions in the real world: machines now “know what to do” and “know how to do it.” As AI capabilities extend from the purely digital realm to the real world, physical AI has become a core driving force for industrial upgrading.
Multimodal × Physical AI: The Next Key Turning Point in Robotics
The seminar, themed “Physical AI Driving the Intelligent Revolution of Robotics”, explores how AI, through multimodal perception and autonomous action capabilities, is reshaping the technical architecture and application scenarios of human-machine collaboration. Through technical sharing and case analysis, the seminar will help companies grasp the next turning points of smart manufacturing.
This event will focus on NVIDIA Jetson Thor and its software ecosystem, providing a panoramic view of future-oriented multimodal robotics technology. The NVIDIA Jetson Thor platform combines high-performance GPUs, edge computing, and multimodal understanding to complete perception, inference, decision-making, and action planning entirely at the device level, significantly improving robot autonomy and real-time responsiveness. The platform is also deeply integrated with NVIDIA Isaac, NVIDIA Metropolis, and NVIDIA Holoscan, creating an integrated development environment spanning simulation, verification, testing, and deployment, thus accelerating the implementation of intelligent robots and edge AI solutions. NVIDIA Jetson Thor also supports large language models (LLMs), vision language models (VLMs), and various generative AI models, enabling machines to interpret their surroundings, interact, and take action more naturally, becoming a core foundation for advancing physical AI.
In addition to the core platform analysis, the event features multiple demonstrations and exchange sessions. These include a showcase of generative AI-integrated robotic applications, highlighting the models’ latest capabilities in visual understanding and action collaboration; an introduction to the ecosystem built by EDOM, sharing cross-field cooperation experiences from education and manufacturing to hardware and software integration; and a hands-on technology experience zone, where attendees can see the practical applications of NVIDIA Jetson Thor in edge AI and multimodal technology.
From technical analysis to industry exchange, cross-field collaboration reveals new directions for smart machines:
- Analyses of the core architecture of NVIDIA Jetson Thor and the latest developments in multimodal AI by NVIDIA experts.
- Case studies on how Nexcobot introduces AI automation in smart manufacturing.
- Ankang High School, which achieved excellent results at the 2025 FIRST Robotics Competition (FRC) World Championship, showcases how AI and robotics courses can cultivate students’ interdisciplinary abilities in education.
- Insights into LLM and VLM applications in various robotic tasks given by Avalanche Computing.
Furthermore, EDOM will introduce its system integration approaches and deployment cases powered by NVIDIA IGX Orin and NVIDIA Jetson Thor, presenting the complete journey of edge AI technology from simulation to application implementation.
The event will conclude with an expert panel. Featuring leading specialists, the discussion covers collaboration, challenges, and international trends brought by multimodal robotics, helping industries navigate and anticipate the next phase of smart machine innovation.
Driven by physical AI and multimodal technologies, smart machines are entering a new phase of growth. The “AI × Multimodal Robotics: New Era of Industrial Intelligence Seminar” will not only showcase the latest technologies but also aim to connect the supply chain in Taiwan, enabling the manufacturing and robotics industries to seize opportunities in multimodal AI. The event will take place on Wednesday, December 3, 2025, at the Taipei Fubon International Convention Center, with registration and demonstration beginning at 12:30 PM. Enterprises and developers focused on AI, robotics, and smart manufacturing are welcome to join and stay at the forefront of multimodal technology. For more information, please visit https://www.edomtech.com/zh-tw/events-detail/jetson-thor-tech-day/
Nuvoton Releases Compact High-Power Violet Laser Diode (402nm, 1.7W)
Nuvoton Technology announced today the launch of its compact high-power violet laser diode (402nm, 1.7W), which achieves industry-leading optical output power in the industry-standard TO-56 CAN package. The product combines compact size, high output power, and long operating life, a combination previously considered difficult to achieve, through the company’s proprietary chip design and thermal management technologies. As a result, it contributes to space-saving, long-life optical systems for a wide range of optical applications.
Achievements:
1. Achieves industry-leading optical output power of 1.7W at 402nm in the industry-standard TO-56 CAN package, contributing to the miniaturization of optical systems.
2. Realizes long operating life through proprietary chip design and thermal management technologies, reducing the running costs of optical systems.
3. Expands the lineup of mercury lamp replacement solutions, improving flexibility in product selection according to application.
Latest Addition
This product is also the newest addition to Nuvoton’s lineup of mercury lamp replacement solutions using semiconductor lasers, providing customers with new options. This enables flexible product selection according to application, installation environment, and required performance, improving the freedom of system design.
Its applications include:
・ Laser Direct Imaging (LDI)
・ Resin curing
・ Laser welding
・ 3D printing
・ Biomedical
・ Display
・ Alternative light source for mercury lamps, etc.
Nuvoton Technology Corporation Japan (NTCJ) joined the Nuvoton Group in 2020. As a dedicated global semiconductor manufacturer, NTCJ provides technology and various products cultivated over 60 years since its establishment, and solutions that optimally combine them. We value relationships with our customers and partners, and by providing added value that exceeds expectations, we are working as a global solution company that solves various issues in society, industry, and people’s lives.
Powering the Chip Chain, Part 03: “AI is Transforming the Semiconductor Value Chain End-to-End,” Says RS Components’ Amit Agnihotri
India’s semiconductor ambitions are backed by initiatives like the ₹76,000 crore India Semiconductor Mission (ISM) and the ₹1,000 crore Design Linked Incentive (DLI) scheme, which focuses on fostering a strong design ecosystem. A critical part of this effort is ensuring design engineers get timely access to quality components.
To highlight how distributors are enabling this, we present our exclusive series, “Powering the Chip Chain”, featuring conversations with key industry players.
As India solidifies its position in the global electronics manufacturing landscape, the role of distribution has evolved from merely supplying components to enabling rapid, AI-driven innovation. This shift demands hyper-efficient inventory, advanced technical support, and flexible commercial policies.
In an exclusive interaction for the ‘Powering the Chip Chain’ series, Amit Agnihotri, Chief Operating Officer at RS Components & Controls (I) Ltd., shares his perspective on the exponential growth of AI-centric component demand and how digital transformation is equipping distributors to accelerate time-to-market for a new generation of Indian engineers.
AI: The New Core of Product Discovery
The integration of AI is no longer a future concept but a foundational element of distribution platforms. Mr. Agnihotri confirms that RS Components India is integrating AI into both its customer-facing systems and internal operations.
The primary objective is to make product discovery simpler, faster, and more intuitive. By leveraging AI-driven analytics, the company analyzes customer trends and buying patterns to anticipate future needs, ensuring the most relevant products are recommended with greater precision and speed. In line with this vision, RS is also investing heavily in enhancing its website recommendation engine through advanced AI, enabling customers to easily find the right products that best suit their specific applications.
“On our digital platform, AI-powered features guide users in identifying the right product based on their specific needs and selection criteria, significantly improving turnaround time and enhancing the overall experience,” says Agnihotri. This capability also extends internally, allowing RS India to optimize inventory management and ensure offerings remain aligned with volatile market demand.
Exponential Demand for Edge Intelligence
The rapid advancement of AI is fundamentally restructuring component demand, particularly accelerating the need for specialized silicon. This is most evident in the spread of high-performance components beyond high-end data centers alone.
Mr. Agnihotri notes that RS Components is witnessing exponential growth in AI adoption across core sectors such as automotive, electronics manufacturing, and industrial automation.
This growth is driving demand for specialized parts such as edge AI chips, neural network accelerators, and high-performance GPUs. These solutions, which support AI-centric applications across healthcare devices, autonomous systems, and smart mobility, enable customers to achieve higher processing speeds, ultra-low latency, and greater energy efficiency in their designs.
“The scale and speed at which AI technologies are being integrated into these industries indicate a clear shift in product development priorities—towards high-speed processing capabilities, ultra-low latency architectures, and energy-efficient AI hardware,” he explains.
Empowering R&D with Flexibility and Tools
To support this rapid prototyping and iteration, RS Components focuses on providing R&D teams with both technical enablement and commercial flexibility.
The support spans the entire design cycle, from concept to validation, anchored by the DesignSpark platform. This platform provides an integrated suite of free design tools, including PCB design and simulation, which accelerates the transition from concept to prototype.
Furthermore, all product listings are enriched with technical data. “All listings are enriched with datasheets, footprints, 3D models, parametric filters, and application notes so design engineers can perform compatibility checks and Design For Manufacture (DFM) assessments early in the process,” Agnihotri says.
Crucially, the company has adapted its commercial policies to match the low-volume needs of R&D work:
“Recognising that R&D and PoC work often requires small quantities of the latest components, we operate with No MOQ [Minimum Order Quantity] and No MOV [Minimum Order Value] policies on many products, and we add approximately 5,000 NPIs [New Product Introductions] to our portfolio each month.”
These practices ensure that startups, academic labs, and enterprise R&D teams can source cutting-edge parts in small batches without heavy inventory commitments.
The Policy Tailwinds and Supply Chain Agility
Government initiatives, most notably the Semicon India programme and national AI policies, are playing a material role in creating market readiness.
Amit states, “By incentivizing local manufacturing, design centers and skilling, these programs shorten lead times, attract investment and create predictable demand for AI accelerators, advanced chips and supporting components.” This policy support, he adds, allows distributors to implement deeper localization of inventory and expand value-added services.
To ensure supply chain agility in the face of this growing complexity, RS Components utilizes AI and predictive analytics. Machine-learning models ingest purchase history and market signals to produce more accurate short- and medium-term forecasts.
“AI-driven SKU segmentation and safety-stock algorithms prioritize high-demand electronic components, while predictive lead-time modelling and allocation analytics enable proactive vendor coordination,” he explains. This systemic use of AI helps manage potential foundry constraints and allocation volatility, which remains a persistent challenge in the global semiconductor ecosystem.
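As an illustration of the kind of calculation behind such safety-stock algorithms, the classical textbook formula sets the buffer to a service-level z-score times the standard deviation of demand accumulated over the replenishment lead time. This is a simplified model; production systems layer many more signals on top.

```python
import math

def safety_stock(z, demand_std_per_day, lead_time_days):
    """Classical safety-stock formula: SS = z * sigma_d * sqrt(L).

    z:                  z-score for the target service level (e.g. ~2.0
                        for roughly a 97.7% service level)
    demand_std_per_day: standard deviation of daily demand
    lead_time_days:     replenishment lead time in days
    Assumes independent, identically distributed daily demand.
    """
    return z * demand_std_per_day * math.sqrt(lead_time_days)

# Illustrative SKU: demand sigma of 40 units/day, 9-day lead time
ss = safety_stock(2.0, 40.0, 9.0)
```

The sqrt term is why longer or more volatile lead times inflate required buffers, and why predictive lead-time modelling feeds directly into stocking decisions.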
Conclusion: The Distributor as an Innovation Partner
Mr. Agnihotri concludes by emphasizing that AI will continue to transform the semiconductor value chain end-to-end—from component design (using AI for simulation) to distribution (through predictive analytics and personalized recommendations).
RS Components’ strategy is clear: by embedding AI into its DesignSpark toolchain, leveraging predictive models to localize inventory, and providing flexible commercial terms, the company is positioning itself as a strategic partner. This integrated approach enables engineers and manufacturers to iterate quickly, source the right components, and scale with confidence, fundamentally accelerating innovation across the Indian market.
Enhancing Embedded Systems with Automation using CI/CD and Circuit Isolation Techniques
Courtesy: Lokesh Kumar, Staff Engineer, STMicroelectronics and Raunaque Mujeeb QUAISER, Group Manager, STMicroelectronics
To keep pace with constantly advancing technological ecosystems, automation has become a focus area for innovation, efficiency, and delivering quality results. One of the significant areas where automation is making an impact is embedded systems.
Embedded systems are characterized by their ability to operate with minimal human intervention, often in real-time, and are integral to the functionality of many devices we rely on daily. The integration of automation into these systems is changing the way they are designed, developed, and deployed, leading to enhanced performance, reliability, and scalability.
Current challenges of automation
A typical embedded system development platform includes multiple jumpers, switches, and buttons for enabling various hardware (HW) configurations, such as boot modes, programming modes, or different HW features. Because of this, it is difficult to automate and truly validate an embedded board and its software across all HW configurations remotely.
While this can be addressed with mechanical arms and similar concepts, such approaches are costly and time-inefficient, making them impractical. An alternative is to use development boards, but that deviates from the actual testing scenario.
Proposed Solution:
We will explore the use of circuit isolation techniques and the adoption of the Jenkins ecosystem for continuous integration and continuous deployment (CI/CD). Circuit isolation ensures that the test controller can configure the embedded systems under test safely and reliably, preventing electrical faults and interference from affecting the system’s performance, while Jenkins provides a robust framework for automating the software development lifecycle of embedded systems. Jenkins, an open-source automation server, enables developers to build, test, and deploy code efficiently and consistently. By integrating Jenkins with embedded system development, teams can achieve faster iteration cycles, improved code quality, and seamless deployment processes.
The proposed solution is a cost-effective, easy-to-implement method for overcoming the limitations mentioned above.
Figure 1: Block diagram of Test Automation Setup
Figure 1 shows the block diagram of the automation architecture, in which a small-footprint microcontroller controls an isolation circuit connected to the junctions of the switches, jumpers, and buttons of the embedded device under test (DUT). The isolation circuit (e.g., an optocoupler) ensures circuit safety. In the current demonstration, a NUCLEO-F401RE board (the MCU) controls the optocoupler circuit connected to the boot-mode switches and reset button of the STM32MP157F-DK2 board under test (the DUT). Jenkins issues remote commands to the MCU.
Figure 2 : Circuit Diagram for controlling HW configurations
Figure 2 shows the circuit diagram in which the MCU controls the state of the two-way switch SW1. VDD is connected to one end of the output side, and resistor R1 limits the current; its value depends on the embedded device’s specification. In the default state, the physical switch is off. Then, depending on the desired boot combination, the MCU enables the output side of the optocoupler circuit.
For example, to put the STM32MP157F-DK2 into SD-card mode, GPIO pair 1 and GPIO pair 2 are driven HIGH. This shorts the output circuit and achieves the 1,0,1 combination for the SD-card boot configuration. A similar circuit sits at the embedded device’s reset button, so after changing the configuration, one can trigger a reset to boot into the desired hardware configuration.
Jenkins is used to remotely send commands to the NUCLEO-F401RE board through a computer containing automation scripts (Python and shell scripts) that translate commands into configurations and send them to the MCU, changing the device under test (DUT) state as explained above. With this setup, one can manage all HW configurations remotely. The arrangement successfully enables the developer to remotely perform board programming/flashing and to toggle boot switches and configurations on the STM32MP157F-DK2.
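In the spirit of the PC-side scripts described above, the translation from a named boot mode to a command for the controller MCU can be sketched as follows. Only the SD-card bit pattern (1, 0, 1) comes from the text; the mode name and the "BOOT:&lt;bits&gt;[;RST]" wire format are hypothetical placeholders for whatever protocol the real scripts use.

```python
# Boot-mode pin patterns for the DUT. Only the SD-card combination (1, 0, 1)
# is taken from the article; any additional modes would be added per the
# board's boot-pin table.
BOOT_MODES = {
    "sdcard": (1, 0, 1),
}

def command_for(mode, reset=True):
    """Build the text command the PC-side script sends to the MCU
    (the "BOOT:<bits>[;RST]" format here is a hypothetical protocol)."""
    bits = BOOT_MODES[mode]
    cmd = "BOOT:" + "".join(str(b) for b in bits)
    if reset:
        cmd += ";RST"  # request a DUT reset so the new boot mode takes effect
    return cmd

cmd = command_for("sdcard")  # -> "BOOT:101;RST"
```

A Jenkins pipeline step would then pass such a command to the MCU over the chosen transport (UART, etc.), keeping all mode logic in version-controlled scripts.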
Figure 3: Jenkins pipeline example
Python scripts on the PC (the Jenkins client) control the MCU, and Jenkins invokes these scripts via Jenkins pipelines and scripts.
Scope and Advantages
The proposed solution can be extended to various embedded boards because of the small circuit footprint, which can be easily connected to the DUT. A single MCU can also control multiple circuits; depending on the number of GPIOs available on a given MCU, multiple DUTs can be controlled. Further advantages are the low cost of the circuits and MCUs and the flexibility of the solution, with multiple control options available (UART, Bluetooth, I2C, SPI, Wi-Fi). Although the solution needs one controlling client to operate the automation system remotely, one client can control multiple automation systems, making the solution scalable and reliable in every aspect, since hardware configurations are also tested.
Future work:
If embedded boards include pin connections at the desired switches and buttons, the need for soldering is eliminated, and one can simply connect the MCU and isolation circuit to achieve complete and reliable automation.
Cabinet approves Rare Earth Permanent Magnet Manufacturing Scheme, worth Rs. 7,280 crores
The Cabinet approved the Rs. 7,280 crore Rare Earth Permanent Magnet Manufacturing Scheme on Wednesday, November 26. This first-of-its-kind initiative aims to establish 6,000 metric tonnes per annum of integrated Rare Earth Permanent Magnet manufacturing capacity in the country.
Union Minister Ashwini Vaishnaw highlighted the importance of Rare Earth Permanent Magnets for electric vehicles, renewable energy, electronics, aerospace, and defence applications at a media briefing in New Delhi. The scheme is expected to support the creation of integrated manufacturing facilities covering the conversion of rare earth oxides to metals, metals to alloys, and alloys to finished Rare Earth Permanent Magnets, the minister added.
As demand for electric vehicles grows, demand for permanent magnets is expected to double by 2030. Currently, India relies primarily on imports to meet its permanent magnet requirements. The new scheme is expected to play a significant role in generating employment, boosting self-reliance, and accelerating the country's goal of achieving Net Zero by 2070.
The total duration of the scheme will be seven years from the date of award, including a two-year gestation period for setting up an integrated Rare Earth Permanent Magnet manufacturing facility. The scheme envisions allocating the total capacity to five beneficiaries through a global competitive bidding process.
The post Cabinet approves Rare Earth Permanent Magnet Manufacturing Scheme, worth Rs. 7,280 crores appeared first on ELE Times.
Decoding the Future of Electronics with TI India
In an exclusive conversation with Kumar Harshit, Technology Correspondent, ELE Times, Santhosh Kumar, President & Managing Director of Texas Instruments India, details how TI is leading the charge in creating a safer, greener, and smarter world through technology contributions from right here in India, and shares his perspective on the future skill sets electronics engineers will need to thrive in the age of artificial intelligence.
He discusses the core themes of safety, sustainability, and innovation and outlines the transformative role of AI in shaping the next generation of engineering talent.
Here are the excerpts from the conversation:
ELE TIMES: Given the increasing focus on road safety, particularly for two-wheelers, how is TI leveraging technology to create safer and smarter vehicles?
Santhosh Kumar: Two-wheeler safety is critical, as the rider is the most exposed. We focus on leveraging simple, easy-to-deploy sensing technologies to solve real-world problems. For example, ensuring the side stand is retracted before the vehicle moves is a simple technique, often solved with magnetic sensing connected directly to the engine control. Moving to more advanced safety, we utilize radar and multiple sensors to provide warnings for both sides of the road. We also integrate technology that can help slow or stop a vehicle automatically if the car in front suddenly brakes, even if the rider isn’t fully alert. The goal is to either give the vehicle control to avoid a fatal accident or empower the rider with timely warnings.
ELE Times: TI emphasizes collaboration and community-driven innovation. How important are industry events like Electronica in deepening your relationship with the engineering ecosystem?
Santhosh Kumar: These events are vital for two key interactions. First, our engineers love to work directly with the engineers of our customers. This technical bond is the strongest foundation for creating relevant products. Events like this provide an apt platform for us to build and deepen those relationships. Second, we connect with decision-makers and purchase teams to demonstrate the affordability and worthiness of a product as a feature for mass-market products. Ultimately, innovation doesn’t happen in isolation; it comes from interaction within a community and ecosystem, making these floors essential.
ELE Times: The world is demanding greener technology. Could you outline TI’s commitment to sustainability, both in terms of internal operations and product design?
Santhosh Kumar: We are sensitive to the work we do both inside and outside our factories. Internally, we have a goal to be 100% powered by green energy by 2030, and we are well past 30% today. Furthermore, over 90% of our operational waste is diverted from landfills.
From a product design perspective, we are very sensitive to ensuring our products take up the smallest area and consume the least amount of power. We are always optimizing to create the smallest possible MCU with minimal energy consumption. This approach allows our customers to create products that are inherently greener than they would otherwise have been.
ELE Times: Can you introduce us to TI’s innovative product line in the Power management area?
Santhosh Kumar: We are heavily focused on smart, intelligent systems. A prime example is our Battery Management Systems (BMS), designed and developed in collaboration with engineers in India, ensuring power is used exactly as intended. We have several compelling demos on motor control. India has billions of motors running today, and we are showing how to make them run most efficiently, with the lowest possible cost, without sound, and without losses. You can witness these technologies integrated, such as in our two-wheeler demonstration, combining BMS, motor control, and security features. Given the huge push towards industrial and factory automation in India, we are also showcasing numerous technologies adapted for smarter, more secure, and greener appliances for the 1.4 billion people consuming electricity.
ELE Times: Beyond the automotive and industrial sectors, what are the emerging market segments in India that you see offering significant opportunities for TI’s growth?
Santhosh Kumar: We operate across five core segments, and we see growth in all of them. After automotive and broad industrial (which includes medical electronics, where a lot of Indian innovation is happening), the third key segment is personal electronics. This includes audio, video, and speech, where AI is bringing about a tremendous influx of smarter systems. The large bandwidth and data flow driving these first three segments necessitate our involvement in the communication segment, which forms the backbone. Finally, the enterprise business is the fifth segment. We are seeing customers, product creators, and applications happening in all five areas across India.
ELE Times: Texas Instruments India has been a pioneer for over 40 years. Could you detail the strategic role TIPL plays in driving cutting-edge innovation for TI’s global business units?
Santhosh Kumar: TI India is an important entity to our worldwide business. We have architects, product definers, system engineers, analog and RF designers, and application engineers who manage businesses and contribute to the entire value chain. In fact, many of TI's new products today have significant contributions from India, including end-to-end product development. We have world-class infrastructure, including labs, benches, and all the necessary equipment in Bangalore, enabling us to contribute to the success of these global products.
ELE Times: Can you share a recent technological breakthrough from TI and define what innovation means to Texas Instruments in the context of solving real-world problems?
Santhosh Kumar: A recent breakthrough we are very excited about is our Gallium Nitride (GaN) solutions. For instance, deploying GaN in two-wheeler chargers can shrink their size by one-third compared to existing technology and significantly reduce power consumption and heat generation due to lower leakage.
To us, innovation extends beyond laboratory research; it is about how we look at real-world problems and use technology to solve them, ultimately enhancing people’s quality of life. We have both an opportunity and a responsibility to improve the lives of eight billion people while preserving the planet.
ELE Times: Talent acquisition is key to sustained innovation. What initiatives does TI India have in place to attract and nurture the next generation of core engineering talent?
Santhosh Kumar: Our strategy is to hire bright talent directly from campuses, which accounts for 80% of our hiring. To feed this pipeline, we have programs like our Women in Semiconductor and Hardware (WiSH) program, which engages female students beginning in their second year of college. This program provides hands-on experience in core engineering disciplines, including design, testing, verification, and validation. We want to allow a large segment of the population to understand what it means to do core engineering and be a part of world-class product development right here in India.
ELE Times: As a leader who has seen India’s electronics landscape evolve over decades, what is your key message for the industry today?
Santhosh Kumar: The key message is to adopt the technologies happening globally and bring in innovation through new products. This is a tremendous opportunity for India to lead the wave of innovation, to solve real-world problems. We can bring innovation in manufacturing, product development, and applications that lift the quality of life for our people here and for the eight billion people worldwide. With the current energy in the ecosystem and the influx of new players, India can play an important role in driving innovation.
The post Decoding the Future of Electronics with TI India appeared first on ELE Times.
ECMS applications make history, cross Rs. 1 lakh crore in investment applications
Union Minister for Electronics and IT Ashwini Vaishnaw announced that the government has received investment applications worth nearly Rs. 1,15,351 crore under the Electronics Component Manufacturing Scheme (ECMS), against a target of Rs. 59,350 crore. The Union Minister made the announcement at a media briefing in New Delhi after the application window closed on November 27, 2025. He credited 11 years of trust in the system, which is now expected to drive investment, employment generation, and production.
He added that, against a production target of Rs. 4,56,500 crore, production estimates of over Rs. 10 lakh crore have been received.
The post ECMS applications make history, cross Rs. 1 lakh crore in investment applications appeared first on ELE Times.
AI-Driven 6G: Smarter Design, Faster Validation
Courtesy: Keysight Technologies
Key takeaways: Telecom companies are hoping for quick 6G standardization followed by a rapid increase in 6G enterprise and retail customers, with AI being a key enabler:
- Artificial intelligence (AI) and machine learning (ML) are expected to become essential and critical components of the 6G standards, scheduled for release in 2028 or 2029.
- Engineers in 6G and AI could take products to market quickly by understanding the potential benefits of AI/ML for 6G design validation.
The 6G era is poised to be fundamentally different: it may be the first "AI-native" iteration of wireless telecom networks. With extensive use of AI expected in 6G, engineers face an unprecedented challenge: How do you validate a system that is more dynamic, intelligent, and faster than anything before it?
This blog gives insights into 6G design validation using AI for engineers working in communication service providers, mobile network operators, communication technology vendors, and device manufacturers.
We explain the new applications that 6G and AI could unlock, the AI techniques you’re likely to run into, and how you could use them for designing and testing 6G networks.
What are the key use cases that 6G and AI will enable together?

Figure 1. 6G conservative and ambitious goals versus 5G goals (Image source: How to Revolutionize 6G Research With AI-Driven Design)
The two engines of 6G and AI are projected to power exciting new use cases like real-time digital twins, smart factories, highly autonomous mobility, holographic communication, and pervasive edge intelligence.
These are among the major innovations that the International Telecommunication Union (ITU) and the Third Generation Partnership Project (3GPP) envision from 6G and AI. Let’s examine these key use cases for AI in 6G and 6G for AI in 2030 and beyond.
Real-time digital twins using 6G and AI
With promises of ubiquitous deployment, high data rates, and ultra-low latency, 6G and AI could create precise real-time representations of the physical world as digital twins.
Digital twins will be powerful tools for modeling, monitoring, managing, analyzing, and simulating all kinds of physical assets, resources, environments, and situations in real time.
Digital twin networks could serve as replicas of physical networks, enabling real-time optimization and control of 6G wireless communication networks. Proposed 6G capabilities like integrated sensing and communication (ISAC) could efficiently synchronize these digital and physical worlds.
Smart factories through 6G and AI
6G and AI have the potential to support advanced industrial applications ("industrial 6G") through reliable low-latency connections for ubiquitous real-time data collection, sharing, and decision-making. They could enable full automation, control, and operation, leveraging connectivity to intelligent devices, industrial Internet of Things (IoT), and robots. Private 6G networks may effectively streamline operations at airports and seaports.
Autonomous mobility via 6G and AI
6G and AI are set to enhance autonomous mobility, including self-driving vehicles and autonomous transport based on cellular vehicle-to-everything (C-V2X) technologies. This involves AI-assisted automated driving, real-time 3D-mapping, and high-precision positioning.
Holographic communication over 6G
6G and AI data centers could enable immersive multimedia experiences, like holographic telepresence and remote multi-sensory interactions. Semantic communication, where AI will try to understand users’ actual current needs and adapt to them, could help meet the demands of data-hungry applications like holographic communication and extended reality, transmitting only the essential semantics of messages.
Pervasive edge AI over 6G technologies
The convergence of communication and computing, particularly through edge computing and edge intelligence, is likely to distribute AI capabilities throughout the 6G network, close to the data source. This has the potential to enable real-time distributed learning, joint inference, and collaboration between intelligent robots and devices, leading to ubiquitous intelligence.
How will AI optimize 6G network design and operation?
In this section, we look more specifically at how AI is being considered for the design and testing of 6G networks.
At a high level, 6G communications will likely involve:
- physical components, like the base stations, PHY transceivers, network switches, and user equipment (UE, like smartphones or fixed wireless modems)
- logical subsystems, like the radio access network (RAN), core network, network functions, and protocol stacks
Some of these are expected to be designed, optimized, and tested using design-time AI models before deployment. Others are expected to use runtime AI models during their operations to dynamically adapt to local traffic, geographical, and weather conditions.
Let’s look at which aspects of 6G radio and network functions are likely to be enhanced by the integration of AI techniques in their designs.
AI-native air interface
Figure 2. How AI may change the air interface design (Image source: The Integration of AI and 6G)
In the UE-to-RAN air interface, AI models could enhance core radio functions like symbol detection, channel estimation, channel state information (CSI) estimation, beam selection, modulation, and antenna selection.

Figure 3. The three key phases toward a 6G AI-Native Air interface (Image source: The Integration of AI and 6G)
Some of these AI models may run on the UEs, some on the base stations, and some on both.
AI-assisted beamforming
Figure 4. Channel estimation with supervised learning (Image source: How to Revolutionize 6G Research With AI-Driven Design)
AI is envisioned to:
- assist in ultra-massive multiple-input multiple-output (UM-MIMO) using more precise CSI
- predict optimal transmit beams
- reduce beam-pairing complexity
- assist reconfigurable intelligent surfaces (RIS) for environmental optimization
It’s hoped that AI will become instrumental in end-to-end network optimization and dynamically adapting the entire RAN through self-monitoring, self-organization, self-optimization, and self-healing.
Automated network management
AI holds the potential to automate network operation and maintenance as well as enable automated management services like predictive maintenance, intelligent data perception, on-demand capability addition, traffic prediction, and energy management.
Real-time dynamic allocation and scheduling of wireless resources like bandwidth and power for load balancing could be automatically handled by AI. AI-based mobility management could proactively manage handoffs and reduce signaling overhead.
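As a toy illustration of the scheduling side, the sketch below hard-codes a classic proportional-fair policy over synthetic Rayleigh channel rates; an AI-based 6G scheduler would learn such a policy from data instead. The user count, fading scales, and forgetting factor are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(9)
n_users, n_slots = 4, 10_000
avg = np.full(n_users, 1e-3)            # running average throughput
served = np.zeros(n_users, dtype=int)   # slots won by each user

for _ in range(n_slots):
    # Instantaneous achievable rates: Rayleigh fading, per-user scale.
    rates = rng.rayleigh(scale=[1.0, 2.0, 3.0, 4.0])
    u = int(np.argmax(rates / avg))     # proportional-fair metric
    served[u] += 1
    avg *= 0.999                        # exponential forgetting
    avg[u] += 0.001 * rates[u]

print("slot share per user:", served / n_slots)
```

Despite very different channel qualities, proportional fairness keeps slot shares roughly balanced while still favoring each user's good fading moments, which is the balance a learned allocator would also have to strike.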
Additionally, analysis of vast network data by AI promises precise threat intelligence, real-time monitoring, prediction, and active defense against network faults and security risks.
What AI techniques are most effective for validating 6G system-level performance?
Figure 5. 6G AI-based model validation (Image source: The Integration of AI and 6G)
AI is a wide field with many techniques, like deep learning, reinforcement learning, generative models, and machine learning. Let’s look at how these different AI algorithms and architectures could be used for 6G design, validation, and network performance testing.
Reinforcement learning (RL)
Figure 6. CSI feedback compression (Image source: How to Revolutionize 6G Research With AI-Driven Design)
RL has the potential to be at the forefront of AI for 6G self-optimization, network design, and testing: it is good at replicating human decision-making, it scales to massive test campaigns, and it underpins the recent rise of large reasoning models.
RL and deep RL could be used for the following use cases:
- RAN optimization: RL is already being used for intent-based RAN optimization in 5G, enabling autonomous decision-making in dynamic network environments, particularly for mobility management, interference mitigation, and energy-efficient scheduling. RL can control and optimize complex workflows.
- Enhanced beamforming: Deep RL could be used for beam prediction in the spatial and temporal domains.
- Functional testing: Autonomous agents, trained using RL, could test 6G hardware and software systems, treating bugs found as their reward. Each agent would be a deep neural network trained with proximal policy optimization or direct preference optimization to perform sequences of network actions, favoring sequences likely to maximize its reward (the number of bugs found).
- Performance testing: In a 6G system, performance will be an emergent property of hundreds of interacting network parameters. Manually finding the combinations that lead to poor performance will be nearly impossible. An RL agent could automatically explore these combinations and identify configurations that result in performance bottlenecks.
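The exploration idea behind the performance-testing use case can be sketched with a deliberately simple epsilon-greedy agent. The scoring function standing in for the network, the parameter grid, and the hidden bottleneck are all invented for illustration; a real agent would be a deep RL policy driving a network emulator:

```python
import itertools
import random

random.seed(0)

def throughput(bandwidth_mhz: int, tx_power_dbm: int) -> float:
    """Stand-in for a network emulator; one hidden bad combination."""
    score = bandwidth_mhz * 0.5 + tx_power_dbm * 0.2
    if bandwidth_mhz >= 400 and tx_power_dbm <= 10:   # hidden bottleneck
        score *= 0.1
    return score

configs = list(itertools.product([100, 200, 400], [10, 20, 30]))
estimates = {c: 0.0 for c in configs}   # running throughput estimates

for _ in range(500):
    if random.random() < 0.2:           # explore a random configuration
        cfg = random.choice(configs)
    else:                               # exploit: probe the worst-looking
        cfg = min(configs, key=lambda c: estimates[c])  # one (low = bottleneck)
    # Update the running estimate toward the observed throughput.
    estimates[cfg] += 0.1 * (throughput(*cfg) - estimates[cfg])

worst = min(estimates, key=estimates.get)
print("suspected bottleneck configuration:", worst)
```

The agent concentrates its probes on whichever configuration currently looks worst, which is exactly how an RL tester would surface bad parameter combinations without exhaustively sweeping the space.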
Deep neural networks (DNNs)
DNNs could be used for the following:
- Channel estimation: DNNs and other deep learning architectures like Convolutional Neural Networks (CNNs) could estimate channel conditions, which will be crucial for overall system performance, especially in complex, high-noise environments.
- CSI compression: CNN-based autoencoders are poised to become the most commonly used architecture for CSI compression.
Transformer-based autoencoders (like Transnet) have been tested for compressing CSI feedback from UEs to a 5G base station and could be used for 6G too.
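To illustrate only the compress-then-reconstruct idea, the sketch below replaces the CNN or transformer autoencoder with a linear one learned via the SVD (i.e. PCA), applied to synthetic low-rank "CSI" vectors; the dimensions and noise level are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "CSI" feedback: 256-dim vectors lying near a 16-dim
# subspace; low-rank structure is what makes compression possible.
latent = rng.standard_normal((1000, 16))
mixing = rng.standard_normal((16, 256))
csi = latent @ mixing + 0.01 * rng.standard_normal((1000, 256))

# "Train" the linear encoder: top-16 right singular vectors of the data.
mean = csi.mean(axis=0)
_, _, vt = np.linalg.svd(csi - mean, full_matrices=False)
encoder = vt[:16].T                    # maps 256 -> 16 (16x compression)

code = (csi - mean) @ encoder          # compressed feedback, shape (1000, 16)
reconstructed = code @ encoder.T + mean

err = np.linalg.norm(csi - reconstructed) / np.linalg.norm(csi)
print(f"relative reconstruction error: {err:.4f}")
```

A neural autoencoder plays the same role as `encoder`/`encoder.T` here, but learns a nonlinear mapping, which is what lets it exploit channel structure that a linear projection cannot.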
Graph neural networks (GNNs)
GNNs are used to model the relational structure of network elements. They could learn spatial and topological patterns for tasks like mobility management, interference mitigation, and resource allocation.
They may also be used as physics-informed models for channel estimation reconstruction.
Generative adversarial networks (GANs)
GANs will probably be used to learn and create realistic wireless channel data. They could also be used for denoising and anomaly detection.
Large reasoning and action models
These models are created from pre-trained large language models or large concept models by using RL to fine-tune them for reasoning and acting. They are the foundations of agentic AI. Agentic AI for 6G is still a very new research topic, but its ability to orchestrate smaller AI models, hardware, databases, and tools could make it suitable for testing 6G networks.
How is synthetic data generated by AI used in 6G testing and validation?
Figure 7. Using AI models in System Design (Image source: The Integration of AI and 6G)
A key benefit of AI will be its ability to synthesize test scenarios and data that simulate realistic 6G environments in lockstep with the 6G standards as they emerge and evolve over the coming years. Such synthesis could enhance designs and reduce development risk from day one.
The use of AI in network operations will lead to non-determinism and an explosion of possible outcomes that challenge testability and repeatability.
Design and test engineers will have to worry about how to cover all possible scenarios and edge cases. Physical deployments will not be possible until customer trials start, and even physical prototypes will initially be unavailable, and expensive once they arrive.
This is why AI-powered simulations and AI-generated realistic data are projected to become critical for 6G companies. AI could generate any type of large, realistic data needed to train and test the sophisticated AI/ML algorithms of 6G. The key technologies and techniques involved are outlined below:
- Digital twins: A digital twin is an accurate and detailed proxy for a real-world implementation, capable of emulating entire networks and individual components. These virtual representations will be key to simulating ultra-dense 6G environments. They could support integrated modeling of network environments and users to test complex RAN optimization problems.
- Generative AI models: GANs could become crucial for testing 6G wireless channels. A GAN could be trained on data from real-world 5G networks augmented with 6G-specific parameters calculated using known analytical models. The generator network would learn to synthesize realistic 6G data and simulate virtual channels for realistic environments, even accounting for geography. Later, measured data from 6G hardware prototypes could be included to enhance their realism.
- Specialized testbeds: Synthetic data is vital for studying new 6G sub-terahertz bands (100-300 GHz) because physical measurements are not practical. AI-generated scenarios based on data from sub-terahertz testbeds could recreate the complex impairments and nonlinearities expected at these frequencies.
- Simulation tools: Sophisticated visual tools like Keysight Channel Studio (RaySim) could simulate signal propagation and generate channel data in a specific environment, like a selected city area. It could model detailed characteristics like delay spread and user mobility, mimicking real-world conditions needed for training components like 6G neural receivers.
- Systems modeling platforms: An end-to-end system design platform like Keysight’s System Design will have the ability to generate high-quality 6G data for neural network training. It would combine system design budgets, 3GPP-compliant channel models (like clustered delay line models), measured data, and noise to produce diverse samples with varying noise and channel configurations.
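As a baseline for the analytical models mentioned above, a minimal synthetic channel generator can be written in a few lines using the classic Rayleigh flat-fading model; GAN- or ray-tracing-based generators would replace this with learned or site-specific models:

```python
import numpy as np

rng = np.random.default_rng(7)

def rayleigh_channels(n_samples: int, n_taps: int) -> np.ndarray:
    """Draw i.i.d. complex Gaussian channel taps with unit average power
    (so tap magnitudes are Rayleigh-distributed)."""
    re = rng.standard_normal((n_samples, n_taps))
    im = rng.standard_normal((n_samples, n_taps))
    return (re + 1j * im) / np.sqrt(2)

# 100,000 channel realizations with 4 taps each, ready to feed a
# channel emulator or to train/test an AI receiver.
h = rayleigh_channels(100_000, 4)
print("mean tap power:", round(float(np.mean(np.abs(h) ** 2)), 3))
```

The normalization by sqrt(2) keeps the average tap power at one, which is the usual convention when such datasets are fed to channel emulators or used to train receiver models.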

Figure 8. Wireless channel estimation (Image source: The Integration of AI and 6G)
AI techniques like anomaly detection and intelligent test automation could help you design and validate all the advanced chips and components that will go into 6G hardware for capabilities like sub-terahertz (THz) frequency bands and UM-MIMO.
Below, we speculate on how 6G and AI could be used for chip and hardware design.
Data-driven AI modeling
The behaviors of 6G technology enablers like UM-MIMO, reconfigurable intelligent surfaces, and sub-terahertz frequency bands will be too complex to fully characterize using analytical methods. Instead, neural networks could create accurate, data-driven, nonlinear AI models.
AI models in electronic design automation (EDA)
EDA tools like Advanced Design System and Device Modeling could seamlessly integrate AI models for designing the high-frequency gallium nitride (GaN) radio frequency integrated circuits that will probably be needed in 6G. These tools could run artificial neural network models as part of circuit simulations and device modeling.
Validation of AI-enabled components
Figure 9. 6G AI neural receiver design and validation setup (Image source: The Integration of AI and 6G)
Validating the AI-native physical layer blocks (like neural receivers) will be paramount. Only AI-driven testing and automation could effectively tackle the black box nature and non-determinism of AI models.
AI-driven simulations
AI-driven simulation tools like Keysight RaySim could synthesize high-quality, site-specific channel data that, combined with deterministic, stochastic, and measured data, creates highly realistic environments for validating THz and MIMO designs.
Optimized beamforming and CSI
AI models could potentially enhance beamforming by improving spectral efficiency. A problem with many antennas is the huge CSI feedback overhead. AI models like autoencoders could compress CSI feedback by as much as 25% without degrading efficiency and reliability.
Hardware-in-the-loop validation
AI channel estimation models have the potential to handle multidimensionality and noise levels more robustly than traditional methods. They could be used by system design software and tested in hardware-in-the-loop setups (with channel emulators, signal generators, and digitizers) to assess effectiveness based on metrics like block error rate and signal-to-noise ratio.
Anomaly detection
Anomaly detection could be applied to data generated by AI simulations and models to identify unusual behaviors or deviations that may point to design flaws or operational issues.
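A minimal sketch of the idea, using a plain z-score detector on simulated block error rates: the fault positions and threshold below are invented for illustration, and production setups would typically use learned detectors such as autoencoder reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated per-run block error rates, with a few injected faults of
# the kind a design flaw might cause.
bler = rng.normal(loc=0.01, scale=0.002, size=500)
bler[[42, 137, 400]] = 0.25

# Robust z-score: distance from the median, in standard deviations.
z = (bler - np.median(bler)) / bler.std()
anomalies = np.flatnonzero(np.abs(z) > 5)
print("anomalous runs:", anomalies)
```

Using the median as the center keeps the detector from being dragged toward the faults it is trying to flag, a small robustness trick that matters when outliers are large.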
What are the challenges and limitations of using AI in 6G design validation?
Could AI and its results be trusted? Without careful design, every AI model is prone to out-of-distribution errors, data scarcity, poor model interpretability, overfitting, and hallucinations. A better question that your 6G and AI engineers must keep asking is, "How can we make our AI models, as well as AI-generated tests and data, more accurate and more trustworthy?"
For that, follow the recommendations below.
- Design for seamless integration: AI-based solutions must seamlessly integrate and agree with existing wireless principles built upon decades of tried-and-tested signal processing and communication theories. For example, a fully AI-designed physical layer that can dynamically change the waveform based on ambient conditions poses challenges for traditional measurement and design techniques like digital predistortion and amplifier design.
- Address data scarcity upfront: Real-world wireless data is often sparse. 6G ecosystems will probably be particularly challenging to characterize. Address this by augmenting data from 5G-Advanced networks with data calculated by 6G-specific analytical models. However, plan for extensive manual pre-processing because preparing realistic channel data to train models will not be trivial.
- Aim for model interpretability: To balance the opacity of powerful black-box techniques like deep neural networks, combine them with models that are more interpretable — like decision trees and random forests — through approaches like mixture-of-experts, ensembling, and explainable AI.
- Use physics-informed models: By bounding AI results with data from physics-informed models, engineers will be able to ensure that AI models operate within physical reality, making them robust and trustworthy. For example, reinforcement learning, which could be used for intent-based RAN optimization, can produce different results for the same input, generate out-of-bounds parameters, or fail to constrain to physical reality.
- Prevent overfitting: Sparse data and poor data diversity can lead to overfitting. For example, data generated under severe fading channel conditions is known to result in overfitting. Follow data augmentation and cross-validation best practices to counteract overfitting.
- Plan hardware-in-the-loop testing: Synthetically generated channels can be loaded into channel emulators like PROPSIM to test AI/ML algorithms in base stations and UEs. This will enable model advancements based on real failures and impairments.
- Avoid negative side effects: AI integration should not lead to excessive energy usage, unmanageable training data, or security risks. AI will greatly expand the threat surface, but since AI itself is quite new, cybersecurity risks are not well understood. This means 6G and AI integrations must be carefully designed for resilience and quick recovery from cyber attacks.
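The cross-validation practice recommended above for preventing overfitting can be sketched with plain NumPy; the synthetic data and least-squares "model" below are stand-ins, since the point is the fold rotation, not the estimator:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.1 * rng.standard_normal(200)

def kfold_mse(X: np.ndarray, y: np.ndarray, k: int = 5) -> float:
    """Rotate k held-out folds; report the mean held-out squared error."""
    folds = np.array_split(rng.permutation(len(y)), k)
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Fit on the training folds only, score on the held-out fold.
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errors.append(np.mean((X[test] @ w - y[test]) ** 2))
    return float(np.mean(errors))

print(f"5-fold mean held-out MSE: {kfold_mse(X, y):.4f}")
```

Because every sample is held out exactly once, the reported error reflects generalization rather than memorization, which is the symptom cross-validation is meant to catch.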
The post AI-Driven 6G: Smarter Design, Faster Validation appeared first on ELE Times.
Scaling up the Smart Manufacturing Mountain
Courtesy: Rockwell Automation
A step-by-step roadmap to adopting smart manufacturing tools, boosting efficiency, and unifying systems for a smoother digital transformation journey.
Embracing new technology in manufacturing is similar to ascending a mountain since it requires strategy, pacing, and the right gear. Rushing ahead without proper support can strain your systems as well as your people, but with thoughtful planning and timely technology selection, the climb becomes manageable and rewarding.
There are many paths to digital excellence. Integrating new technologies can significantly boost productivity and efficiency, but even the smoothest rollouts can come with hurdles. In our decades of experience working with customers, we’ve learned the importance of taking a measured approach to change to avoid unnecessary disruption. Here, we’ll lay out one way to approach digital transformation.
Beginning the Climb: Laying the Foundation
For many manufacturers, an accessible starting point for digital transformation is real-time production monitoring. Production monitoring enhances visibility and empowers your team to manage performance proactively—without overhauling existing workflows.
By consolidating machine and system data into a single dashboard, production monitoring eliminates silos and simplifies decision-making. It equips your team with actionable KPIs and insights from the shop floor to the executive suite.
With minimal investment in time, budget, and effort, real-time monitoring can deliver immediate value—which makes it an ideal first step on your digital journey.
Climbing Higher: Expanding Capabilities
While production monitoring is a strong foundation, it’s just the beginning. To unlock deeper efficiencies, manufacturers can next implement systems that offer broader control and insight across operations.
A modern manufacturing execution system (MES) is a prime example. By automating routine tasks, an MES reduces errors, cuts costs, and improves profitability. It also provides end-to-end visibility, communication, and traceability throughout the production lifecycle.
Pairing MES with a robust enterprise resource planning (ERP) system further enhances operational oversight. ERP tools help streamline compliance, manage risk, and align financial, operational, and IT strategies under one umbrella.
The real power lies in integrating these systems. When MES, ERP, and other tools work together in harmony, manufacturers can experience transformative results.
Reaching the Peak: Unlocking Full Potential
Even with a unified platform, there’s still room to elevate your operations. Today’s smart tools don’t just optimize—they redefine what’s possible.
Plex MES Automation and Orchestration leverages cutting-edge technology to connect machines and deliver unprecedented transparency and control. With intuitive low-code integration, your team can customize workflows and achieve seamless automation across the plant floor.
Gear Up for Your Digital Climb
According to our 10th Annual State of Smart Manufacturing report, many manufacturers feel they’re falling behind technologically compared to last year. If you’re exploring new solutions but unsure about the path forward, you’re not alone.
Whether you’re just beginning your smart manufacturing journey or seeking advanced, integrated solutions, we’d like to help! Check out our case study library to see how we’ve helped companies just like yours take on projects to advance their digital transformation journey.
The post Scaling up the Smart Manufacturing Mountain appeared first on ELE Times.
Singapore’s largest industrial district cooling system now operational at ST’s AMK TechnoPark
The District Cooling System (DCS) at ST’s Ang Mo Kio (AMK) TechnoPark is now operational, on schedule. Ms. Low Yen Ling, Senior Minister of State, Ministry of Trade and Industry & Ministry of Culture, Community and Youth, who joined ST in announcing the project in 2022, took part in the inauguration ceremony. The launch is a critical milestone in ST’s goal of achieving carbon neutrality by 2027, working with local partners and serving the surrounding communities along the way.
Why a District Cooling System?
Composition of a District Cooling System
ST’s Ang Mo Kio TechnoPark in Singapore
In a DCS, a central plant chills water and sends it through a network of underground pipes serving multiple buildings. Pooling resources this way increases efficiency, reduces environmental impact, and saves space: buildings no longer need their own chillers, cutting power and maintenance costs. A return loop carries the warmed water back to the plant to be chilled again, and the plant also stores chilled water, so cooling can run during off-peak periods to improve overall efficiency.
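The off-peak storage argument can be made concrete with a back-of-envelope calculation. The tariff and load figures below are hypothetical illustrations, not ST’s or SP Group’s numbers; the point is only that shifting chilling to cheaper off-peak hours can pay even after storage losses:

```python
# Rough sketch of off-peak thermal storage economics.
# All figures are hypothetical illustrations, not project data.
day_load_kwh = 10_000      # daytime cooling demand (hypothetical)
day_rate = 0.25            # $/kWh peak tariff (hypothetical)
night_rate = 0.15          # $/kWh off-peak tariff (hypothetical)
storage_efficiency = 0.90  # fraction of stored cooling recovered

# Cost of running chillers on-peak vs. charging storage off-peak.
on_peak_cost = day_load_kwh * day_rate
off_peak_cost = (day_load_kwh / storage_efficiency) * night_rate
savings = on_peak_cost - off_peak_cost
print(f"daily savings: ${savings:.0f}")
```

In practice the payoff also depends on storage capacity and thermal losses over the day, but the tariff spread is what makes off-peak chilling attractive.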
According to the Encyclopedia of Energy, the first significant DCS project dates back to 1962 in the United States. The technology garnered interest in the 1970s before subsiding, then became popular again in the 1990s as regulators mandated chlorofluorocarbon (CFC) reductions. Today, district cooling is gaining new ground as the world looks to cut carbon emissions and recycle water.
Why the ST Ang Mo Kio TechnoPark?
Anatomy of a Unique Project
The AMK TechnoPark is ST’s largest wafer-production fab by volume. Bringing DCS to that particular site will thus have significant ripple effects. Traditionally, projects of this size target urban developments. For instance, the Deep Lake Water Cooling infrastructure in Toronto, Canada, has a similar capacity (40,000 tons), but the distribution network covers a chunk of the downtown area. The ST and SP Group infrastructure is thus unique because it’s one of the first at such a scale to cool an industrial manufacturing plant. It is also a first in the semiconductor industry. Most projects from competing fabs retrofit new chillers. With this new DCS, ST can re-purpose the space in favor of something much more efficient.
The project will cost an estimated USD 370 million, including the construction of the central cooling plant right next to the TechnoPark. Beyond energy savings, removing chillers within the ST plant will free up space for other environmental programs. For instance, the AMK site is looking at water conservation and solar panels, among other things. The SP Group should start construction of the central plant this year and is committed to managing the project for at least the next 20 years. Singapore also hopes that this project will inspire other companies. As Ms Low Yen Ling, Minister of State, Ministry of Culture, Community and Youth & Ministry of Trade and Industry stated, “I hope this initiative will inspire many more innovative decarbonization solutions across other industrial developments, and spur more companies to seek opportunities in sustainability.”
STMicroelectronics’ new GaN ICs platform for motion control boosts appliance energy ratings
STMicroelectronics unveiled new smart power components that let home appliances and industrial drives leverage the latest GaN (gallium-nitride) technology to boost energy efficiency, increase performance, and save cost.
GaN power adapters and chargers on the market already handle laptop-class power levels and USB-C fast charging at extremely high efficiency, meeting stringent incoming eco-design norms. ST’s latest GaN ICs now bring the technology to motor drives for products like washing machines, hairdryers, power tools, and factory automation.
“Our new GaNSPIN system-in-package platform unleashes wide-bandgap efficiency gains in motion-control applications by introducing special features that optimize system performance and safeguard reliability,” said Domenico Arrigo, General Manager, Application Specific Products Division, STMicroelectronics. “The new devices enable future generations of appliances to achieve higher rotational speed for improved performance, with smaller and lower-cost control modules, lightweight form factors, and improved energy ratings.”
The first members of ST’s new family, the GANSPIN611 and GANSPIN612, can drive motors up to 400 W, including domestic and industrial compressors, pumps, fans, and servo drives. Pin compatibility between the two devices makes designs easily scalable. The GANSPIN611 is in production now in a 9 mm x 9 mm thermally enhanced QFN package, priced from $4.44.
Technical notes on GaNSPIN drivers:
In the new GaNSPIN system-in-package, unlike in general-purpose GaN drivers, the driver controls turn-on and turn-off times in hard switching to relieve stress on the motor windings and minimize electromagnetic noise. The nominal slew rate (dV/dt) of 10V/ns preserves reliability and eases compliance with electromagnetic compatibility (EMC) regulations such as the EU EMC directive. Designers can adjust the turn-on dV/dt of both GaN drivers to fine-tune the switching performance according to the motor characteristics.
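As a rough illustration of what the 10 V/ns figure means for switching edges, the sketch below computes the transition time across a hypothetical DC bus (the 320 V bus and the 5 V/ns alternative are assumed example values, not ST specifications):

```python
# Back-of-envelope switching-edge timing for a GaN half-bridge.
# Bus voltage and the slower slew option are hypothetical examples;
# 10 V/ns is the nominal dV/dt quoted for the GaNSPIN devices.
bus_voltage_v = 320.0      # DC bus for a mains-fed drive (hypothetical)
slew_v_per_ns = 10.0       # nominal slew rate

transition_ns = bus_voltage_v / slew_v_per_ns  # time to traverse the bus
print(f"switching edge: {transition_ns:.0f} ns")

# Halving dV/dt to 5 V/ns reduces EMI-generating edge rates but doubles
# the transition time, and with it the switching loss per edge.
slow_transition_ns = bus_voltage_v / 5.0
print(f"slowed edge: {slow_transition_ns:.0f} ns")
```

This is the trade-off the adjustable turn-on dV/dt exposes: faster edges for lower switching loss, slower edges for lower electromagnetic noise and winding stress.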
Keysight Hosts AI Thought Leadership Conclave in Bengaluru
Keysight Technologies, Inc. announced the AI Thought Leadership Conclave, a premier forum bringing together technology leaders, researchers, and industry experts to discuss the transformative role of artificial intelligence (AI) in shaping digital infrastructure, wireless technologies, and connectivity.
Taking place on December 9, 2025, in Bengaluru, the conclave will showcase how AI is redefining the way networks, cloud, and edge systems are designed, optimized, and scaled for a hyperconnected world. Through keynote sessions, expert panels, and interactive discussions, participants will gain insights into:
- The role of AI in shaping data center architecture, orchestration, and resource optimization
- Emerging use cases across industries, from healthcare and manufacturing to mobility and entertainment
- Ethical, regulatory, and security considerations in large-scale AI infrastructure
- Collaborative innovation models and global standardization efforts
Additional sessions will focus on AI-driven debugging and optimization, data ingestion and software integration for scalable AI, and building secure digital foundations across cloud and edge environments.
“AI is rapidly becoming the backbone of digital transformation, and the ability to integrate intelligence into every layer of infrastructure will define the next decade of innovation,” said Sudhir Singh, Country Manager, Keysight India. “Through the AI Thought Leadership Conclave, Keysight is facilitating an exchange of ideas, showcasing AI-centered advancements, and shaping the connected future.”
In addition to focused discussions and technology presentations, the conclave will host an AI Technology Application Demo Fair, featuring live demonstrations of advanced solutions developed by Keysight and its technology partners. Attendees will also have ample opportunities to connect with industry leaders, participate in business and customer meetings, and engage in discussions with representatives from industry standard bodies.
Government approves 17 projects worth Rs. 7,172 crore under ECMS
The Ministry of Electronics and IT announced the clearance of 17 additional proposals worth Rs. 7,172 crore under the Electronics Components Manufacturing Scheme (ECMS). According to the ministry, the projects are expected to generate production worth Rs. 65,111 crore and 11,808 direct jobs across the country.
The approved projects are spread across nine states, from Jammu and Kashmir to Tamil Nadu, reflecting the government’s commitment to balanced regional growth and the creation of high-skill jobs beyond metropolitan clusters.
The approvals focus on key components used across IT hardware, wearables, telecom, EVs, industrial electronics, defence, medical electronics, and renewable energy, such as oscillators, enclosures, camera modules, connectors, optical transceivers (SFP), and multi-layered PCBs.
Minister of Electronics and IT Ashwini Vaishnaw highlighted that the next phase of value-chain integration is now unfolding, from devices to components and sub-assemblies, which will help ensure that India’s electronics sector reaches $500 billion in manufacturing value by 2030–31.
Alongside the project approvals, the Minister also launched the first-generation energy-efficient edge silicon chip (SoC), ARKA-GKT1, jointly developed by Cyient Semiconductors Pvt Ltd and Azimuth AI. The platform-on-a-chip SoC integrates advanced computing cores, hardware accelerators, power-efficient design, and secure sensing into a single chip, delivering up to 10x better performance while reducing cost and complexity. It supports smart utilities, smart cities, batteries, and industrial IoT, showcasing India’s shift toward a product-driven, high-performance semiconductor ecosystem.
BD Soft strengthens cybersecurity offerings for BFSI and Fintech businesses with advanced solutions
BD Software Distribution Pvt. Ltd. has expanded its Managed Detection and Response (MDR) and Data Loss Prevention (DLP) solutions for the BFSI and Fintech sectors amid rising cyber risks fuelled by digital banking growth and cloud-led transformation. The strengthened suite addresses vulnerabilities linked to sophisticated phishing and ransomware attacks, insecure third-party integrations, and increasing exposure of APIs and financial data across distributed environments.
BD Soft’s cybersecurity portfolio now includes solutions from leading global and Indian innovators: Axidian, headquartered in Dubai (UAE), for identity governance and privileged access management; FileCloud, based in Austin (USA), for hyper-secure EFSS (Enterprise File Sync & Share) capabilities; GTB Technologies, headquartered in California (USA), for advanced Data Loss Prevention (DLP); and Hunto.ai, based in Mumbai (India), for external threat intelligence and monitoring. Together, these solutions enable financial institutions to strengthen data governance, prevent fraud, meet regulatory obligations, and build resilient security frameworks that safeguard customer trust.
The surge in sector-wide threats is driven by the industry’s dependence on digital platforms and sensitive financial data. Over 60% of cyberattacks in India now target BFSI and Fintech, and cloud-related security incidents have risen by more than 45% in the last two years. With rapid mobile banking adoption expanding the attack surface, risks such as unauthorized access, data leakage, credential compromise, and insider-driven breaches continue to intensify, making continuous, intelligence-driven cyber defence essential for financial institutions today.
Commenting on the development, Mr. Zakir Hussain Rangwala, CEO, BD Software Distribution Pvt. Ltd., said, “As financial brands accelerate digital adoption, robust encryption, zero-trust architecture, and continuous monitoring are no longer optional, they are foundational to trust and financial stability. Our focus is enabling institutions to go not just digital, but safely digital.”
Advancing Quantum Computing R&D through Simulation
Courtesy: Synopsys
Even as we push forward into new frontiers of technological innovation, researchers are revisiting some of the most fundamental ideas in the history of computing.
Alan Turing began theorizing the potential capabilities of digital computers in the late 1930s, initially exploring computation and later the possibility of modeling natural processes. By the 1950s, he noted that simulating quantum phenomena, though theoretically possible, would demand resources far beyond practical limits — even with future advances.
These were the initial seeds of what we now call quantum computing. And the challenge of simulating quantum systems with classical computers eventually led to new explorations of whether it would be possible to create computers based on quantum mechanics itself.
For decades, these investigations were confined within the realms of theoretical physics and abstract mathematics — an ambitious idea explored mostly on chalkboards and in scholarly journals. But today, quantum computing R&D is rapidly shifting to a new area of focus: engineering.
Physics research continues, of course, but the questions are evolving. Rather than debating whether quantum computing can outpace classical methods — it can, in principle — scientists and engineers are now focused on making it real: What does it take to build a viable quantum supercomputer?
Theoretical and applied physics alone cannot answer that question, and many practical aspects remain unsettled. What are the optimal materials and physical technologies? What architectures and fabrication methods are needed? And which algorithms and applications will unlock the most potential?
As researchers explore and validate ways to advance quantum computing from speculative science to practical breakthroughs, highly advanced simulation tools — such as those used for chip design — are playing a pivotal role in determining the answers.

Pursuing quantum utility
In many ways, the engineering behind quantum computing presents even more complex challenges than the underlying physics. Generating a limited number of “qubits” — the basic units of information in quantum computing — in a lab is one thing. Building a large-scale, commercially viable quantum supercomputer is quite another.
A comprehensive design must be established. Resource requirements must be determined. The most valuable and feasible applications must be identified. And, ultimately, the toughest question of all must be answered: Will the value generated by the computer outweigh the immense costs of development, maintenance, and operation?
The latest insights were detailed in a recent preprint, “How to Build a Quantum Supercomputer: Scaling from Hundreds to Millions of Qubits” (Mohseni et al., 2024), which I helped co-author alongside Synopsys principal engineer John Sorebo and an extended group of research collaborators.
Increasing quantum computing scale and quality
Today’s quantum computing research is driven by fundamental challenges: scaling up the number of qubits, ensuring their reliability, and improving the accuracy of the operations that link them together. The goal is to produce consistent and useful results across not just hundreds, but thousands or even millions of qubits.
The best “modalities” for achieving this are still up for debate. Superconducting circuits, silicon spins, trapped ions, and photonic systems are all being explored (and, in some cases, combined). Each modality brings its own unique hurdles for controlling and measuring qubits effectively.
Numerical simulation tools are essential in these investigations, providing critical insights into how different modalities can withstand noise and scale to accommodate more qubits. These tools include:
- QuantumATK for atomic-scale modeling and material simulations.
- 3D High Frequency Simulation Software (HFSS) for simulating the planar electromagnetic crosstalk between qubits at scale.
- RaptorQu for high-capacity electromagnetic simulation of quantum computing applications.

Advancing quantum computing R&D with numerical simulation
The design of qubit devices — along with their controls and interconnects — blends advanced engineering with quantum physics. Researchers must model phenomena ranging from electron confinement and tunnelling in nanoscale materials to electromagnetic coupling across complex multilayer structures.
Many issues that are critical for conventional integrated circuit design and atomic-scale fabrication (such as edge roughness, material inhomogeneity, and phonon effects) must also be confronted when working with quantum devices, where even subtle variations can influence device reliability. Numerical simulation plays a crucial role at every stage, helping teams:
- Explore gate geometries.
- Optimize Josephson junction layouts.
- Analyze crosstalk between qubits and losses in superconducting interconnects.
- Study material interfaces that impact performance.
By accurately capturing both quantum-mechanical behavior and classical electromagnetic effects, simulation tools allow researchers to evaluate design alternatives before fabrication, shorten iteration cycles, and gain deeper insight into how devices operate under realistic conditions.
Advanced numerical simulation tools such as QuantumATK, HFSS, and RaptorQu are transforming how research groups approach computational modeling. Instead of relying on a patchwork of academic codes, teams can now leverage unified environments — with common data models and consistent interfaces — that support a variety of computational methods. These industry-grade platforms:
- Combine reliable yet flexible software architectures with high-performance computational cores optimized for multi-GPU systems, accessible through Python interfaces that enable programmable extensions and custom workflows.
- Support sophisticated automated workflows in which simulations are run iteratively, and subsequent steps adapt dynamically based on intermediate results.
- Leverage machine learning techniques to accelerate repetitive operations and efficiently handle large sets of simulations, enabling scalable, data-driven research.
Simulation tools like QuantumATK, HFSS, and RaptorQu are not just advancing individual research projects — they are accelerating the entire field, enabling researchers to test new ideas and scale quantum architectures more efficiently than ever before. With Ansys now part of Synopsys, we are uniquely positioned to provide end-to-end solutions that address both the design and simulation needs of quantum computing R&D.
Empowering quantum researchers with industry-grade solutions
Despite the progress in quantum computing research, many teams still rely on disjointed, narrowly scoped open-source simulation software. These tools often require significant customization to support specific research needs and generally lack robust support for modern GPU clusters and machine learning-based simulation speedups. As a result, researchers and companies spend substantial effort adapting and maintaining fragmented workflows, which can limit the scale and impact of their numerical simulations.
In contrast, mature, fully supported commercial simulation software that integrates seamlessly with practical workflows and has been extensively validated in semiconductor manufacturing tasks offers a clear advantage. By leveraging such platforms, researchers are freed to focus on qubit device innovation rather than spending time on infrastructure challenges. This also enables the extension of numerical simulation to more complex and larger-scale problems, supporting rapid iteration and deeper insight.
To advance quantum computing from research to commercial reality, the quantum ecosystem needs reliable, comprehensive numerical simulation software — just as the semiconductor industry relies on established solutions from Synopsys today. Robust, scalable simulation platforms are essential not only for individual projects but for the growth and maturation of the entire quantum computing field.
“Successful repeatable tiles with superconducting qubits need to minimize crosstalk between wires, and candidate designs are easier to compare by numerical simulation than in lab experiments,” said Qolab CTO John Martinis, who was recently recognized by the Royal Swedish Academy of Sciences for his groundbreaking work in quantum mechanics. “As part of our collaboration, Synopsys enhanced electromagnetic simulations to handle increasingly complex microwave circuit layouts operating near 0K temperature. Simulating future layouts optimized for quantum error-correcting codes will require scaling up performance using advanced numerical methods, machine learning, and multi-GPU clusters.”
Overcoming BEOL Patterning Challenges at the 3-NM Node
Courtesy: Lam Research
- Controlling critical process parameters is key to managing edge placement error (EPE)
- Simulations revealed that only 9.75% of runs met the minimum line CD success criteria
As the complementary metal-oxide semiconductor (CMOS) area shrinks 50% from one node to the next, interconnect critical dimensions (CD) and pitch (or spacing) face increasingly tight requirements.
At the N3 node, where metal pitch dimensions must be at or below 18 nm, one of the main interconnect challenges is securing sufficient process margins for CD and edge placement error (EPE).
Achieving the fine gratings these CDs demand requires multi-patterning approaches, such as self-aligned double/quadruple/octuple patterning (SADP/SAQP/SAOP) and multiple litho-etch (LE) patterning, combined with 193i or even EUV lithography.
SEMulator3D virtual fabrication, part of Semiverse Solutions, was used in a Design of Experiments (DOE) to evaluate EPE and demonstrate the ability to successfully pattern an advanced 18- and 16-nm metal pitch (MP) BEOL.
Using a process model, we explored the impact of process variations and patterning sensitivities on EPE variability. The simulation identified significant process parameters and corresponding process windows that need to be controlled for successful EPE control.
Simulation of 18-nm BEOL Process
A self-aligned litho-etch litho-etch (SALELE) scheme with self-aligned blocks was proposed for an 18-nm MP BEOL process flow used at the N3 node. The advantage of this scheme is that no dummy metal is used in the BEOL, which helps reduce parasitic capacitance.

Figure 2 highlights the selected process parameters and corresponding range values used in the DOE simulation. Multiple process parameters that could affect the dimensions of the lines and blocks were varied during simulation using a uniform Monte Carlo distribution.
Figure 2: Process parameters varied in the DOE (e.g., BL1 litho bias, LE2 overlay) with their ranges of variation; simulation outputs included EPE measurements and minimum line CD calculations.

In this study, three challenging EPE measurements were evaluated:
- EPE1: EPE calculation of the gap between the printed silicon mandrels for litho etch 1 (LE1) and printed silicon oxycarbide lines for litho etch 2 (LE2)
- EPE2: EPE calculation of the gap between the printed BL1 (block 1) mask after BL1 etch and printed LE1 lines
- EPE3: EPE calculation of the gap between the printed BL2 (block 2) mask after BL2 etch and the printed LE2 lines
Monte Carlo simulations were performed in SEMulator3D across 800 runs using a uniform distribution. For each simulation event, the EPE was extracted using virtual measurements. Process sensitivity analysis was performed in the simulation to investigate the impact of process variations (Figure 2) on the EPE challenges.
The most important process parameters that could impact line dimensions and EPE were automatically identified using the SEMulator3D® Analytics module. Process sensitivity analysis was performed to explore the impact of the most significant parameters on each EPE challenge.
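A toy version of this Monte Carlo flow can be sketched in Python. The linear linewidth model and parameter ranges below are hypothetical stand-ins for the study’s calibrated process model; they only illustrate the sample-uniformly-and-count-the-window methodology:

```python
import random

# Toy Monte Carlo in the spirit of the SEMulator3D DOE described above.
# The response model and ranges are hypothetical, not the study's.
random.seed(0)
NOMINAL_CD = 9.0   # nm, target line CD
RUNS = 800         # matches the number of simulation events in the study

def min_line_cd(spacer_delta, le1_bias, bl1_bias):
    # Hypothetical linear response: spacer variation eats into the line
    # from both sides; litho biases shift the printed edges.
    return NOMINAL_CD - 2.0 * spacer_delta + le1_bias - bl1_bias

in_window = 0
for _ in range(RUNS):
    # Uniform distributions mimic the DOE's Monte Carlo sampling.
    spacer = random.uniform(-1.0, 1.0)  # nm spacer-thickness variation
    le1 = random.uniform(-1.5, 1.5)     # nm LE1 litho bias
    bl1 = random.uniform(-1.5, 1.5)     # nm BL1 litho bias
    if 8.0 < min_line_cd(spacer, le1, bl1) < 10.0:
        in_window += 1

print(f"{100 * in_window / RUNS:.2f}% of runs met the 8-10 nm criterion")
```

A calibrated process model would replace the linear response here, but the workflow — sample process parameters, virtually measure each run, count the fraction inside the success window — is the same.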
DOE Results
Figure 3 displays an EPE sensitivity analysis plot of EPE1: LE2 on LE1. Spacer thickness defines the gap between the LE2 and LE1 line segments. EPE1 is significantly dependent on spacer thickness variation and less sensitive to LE1 and LE2 litho bias variations.
Figure 3: EPE1 sensitivity analysis plots and corresponding virtual structure views.

The same EPE sensitivity analysis methodology used in Figure 3 was applied to EPE2 and EPE3. The process sensitivity analysis plots of EPE allowed us to identify acceptable process windows for all three (EPE1, EPE2, and EPE3).
Figure 4 summarizes EPE process windows that were extracted from our process model for the significant process parameters identified earlier.
Figure 4: EPE sensitivity analysis with extracted process windows for the significant parameters (spacer thickness, LE1 litho bias, and BL1 litho bias).

Along with evaluating the process windows needed to avoid the EPE challenges, minimum line CD was virtually measured for each simulated run. Figure 5 depicts the minimum line CD process window needed to meet the line CD success criteria (8 nm < CD < 10 nm).
Our simulation results indicated that only 9.75% of runs displayed a minimum line CD between 8 and 10 nm. Thus, in addition to the EPE challenge, minimum line CD control is critical and should be considered part of the process window definition.
Figure 5: Scatter plot of simulated runs, showing minimum line CD concentrated below 10 nm.
This study demonstrates that virtual fabrication is a powerful tool for identifying process windows and margins essential for next-generation interconnect technologies. By simulating and analyzing critical process parameters, engineers can proactively address yield-limiting failures and optimize both minimum line CD and EPE control. These insights are vital for advancing semiconductor manufacturing at the 3-nm node and beyond.
Driving Innovation with High-Performance but Low-Power Multi-Core MCUs
Courtesy: Renesas
Over the last decade, the number of connected Internet of Things (IoT) devices has grown exponentially across markets, ranging from medical devices and smart homes to industrial automation. These smart, connected endpoints implement complex and compute-intensive applications; run real-time control loops; analyze large amounts of data; integrate sophisticated graphics and user interfaces; and run advanced protocols to communicate with the cloud. Machine learning has enabled voice and vision AI capabilities at the edge, empowering devices to make intelligent decisions and trigger actions without cloud intervention. This reduces latency and power consumption, saves bandwidth, and increases privacy. Advanced security has also become an integral part of these systems, as users demand advanced cryptographic accelerators on-chip, robust system isolation, and security for data at rest as well as in motion. Applications also require lower power and fast wake-up times to prolong battery life and reduce overall system power consumption.
These new and emerging requirements drive the need for greater processing power in embedded systems. Fast, reliable, and low-power non-volatile storage of code and data is also needed for the implementation of these sophisticated applications. Traditional systems used either powerful MPUs or multiple MCUs to perform these functions. Recently, we are seeing the emergence of high-performance multi-core MCUs with performance approaching that of MPUs – some with hardware AI accelerators – that can handle all these functions with a single chip. These high-performance MCUs provide a powerful alternative to MPUs for applications where power consumption and cost are critical concerns. MCUs have several advantages over MPUs that make them particularly well-suited for these power-sensitive IoT applications – better real-time performance, integration of power management and non-volatile memory, single voltage rail, lower power, typically lower cost, and ease of use.
The need for higher performance and functionality while keeping power and costs low has driven the movement towards the use of finer process technology nodes like 28nm or 22nm. This movement has driven innovations in embedded memory technology, since embedded flash does not scale well below 40nm. Alternative memory technologies like Magnetoresistive Random Access Memory (MRAM) are being used to replace embedded flash for non-volatile storage. A number of silicon manufacturers like TSMC now provide embedded MRAM that can be integrated into an MCU.
Performance and System Design Flexibility with Dual-Core MCUs
The single-core RA8M2 and RA8D2 MCUs feature the Cortex-M85 core running at up to 1 GHz, with Helium vector extensions to enable compute-intensive applications. The dual-core option adds a second Cortex-M33 core to allow efficient system partitioning and lower-power wake-up and operation. Either core can be configured as the master or slave CPU and can be independently powered. Inter-processor communication is handled through flags, interrupts, and message FIFOs.
Dual-core MCUs significantly enhance the performance of IoT applications by enabling greater processing power, efficient task partitioning between the two cores, and improved real-time performance. System tasks can be assigned to different cores, leading to concurrent and more efficient system operation. One core can handle real-time control tasks, sensor interfaces, and communications, while the other, typically a higher-performance core, can handle the compute-intensive tasks, such as execution of neural network model operators, pre-processing of audio or image data, graphics, or motor control. This partitioning between the two cores can enable faster execution and overall improvement in system performance.
With optimal system task partitioning, dual cores can enable lower power consumption. Each core can be independently controlled, allowing them to run at frequencies optimal for the assigned tasks. The lower-performance Cortex-M33 core can be the housekeeping CPU, which enables low-power wake-up and operation. The higher-performance core can stay in low-power mode most of the time to be woken up only when high-performance processing is required. This segregation of tasks with the appropriate use of MCU resources results in overall lower system power consumption.
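The power argument can be illustrated with a simple duty-cycle model. All power figures below are hypothetical illustrations, not RA8 datasheet values:

```python
# Duty-cycle power model for the big.LITTLE-style partitioning above.
# All power numbers are hypothetical, not Renesas specifications.
P_BIG_ACTIVE_MW = 150.0   # high-performance (Cortex-M85-class) core active
P_SMALL_ACTIVE_MW = 20.0  # housekeeping (Cortex-M33-class) core active
P_BIG_SLEEP_MW = 0.5      # big core parked in a low-power mode

def avg_power_mw(big_core_duty):
    """Average power when the big core is awake big_core_duty of the time."""
    big = big_core_duty * P_BIG_ACTIVE_MW + (1 - big_core_duty) * P_BIG_SLEEP_MW
    return big + P_SMALL_ACTIVE_MW

always_on = avg_power_mw(1.0)     # big core never sleeps
partitioned = avg_power_mw(0.05)  # big core woken for bursts 5% of the time
print(f"always-on: {always_on:.1f} mW, partitioned: {partitioned:.1f} mW")
```

The model makes the qualitative point in the text quantitative: keeping the high-performance core asleep except for bursts cuts average power by a large factor, since the housekeeping core dominates the remaining budget.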
Dual cores also enable more robust system design. The high-compute tasks can be separated from the more time-critical and safety-critical tasks and real-time control loops, thus allowing a more robust system design. Any interruption on one core will not have an impact on the tasks of the second core. Dual cores can also enable functional safety with core isolation, redundancy, and optimal resource usage by safety-critical and non-critical tasks.
Performance and Power Efficiency with Embedded MRAM
The move to lower process geometries has opened the doors to newer memory alternatives to flash, such as MRAM, which is making its entry into the embedded space as embedded MRAM (eMRAM) and is now included on the RA8M2 and D2 MCUs.
As MCU manufacturers scale to higher performance and functionality with finer process geometries, eMRAM is increasingly being used to replace embedded flash as the non-volatile memory of choice. One key factor making MRAM viable on MCUs is its ability to retain data through solder reflow, which allows devices to be pre-programmed before being soldered to the board. Another advantage is that, thanks to its byte addressability, MRAM can replace not only embedded flash (for code) but also SRAM (for data storage), except where retaining data across power cycles raises security concerns. Embedded MRAM is now supported by major silicon foundries, allowing MCU manufacturers to incorporate it into new chip designs with minimal cost overhead.
Special care must be taken to prevent MRAM from being corrupted by external magnetic fields in close proximity to the device. MCU manufacturers typically specify magnetic immunity in idle, power-off, and operational modes. To avoid corrupting MRAM bits, designers must provide enough spacing between an external magnetic field source and the MRAM-based device that this immunity specification is never exceeded. Shielding the MRAM with specialized materials is another option for protecting it from a strong magnetic field.
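As a rough back-of-envelope check of that spacing requirement, the sketch below assumes the external source behaves like a magnetic dipole whose far field falls off with the cube of distance. The reference field strength and the immunity figure are hypothetical, not values from any device datasheet.

```python
def field_at_distance_mT(b_ref_mT, r_ref_mm, r_mm):
    """Dipole-like far-field approximation: B scales as 1/r^3
    relative to a measured reference point."""
    return b_ref_mT * (r_ref_mm / r_mm) ** 3

def min_spacing_mm(b_ref_mT, r_ref_mm, immunity_mT):
    """Smallest spacing at which the field drops below the
    device's specified magnetic immunity."""
    return r_ref_mm * (b_ref_mT / immunity_mT) ** (1.0 / 3.0)

# Hypothetical numbers: a magnet measuring 100 mT at 5 mm, and a
# device immunity specification of 10 mT in operational mode.
spacing = min_spacing_mm(100.0, 5.0, 10.0)
print(f"keep the magnet at least {spacing:.1f} mm away")
```

A real layout review would of course use the manufacturer's actual immunity figures and the measured field of the specific magnetic component.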
Key Advantages of MRAM Technology:
- True random access non-volatile memory with higher endurance and retention as compared to flash
- Faster write speeds as compared to flash, with no erase needed
- Byte-addressable and simpler to access than flash, similar to SRAM, improving both performance and power
- No leakage when in standby, making it much lower power than SRAM
- Non-destructive reads that need no refresh, making it an alternative to DRAM
- Magnetic storage layers that are not susceptible to radiation-induced upsets
- Fewer mask layers are needed for MRAM as compared to flash, thus lowering costs
- Scales well for lower process technology nodes and, as such, provides a viable alternative to embedded flash
MRAM Applications
MRAM’s low power and non-volatile characteristics make it ideal for various IoT applications as a unified memory, replacing embedded flash, SRAM, or battery-backed SRAM. It can also replace DRAM in data logging applications that require high-density, low-power, non-volatile data storage. Its immunity to radiation makes it ideal for medical applications in clinical settings and for space applications. MRAM can also be used in industrial control and robotics, data center, and smart grid applications for real-time data storage, fast data retrieval, and replacement of battery-backed SRAM.
The edge AI market using machine learning should not be forgotten: here, MRAM can store the AI neural network models and weights, which are retained across power cycles and do not need to be reloaded before each execution. With its fast reads and writes, low power, and high endurance, MRAM is particularly well suited to these applications requiring high processing performance.
In summary, the new and emerging use cases drive the need for high-performance and specialized feature sets on MCUs. Powerful CPU cores, multi-core architecture, new memory technologies like MRAM, and rich peripherals are integrated on RA8M2 and RA8D2 MCUs to support the needs of these cutting-edge applications.
The post Driving Innovation with High-Performance but Low-Power Multi-Core MCUs appeared first on ELE Times.
Evolving from IoT to edge AI system development
Courtesy: Avnet
The advancement of machine learning (ML), along with continued breakthroughs in artificial intelligence (AI), is funnelling billions of dollars into cutting-edge research and advanced computing infrastructure. In this high-stakes atmosphere, embedded developers face a new challenge in the evolution of Internet of Things (IoT) devices: teams must now consider how to implement ML and edge AI in IoT systems.
Connected devices at the edge of the network can be divided into two broad categories: gateways and nodes. Gateways (including routers) have significant computing and memory resources, wired power and reliable high-bandwidth connectivity. Edge nodes are much smaller devices with constrained resources. They include smartphones, health wearables, and environmental monitors. These nodes have limited (often battery) power and may have limited or intermittent connectivity. Nodes are often the devices that provide real-time responses to local stimuli.
Most edge AI and ML applications will fall into these two categories. Gateways have the capacity to become more complex, enabled by access to resources such as wired power. What we currently call IoT nodes will also evolve functionally, but are less likely to benefit from more resources. This evolution is inevitable, but it has clear consequences for performance, energy consumption, and, fundamentally, the design.
Is edge AI development more challenging than IoT?
There are parallels between developing IoT devices and adopting edge AI or ML. Both involve devices that will be installed in arbitrary locations and unattended for long periods of time. Data security and the cost of exposure are already key considerations in IoT when handling sensitive information. Edge gateways and nodes retain more sensitive data on-device, but that doesn’t eliminate the need for robust security. Beyond their size and resources, there are significant differences between the two paradigms, as outlined in the table.
Comparing IoT and edge AI applications
| Aspect | IoT Embedded System | Edge AI Embedded System |
| --- | --- | --- |
| Data Processing | Cloud-based | Local/on-device |
| Intelligence Location | Centralized (cloud/server) | Decentralized (embedded device) |
| Latency | High (depends on network) | Very low/real-time |
| Security Risks | Data exposed in transit | On-device privacy, device-level risks |
| Application Scope | Large networks, basic data | Local analytics, complex inference |
| Hardware Optimization | For connectivity, sensing | For AI model runtime, acceleration |
What’s at stake in edge AI?
IoT devices are data-centric, providing local control along with actionable data that is typically passed to a cloud application. AI and ML at the edge replace cloud dependency with local inferencing. Any AI/ML application starts with sourcing and validating enough data to train a model that can infer meaningful and useful insights.
Once the data is gathered and the source model has been trained, its operation must be optimized through processes such as model pruning, distillation and quantization, to drive simpler inference algorithms on more limited edge AI hardware.
Every time a new set of model parameters is passed to the edge AI device, the investment in training is at risk. Every time an edge node delivers a result based on local inference, it reveals information about the local model. Edge AI devices are also open to physical tampering, as well as adversarial attacks intended to corrupt their operation or even poison the inferencing models they rely upon.
Adaptive models and federated learning
While IoT devices can be maintained using over-the-air (OTA) updates, edge AI systems use adaptive models to adjust their inferencing algorithms and decision-making processes locally in response to external stimuli. This adaptation is valuable because it helps systems deliver resilient performance in evolving situations, such as predictive maintenance for manufacturing processes or patient monitoring in healthcare devices.
Adaptive models also enable edge AI devices to evolve functionally without exposing input data or output decisions. Over time, a source model implemented on an edge AI device should become steadily more valuable as it adapts to real-world stimuli.
This value becomes more apparent when adaptive models are used as part of a federated learning strategy. While edge AI devices using adaptive models keep their learning to themselves, under federated learning strategies, they share.
Each edge AI device adapts its model to better serve its local situation and then, periodically, sends its updated model parameters to the cloud. The submitted parameters are averaged and used to update the source model to reflect this experience. The upside is a source model that is regularly enhanced by field experience from multiple contexts. The challenge is that increasingly valuable model parameters must traverse a network, making them vulnerable to theft or misuse.
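The parameter-averaging step at the heart of this strategy can be sketched in a few lines. This is a minimal illustration of federated averaging over plain Python lists with invented parameter values, not a production implementation.

```python
def federated_average(parameter_sets, weights=None):
    """Average model parameters submitted by several edge devices.
    `parameter_sets` is a list of equal-length parameter vectors;
    `weights` optionally scales each device's contribution (for
    example, by the number of local samples it trained on)."""
    n = len(parameter_sets)
    if weights is None:
        weights = [1.0 / n] * n
    total = sum(weights)
    dim = len(parameter_sets[0])
    return [
        sum(w * params[i] for w, params in zip(weights, parameter_sets)) / total
        for i in range(dim)
    ]

# Three devices report locally adapted parameters for a tiny model;
# the cloud-side update averages them into a new source model.
updated = federated_average([[0.2, 1.0], [0.4, 0.8], [0.6, 1.2]])
print(updated)
```

In a real deployment each parameter set would be a full tensor of model weights, and the submission itself is exactly the network traversal whose security the paragraph above flags as the challenge.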
Balancing security and model performance
Security measures, such as encryption, help protect model parameters but may be at odds with the design goal of creating a low-resource, highly responsive edge AI device. This challenge can be made more difficult by some strategies used to compress source models to run on low-resource devices.
For example, quantization works by reducing the resolution with which model parameters are expressed, say from 32-bit floating-point representations during training to 8- or even 4-bit integers in the compressed version. Pruning disregards nodes whose parameters have a negligible influence on the rest of the model. Distillation implements “teacher–pupil” learning, in which the embedded model learns by relating the inputs and outputs of the source model.
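To make the quantization step concrete, here is a minimal sketch of symmetric linear quantization to int8. The weight values are invented for illustration, and real toolchains add per-channel scales, calibration, and saturation handling.

```python
def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8.
    Returns the quantized values and the scale needed to recover
    approximate floats (w is approximately q * scale)."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.81, -0.32, 0.05, -1.27]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
print(q, [round(a, 3) for a in approx])
```

The round trip shows both effects discussed above: the model shrinks to a quarter of its float32 size, and every parameter is now expressed with less resolution than the original.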
Each of these techniques simplifies the model and makes it more practical to run on resource-constrained hardware, but at the cost of protection against security challenges. For example, quantized or pruned models may offer less redundancy than the source model, making them more efficient but leaving them less resilient to adversarial attacks.
The implementation of security features, such as encrypted parameter storage and communication, or periodic re-authentication, can create processing overheads that undercut the gains achieved by model compression. As the value at risk in edge AI devices rises, embedded engineers will need to weave security concerns more deeply into their development processes. For example, pruning strategies may have to be updated to include “adversarial pruning” in which different parts of a model are removed to see what can be discarded without making the remainder more vulnerable.
Keeping edge AI systems up to date
Embedded edge AI developers will need to be extremely flexible to accommodate rapidly changing ML algorithms, rapidly evolving AI processor options, and the challenge of making the hardware and software work together.
On the hardware side, multiple semiconductor start-ups have been funded to develop edge AI chips. Although the architectures differ, the design goals are often similar:
- Minimize data movement during computation to save power
- Implement extremely efficient arrays of multipliers to do the matrix math involved in ML inference
- Wrap the inferencing engine up in an appropriate set of peripherals for the end application
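The matrix math those multiplier arrays accelerate reduces to integer multiply-accumulate loops like the one sketched below: a toy fully connected layer with invented weights and inputs, not any vendor's API.

```python
def dense_int8(inputs, weights, bias, scale):
    """One fully connected layer evaluated with integer
    multiply-accumulate, the core operation that edge AI chips
    implement in hardware multiplier arrays. Accumulation stays
    in (arbitrarily wide) integers; `scale` converts the result
    back to the real-valued domain."""
    outputs = []
    for row, b in zip(weights, bias):
        acc = sum(w * x for w, x in zip(row, inputs))  # int MAC loop
        outputs.append((acc + b) * scale)
    return outputs

# Two neurons over three int8 inputs (illustrative values only).
y = dense_int8([10, -3, 7], [[1, 2, 3], [-1, 0, 4]], [5, -2], 0.01)
print(y)
```

Keeping data movement minimal, as the first design goal above states, means streaming `inputs` past stationary `weights` (or vice versa) so each operand is fetched from memory as few times as possible.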
On the algorithmic side, although ML inferencing involves many standard operations, such as calculating weights and biases in layers or handling back-propagation, the exact way these operations are deployed is evolving rapidly.
Not every combination of ML operations will run well on every processor architecture, especially if the toolchains provided for new chips are immature.
The verification challenge
Design verification may also be a challenge, especially in situations where the implementation of security forces changes to the inferencing models, to the point that they must be retrained, retested and revalidated.
This may be especially important in highly regulated sectors such as healthcare and automotive. Engineers may have to rely heavily on hardware-in-the-loop testing to verify that real-world performance metrics such as latency, power, accuracy, and resilience meet the targets formulated during development.
Embedded system designers must be ready to adapt rapidly. Edge AI algorithms, hardware strategies and security issues threatening efficacy are all evolving simultaneously. For edge AI devices, the well-understood challenges of IoT device design will be overlaid with concerns about the value of AI data at rest and on the move.
The impact of security features on inferencing performance must be balanced with the rate of progress in the field. Embedded AI system designers should mitigate these challenges by adhering to standards, choosing established toolchains where possible, and picking well-proven tools to handle issues such as scalable deployment and lifecycle management.
From the grid to the gate: Powering the third energy revolution
Courtesy: Texas Instruments
A significant change is unfolding before us. In the 18th and 19th centuries, Great Britain used coal to power the Industrial Revolution, propelling the transition to machine manufacturing: the first energy revolution. Next came the second energy revolution in the United States, where the oil boom of the 20th century fuelled unprecedented advancements in vehicles and electricity.
Today, the rapid growth of artificial intelligence (AI) is ushering in the third energy revolution. Focus is shifting to the generation, conversion, and distribution of the energy needed to power the massive amounts of data we’re consuming. The biggest question now is how to generate the energy required to power data centers and how to move it efficiently down the power path, from the grid to the gates of the processors. Answering it is quickly becoming the most exciting challenge of our time.
Changing distribution levels
As the computing power required by AI data centers scales, data center architectures are undergoing a major change. Typically, servers stack on top of each other in data center computing racks, with power-supply units (PSUs) at the bottom. Alternating current (AC) is distributed to every server rack, where a PSU converts it to 48V and then down to 12V. Point-of-load converters in the server then take it down to the processor gate core voltages.
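The cumulative cost of this conversion chain can be sketched as a simple efficiency cascade. The per-stage efficiencies below are illustrative assumptions, not measurements of any real PSU or point-of-load converter.

```python
def delivered_power_w(p_in_w, stage_efficiencies):
    """Power surviving a chain of conversion stages, each with its
    own efficiency (e.g. AC to 48 V, 48 V to 12 V, point-of-load)."""
    p = p_in_w
    for eta in stage_efficiencies:
        p *= eta
    return p

# Hypothetical efficiencies for the three stages described above.
p_out = delivered_power_w(1000.0, [0.96, 0.97, 0.92])
print(f"{p_out:.1f} W of 1000 W reaches the processor gates")
```

Because the stage losses multiply, even a one-point efficiency gain at any stage scales directly with the enormous power levels AI racks now draw, which is why the architecture itself is being reworked rather than just the individual converters.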
With the advent of generative AI and the subsequent addition of more servers to process information, racks now need significantly more power. For example, entering a question into a large language model (LLM) requires 10 times the power of entering the same question into a search engine. These increased power levels are pushing power architectures to their limits.

Meeting power demands with solar energy
As data centers require more power to support growing and evolving workloads, renewable energy might just be the answer. Solar is becoming an increasingly viable and affordable energy source in many parts of the world. At the same time, data center customers are committing to 100% renewable energy within their companies, and this commitment must be reflected in the data centers they use. Solar not only helps data center customers meet their sustainability goals, but also offers a fast way to deploy more energy generation.
Semiconductors are at the center of the solar power conversion process, making these technologies key to meeting data center power demands. Efficient power conversion and accurate sensing technologies are crucial to making solar a reliable source of energy for the grid.
Energy storage to maximize solar output
Even though data centers operate every hour of every day, solar energy is only available during daylight hours. So how will solar energy help power data centers when the sun isn’t shining? That’s where battery energy storage systems (ESS) become a critical piece of the puzzle, ensuring energy is available and usable whenever it is needed.
Batteries are already an essential component of the grid, effectively storing and releasing large amounts of electricity throughout the grid, and now they’re being used specifically for data centers. Battery management systems within an ESS directly monitor battery cells and assess the amount of energy within, measuring the voltage and determining the state of charge and state of health of the battery to help ensure there is the necessary power available.
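The charge-integration step at the heart of that state-of-charge estimate can be sketched as coulomb counting. The cell capacity and current below are invented for illustration, and real battery management systems fuse this integration with voltage- and temperature-based corrections.

```python
def update_soc(soc, current_a, dt_s, capacity_ah):
    """One coulomb-counting step of a state-of-charge estimate:
    integrate current over a time slice (positive = charging),
    normalize by cell capacity, and clamp to [0, 1]."""
    delta_ah = current_a * dt_s / 3600.0
    soc += delta_ah / capacity_ah
    return min(1.0, max(0.0, soc))

# Discharge a hypothetical 100 Ah cell at a constant 50 A for one
# hour, one-second steps, starting from full charge.
soc = 1.0
for _ in range(3600):
    soc = update_soc(soc, -50.0, 1.0, 100.0)
print(f"state of charge: {soc:.2%}")
```

Drawing half the rated capacity over the hour leaves the estimate at roughly 50%, which is the figure the BMS would report upstream when the ESS is asked how much energy remains.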
In the age of artificial intelligence, data is the new currency, and it’s more valuable than ever. Something must power, and sustain, it. We used coal to kickstart factories and oil to advance automobiles; now, renewable energy can help us address the growing power needs of data centers in the future.



