ELE Times


Microchip Unveils Industry’s Highest Performance 64-bit HPSC Microprocessor (MPU) Family for a New Era of Autonomous Space Computing

Fri, 07/12/2024 - 09:59

New technology ecosystem is also launched as Microchip collaborates with over a dozen system and software partners to accelerate PIC64-HPSC adoption

The world has changed dramatically in the two decades since the debut of what was then considered a trail-blazing space-grade processor used in NASA missions such as the comet-chasing Deep Impact spacecraft and the Mars Curiosity rover. A World Economic Forum report estimates that the space hardware and services industry will grow at a 7% CAGR, from $330 billion in 2023 to $755 billion by 2035. To support a diverse and growing global space market with a rapidly expanding variety of computational needs, including more autonomous applications, Microchip Technology has launched the first devices in its planned family of PIC64 High-Performance Spaceflight Computing (PIC64-HPSC) microprocessors (MPUs).

Unlike previous spaceflight computing solutions, the radiation- and fault-tolerant PIC64-HPSC MPUs, which Microchip is delivering to NASA and the broader defense and commercial aerospace industry, integrate widely adopted RISC-V CPUs augmented with vector-processing instruction extensions to support Artificial Intelligence/Machine Learning (AI/ML) applications. The MPUs also offer a suite of capabilities and industry-standard interfaces and protocols not previously available for space applications. A growing ecosystem of partners is being assembled to expedite the development of integrated system-level solutions. This ecosystem features Single-Board Computers (SBCs), space-grade companion components and a network of open-source and commercial software partners.

“This is a giant leap forward in the advancement and modernization of the space avionics and payload technology ecosystem,” said Maher Fahmi, corporate vice president, Microchip Technology’s communications business unit. “The PIC64-HPSC family is a testament to Microchip’s longstanding spaceflight heritage and our commitment to providing solutions built on industry-leading technologies and a total systems approach to accelerate our customers’ development process.”

The Radiation-Hardened (RH) PIC64-HPSC RH is designed to give autonomous missions the local processing power to execute real-time tasks such as rover hazard avoidance on the Moon’s surface, while also enabling long-duration, deep-space missions like Mars expeditions requiring extremely low-power consumption while withstanding harsh space conditions. For the commercial space sector, the Radiation-Tolerant (RT) PIC64-HPSC RT is designed to meet the needs of Low Earth Orbit (LEO) constellations where system providers must prioritize low cost over longevity, while also providing the high fault tolerance that is vital for round-the-clock service reliability and the cybersecurity of space assets.

PIC64-HPSC MPUs offer a variety of capabilities, many of which were not previously available for space computing applications, including:
  • Space-grade 64-bit MPU architecture: Includes eight SiFive RISC-V X280 64-bit CPU cores supporting virtualization and real-time operation, with vector extensions that can deliver up to 2 TOPS (int8) or 1 TFLOPS (bfloat16) of vector performance for implementing AI/ML processing for autonomous missions.
  • High-speed network connectivity: Includes a 240 Gbps Time Sensitive Networking (TSN) Ethernet switch for 10 GbE connectivity. Also supports scalable and extensible PCIe Gen 3 and Compute Express Link (CXL) 2.0 with x4 or x8 configurations and includes RMAP-compatible SpaceWire ports with internal routers.
  • Low-latency data transfers: Includes Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCEv2) hardware accelerators to facilitate low-latency data transfers from remote sensors without burdening compute performance, which maximizes compute capabilities by bringing data close to the CPU.
  • Platform-level defense-grade security: Implements defense-in-depth security with support for post-quantum cryptography and anti-tamper features.
  • High fault-tolerance capabilities: Supports Dual-Core Lockstep (DCLS) operation, WorldGuard hardware architecture for end-to-end partitioning and isolation, and an on-board system controller for fault monitoring and mitigation.
  • Flexible power tuning: Includes dynamic controls to balance the computational demands required by the multiple phases of space missions with tailored activation of functions and interfaces.

“Microchip’s PIC64-HPSC family replaces the purpose-built, obsolescence-prone solutions of the past with a high-performance and scalable space-grade compute processor platform supported by the company’s vibrant and growing development ecosystem,” said Kevin Kinsella, Architect – System Security Engineering with Northrop Grumman. “This innovative and forward-looking architecture integrates the best of the past 40-plus years of processing technology advances. By uniquely addressing the three critical areas of reliability, safety and security, we fully expect the PIC64-HPSC to see widespread adoption in air, land and sea applications.”

In 2022, NASA selected Microchip to develop a High-Performance Spaceflight Computing processor that could provide at least 100 times the computational capacity of current spaceflight computers. This key capability would advance future space missions, from planetary exploration to lunar and Mars surface missions. The PIC64-HPSC is the result of that partnership. Representatives from NASA, Microchip and industry leaders like Northrop Grumman will share insights about the HPSC technology and ecosystem at the IEEE Space Compute Conference 2024, July 15–19 in Mountain View, California:

  • Conference Keynote – Dr. Prasun Desai, Deputy Associate Administrator, Space Technology Mission Directorate, NASA: Dr. Desai will speak about the agency’s strategy for advanced computing and investment in HPSC technology.
  • HPSC Workshop, “HPSC: Redefine What’s Possible for the Future of Space Computing”: Prasun Desai will join Microchip and JPL speakers to provide an overview of the HPSC program and platform. Invited aerospace industry partner Kevin Kinsella from Northrop Grumman will also share insights on the significance of HPSC for spaceflight computing. A Q&A session will follow.

Microchip’s inaugural PIC64-HPSC MPUs were launched in tandem with the company’s PIC64GX MPUs that enable intelligent edge designs in the industrial, automotive, communications, IoT, aerospace and defense segments. With the launch of its PIC64GX MPU family, Microchip has become the only embedded solutions provider actively developing a full spectrum of 8-, 16-, 32- and 64-bit solutions.

Microchip has a broad portfolio of solutions designed for the aerospace and defense market including processing with Radiation-Tolerant (RT) and Radiation-Hardened (RH) MCUs, FPGAs and Ethernet PHYs, power devices, RF products, timing, as well as discrete components from bare die to system modules. Additionally, Microchip offers a wide range of components on the Quality Products List (QPL) to better serve its customers.

Comprehensive Ecosystem

Microchip’s new PIC64-HPSC MPUs will be supported by a comprehensive space-grade ecosystem and innovation engine that encompasses flight-capable, industry-standard SBCs, a community of open-source and commercial software partners and the implementation of common commercial standards to help streamline and accelerate the development of system-level integrated solutions. Early members of the ecosystem include SiFive, Moog, IDEAS-TEK, Ibeos, 3D PLUS, Micropac, Wind River, Linux Foundation, RTEMS, Xen, Lauterbach, Entrust and many more. For more information, visit Microchip’s PIC64-HPSC MPU ecosystem partners webpage.

Microchip will also offer a comprehensive PIC64-HPSC evaluation platform that incorporates the MPU, an expansion card and a variety of peripheral daughter cards.

Pricing and Availability

PIC64-HPSC samples will be available to Microchip’s early access partners in 2025. For additional information, please contact a Microchip sales representative.


Transforming Devices into Smart Innovations: NeoMesh Sensor Modules Power Endrich Bauelemente’s Smart Fridge Concept

Thu, 07/11/2024 - 13:45

According to a recent Statista report, global spending on Internet of Things (IoT) products reached $805 billion in 2023, with expectations for continued growth in the coming years. For several years, Endrich Bauelemente GmbH, the German distributor for NeoCortec, has been engaged in developing smart IoT-related products, such as a concept for a smart fridge. They have incorporated NeoCortec’s ultra-low-power, scalable NeoMesh wireless sensor modules into their solutions.

One major advantage of a mesh solution like NeoMesh technology is that it requires only one central gateway to collect and transmit all sensor data to the Cloud. Connecting different parts of the network does not require repeaters or additional gateways. Zoltan Kiss, Head of the R&D Department at Endrich Bauelemente, explains, “By integrating NeoMesh sensor modules into our solution, we can easily capture necessary data and wirelessly transmit it to our own IoT ecosystem or any other suitable cloud service.” Kiss adds, “With its extremely low power consumption and ability to establish scalable local wireless networks, NeoMesh is the ideal product for our smart IoT solution.”

In partnership with NeoCortec, Endrich Bauelemente GmbH has been actively developing a smart fridge concept to enable manufacturers to incorporate intelligent features, allowing end users to monitor various parameters of their refrigerators via a mobile application.

The NeoMesh wireless sensor modules facilitate seamless and efficient integration of multiple IoT functionalities. These functionalities include monitoring temperature and humidity inside the refrigerator, as well as tracking interior light brightness and door status. Data collection on the frequency and duration of door openings can provide valuable insights for marketing or commercial purposes, especially in settings like gas stations or retail stores. One of Endrich’s initial clients, Audax Electronics in Brazil, has successfully integrated NeoMesh modules into the LED lighting units of their fridges to monitor temperature and door status. These LED units can be installed in both smart-capable and conventional refrigerators.

Key requirements for IoT sensor modules in smart devices include compact size, easy installation, independence from electrical and wired communication networks, and straightforward commissioning. Battery-powered, wireless communication technology like NeoMesh, with all necessary sensors integrated into the module, allows for installation without a specialist. “This user-friendliness is what makes our NeoMesh technology so appealing,” comments Thomas Steen Halkier, CEO of NeoCortec. Sensor measurements are wirelessly transmitted to the appropriate cloud service for data analysis via the self-forming mesh network. This eliminates the need for extensive cabling, reducing infrastructure costs and providing greater flexibility in network deployment, as sensor nodes can be installed anywhere. “Our NeoMesh technology is especially suited for wireless sensor networks where sensors don’t need to transmit data frequently and where the data payload size is small,” adds Halkier.

The NeoMesh wireless communication protocol is supported by a variety of fully integrated, pre-certified, ultra-low-power, bi-directional sensor modules. These modules come in versions for different frequency bands (868 MHz, 915 MHz, and 2.4 GHz), all preloaded with the proprietary NeoCortec NeoMesh protocol stack.


What’s next in on-device generative AI?

Thu, 07/11/2024 - 13:27

Upcoming generative AI trends and Qualcomm Technologies’ role in enabling the next wave of innovation on-device

The generative artificial intelligence (AI) era has begun. Generative AI innovations continue at a rapid pace and are being woven into our daily lives to offer enhanced experiences, improved productivity and new forms of entertainment. So, what comes next? This blog post explores upcoming trends in generative AI, advancements that are enabling generative AI at the edge and a path to humanoid robots. We’ll also illustrate how Qualcomm Technologies’ end-to-end system philosophy is at the forefront of enabling this next wave of innovation on-device.

Upcoming trends and why on-device AI is key

Generative AI capabilities continue to increase in several dimensions.

Transformers, with their ability to scale, have become the de facto architecture for generative AI. An ongoing trend is transformers extending to more modalities, moving beyond text and language to enable new capabilities. We’re seeing this trend in several areas, such as in automotive for multi-camera and light detection and ranging (LiDAR) alignment for bird’s-eye-view, or in wireless communications where global positioning system (GPS), camera and millimeter wave (mmWave) radio frequency (RF) are combined using transformers to improve mmWave beam management.

Another major trend is generative AI capabilities continuing to increase in two broad categories:

  • Modality and use case
  • Capability and key performance indicators (KPIs)

For modality and use cases, we see improvements in voice user interface (UI), large multimodal models (LMMs), agents and video/3D. For capabilities and KPIs, we see improvements such as longer context windows, personalization and higher resolution.

In order for generative AI to reach its full potential, bringing the capabilities of these trends to edge devices is essential for improved latency, pervasive interaction and enhanced privacy. As an example, enabling humanoid robots to interact with their environment and humans in real time requires on-device processing for immediacy and scalability.

Advancements in edge platforms for generative AI

How can we bring more generative AI capabilities to edge devices?

We are taking a holistic approach to advance edge platforms for generative AI through research across multiple vectors.

We aim to optimize generative AI models and efficiently run them on hardware through techniques such as distillation, quantization, speculative decoding, efficient image/video architectures and heterogeneous computing. These techniques can be complementary, which is why it is important to attack the model optimization and efficiency challenge from multiple angles.

Consider quantization for large language models (LLMs). LLMs are generally trained in 16-bit floating point (FP16). We’d like to shrink an LLM for increased performance while maintaining accuracy. For example, reducing an FP16 model to 4-bit integer (INT4) cuts the model size by a factor of four, which also reduces memory bandwidth, storage, latency and power consumption.
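
The arithmetic is easy to check; here is a back-of-envelope sketch (the 7-billion-parameter model size is a hypothetical example, not a specific product):

```python
# Back-of-envelope weight-storage footprint of an LLM at different precisions.
# Illustrative only; real deployments add overhead for activations and KV cache.

def model_size_gb(n_params: float, bits_per_param: float) -> float:
    """Return weight-storage size in gigabytes."""
    return n_params * bits_per_param / 8 / 1e9

n_params = 7e9  # a hypothetical 7B-parameter model
for label, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    print(f"{label}: {model_size_gb(n_params, bits):.1f} GB")
# FP16: 14.0 GB, INT8: 7.0 GB, INT4: 3.5 GB -> INT4 is 4x smaller than FP16
```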

Quantization-aware training with knowledge distillation helps to achieve accurate 4-bit LLMs, but what if we need an even lower number of bits per value? Vector quantization (VQ) can help with this. VQ shrinks models while maintaining desired accuracy. Our VQ method achieves 3.125 bits per value at similar accuracy as INT4 uniform quantization, enabling even bigger models to fit within the dynamic random-access memory (DRAM) constraints of edge devices.
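
To illustrate the general mechanism (this is a minimal sketch of codebook-based weight quantization, not Qualcomm's actual VQ method; the group and codebook sizes below are hypothetical and work out to 3 bits per value):

```python
# Minimal vector-quantization (VQ) sketch: weights are grouped into small
# vectors and each group is stored as the index of its nearest codebook entry.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((1024, 4))   # 1024 groups of 4 weight values
codebook = rng.standard_normal((4096, 4))  # 4096 entries -> 12-bit indices

for _ in range(3):  # a few k-means-style refinement steps
    d2 = ((weights**2).sum(1, keepdims=True)
          - 2 * weights @ codebook.T
          + (codebook**2).sum(1))          # squared distances, shape (1024, 4096)
    assign = d2.argmin(1)                  # nearest codebook entry per group
    for k in np.unique(assign):            # move used entries to their cluster mean
        codebook[k] = weights[assign == k].mean(0)

bits_per_value = np.log2(len(codebook)) / weights.shape[1]
print(bits_per_value)  # 3.0 here; other codebook/group choices give e.g. 3.125
```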

Another example is efficient video architecture. We are developing techniques to make generative video methods efficient for on-device AI. As an example, we optimized FAIRY, a video-to-video generative AI technique. In the first stage of FAIRY, states are extracted from anchor frames. In the second stage, video is edited across the remaining frames. Example optimizations include cross-frame optimization, efficient InstructPix2Pix and image/text guidance conditioning.

A path to humanoid robots

We have expanded our generative AI efforts to study LLMs and their associated use cases, and in particular the incorporation of vision and reasoning for large multimodal models (LMMs). Last year, we presented a fitness coach demo at CVPR 2023, and recently investigated the ability of LMMs to reason across more complex visual problems. In the process, we achieved state-of-the-art results in inferring object positions in the presence of motion and occlusion.

However, open-ended, asynchronous interaction with situated agents is an open challenge. Most solutions for LLMs right now have basic capabilities:

  • Limited to turn-based interactions about offline documents or images.
  • Limited to capturing momentary snapshots of reality in a Visual Question Answering-style (VQA) dialogue.

We’ve made progress with situated LMMs, where the model is able to process a live video stream in real time and dynamically interact with users. One key innovation was the end-to-end training for situated visual understanding — this will enable a path to humanoids.

More on-device generative AI technology advancements to come

Our end-to-end system philosophy is at the forefront of enabling this next wave of innovation for generative AI at the edge. We continue to research and quickly bring new techniques and optimizations to commercial products. We look forward to seeing how the AI ecosystem leverages these new capabilities to make AI ubiquitous and to provide enhanced experiences.

DR. JOSEPH SORIAGA
Senior Director of Technology, Qualcomm Technologies

PAT LAWLOR
Director, Technical Marketing, Qualcomm Technologies, Inc.

 


Advanced Material Handling: The Flexible Transport System (FTS) from Rexroth

Thu, 07/11/2024 - 12:54

The Flexible Transport System (FTS) from Bosch Rexroth is an innovative solution designed for the precise transport and positioning of materials and workpieces. It is a magnetically propelled transport platform designed to enhance pallet speed and positioning accuracy, especially for heavier loads like battery modules.

Traditional rollers, chains, or belt systems often fall short in demanding applications, but the FTS overcomes these limitations with exceptional accuracy, programmable movements, and superior speed. The non-contact drive concept ensures particle-free transport, even in vacuum environments.

FTS Features at a Glance
  • Extremely Precise

The FTS achieves remarkable positioning accuracy and high repeatability, thanks to its advanced sensors placed between individual motors and a sophisticated motion control system. This precision is critical in applications requiring meticulous material handling.

  • Individually Scalable

The system’s scalability allows it to meet various production size requirements. It can be easily expanded with multiple motors to accommodate longer production lines. The carriers are designed to handle both heavy and light objects with equal precision, making the FTS a versatile solution for diverse industries.

  • Flexibly Adaptable

Offering maximum flexibility, the FTS allows for the free programmability of all carrier movements, including I/O synchronization if needed. This adaptability facilitates quick conversions to different products, ensuring seamless transitions and reducing downtime. The mechanical components of the system are designed to integrate effortlessly with various machines.

System Description

The FTS provides the flexibility to build a system tailored to specific needs, whether as a standalone unit or integrated into an existing production line. The open system design enables carriers to transition smoothly between external conveyor belts and the FTS. Additionally, robots can be strategically placed along the tracks to perform assembly tasks in conjunction with the FTS. It features interfaces in C/C++ or PLC with standard Ethernet, and since the software operates on a standard PC, other interfaces can also be incorporated.

Hardware

The FTS’s hardware is based on the embedded control YM, offering unparalleled design freedom. This next-generation hardware is engineered to handle complex operations, and its open software architecture supports the creation of customized motion solutions that integrate seamlessly into existing automation landscapes. The compact modular multi-axis controller contains all necessary control and drive hardware, facilitating precise control and high-speed operations.

High-level programming languages enable the development of intricate motion control programs, while high-speed control loops with 32 kHz bandwidth ensure pinpoint accuracy and performance. This hardware configuration supports complex, high-precision tasks with ease.

Software

The core of the FTS’s powerful and flexible technology is its intelligent motion control system from Rexroth. This system combines high-performance hardware capable of managing complex processes with open software structures that allow for customized movements. The software integrates effortlessly into existing automation systems, providing a robust and adaptable solution for various applications.

The control platform includes advanced diagnostics, error analysis, and maintenance capabilities. It continuously monitors carrier positions, current, position errors, and motion profiles, providing real-time data visualization through its toolset. This comprehensive monitoring ensures optimal performance and facilitates prompt issue resolution, maintaining smooth and efficient operations.

 

Rexroth’s FTS stands out as a versatile, precise, and adaptable solution for advanced material transport and positioning, meeting the needs of demanding applications with unparalleled efficiency and reliability.


AI-Powered Battery System on Chip: A Masterstroke in Battery Management System

Thu, 07/11/2024 - 12:32

Eatron Technologies has introduced its latest breakthrough in battery management technology—a state-of-the-art AI-powered Battery Management System on Chip, developed in partnership with Syntiant. This ground-breaking solution merges Eatron’s sophisticated Intelligent Software Layer with Syntiant’s ultra-low power NDP120 Neural Decision Processor, delivering unmatched battery performance, safety, and longevity.

The AI-BMS-on-chip marks a major advancement in battery management. This powerful yet energy-efficient system unlocks an additional 10% of battery capacity and extends battery life by up to 25%. By integrating their pre-trained AI models, the solution offers state-of-health, state-of-charge, and remaining useful life assessments with remarkable accuracy right out of the box.

Key Benefits
  • Enhanced Performance: This solution optimizes available battery power by providing precise state-of-charge and health estimations.
  • Improved Safety: Early detection of potential issues through predictive diagnostics ensures operational safety and prevents failures.
  • Increased Longevity: By effectively managing battery health and usage, this solution extends the lifespan of batteries.

Real-Time Edge Processing

A standout feature of the AI-BMS-on-chip is its capability to perform real-time analysis and decision-making directly on the device. By harnessing the efficient processing capabilities of Syntiant’s NDP120, this solution operates at the edge, obviating the need for intricate cloud infrastructure. This results in reduced latency, lower power consumption, and lower overall system costs.

Versatile and Easy to Integrate

Designed for seamless integration, the AI-BMS-on-chip enhances performance, safety, and longevity across a wide range of battery-powered applications, including light mobility, industrial, and consumer electronics. In addition to expediting time-to-market, this plug-and-play solution offers customization capabilities through an intuitive toolchain, tailoring it precisely to individual applications. Existing BMS hardware can be easily upgraded to benefit from this best-in-class performance, providing a cost-effective solution for businesses striving to stay competitive.

Pioneering Collaboration: Transforming Battery Management

Eatron Technologies and Syntiant have been collaborating since 2022, merging their expertise in battery management and AI technologies. Amedeo Bianchimano, Chief Product Delivery Officer at Eatron Technologies, highlighted, “The AI-BMS-on-chip empowers the safe and efficient deployment of any battery-powered application, optimizing battery energy usage.” In agreement, Mallik P. Moturi, Chief Business Officer at Syntiant Corp., emphasized, “Through our NDP120, Eatron’s software processes all data at the edge, boosting battery life, safety, and overall performance. This makes it ideal for everything from consumer electronics to commercial vehicles.”

This collaboration represents a new chapter in battery management, providing unprecedented performance, safety, and longevity to a variety of applications. Eatron Technologies and Syntiant are proud to lead the way in this innovative field, offering cutting-edge solutions that address the evolving needs of the industry.


Key Design Considerations for Offline SMPS Applications

Thu, 07/11/2024 - 12:12

Courtesy: Onsemi

Every electronic device that is powered from a wall outlet uses some form of offline switch mode power supply (SMPS) that converts the AC grid voltage to a DC voltage used by the device. An offline SMPS is a switched power supply with an isolation transformer and covers a power range from a few watts to multi-kilowatt solutions. Offline SMPS are widely deployed and indispensable in providing reliable and safe power to electronic devices in applications ranging from consumer electronics and industrial power supplies to data centers and telecom base stations.

When designing an offline SMPS there are many factors to be considered for a successful design including power level, voltages, safety requirements, size, and several more.

Understanding Offline SMPS & Popular Topologies

Fundamentally, an offline SMPS uses a two-stage conversion. First, the mains grid voltage is rectified and shaped by the first stage – the power factor corrector (PFC). The output voltage of the PFC stage is set a bit higher than the expected input peak voltage; for single-phase solutions this is usually around 380-400 VDC. Since the output of the PFC stage is a stable and relatively well-regulated DC voltage, the following DC-DC stage can be less complex. In most offline SMPS, the PFC is single-phase, but for higher power units (multi-kilowatt) it can be 3-phase.

Figure 1: Key Elements of an Offline SMPS

The PFC stage aims to improve efficiency by reducing the apparent power in the system. It corrects the phase difference between the current and voltage (the ‘Power Factor’) to maintain as little difference as possible, as well as shaping the current waveform to be as near as it can be to a pure sinusoid, minimizing total harmonic distortion (THD).
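
For reference, the standard textbook relations behind those two effects (not specific to any particular controller): the power factor is the product of a distortion term and a displacement term, and for a sinusoidal mains voltage the distortion term is tied directly to THD:

```latex
\[
\mathrm{PF} \;=\; \frac{P}{S}
            \;=\; \underbrace{\frac{I_{1,\mathrm{rms}}}{I_{\mathrm{rms}}}}_{\text{distortion factor}}
                  \cdot \underbrace{\cos\varphi}_{\text{displacement factor}},
\qquad
\frac{I_{1,\mathrm{rms}}}{I_{\mathrm{rms}}} \;=\; \frac{1}{\sqrt{1+\mathrm{THD}^{2}}}
\]
```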

The DC-DC stage (often an LLC converter) takes the PFC output and converts this to the desired voltage, bearing in mind there may be several independent outputs. This stage also includes the galvanic isolation transformer that provides safety isolation as well as level shifting the voltage. Due to the transformer’s inability to accommodate direct current, the incoming DC from the PFC stage is converted back into an alternating current and then rectified for the output.

Efficiency (the ratio between the power delivered at the output and the power consumed by the input) is a crucial parameter for any SMPS. It affects the operating cost, but more importantly, it also defines the internal losses that manifest as heat. This, in turn, determines how much cooling is required when the SMPS is operating. The higher the amount of cooling in terms of fans and/or heatsinks is needed, the larger, heavier and more expensive the solution will be.

Advancements in Offline SMPS Technology

Striving for the highest levels of performance, there is ongoing advancement in the technologies used within offline SMPS.

Boost PFC is nowadays commonly used across a wide range of power levels due to its simple structure and straightforward control strategy. The inductor current is continuous, electromagnetic interference (EMI) is lower, and the current waveform is less distorted, which leads to a better power factor. A single-phase boost PFC will have a regulated DC output of around 380 V, which will then be converted by the DC/DC converter.

Furthermore, LLC converters are becoming increasingly popular for the DC-DC stage. These resonant converters regulate their output by altering the operating frequency of the resonant tank across a relatively narrow range, thereby operating in a soft-switching mode. This improves efficiency and reduces EMI. They also operate at higher frequencies than hard-switched converters, allowing the use of smaller passive components.
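
As a quick reference, the series resonant frequency around which the control loop varies the switching frequency follows the standard relation below, where L_r and C_r are the resonant inductor and capacitor of the tank:

```latex
\[
f_r \;=\; \frac{1}{2\pi\sqrt{L_r\,C_r}}
\]
```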

Figure 2: A Simple LLC Converter

Synchronous or active rectification is a technique for improving efficiency and reducing conduction losses by replacing rectifier diodes with active switches. While semiconductor diodes exhibit a relatively fixed voltage drop (typically 0.5 to 1 V), MOSFET switches act as resistances and can therefore have a very low voltage drop. If further improvements are needed, MOSFET switches can be paralleled to handle higher output currents. In such a case the conduction losses are reduced, because the effective RDS(ON) of the paralleled devices is the reciprocal of the sum of the reciprocals of their individual RDS(ON) values; for n identical devices this works out to RDS(ON)/n.
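
Written out, the standard parallel-combination arithmetic behind that statement, together with the resulting conduction loss:

```latex
\[
R_{\mathrm{DS(on),eq}}
  \;=\; \left( \sum_{i=1}^{n} \frac{1}{R_{\mathrm{DS(on)},i}} \right)^{-1}
  \;=\; \frac{R_{\mathrm{DS(on)}}}{n} \ \ \text{for } n \text{ identical devices},
\qquad
P_{\mathrm{cond}} \;=\; I_{\mathrm{rms}}^{2}\, R_{\mathrm{DS(on),eq}}
\]
```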

Semiconductor materials are also evolving as traditional silicon (Si) has reached its limit for further significant performance gains. New wide-bandgap (WBG) materials such as silicon carbide (SiC) are increasingly preferred in power designs for their ability to operate efficiently at higher switching frequencies and higher operating voltages.

WBG devices exhibit lower losses due to better reverse recovery, significantly contributing to enhanced conversion efficiency. As a result, and due to their ability to operate at higher temperatures, thermal mitigation requirements are reduced when using WBG devices.

onsemi Solutions

onsemi has one of the broadest portfolios of solutions for offline SMPS currently available. At the heart of the range are controllers for the PFC and DC/DC converter stages, power MOSFETs, rectifiers, and diodes. This is supported with MOSFET gate drivers (including for synchronous rectification), optocouplers, low dropout (LDO) regulators, and other devices.

Leading the way in modern high-performance devices, the range includes many SiC devices (diodes and MOSFETs) for use in the most challenging offline SMPS applications.

Using the onsemi range (and a few passive components), offline SMPS from a few watts to several kilowatts can be designed. onsemi’s experience in this area assures designers that their solution will have industry-leading performance and reliability.

Conclusion

Offline SMPS are one of the most common sub-systems, present in almost every mains-connected device. However, to create a successful design, safety and EMI regulations must be met while performance, especially in terms of efficiency, is an ever-increasing requirement.

While several companies manufacture some of the devices necessary for these designs, few (if any) have a comprehensive range that covers all the components (excluding passives) needed to execute a complete design. There are significant benefits to sourcing components from a single supplier, including knowing that devices have been designed and tested to work together.


TOPS of the Class: Decoding AI Performance on RTX AI PCs and Workstations

Thu, 07/11/2024 - 11:51

Courtesy: Nvidia

What is a token? Why is batch size important? And how do they help determine how fast AI computes?

The era of the AI PC is here, and it’s powered by NVIDIA RTX and GeForce RTX technologies. With it comes a new way to evaluate performance for AI-accelerated tasks, and a new language that can be daunting to decipher when choosing between the desktops and laptops available.

While PC gamers understand frames per second (FPS) and similar stats, measuring AI performance requires new metrics.

Coming Out on TOPS

The first baseline is TOPS, or trillions of operations per second. Trillions is the important word here — the processing numbers behind generative AI tasks are absolutely massive. Think of TOPS as a raw performance metric, similar to an engine’s horsepower rating. More is better.

Compare, for example, the recently announced Copilot+ PC lineup by Microsoft, which includes neural processing units (NPUs) able to perform upwards of 40 TOPS. That level of performance is sufficient for some light AI-assisted tasks, like asking a local chatbot where yesterday’s notes are.

But many generative AI tasks are more demanding. NVIDIA RTX and GeForce RTX GPUs deliver unprecedented performance across all generative tasks — the GeForce RTX 4090 GPU offers more than 1,300 TOPS. This is the kind of horsepower needed to handle AI-assisted digital content creation, AI super resolution in PC gaming, generating images from text or video, querying local large language models (LLMs) and more.

Insert Tokens to Play

TOPS is only the beginning of the story. LLM performance is measured in the number of tokens generated by the model.

Tokens are the output of the LLM. A token can be a word in a sentence, or even a smaller fragment like punctuation or whitespace. Performance for AI-accelerated tasks can be measured in “tokens per second.”
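
A toy sketch makes the idea concrete (real tokenizers use learned subword vocabularies such as BPE; this whitespace-and-punctuation split is purely illustrative):

```python
# Toy tokenizer: split a sentence into word and punctuation "tokens".
import re

def toy_tokenize(text: str) -> list[str]:
    # keep runs of word characters as tokens, punctuation as separate tokens
    return re.findall(r"\w+|[^\w\s]", text)

tokens = toy_tokenize("Speed wins, measured in tokens per second.")
print(tokens)       # ['Speed', 'wins', ',', 'measured', 'in', 'tokens', ...]
print(len(tokens))  # 9 tokens for this sentence
```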

Another important factor is batch size, or the number of inputs processed simultaneously in a single inference pass. As an LLM will sit at the core of many modern AI systems, the ability to handle multiple inputs (e.g. from a single application or across multiple applications) will be a key differentiator. While larger batch sizes improve performance for concurrent inputs, they also require more memory, especially when combined with larger models.
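
A back-of-envelope sketch shows why batch size drives memory: the KV cache an LLM keeps per inference grows linearly with batch. All model dimensions below are hypothetical, not tied to any specific GPU or model:

```python
# Rough KV-cache sizing for a transformer LLM (FP16 values, 2 bytes each).

def kv_cache_gb(batch, context_len, n_layers, n_heads, head_dim, bytes_per_val=2):
    # factor of 2 covers both keys and values
    return 2 * batch * context_len * n_layers * n_heads * head_dim * bytes_per_val / 1e9

# A hypothetical 7B-class model: 32 layers, 32 heads of dim 128, 4k context
for batch in (1, 4, 16):
    print(f"batch {batch}: {kv_cache_gb(batch, 4096, 32, 32, 128):.1f} GB")
# batch 1 -> 2.1 GB, batch 4 -> 8.6 GB, batch 16 -> 34.4 GB of cache alone,
# on top of the model weights -- which is why VRAM capacity caps batch size.
```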

The more you batch, the more (time) you save.

RTX GPUs are exceptionally well-suited for LLMs due to their large amounts of dedicated video random access memory (VRAM), Tensor Cores and TensorRT-LLM software.

GeForce RTX GPUs offer up to 24GB of high-speed VRAM, and NVIDIA RTX GPUs up to 48GB, which can handle larger models and enable higher batch sizes. RTX GPUs also take advantage of Tensor Cores — dedicated AI accelerators that dramatically speed up the computationally intensive operations required for deep learning and generative AI models. That maximum performance is easily accessed when an application uses the NVIDIA TensorRT software development kit (SDK), which unlocks the highest-performance generative AI on the more than 100 million Windows PCs and workstations powered by RTX GPUs.

The combination of memory, dedicated AI accelerators and optimized software gives RTX GPUs massive throughput gains, especially as batch sizes increase.

Text-to-Image, Faster Than Ever

Measuring image generation speed is another way to evaluate performance. One of the most straightforward ways uses Stable Diffusion, a popular image-based AI model that allows users to easily convert text descriptions into complex visual representations.

With Stable Diffusion, users can quickly create and refine images from text prompts to achieve their desired output. When using an RTX GPU, these results can be generated faster than processing the AI model on a CPU or NPU.

That performance is even higher when using the TensorRT extension for the popular Automatic1111 interface. RTX users can generate images from prompts up to 2x faster with the SDXL Base checkpoint — significantly streamlining Stable Diffusion workflows.

ComfyUI, another popular Stable Diffusion user interface, added TensorRT acceleration last week. RTX users can now generate images from prompts up to 60% faster, and can even convert these images to videos using Stable Video Diffusion up to 70% faster with TensorRT.

TensorRT acceleration can be put to the test in the new UL Procyon AI Image Generation benchmark, which delivers speedups of 50% on a GeForce RTX 4080 SUPER GPU compared with the fastest non-TensorRT implementation.

TensorRT acceleration will soon be released for Stable Diffusion 3 — Stability AI’s new, highly anticipated text-to-image model — boosting performance by 50%. Plus, the new TensorRT-Model Optimizer enables accelerating performance even further. This results in a 70% speedup compared with the non-TensorRT implementation, along with a 50% reduction in memory consumption.

Of course, seeing is believing — the true test is the real-world use case of iterating on an original prompt. Users can refine image generation by tweaking prompts significantly faster on RTX GPUs, taking seconds per iteration compared with minutes on a MacBook Pro M3 Max. Plus, users get both speed and security, with everything remaining private when running locally on an RTX-powered PC or workstation.

The Results Are in and Open Sourced

But don’t just take our word for it. The team of AI researchers and engineers behind the open-source Jan.ai recently integrated TensorRT-LLM into its local chatbot app, then tested these optimizations for themselves.

The researchers tested its implementation of TensorRT-LLM against the open-source llama.cpp inference engine across a variety of GPUs and CPUs used by the community. They found that TensorRT is “30-70% faster than llama.cpp on the same hardware,” as well as more efficient on consecutive processing runs. The team also included its methodology, inviting others to measure generative AI performance for themselves.

From games to generative AI, speed wins. TOPS, images per second, tokens per second and batch size are all considerations when determining performance champs.


Accenture Acquires Cientra to Expand Silicon Design Capabilities

Thu, 07/11/2024 - 10:23

Accenture has acquired Cientra, a silicon design and engineering services company, offering custom silicon solutions for global clients. The terms of the acquisition were not disclosed.

Founded in 2015, Cientra is headquartered in New Jersey, U.S. and has offices in Frankfurt, Germany as well as in Bangalore, Hyderabad and New Delhi, India. The company brings consulting expertise in embedded IoT and application-specific integrated circuit design and verification capabilities, which augments Accenture’s silicon design experience and further enhances its ability to help clients accelerate semiconductor innovation required to support growing data computing needs.

“Everything from data center expansion to cloud computing, wireless technologies, edge computing and the proliferation of AI, are driving demand for next-generation silicon products,” said Karthik Narain, group chief executive—Technology at Accenture. “Our acquisition of Cientra is our latest move to expand our silicon design and engineering capabilities and it underscores our commitment to helping our clients maximize value and reinvent themselves in this space.”

Cientra has deep experience in engineering, development and testing across hardware, software and networks, in the automotive, telecommunications and high-tech industries. The company brings approximately 530 experienced engineers and practitioners to Accenture’s Advanced Technology Centers in India.

“Since inception, Cientra has been dedicated to building top talent and fostering continuous innovation, developing product solutions that drive value for our clients,” said Anil Kempanna, CEO, Cientra. “Joining Accenture provides exciting opportunities to expand globally and scale our capabilities to create new avenues of growth for our clients as well as our people.”

This acquisition follows the addition of Excelmax Technologies, a Bangalore, India-based semiconductor design services provider, earlier this week, and XtremeEDA, an Ottawa, Canada-based silicon design services company, in 2022.


E-Fill Electric Presents Wish List for EV Charging Industry in the Upcoming Union Budget

Thu, 07/11/2024 - 10:11

E-Fill Electric (EFEV Charging Solutions Pvt. Ltd.), a pioneering name in India’s EV charging sector, has outlined its wish list for the forthcoming Union Budget, emphasizing key actions to promote the adoption and infrastructure development of electric vehicles (EVs) across the country.

Mayank Jain, Founder & CEO of E-Fill Electric, highlighted several key priorities aimed at fostering a robust EV charging ecosystem:

  1. Increased Allocation for FAME Scheme: E-Fill Electric urges the government to enhance allocation under the FAME (Faster Adoption and Manufacturing of Electric Vehicles) scheme to accelerate EV adoption and manufacturing capabilities in India.
  2. Tax Incentives for EV Charging Businesses: Lowering GST on EV charging equipment and operational costs shall ensure affordability and promote widespread deployment of charging infrastructure.
  3. Investment in Skilled Workforce: E-Fill Electric stresses the importance of investing in training programs to develop a skilled workforce proficient in EV charging installation, maintenance, and repair, vital for sustaining the industry’s growth.
  4. Streamlined Land Acquisition Procedures: The Company recommends measures to streamline land acquisition procedures for EV charging companies, potentially through designated zones or expedited approvals, to facilitate speedy infrastructure expansion.
  5. Public-Private Partnerships: E-Fill Electric advocates for incentivising partnerships between public and private entities to expedite the development and deployment of EV charging infrastructure nationwide.
  6. Research and Development Incentives: The budget should incentivise research and development in EV charging technology, including support for indigenous manufacturing of charging equipment to encourage innovation and self-reliance.
  7. Subsidies for EV Chargers: E-Fill Electric suggests introducing subsidies or low-interest loan schemes to encourage individuals and businesses to install EV chargers at homes and workplaces, enhancing convenience and accessibility.
  8. Grid Modernization: Prioritizing grid modernisation projects is essential to accommodate the increased electricity demand from EVs, ensuring reliable and sustainable power supply.

Mr. Mayank Jain expressed confidence that these measures, if implemented, will not only boost the EV ecosystem but also align with India’s vision of sustainable and inclusive mobility solutions.


ROHM Offers LogiCoA: the Industry’s First* Analog-Digital Fusion Control Power Supply Solution

Thu, 07/11/2024 - 09:11

Provides functions equivalent to a fully digital control power supply with low power consumption

ROHM has established LogiCoA, a power supply solution for small to medium-power industrial and consumer equipment (30W to 1kW class). It provides the same functionality as fully digitally controlled power supplies at power consumption and cost comparable to analog-controlled types.

Analog-controlled power supplies are commonly used in industrial robotics and semiconductor manufacturing equipment that operate in the medium power range. In recent years, however, these power supplies have also been required to provide a level of reliability and precision of control that is difficult to achieve with analog-only configurations. On the other hand, while fully digitally controlled power supplies enable fine control and settings, they are not widely adopted in the small to medium power range due to the high power consumption and cost of the digital controller. To address this issue, ROHM developed the LogiCoA power solution, which leverages the strengths of both analog and digital technologies. High-performance, low-power LogiCoA MCUs are utilized to facilitate control of a variety of power supply topologies.

The LogiCoA brand embodies a design philosophy of fusing digital elements to maximize the performance of analog circuits. ROHM’s LogiCoA power solution is the industry’s first* “analog-digital fusion control” power supply that combines a digital control block centered around the LogiCoA MCU with analog circuitry comprised of silicon MOSFETs and other power devices.

In a fully digital control power supply, the functions handled by digital controllers such as high-speed CPUs or DSPs can instead be processed by low-bit MCUs, making it possible to achieve functionality that is difficult to realize with an analog control power supply, at low power consumption and cost. The solution allows for the correction of performance variations in peripheral components by storing various settings such as current and voltage values in the LogiCoA MCU. As a result, there is no need to allow for design margins as with analog control power supplies, contributing to smaller power supplies that provide greater reliability. On top of that, because operation log data can be recorded in the MCU’s nonvolatile memory, it is ideal for power supplies in industrial equipment that require logging as a backup in case of malfunction.
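
As a conceptual illustration of that division of labor (a hypothetical sketch, not ROHM's firmware or SDK): the fast protection path stays in analog hardware, while the MCU holds calibration trims and setpoints and records faults for later diagnostics:

```python
# Conceptual analog-digital fusion sketch. Names and values are invented.

CALIBRATION = {"v_out_set": 12.0,      # regulated output target, volts
               "i_limit": 3.0,        # overcurrent threshold, amps
               "v_sense_gain": 0.998} # per-unit trim for sense-divider spread

event_log = []  # stands in for the MCU's nonvolatile operation log

def supervise(v_measured: float, i_measured: float) -> None:
    # correct the raw reading with the stored calibration trim
    v_actual = v_measured * CALIBRATION["v_sense_gain"]
    if i_measured > CALIBRATION["i_limit"]:
        # the analog comparator has already reacted; the MCU records the event
        event_log.append(("overcurrent", i_measured, v_actual))

supervise(12.02, 3.4)
print(event_log)  # [('overcurrent', 3.4, 11.99...)]
```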

The evaluation reference design REF66009 allows users to experience the LogiCoA power supply solution in a non-isolated buck converter circuit. Various tools necessary for evaluation are also offered, including circuit diagrams, PCB layouts, parts lists, sample software, and support documents, while actual device evaluation is possible using the optional LogiCoA001-EVK-001 evaluation board.

Going forward, ROHM will continue to develop LogiCoA MCUs to support various power supply topologies, contributing to achieving a sustainable society by making the power supply block (which accounts for the majority of power loss in applications) more energy-efficient and compact.

LogiCoA Brand

LogiCoA is a brand that embodies a design philosophy of fusing digital elements to maximize the performance of analog circuits. By combining the advantages of analog circuitry with those of digital control, it is possible to maximize the potential of circuit topologies, contributing to more efficient power utilization. As LogiCoA is a design concept that can be applied not only to the power supply field, but also to power solutions as a whole, ROHM is considering its application in future products and solutions.

Details of the LogiCoA Power Supply Reference Design

The REF66009 evaluation reference design offered on ROHM’s website allows users to verify the functionality of the LogiCoA MCU along with the basic operation of the LogiCoA power supply solution using a non-isolated 12V buck converter circuit. Sample software available on the reference design page makes it possible to confirm the sequence control of execution tasks and the monitoring of various parameters in the actual set using the LogiCoA001-EVK-001 reference board. For more information on the reference board, please contact a sales representative or the contact page on ROHM’s website.

Application Examples

  • Industrial robots
  • Semiconductor manufacturing equipment
  • Gaming applications

Supports mounting in a wide range of general industrial equipment and consumer devices (30W to 1kW).

About the LogiCoA MCU

ROHM is developing LogiCoA MCUs optimized for integrated analog-digital control, such as LogiCoA power supply solutions. Features include a built-in 3-channel analog comparator that can be linked to a timer, and a D/A converter that enables digital control of various parameters to support different power supply topologies.

■ LogiCoA MCU Specifications (Tentative)

Availability: Now (LogiCoA MCU samples)

Terminology

Fully Digital Control Power Supply

A power supply controlled using digital technology. High-speed CPUs and DSPs can be used to precisely monitor and control various parameters such as voltage and current, improving power supply efficiency and reliability. What’s more, functions that are difficult to perform with analog control can be achieved, such as acquiring operation log data. However, CPUs and DSPs are expensive and consume a large amount of power, which can be a bottleneck in terms of costs and energy efficiency.

Analog Control Power Supply

A power supply configuration consisting of analog components. This type has become mainstream for power supplies 1kW and below due to its simplicity and low power consumption. On the other hand, implementing advanced functionality such as setting arbitrary parameters and logging data is difficult, requiring fully digital control that entails high costs and power consumption.

CPU (Central Processing Unit)

Responsible for executing programs and processing data. Handles calculations and processing as well as carrying out instructions according to a program.

DSP

A device that digitizes analog signals and performs operations such as analysis, filtering, and amplification on the converted digital signals. Flexible enough for high-speed processing and various applications, it plays an important role in circuits that handle digital signals, such as audio and image processing in addition to power supplies.


AI Smartphones: The Era of the Super Companion in Your Pocket

Wed, 07/10/2024 - 14:16

It has been an exciting year for mobile technology with the advent of AI Smartphones. Each year, like clockwork, I find myself eagerly lining up for the latest smartphone launch, driven by an insatiable curiosity and a bit of a tech addiction. My friends might jest that I switch phones more often than my single malt preferences, but through this annual ritual, I gain a front-row seat to the rapid evolution of technology. Each unboxing becomes a discovery of what’s newly possible at the intersection of hardware and software, particularly as smartphones grow not just smarter but seemingly wiser. The innovation of integrating generative AI in smartphones raises the customer experience bar exponentially.

This fascination isn’t merely about indulging in the latest bells and whistles; it’s about experiencing firsthand how intelligent operating systems are revolutionizing our interactions with mobile devices. As generative AI migrates from vast data centers to the palms of our hands, it transforms smartphones into central hubs of personalized technology and AI-driven companions, reshaping the foundations of mobile user interaction.

At the heart of this revolution is Micron Technology. Our advanced memory and storage products support the immense data demands of generative AI, turning what once seemed like a futuristic vision into today’s reality. These technological advancements are crucial as smartphones begin to transition from passive tools to active personal companions, deeply integrated into the fabric of our daily lives. They offer insightful recommendations and enhance our experiences in ways we are only beginning to imagine.

To truly appreciate the impact of these technologies, one must understand the intricate interplay between large language models (LLMs) like Llama 2, Google Gemini, and ChatGPT, as well as the advanced hardware that supports them. These AI models, which thrive on billions of parameters, demand unprecedented levels of memory capacity and speed—requirements that Micron’s innovative products are designed to meet. Integrating high-capacity, efficient memory systems is not just an improvement; it’s necessary to support the sophisticated AI functions that modern users will come to expect from their devices.

As we stand on the brink of this new era, our relationship with our devices is set to change profoundly. Smartphones will transition from passive tools to active personal companions, deeply integrated into the fabric of our daily lives, making insightful recommendations and enhancing our experiences in ways we are only beginning to imagine. This blog explores how generative AI is driving this monumental shift, redefining the possibilities of smartphone technology and ensuring that users can enjoy a seamless, intuitive, and highly personalized digital experience.

The generative AI advantage: Unlocking the ultimate smartphone companion experience

Generative AI is revolutionizing the capabilities of smartphones by introducing features that were once the domain of science fiction. At its core, generative AI involves using algorithms and models to generate text, images, and even predictions based on extensive data sets on which they have been trained. This transformative technology is making smartphones, not just tools for consumption but instruments of creation and personal assistance.

One of the key features enabled by generative AI is the ability to generate real-time content directly related to user inputs. For example, through AI-powered apps, users can request the generation of digital artwork or manipulate photos and videos in sophisticated ways that go far beyond the current filters and editing tools. Another significant capability is real-time language translation, which is advancing beyond simple text translation to include voice and even real-time video call translations. This allows for a seamless communication experience with almost no language barrier, effectively shrinking the global divide in personal and professional interactions.

Moreover, generative AI enhances personalized recommendations by analyzing user behaviour, preferences, and previous interactions. This data-driven approach allows smartphones to anticipate needs and offer suggestions for everything from daily tasks to complex decision-making processes. It can also guide users through interactive educational content, adapting to their learning pace and style, thus personalizing the educational experience more effectively than ever before.

These features, powered by generative AI, require advanced computational power and significant memory and storage capabilities. The processing occurs on the device itself to ensure responsiveness and data privacy. As these technologies continue to evolve, they promise to enhance how users interact with their devices further, making each smartphone a truly personalized digital companion that learns and grows with its user.

Smartphones that care: How AI is humanizing the mobile experience

The future of AI-enabled smartphones promises a landscape where the line between digital and physical realities blurs, ushering in a new era of interactive and immersive experiences that are currently difficult to imagine. As generative AI continues to evolve, the potential for creating features that transform everyday activities and expand our capabilities is immense.

One of the most exciting prospects is the development of extended reality (XR) and spatial computing integrated seamlessly with AI. Future smartphones could leverage XR to overlay digital information onto the physical world in real time. Imagine pointing your smartphone at a restaurant and seeing menu recommendations tailored to your taste and dietary preferences pop up in your vision, or looking at a piece of furniture and instantly seeing how it would look in your home, configured to your space and color scheme.

Health monitoring is another area ripe for transformation. Future AI smartphones could become proactive health advisors, tracking physical activity and health metrics and predicting potential health issues before they arise. These devices could use advanced sensors and AI-driven analytics to monitor changes in voice tone, breathing patterns, and even eye movements to provide early warnings about health risks such as heart disease or diabetic changes, potentially coordinating directly with medical professionals to provide timely interventions.

Moreover, integrating AI could redefine mobile security, transforming smartphones into highly secure devices that use biometric data like facial recognition, retinal scans, and even behavioural patterns to ensure that access to the device and its applications is intensely personal and far harder to breach. This could eliminate the need for passwords and other traditional security measures, which are vulnerable to attack.

The concept of an AI companion will likely mature into a fully interactive assistant capable of sophisticated conversation and decision-making assistance. This companion could manage schedules, suggest content, handle mundane tasks, and even offer psychological support, learning continuously from interactions to become more effective and personalized. Furthermore, as generative AI capabilities grow, so will the ability to create and simulate complex virtual environments directly from the device, allowing users to interact with virtual spaces for entertainment, education, or social interaction in unprecedented ways.

Now, what does this mean for smartphones’ memory and storage capacities? And what does a phone need to take full advantage of AI applications? As generative AI grows, it becomes even more of a primary innovation driver in the mobile ecosystem. And to support flagship phones’ advanced sensors, cameras, and form factors, high-capacity, high-bandwidth memory and storage are critical. Data is collected and stored in the handset’s memory and storage, calculated and processed on the edge (not in the cloud), and translated into valuable and predictive insights.

The future of smartphones equipped with AI technologies offers both enhancements of current features and a revolution in how we perceive and interact with our environment. This future is not only about technological advancements but about significantly enhancing human capabilities and experiences, making life more convenient, connected, and healthy. These developments, while complex, require the continued advancement of AI technology paired with significant improvements in hardware, like those provided by Micron, to make these once-unimagined features a reality.

Memory matters: How Micron’s solutions are unlocking the full potential of AI smartphones and super companions

Micron is at the forefront of defining the future capabilities of AI smartphones, leveraging its leading-edge UFS 4.0 and LPDDR5X DRAM technologies. These innovations are vital for meeting the increasingly complex demands of on-device AI applications, pushing the boundaries of what smartphones can achieve.

The UFS 4.0 technology introduced by Micron sets new standards for storage performance, essential for the fast processing speeds required by AI-driven applications. It achieves a remarkable 4300 megabytes per second (MBps) in sequential read and 4000 MBps in sequential write speeds, doubling the performance of the previous UFS 3.1 standard. This significant increase in data throughput ensures that AI applications can access and process large datasets much faster, reducing latency and enhancing overall device responsiveness.
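
To put these throughput figures in perspective, here is a rough back-of-the-envelope sketch of what faster sequential reads could mean for loading a large on-device AI model. The 4 GB model size is a hypothetical example, and the UFS 3.1 figure is inferred from the doubling claim above:

```python
# Idealized time to stream an AI model from storage into memory.
# The UFS 4.0 figure is from this article; the UFS 3.1 figure is
# inferred from the "doubling" claim; the model size is hypothetical.

UFS_3_1_READ_MBPS = 2150   # assumed: roughly half of UFS 4.0
UFS_4_0_READ_MBPS = 4300   # Micron UFS 4.0 sequential read, MB/s

def load_time_s(model_size_gb: float, throughput_mbps: float) -> float:
    """Time to read the model at the given sequential-read speed."""
    return model_size_gb * 1024 / throughput_mbps

model_gb = 4.0  # hypothetical quantized on-device language model
print(f"UFS 3.1: {load_time_s(model_gb, UFS_3_1_READ_MBPS):.1f} s")  # ~1.9 s
print(f"UFS 4.0: {load_time_s(model_gb, UFS_4_0_READ_MBPS):.1f} s")  # ~1.0 s
```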

Additionally, Micron’s UFS 4.0 features a compact design with a footprint of just 9×13 millimeters, supporting the development of slimmer and more aesthetically pleasing smartphone designs without compromising performance. The storage solution also includes innovative features like One-button Refresh, which helps maintain long-term device performance by automating data defragmentation, keeping storage performance like-new even after extended use.

On the memory side, Micron’s LPDDR5X DRAM is engineered to meet the requirements of advanced AI processing by delivering top speeds of up to 9600 megabits per second (Mbps), which is crucial for handling AI’s extensive computational demands. This speed enhancement, combined with high-density packaging that allows for increased memory capacity within the same form factor, is critical for AI applications that require rapid access to large volumes of data. It also delivers up to 13% faster peak bandwidth and up to 27% lower power consumption in day-of-use scenarios.
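
For a sense of what the per-pin data rate means at the system level, a simple sketch follows. The 64-bit bus width is an assumed typical flagship configuration, not a Micron specification:

```python
# Rough theoretical peak bandwidth for an LPDDR5X subsystem.
# 9600 Mbps per pin is from this article; the 64-bit bus width
# is an assumption about a typical flagship phone.

data_rate_mbps_per_pin = 9600
bus_width_bits = 64  # assumed configuration

peak_gb_per_s = data_rate_mbps_per_pin * bus_width_bits / 8 / 1000
print(f"Theoretical peak bandwidth: {peak_gb_per_s:.1f} GB/s")  # 76.8 GB/s
```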

Micron’s advancements enhance smartphones’ raw computational and storage capabilities and enable new AI features by providing the necessary infrastructure to support real-time AI processing on the edge. This strategic focus on developing high-performance and efficient memory and storage solutions firmly positions Micron as a key enabler in the rapidly evolving landscape of AI mobile technology, facilitating the emergence of smartphones that can perform complex AI tasks directly on the device without relying on cloud processing.

The ethical compass: Navigating the moral landscape of AI smartphones

As AI smartphones continue to revolutionize our lives, it’s crucial to acknowledge the ethical considerations that come with these powerful devices. We must navigate the complexities of AI technology with a moral compass to ensure it aligns with our values and principles. Privacy and data security are paramount concerns: how will AI smartphones collect, store, and protect our personal information? Transparency and accountability are essential to prevent data breaches and cyber-attacks. Users must be informed about data usage and sharing practices, and measures must be taken to prevent biases and discrimination in AI decision-making. Transparency and explainability are equally vital in AI-driven processes: users deserve to understand how AI arrives at its conclusions so they can make informed decisions. Autonomous decision-making raises questions about free will and moral agency, and AI smartphones must strike a balance between user autonomy and AI-driven actions.

The environmental impact of AI smartphones cannot be ignored. Sustainable manufacturing, reduced electronic waste, and energy efficiency are crucial to minimize their ecological footprint. Finally, human-AI collaboration must prioritize human well-being and dignity, enhancing our capabilities without replacing them. By acknowledging these ethical considerations, we can harness the potential of AI smartphones while upholding our values and principles. Like a compass guiding us through uncharted territory, ethical awareness will ensure AI technology serves humanity, not the other way around.

The future in focus: AI smartphones and the dawn of a new era

Imagine this: It’s a crisp Wednesday morning in the not-too-distant future. Your day begins not with a jarring alarm but with a gentle wake-up nudge from your AI-enhanced smartphone, which has analyzed your sleep patterns and knows the exact moment to wake you. As you stir, your phone has already started your coffee maker, selected a nutritious breakfast based on your health goals for the week, and displayed your optimized route to work, avoiding a traffic jam it predicted from historical data and real-time sensors.

While you eat, your smartphone reviews your calendar, prioritizes tasks based on urgency and personal productivity patterns, and seamlessly integrates your work commitments with personal ones. It reminds you of your daughter’s recital in the evening and schedules a reminder to leave work early. It even suggests a perfect gift for her performance tonight, which you can pick up on your route home—all curated from understanding your past purchases and her current interests.

This scenario isn’t just a futuristic dream; thanks to companies like Micron, it’s on the verge of becoming reality. By advancing AI capabilities through memory and storage innovations like UFS 4.0 and LPDDR5X DRAM, Micron is turning smartphones into personal assistants that manage our digital tasks and enhance our human experiences.

Micron’s vision to “enrich life for all” is deeply embedded in these advancements. With AI on the edge, smartphones are evolving into devices that think, react, predict, and adapt to our needs in more personalized ways. This new generation of smartphones promises to enhance our productivity and leisure, making each interaction more meaningful by staying seamlessly connected to our loved ones and passions while navigating the complexities of our daily lives.

As we embrace these changes, let’s ponder the profound impact of having a device that does more than execute commands—it collaborates, advises, and supports our every decision. With Micron’s commitment to pushing the boundaries of what’s possible, the future is not just about technological advancement but about creating deeper, more meaningful connections with the world around us. How will you harness this power to reshape your day-to-day life? The possibilities are as boundless as your imagination.

The post AI Smartphones: The Era of the Super Companion in Your Pocket appeared first on ELE Times.

Breakthrough 3D-Printed Material Revolutionizes Soft Robotics and Biomedical Devices

Wed, 07/10/2024 - 14:14

Researchers at Penn State have developed a new 3D-printed material designed to advance soft robotics, skin-integrated electronics, and biomedical devices. This material is soft, stretchable, and self-assembled, overcoming many limitations of previous fabrication methods, such as lower conductivity and device failure. According to Tao Zhou, an assistant professor at Penn State, the challenge of developing highly conductive, stretchable conductors has persisted for nearly a decade. While liquid metal-based conductors offered a solution, they required secondary activation methods—like stretching or laser activation—which complicated fabrication and risked device failure.

Zhou explained that their method removes the necessity for secondary activation to attain conductivity. The innovative approach combines liquid metal, a conductive polymer mixture called PEDOT:PSS, and hydrophilic polyurethane. When printed and heated, the liquid metal particles in the material’s bottom layer self-assemble into a conductive pathway, while the top layer oxidizes in an oxygen-rich environment, forming an insulating surface. This structure ensures efficient data transmission to sensors—such as those used for muscle activity recording and strain sensing—while preventing signal leakage that could compromise data accuracy.

“This materials innovation allows for self-assembly that results in high conductivity without secondary activation,” Zhou added. The ability to 3D print this material also simplifies the fabrication of wearable devices. The research team is exploring various potential applications, focusing on assistive technology for individuals with disabilities.

The research, supported by the National Taipei University of Technology-Penn State Collaborative Seed Grant Program, included contributions from doctoral students Salahuddin Ahmed, Marzia Momin, Jiashu Ren, and Hyunjin Lee.

The post Breakthrough 3D-Printed Material Revolutionizes Soft Robotics and Biomedical Devices appeared first on ELE Times.

mSiC Diode Technology: Ruggedness and Reliability

Wed, 07/10/2024 - 13:52

Courtesy: Microchip

Silicon Carbide (SiC) Schottky Barrier Diodes (SBDs) increase efficiency and ruggedness to help create faster and more reliable applications.

Better Efficiency and Reliability Through Silicon Carbide

Silicon Carbide (SiC) Schottky Barrier Diodes (SBDs) increase efficiency and create reliable high-voltage applications. Our rich history and experience allow us to deliver highly reliable SBDs that are designed with high repetitive Unclamped Inductive Switching (UIS) capability at a rated current, which exhibits no degradation. Our mSiC diodes are designed with balanced surge current, forward voltage, thermal resistance and thermal capacitance ratings at low reverse current for lower switching loss to create more efficient power systems.

Because of differences in material properties between SiC and silicon, silicon Schottky diodes are limited to a lower voltage range with higher on-state resistance and leakage current. SiC Schottky diodes, however, can achieve a much higher breakdown voltage while maintaining low on-resistance and low switching losses, improving ruggedness over traditional silicon Schottky diodes. Our portfolio of mSiC products covers 700V, 1200V, 1700V and 3300V (3.3 kV) SiC Schottky diodes.

In summary, SiC offers the following advantages over silicon:

  • Better reverse current capability
  • Higher temperature stability
  • Higher radiation resistance

Breakdown Voltage

The breakdown voltage of a diode is the reverse voltage at which the diode begins to conduct significant current in the reverse direction; it determines the maximum voltage the diode can withstand before it fails. SiC SBDs exhibit higher breakdown voltages than silicon diodes because of SiC’s wider bandgap and correspondingly higher critical electric field. This higher breakdown voltage rating allows SiC diodes to withstand higher voltages without damage.

The higher breakdown voltage of SiC diodes is important for several applications including power converters, inverters and motor drives. In these applications, the diodes are often exposed to high voltages. The higher breakdown voltage of SiC diodes allows them to withstand these high voltages without damage, which can lead to improved reliability and performance.

Forward Voltage Drop

The forward voltage drop of a diode is the voltage drop that occurs across the diode while it conducts current. This parameter determines the conduction efficiency of the diode. At a given blocking voltage, SiC diodes can achieve a lower forward voltage drop than comparable high-voltage silicon diodes: SiC’s higher critical electric field permits a thinner, more heavily doped drift region, so less voltage is lost pushing current through the device. This lower forward voltage drop allows SiC diodes to operate more efficiently than silicon.

The lower forward voltage drop is important for several applications including power converters, inverters and motor drives. In these applications, the diodes are often used to convert power from one form to another. The lower forward voltage drop of SiC diodes allows them to be more efficient in these applications, which can lead to reduced costs and improved performance.
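
As a rough illustration of why the forward voltage drop matters for efficiency, the sketch below estimates first-order conduction loss as P ≈ VF × IF(avg). The forward-voltage and current values are illustrative assumptions, not datasheet figures:

```python
# First-order conduction-loss comparison for a rectifier diode.
# All numeric values are illustrative assumptions, not datasheet figures.

def conduction_loss_w(v_forward: float, i_avg: float) -> float:
    """Approximate conduction loss: forward drop times average current."""
    return v_forward * i_avg

i_avg = 10.0                             # assumed average forward current, A
p_si  = conduction_loss_w(2.5, i_avg)    # assumed high-voltage Si diode V_F
p_sic = conduction_loss_w(1.5, i_avg)    # assumed SiC Schottky V_F

print(f"Si diode:     {p_si:.1f} W")     # 25.0 W
print(f"SiC Schottky: {p_sic:.1f} W")    # 15.0 W
```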

Reverse Recovery

Reverse recovery is a phenomenon that occurs when a diode switches from conducting to blocking. During reverse recovery, stored charge causes current to flow briefly in the reverse direction. This current, combined with the reverse voltage across the diode, dissipates power and can damage the diode if it is not properly managed.

Because SiC Schottky diodes are majority-carrier devices that store essentially no minority-carrier charge, they have a much shorter reverse recovery time, allowing them to switch from conducting to blocking more quickly, which reduces switching loss and the risk of damage. Reverse recovery is an important consideration for any application that uses diodes.
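
A rough sketch of how reverse-recovery charge turns into switching loss, using E ≈ Qrr × VR dissipated once per switching cycle. All numeric values here are illustrative assumptions rather than datasheet figures:

```python
# Estimate the power dissipated by reverse recovery in a hard-switched
# converter. All numeric values are illustrative assumptions.

def recovery_loss_w(q_rr_nc: float, v_bus: float, f_sw_hz: float) -> float:
    """Energy lost per cycle (Q_rr * V_bus) times switching frequency."""
    return q_rr_nc * 1e-9 * v_bus * f_sw_hz

v_bus, f_sw = 400.0, 100e3                    # assumed 400 V bus, 100 kHz
p_si  = recovery_loss_w(500.0, v_bus, f_sw)   # assumed fast Si diode, ~500 nC
p_sic = recovery_loss_w(20.0,  v_bus, f_sw)   # assumed SiC SBD, ~20 nC (capacitive)

print(f"Si diode:     {p_si:.1f} W")          # 20.0 W
print(f"SiC Schottky: {p_sic:.2f} W")         # 0.80 W
```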

Reverse Current

The reverse current of a diode is the current that flows in the reverse direction when the diode is reverse biased; excessive reverse leakage limits a diode’s usefulness in high-voltage applications. The reverse current of SiC diodes is typically much lower than that of silicon Schottky diodes because the SiC material’s higher bandgap means more energy is required to break an electron free from its atom, leaving fewer free carriers available to conduct current in the reverse direction.

High reverse current can cause several problems in high-voltage applications: it can cause the diode to overheat and fail, and it can also cause the diode to emit noise and interference. There are a few ways to reduce the reverse current of a diode. One is to use a diode with a higher breakdown voltage rating. Another is to use a diode with a lower doping level. However, these techniques can compromise the diode’s performance in other ways.

High Temperature and High Current Stability

High-temperature and high-current stability are crucial because SiC diodes are often used in applications that require high currents and temperatures of up to 150°C; this stability is what qualifies SiC diodes for such demanding conditions.

Stability at high temperatures and currents is due to the higher bandgap, which makes SiC more resistant to damage from heat and high-current conditions. SiC diodes also have a lower concentration of impurities than silicon diodes, making them less susceptible to recombination, the process by which an electron and a hole combine and annihilate. Excessive recombination-related degradation can cause the diode to lose its ability to conduct current, leading to failure.

These attributes make SiC diodes well suited for applications that require high temperatures and currents, such as power converters and inverters, leading to improved reliability and performance in the end equipment.

Start Designing with SiC

Getting started with designing with Silicon Carbide (SiC) involves understanding its benefits and applications. We offer a range of Silicon Carbide (SiC) power products which are the key to faster, more efficient energy solutions.

The post mSiC Diode Technology: Ruggedness and Reliability appeared first on ELE Times.

Improving Line Edge Roughness Using Virtual Fabrication

Wed, 07/10/2024 - 13:34

Courtesy: Lam Research

Line edge roughness (LER) is a variation in the width of a lithographic pattern along one edge of a structure inside a chip. Line edge roughness can be a critical variation source and defect mechanism in advanced logic and memory devices, and can lead to poor device performance or even device failure [1~3]. Deposition-etch cycling is an effective technique to reduce line edge roughness. In this study, we demonstrate how virtual fabrication can provide guidance on how to perform deposition/etch cycling in order to reduce LER.

A typical line and via array pattern with a pitch of 40 nm was established as a test structure in the virtual fabrication software. Pattern critical dimensions (CD) and LER amplitude and correlation length (measures of line edge roughness) were then explored under different experimental conditions.

Figure 1: The virtual process flow of a deposition/etch cycle process used to improve LER. (a-c) 3D view; (d-f) top view of the incoming structure after deposition and etch cycling.

A deposition/etch cycling process was applied in a virtual model to improve the line edge roughness (LER) and critical dimension uniformity (CDU) of the pattern (Figure 1). Virtual metrology was used to measure LER standard deviation (LERSTD), LER correlation length (C) and Via CD range (VCDR) to evaluate the impact of the selected process changes on LER and CDU improvement.
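
For readers who want an intuition for these metrics, the sketch below synthesizes a rough line edge with a target roughness amplitude and correlation length and reports its 3-sigma LER. This is a generic Python illustration, not the model used in the virtual fabrication software:

```python
import numpy as np

# Synthesize a line edge with target roughness amplitude and correlation
# length by Gaussian-smoothing white noise, then measure 3-sigma LER.
# A generic illustration only; not the virtual fabrication software's model.

rng = np.random.default_rng(0)

def rough_edge(n_points: int, step_nm: float, sigma_nm: float, corr_len_nm: float):
    """Correlated roughness: white noise convolved with a Gaussian kernel."""
    noise = rng.normal(size=n_points)
    k_x = np.arange(-3 * corr_len_nm, 3 * corr_len_nm + step_nm, step_nm)
    kernel = np.exp(-k_x**2 / (2 * corr_len_nm**2))
    edge = np.convolve(noise, kernel, mode="same")
    return edge * sigma_nm / edge.std()      # rescale to target amplitude

edge = rough_edge(n_points=2000, step_nm=0.5, sigma_nm=1.2, corr_len_nm=10.0)
print(f"LER (3-sigma): {3 * edge.std():.2f} nm")  # 3.60 nm by construction
```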

We ran 1,500 virtual experiments using the incoming pattern CD, LER amplitude (A), LER correlation length (C), etch/deposition amount (THK), and number of deposition/etch cycles (NC) as experimental variables. A subset of the results is shown in Figure 2.

Figure 2: Improving line edge roughness

Figure 2 shows the trend in the Via CD range (VCDR), LER standard deviation (LERSTD), and LER correlation length (C) values with respect to the number of deposition/etch cycles (bottom axis) at different LER A and LER C conditions (top and right axes). Our goal is to minimize VCDR, LERSTD and C values at the lowest number of deposition/etch cycles. We can draw three conclusions from Figure 2.

1) Most of the improvement to LER/VCDR occurs in the first deposition/etch cycle.

2) An increase in the deposition amount (THK, shown in color on Figure 2) has a greater impact on the LER/VCDR improvement than an increase in the number of deposition/etch cycles.

3) The LER correlation length (C) becomes larger after a deposition/etch cycle, but the LER/VCDR improvement is not obvious when the LER correlation length (C) increases.

Figure 3: Improving line edge roughness

As mentioned earlier, most of the LER improvement happened in the first deposition/etch cycle, with the remaining deposition/etch cycles producing a much smaller improvement. Contour plots displaying the LER/VCDR improvement in the first cycle were fitted and are illustrated in Figure 3. From Figure 3, we can draw two conclusions:

1) Although less improvement was seen with a larger incoming LER correlation length (C), the via patterns still improved when a thicker film was deposited during the deposition portion of the cycle, particularly under conditions of larger LER correlation length (LER C) and lower LER amplitude (LER A).

2) LER/VCDR can still be improved at larger incoming LER C conditions by depositing a relatively thicker film.

In this study, a deposition/etch cycling process was simulated using virtual fabrication to improve LER and CDU performance at advanced nodes. The results indicate that most of the LER/VCDR improvement seen during deposition/etch cycling occurred during the first deposition/etch cycle. The deposition/etch cycling process is very effective at reducing high-frequency roughness (that is, when the LER correlation length is small). LER improvements are larger at the via patterns than at the line patterns when a thicker film is deposited under conditions of larger LER correlation length and lower LER amplitude. These results provide quantitative guidance on selecting the optimal deposition/etch amounts and number of cycles to reduce LER and lower defects and variability in the production of advanced semiconductor devices.

The post Improving Line Edge Roughness Using Virtual Fabrication appeared first on ELE Times.

Grid Modernization is Integrating Multiple Industries

Wed, 07/10/2024 - 13:14

Change may be a constant in any industry, but the grid and energy industries are experiencing a transformation unlike any they have seen before. This shift, referred to as grid modernization, is driven by the integration of cutting-edge technologies such as telecommunications, distributed energy resources, battery storage, and solar power, along with the ever-present concern of cybersecurity. It presents both challenges and opportunities that will reshape how the world generates, distributes, and consumes electricity.

History of grid modernization

The traditional process that the grid and energy industries have utilized goes back to the late 19th century with the establishment of the first industrial power plants. In the early 20th century, the grid rapidly expanded, but with a focus on centralized power generation using fossil fuels and long-distance transmission lines. The power sources for energy expanded throughout the 20th century to include nuclear, hydroelectric, and some renewables, but the grid and energy industries continued to be separate, isolated entities until the 21st century.

The push for grid modernization came as concerns rose about aging infrastructure, increasing blackouts, and rising environmental impact. In 2003, the United States Department of Energy created dedicated offices to address grid reliability and security. The past two decades have seen a dramatic rise of renewable energy sources like wind and solar and pushes for grid upgrades to handle fluctuating power generation. Grid modernization strives to tackle these problems to ensure a better system moving forward.

Key aspects of grid modernization

Grid modernization is focused on transforming the current electricity delivery system to meet the demands of the 21st century and beyond. Key aspects of this transformation include:

  • Integration of renewables: A core focus is on smoothly bringing renewable energy sources like solar and wind into the power generation mix. This often requires upgrades to handle the variable nature of these power sources.
  • Smart grid technologies: The Smart Grid concept involves using digital technology to monitor, control, and optimize the flow of electricity. This includes smart meters for consumers and advanced grid management systems for utilities.
  • Infrastructure improvements: Aging grid systems need upgrades to improve reliability and efficiency. This can involve replacing outdated equipment, strengthening transmission lines, and investing in new technologies for power distribution.
  • Consumer management: Modernization aims to give consumers more control over their energy use. This might involve tools for monitoring consumption, participating in demand-response programs, and even generating their own power.
  • Resilience and security: The grid needs to be more resistant to outages caused by weather events, cyberattacks, and other threats. This involves building redundancy and implementing advanced security measures.

Overall, grid modernization is a complex undertaking with far-reaching impacts. The goal of this process is to pave the way for a more reliable, efficient, secure, and environmentally sound electricity system for the future.

Challenges of grid modernization

Grid modernization is a necessary step towards a more sustainable and efficient energy future, but it is not without its hurdles to overcome. Some of the challenges that come with this transformation include:

  • Cost: Upgrading the power grid requires significant investment in new technologies, infrastructure, and cybersecurity measures. Utilities need to find ways to finance these improvements while keeping electricity affordable for consumers.
  • Variability of renewables: Renewable energy sources like solar and wind are variable in their output. The grid needs to be able to handle these fluctuations without compromising reliability.
  • Interoperability: Modernization often involves integrating equipment from new technology sources. Ensuring seamless communication and utilization between new and legacy systems requires common standards and protocols, which are still being developed.
  • Cybersecurity: A more digital grid with new data sources creates increasing vulnerabilities to cyberattacks. Robust security measures are essential to protect critical infrastructure.
  • Regulation: The regulatory framework needs to adapt to support grid modernization efforts and incentivize investment in and adoption of new technologies.

Opportunities with grid modernization

While grid modernization presents a complex challenge, the potential benefits are significant. Overcoming the hurdles and capitalizing on these opportunities creates numerous advantages, including:

  • Clean energy integration: A modernized grid can efficiently integrate renewable energy sources, reducing global reliance on fossil fuels and combating climate change.
  • Consumer empowerment: Consumers can gain more control over their energy use through smart meters and demand-response programs, leading to increased participation in the energy market, potentially even selling excess power back to the grid.
  • Improved grid reliability and efficiency: Modernization can lead to fewer power outages, reduced energy losses, and a more efficient overall system.
  • Economic growth: Investment in a modern grid will drive economic growth and the creation of new jobs in areas like renewable energy technologies, grid construction, and cybersecurity solutions.
  • Innovation: Modernization opens doors for innovation in areas like energy storage, distributed generation, data analytics, cybersecurity, and telecommunications.

The road ahead

The transformation of the grid and energy industry is complex and ongoing. Collaboration between utilities, technology companies, policymakers, and consumers is essential to overcome the challenges and seize the opportunities presented by grid modernization. By investing in infrastructure upgrades, developing innovative technologies, and prioritizing cybersecurity, the world can create a more resilient, efficient, and sustainable energy future.

Key to this transition will be the integration of five key technology areas: telecommunications, distributed energy resources, battery storage, solar power, and cybersecurity.

The post Grid Modernization is Integrating Multiple Industries appeared first on ELE Times.

Microchip Technology Expands Processing Portfolio to Include Multi-Core 64-Bit Microprocessors

Wed, 07/10/2024 - 12:53

PIC64GX MPU is the first of several product lines planned for Microchip’s PIC64 portfolio

Real-time, compute-intensive applications such as smart embedded vision and Machine Learning (ML) are pushing the boundaries of embedded processing requirements, demanding more power efficiency, hardware-level security and high reliability at the edge. With the launch of its PIC64 portfolio, Microchip Technology is expanding its computing range to meet the rising demands of today’s embedded designs. The PIC64 family, which makes Microchip a single-vendor solution provider for MPUs, is designed to support a broad range of markets that require both real-time and application-class processing. PIC64GX MPUs, the first of the new product line to be released, enable intelligent edge designs for the industrial, automotive, communications, IoT, aerospace and defense segments.

“Microchip is a leader in 8-, 16- and 32-bit embedded solutions and, as the market evolves, so must our product lines. The addition of our 64-bit MPU portfolio allows us to offer low-, mid- and high-range compute processing solutions,” said Ganesh Moorthy, CEO and President of Microchip Technology. “The PIC64GX MPU is the first of several 64-bit MPUs designed to support the intelligent edge and address a broad range of performance requirements across all market segments.”

The intelligent edge often requires 64-bit heterogeneous compute solutions with asymmetric processing to run Linux, real-time operating systems and bare metal in a single processor cluster with secure boot capabilities. Microchip’s PIC64GX family manages mid-range intelligent edge compute requirements using a 64-bit RISC-V quad-core processor with Asymmetric Multiprocessing (AMP) and deterministic latencies. The PIC64GX MPU is the first RISC-V multi-core solution that is AMP capable for mixed-criticality systems. It is designed with a quad-core, Linux-capable Central Processing Unit (CPU) cluster, a fifth microcontroller-class monitor core and 2 MB of flexible L2 cache, running at 625 MHz.

The PIC64GX family boasts pin-compatibility with Microchip’s PolarFire SoC FPGA devices, offering a large amount of flexibility in the development of embedded solutions. Additionally, the 64-bit portfolio will leverage Microchip’s easy-to-use ecosystem of tools and supporting software, including a host of powerful processes to help configure, develop, debug and qualify embedded designs.

The PIC64 High-Performance Spaceflight Computing (PIC64-HPSC) family is also being launched as part of Microchip’s first wave of 64-bit offerings. The space-grade, 64-bit multi-core RISC-V MPUs are designed to increase compute performance by more than 100 times while delivering unprecedented radiation and fault tolerance for aerospace and defense applications. NASA’s Jet Propulsion Laboratory (NASA-JPL) announced in August 2022 that it had selected Microchip to develop an HPSC processor as part of its ongoing commercial partnership efforts. The PIC64-HPSC family represents a new era of autonomous space computing for NASA-JPL and the broader defense and commercial aerospace industry.

With the introduction of its PIC64 portfolio, Microchip has become the only embedded solutions provider actively developing a full spectrum of 8-, 16-, 32- and 64-bit microcontrollers (MCUs) and microprocessors (MPUs). Future PIC64 families will include devices based on RISC-V or Arm architectures, and embedded designers will be able to take advantage of Microchip’s end-to-end solutions—from silicon to embedded ecosystems—for faster design, debug and verification and a reduced time to market. To learn more, visit the Microchip 64-bit web page.

Development Tools

The PIC64GX family is supported by the PIC64GX Curiosity Evaluation Kit and will feature integration with Microchip’s MPLAB Extensions for VS Code. The PIC64 MPUs are also supported by Linux4Microchip resources and Linux distributors such as Canonical Ubuntu OS, the Yocto Project and Buildroot with support for Zephyr RTOS and associated software stacks.

Pricing and Availability

The PIC64GX Curiosity Kit is now available for designers to get started with evaluation— for additional information and to purchase, contact a Microchip sales representative, authorized worldwide distributor or visit Microchip’s Purchasing and Client Services website, www.microchipdirect.com.

The post Microchip Technology Expands Processing Portfolio to Include Multi-Core 64-Bit Microprocessors appeared first on ELE Times.

Working of SIM & eSIM Remote SIM Provisioning

Wed, 07/10/2024 - 12:14

Courtesy: Infineon

Do you wonder how a traditional SIM works? Today, through this blog, I will walk through the workings of the traditional SIM as well as eSIM Remote SIM Provisioning (RSP). So, let’s jump into the techy details.

Working of physical SIM cards

Let’s first take a look at Figure 1 below:

Figure 1: Cases explaining the working of physical SIM cards

Did you understand everything from the figure? Well, I’ll explain it now.

Traditional SIM cards were owned and issued by a particular network operator. Figure 1 above shows an end user signing a contract with their selected network operator, paying for the service and receiving a physical SIM card (Case (a)).

Later, the same end user signs a contract with a different network operator, pays the service charges and gets a new physical SIM card (Case (b)).

Here, we see that to use network (a) or network (b), the end user has to swap the SIM cards manually.

eSIM remote SIM provisioning

After reading about how a physical SIM works, you may be wondering how an eSIM differs from a traditional SIM.

Take a look at the image below:

Figure 2: Remote SIM Provisioning

For remote SIM provisioning, no physical SIM card is required; instead, an embedded SIM (also called an eUICC) is built into your handset or device. A single eSIM can accommodate and securely store multiple profiles on one device, and each profile comprises the operator’s as well as the subscriber’s data.

Let’s see what Figure 2 explains.

At first, in step (a), the end user signs a contract with their preferred network operator and pays the required charges; instead of getting a physical SIM, they receive instructions for connecting to the operator’s Remote SIM Provisioning (RSP) system, for example a QR code. This QR code contains the address of the RSP system (the SM-DP+ (Subscription Manager Data Preparation) server within the GSMA specifications), which allows the end user to download and install a SIM profile (as shown in step (b)). Once the profile is active, the user can connect to the network successfully (as shown in step (c)).
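
As a concrete illustration of what the device does with that QR code, here is a minimal sketch that parses an activation code, assuming the GSMA SGP.22 layout of LPA:1$&lt;SM-DP+ address&gt;$&lt;matching ID&gt;. The server name and matching ID in the example are made up:

```python
# Minimal sketch of parsing an eSIM activation code, assuming the
# GSMA SGP.22 layout: LPA:1$<SM-DP+ address>$<matching ID>[...]
# The device uses the address to contact the SM-DP+ server and
# fetch the prepared profile.

def parse_activation_code(code: str) -> dict:
    if not code.startswith("LPA:"):
        raise ValueError("not an eSIM activation code")
    fields = code[len("LPA:"):].split("$")
    return {
        "format_version": fields[0],   # "1" for SGP.22 activation codes
        "smdp_address":   fields[1],   # SM-DP+ server to download from
        "matching_id":    fields[2],   # identifies the prepared profile
    }

# Hypothetical example; the server name and matching ID are made up:
print(parse_activation_code("LPA:1$smdp.example.com$04386-AGYFT-A74Y8-3F815"))
```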

Important note: In Figure 2, the end user can repeat the process to install more profiles on a single device as shown below in Figure 3. This allows users to switch between profiles 1 and 2 as per their needs.

Figure 3: Multiple Installed Profiles on eSIM

Some important terms:

Profile: A profile comprises the operator data related to a subscription. It includes data such as the operator’s credentials and operator-provided third-party applications.

eUICC: Embedded Universal Integrated Circuit Card (eUICC) is a secure element in the eSIM solution which can accommodate multiple profiles.

Profiles are always remotely downloaded over-the-air into an eUICC. Although the eUICC is an integral part of the device, the profile remains the property of the operator as it contains items “owned” by the operator (International Mobile Subscriber Identity (IMSI), Integrated Circuit Card ID (ICCID), security algorithms, etc.) and is supplied under licence.

Hence, the eUICC acts as a secure element to store the eSIM Profiles in the device.

We now know how traditional SIM cards and embedded SIMs (eSIMs) function differently. In the next blog, I’ll discuss the GSMA M2M solution: the first RSP solution developed by the GSM Association (GSMA) for Machine-to-Machine (M2M) connectivity.

The post Working of SIM & eSIM Remote SIM Provisioning appeared first on ELE Times.

Congatec modules set new benchmarks for secure edge AI applications

Wed, 07/10/2024 - 10:57

congatec – a leading provider of embedded and edge computing technology – presents new high-performance computer-on-modules (COMs) with i.MX 95 processors from NXP, thereby expanding its extensive module portfolio with low-power NXP i.MX Arm processors. In doing so, congatec underlines its strong partnership with NXP. Customers benefit from straightforward scalability and reliable upgrade paths for existing and new energy-efficient edge AI applications with high security requirements.

In these applications, the new modules offer up to three times the GFLOPS computing performance compared to the previous generation based on i.MX 8M Plus processors. NXP’s new neural processing unit, called ‘eIQ Neutron’, doubles the inference performance for AI-accelerated machine vision. In addition, the hardware-integrated EdgeLock® secure enclave simplifies the implementation of in-house cyber security measures.

The new conga-SMX95 SMARC modules are designed for an industrial temperature range of -40°C to +85°C, are mechanically robust and are optimised for cost- and energy-efficient applications. The integrated high-performance eIQ Neutron NPU makes it possible for AI-accelerated workloads to be performed even closer to the local device level. Specific applications for the new SMARC modules can be found in AI-accelerated low-power applications in sectors such as industrial production, machine vision and visual inspection, rugged HMIs, 3D printers, robotics controllers in AMR and AGV, as well as medical imaging and patient monitoring systems. Other target applications include passenger seat-back entertainment in buses and aircraft, along with fleet management in transportation, and construction and farming applications.


The feature set in detail

The new conga-SMX95 SMARC 2.1 modules are based on the next generation of NXP i.MX 95 application processors with 4-6 Arm Cortex-A55 cores. NXP is using the new Arm Mali 3D graphics unit for the first time, which delivers up to three times the GPU performance of predecessors based on the i.MX 8M Plus. Also new is the image signal processor (ISP) for hardware-accelerated image processing. Particularly noteworthy is the NXP eIQ Neutron NPU for hardware-accelerated AI inference and machine learning (ML) on the edge in the new SMARC modules. The corresponding eIQ® software development environment from NXP simplifies the implementation of in-house ML applications for OEMs.

In addition, the new SMARC modules integrate a real-time domain for real-time controllers. The conga-SMX95 SMARC modules offer 2x Gbit Ethernet with TSN for synchronised and deterministic network data transmission, and LPDDR5 (with inline ECC) for data security. For display connectivity, the new modules offer DisplayPort as the standard interface and the still widely used LVDS display interface. For direct camera connectivity, the modules have 2x MIPI-CSI.

congatec also offers an extensive hardware and software ecosystem as well as comprehensive design-in-services for simplified and accelerated application development. These include, among other things, evaluation- and production-ready application carrier boards and custom-tailored cooling solutions. In terms of services, congatec offers comprehensive documentation, training and signal integrity measurements for application development.

The post Congatec modules set new benchmarks for secure edge AI applications appeared first on ELE Times.

Reimagine Enterprise Data Center Design and Operations

Wed, 07/10/2024 - 10:14

Ever feel like the only constant in the data center industry is that things are always changing? You’re not alone. From rising densities to evolving environmental policies, there’s never a shortage of change in our field. Navigating this constant change is especially difficult for large enterprises with legacy infrastructures.

We believe that every data center should have its own digital twin to ensure data center teams are ready to adapt to these rapid changes. We put together an eBook that details several case studies from large enterprises in various industries with different pain points and needs that have found great success using Cadence data center digital twin solutions.

Aerospace Enterprise Dramatically Improves Data Center Performance

One of the world’s largest aerospace companies uses Cadence Reality DC Digital Twin for data center performance efficiency modeling and asset management. They initially needed a data center solution to help address cooling, compliance, and low operating efficiency issues. Before implementing Cadence Reality DC Digital Twin, the data center management team was using a manual, trial-and-error approach to IT installation planning, which was both time-consuming and risky. At one point, they even experienced an outage.

This large aerospace enterprise began using Cadence Reality DC Digital Twin to perform engineering simulations. They built and calibrated models to form a digital twin model of their data halls, enabling them to see what would happen in different scenarios by testing them in the virtual model. Using Cadence’s built-in library items, which include cabinets, IT devices, and more, the aerospace enterprise could easily simulate how new deployments would perform in their data center environment. They also used Cadence Reality DC Digital Twin to examine cooling and power capacities and search for greater efficiency gains.

With help from Cadence tools and services, the company was able to simulate the changes in IT equipment in a virtual environment to understand the performance impact. Cadence Reality DC Digital Twin enabled the company to be more proactive in their approach to data center management. This simulation-based methodology for IT installation planning enabled the data center management team to adjust environments for maximum performance before installation. The company has quantified that it has been able to reduce power consumption and increase performance by 30-40% (depending on the data center).

This large enterprise now operates its data centers more reliably and sustainably, reducing power consumption without increasing environmental compliance risk; the management team was even able to reduce the PUE at one of its data centers from 4.0 to 1.6. They continue to use Cadence data center software to assess new deployments and efficiently perform monthly IT asset audits, saving their engineering department a significant amount of time. Cadence Reality DC Digital Twin helps this aerospace enterprise meet the competing objectives of compliance and efficiency at the same time.
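
For reference, PUE (Power Usage Effectiveness) is simply total facility energy divided by IT equipment energy, so the reported improvement can be sketched as follows; the absolute kilowatt figures are illustrative assumptions:

```python
# PUE = total facility power / IT equipment power. The before/after
# PUE values are from this article; the kW figures are assumptions.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

it_kw = 500.0                                              # assumed IT load
before = pue(total_facility_kw=2000.0, it_load_kw=it_kw)   # PUE 4.0
after  = pue(total_facility_kw=800.0,  it_load_kw=it_kw)   # PUE 1.6

overhead_cut = ((2000.0 - it_kw) - (800.0 - it_kw)) / (2000.0 - it_kw)
print(f"PUE before: {before:.1f}, after: {after:.1f}")
print(f"Non-IT overhead reduced by {overhead_cut:.0%}")    # 80%
```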

Effectively Design and Operate Enterprise Data Centers

Large global companies in automotive, healthcare, finance, and aerospace are using Cadence data center solutions to effectively design and operate their data centers.

Danielle Gibson | Cadence Systems

The post Reimagine Enterprise Data Center Design and Operations appeared first on ELE Times.

Advanced Logic and Memory Need New Tools for Optical Wafer Inspection

Wed, 07/10/2024 - 09:48

Ganga Sivaraman | Product Marketing Director, Optical Patterned Wafer Inspection | Applied Materials

Semiconductor production is an expensive and complex endeavor. The journey from R&D to high-volume manufacturing is a race, and whoever crosses the finish line first wins competitive advantage in terms of revenue, market share and profitability. Advanced chips are built up one layer at a time, and each of the billions of individual features must be perfectly patterned and aligned to create working transistors and interconnects with the best performance and power characteristics.

In both advanced logic and memory, the number of processing steps is increasing as we add more and more complexity to the latest and greatest chips. Defects introduced in between the process steps directly impact wafer yields and ultimately slow down an economy that runs on silicon. Patterned wafer inspection – the scientific study of defects across the entire wafer manufacturing lifecycle – has always been critical to controlling and perfecting the chipmaking process. However, as chip structures become ever smaller and the process grows in complexity, the way we inspect leading-edge chips needs to evolve.

More Complexity Calls for More Inspection

Management guru Peter Drucker is credited with saying, “what gets measured, gets managed.” Often, fab inspection strategies analyze data from a limited number of intermediate and end-of-module steps. But as process complexity increases, and techniques like multipatterning can magnify minor defects, we need to gather data from all key process modules. Otherwise, defects and process drift may not become visible until engineers are faced with costly and inexplicable yield issues.

When determining where and how often to inspect, the right technical answer is, “more is better.” At the same time, fab managers need to control costs, which is why they must deploy an optimized approach that uses the most cost-efficient tools for the job. A mix and match of optical inspection approaches – both brightfield and darkfield – is the key for cost-effective yield monitoring and control.


Brightfield and darkfield wafer inspection technologies are complementary and typically used for addressing different application needs. Brightfield primarily collects reflected light, with the source of illumination oriented perpendicular to the wafer’s surface. Light bounces off the surface and returns a “bright” image showing a realistic view of the patterned features on the wafer, similar to the way a mirror shows a clear and precise reflection of a person’s face. With darkfield, the wafer can be imaged using either normal illumination or oblique illumination, where the light is at an angle to the wafer surface. Darkfield focuses primarily on collecting scattered light. When a beam of light encounters angled or rough surface features within a chip’s nanoscale patterns, its trajectory is altered. Collecting this scattered light produces images of the edges of 3D structures against a dark background.


Brightfield inspection primarily targets high-sensitivity applications and delivers lower inspection throughput. Darkfield is suited to lower-sensitivity applications — typically targeting defects of 20nm or greater in size — and delivers very high inspection throughput.
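
A toy sketch of the mix-and-match routing logic this implies: send each inspection step to the faster, cheaper darkfield tool when its sensitivity suffices, and reserve brightfield for the smallest defects. The 20 nm threshold comes from this article; the step names and defect sizes are assumed for illustration:

```python
# Route inspection steps to brightfield or darkfield based on the
# smallest defect of interest. The 20 nm darkfield threshold is from
# this article; the example steps and sizes are assumptions.

DARKFIELD_MIN_DEFECT_NM = 20.0

def choose_mode(smallest_defect_nm: float) -> str:
    if smallest_defect_nm >= DARKFIELD_MIN_DEFECT_NM:
        return "darkfield (high throughput, lower cost)"
    return "brightfield (high sensitivity, lower throughput)"

steps = [("post-CMP oxide scratch check", 30.0),
         ("sub-20nm particle monitor", 12.0)]
for name, defect_nm in steps:
    print(f"{name}: {choose_mode(defect_nm)}")
```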


Wafer Inspection at a Crossroads

New challenges in advanced logic and memory are calling for a new playbook for optical wafer inspection. Chipmakers are telling us that they need new capabilities which maintain the high throughput and low cost-of-ownership characteristic of darkfield inspection while delivering optimal sensitivity for both 3D surface defects and surface pattern defects.

For example, defects in the sub-20nm range have traditionally been considered too small to have a significant impact on wafer yield and therefore have not been a priority for optical inspection. As the critical dimensions of devices continue to shrink, defects in this size range become more problematic. If left undetected, these small particles can block etching and cause pattern defects in subsequent steps. Traditional darkfield tools do not have the resolution to detect these critical defects of interest.

Likewise, when creating the vias that connect vertical layers of metal interconnects, tiny micro-scratches can be left behind in the oxide layer after chemical mechanical planarization (CMP) steps. These scratches must be detected early before they turn into bridge defects when the vias are filled with metal.


Applied Materials has a strong presence in the optical inspection market with its Enlight wafer inspection system, which offers brightfield and darkfield modes. We believe chipmakers who are pushing the leading edge of logic and memory will eventually need a next-generation darkfield tool that can deliver a new combination of darkfield application sensitivity and throughput.

Based on extensive customer engagements, we are preparing to introduce a state-of-the-art wafer inspection system designed to deliver the industry’s highest darkfield application sensitivity at higher throughput. Our solution is designed to make it cost-effective for chipmakers to inspect more inter-module process steps, enabling them to effectively monitor and control wafer yield. With more than 10 customer engagements in 2023, we have successfully demonstrated our capabilities in high-throughput wafer inspection in a variety of processing modules such as deposition, CMP, lithography, etch, implant and a few custom modules.

The post Advanced Logic and Memory Need New Tools for Optical Wafer Inspection appeared first on ELE Times.
