ELE Times

Latest product and technology information from electronics companies in India

Infineon to complete limited Share Buyback Program serving fulfillment of obligations under existing employee participation programs

Wed, 03/20/2024 - 12:12

Infineon Technologies AG has successfully completed its Share Buyback Program 2024, announced on 26 February 2024 in accordance with Article 5(1)(a) of Regulation (EU) No 596/2014 and Article 2(1) of Delegated Regulation (EU) No 2016/1052. As part of the Share Buyback Program 2024, a total of 7,000,000 shares (ISIN DE0006231004) were acquired. The total purchase price of the repurchased shares was € 232,872,668. The average purchase price paid per share was € 33.27.

Alexander Foltin, Head of Finance, Treasury and Investor Relations of Infineon

The buyback was carried out on behalf of Infineon by an independent credit institution via Xetra trading on the Frankfurt Stock Exchange, serving the sole purpose of allocating shares to employees of the company or affiliated companies, members of the Management Board of the company as well as members of the management board and the board of directors of affiliated companies as part of the existing employee participation programs.

The post Infineon to complete limited Share Buyback Program serving fulfillment of obligations under existing employee participation programs appeared first on ELE Times.

UiPath Unveils New Family of LLMs at AI Summit to Empower Enterprises to Harness Full Capabilities of GenAI

Wed, 03/20/2024 - 08:16

Company introduces Context Grounding to augment GenAI models with business-specific data, an IBM watsonx.ai connector, and updates for Autopilot

UiPath, a leading enterprise automation and AI software company, recently announced several new generative AI (GenAI) features in its platform designed to help enterprises realize the full potential of AI with automation by accessing powerful, specialized AI models tailored to their challenges and most valuable use cases. UiPath showcased its latest capabilities at the virtual AI Summit that took place on March 19th, 2024.

The UiPath Business Automation Platform offers end-to-end automation for business processes. There are four key factors that business leaders seeking to embed AI in their automation program must keep top of mind: business context, AI model flexibility, actionability, and trust. The new AI features of the UiPath Platform address these key areas to ensure customers are equipped with the tools necessary to enhance the performance and accuracy of GenAI models and tools and more easily tackle diverse business challenges with AI and automation.

“Businesses need an assortment of AI models, the best in class for every task, to achieve their full potential. Our new family of UiPath LLMs, along with Context Grounding to optimize GenAI models with business-specific data, provide accuracy, consistency, predictability, time to value, and empower customers to transform their business environments with the latest GenAI capabilities on the market,” said Graham Sheldon, Chief Product Officer at UiPath. “These new features ensure that AI has the integrations, data, context, and ability to take action in the enterprise with automation to meet our customers’ unique needs.”

At the AI Summit, UiPath announced:

Generative Large Language Models (LLMs) 

The new LLMs, DocPATH and CommPATH, give businesses models that are extensively trained for their specific tasks: document processing and communications. General-purpose GenAI models like GPT-4 struggle to match the performance and accuracy of models specially trained for a specific task. Instead of relying on imprecise and time-consuming prompt engineering, DocPATH and CommPATH provide businesses with extensive tools to customize AI models to their exact requirements, allowing them to understand any document and a huge variety of message types.

Context Grounding to augment GenAI models with business-specific data

Businesses need a safe, reliable, low-touch way to use their business data with AI models. To address this need, UiPath is introducing Context Grounding, a new feature within the UiPath AI Trust Layer that will enter private preview in April. UiPath Context Grounding helps businesses improve the accuracy of GenAI models by giving prompts a foundation of business context through retrieval-augmented generation (RAG). The system extracts information from company-specific datasets, such as a knowledge base or internal policies and procedures, to create more accurate and insightful responses.

Context Grounding makes business data LLM-ready by converting it to an optimized format that can easily be indexed, searched, and injected into prompts to improve GenAI predictions. Context Grounding will enhance all UiPath GenAI experiences in UiPath Autopilots, GenAI Activities, and intelligent document processing (IDP) products like Document Understanding.
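
Conceptually, retrieval-augmented generation follows a simple retrieve-then-prompt pattern. The sketch below illustrates only that generic pattern; the toy retriever and function names are hypothetical and are not UiPath's Context Grounding API:

```python
# Generic RAG pattern: retrieve business context, then inject it into the prompt.
# All names here are hypothetical placeholders, not UiPath APIs.
def retrieve(query: str, index: dict, top_k: int = 2) -> list:
    """Toy retriever: rank indexed documents by word overlap with the query."""
    scored = sorted(
        index.items(),
        key=lambda kv: len(set(query.lower().split()) & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def build_grounded_prompt(query: str, index: dict) -> str:
    context = "\n".join(retrieve(query, index))
    return f"Answer using only this company context:\n{context}\n\nQuestion: {query}"

kb = {"policy": "Refunds are issued within 14 days.", "hours": "Support is open 9-5 CET."}
print(build_grounded_prompt("When are refunds issued?", kb))
```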

GenAI Connectors & IBM watsonx.ai

IBM used the UiPath Connector Builder to create a unique watsonx.ai connector. The new connector provides UiPath customers with access to multiple foundation models currently available in watsonx.ai. GenAI use cases, such as summarization, Q&A, task classification, and optimization for chat, can be quickly integrated and infused into new and existing UiPath workflows and frameworks. IBM watsonx customers can also access broader UiPath platform capabilities, such as Test Automation, Process Mining, and Studio workflows, all within a low/no-code UX environment. IBM’s industry-leading consulting capabilities, coupled with the UiPath Business Automation Platform, will help support successful GenAI adoption, including the right strategy for infusing AI into more powerful and complex automated workflows.

“IBM and UiPath strongly believe that AI and GenAI are rapidly changing the entire landscape of business globally,” said Tom Ivory, Senior Partner, Vice President, Global Leader of Global Automation at IBM. “We are excited that IBM’s watsonx.ai and UiPath’s Connector Builder together now help create insights, and efficiencies that result in real value for our customers.”

The IBM watsonx.ai connector is now generally available through the Integration Service Connector Catalog.

Autopilot for Developers and Testers

UiPath Autopilot is a suite of GenAI-powered experiences across the platform that makes automation builders and users more productive. Autopilot experiences for Developers and Testers are now available in preview, with targeted general availability in June. Over 1,500 organizations are using UiPath Autopilot, resulting in over 7,000 generations and more than 5,500 expressions generated per week.

Autopilot for Developers empowers both professional and citizen automation developers to create automation, code, and expressions with natural language, accelerating every aspect of building automation.

Autopilot for Testers transforms the testing lifecycle, from planning to analysis, reducing the burden of manual testing and allowing enterprise testing teams to test more applications faster. Autopilot for Testers empowers testing teams to rapidly generate step-by-step test cases from requirements and any other source documents, generate automation from test steps, and surface insights from test results, allowing testers to identify the root cause of issues in minutes, not hours or days.

Prebuilt GenAI Activities for faster time-to-value

New prebuilt GenAI Activities utilize the UiPath AI Trust Layer and make it easy to access, develop with, and leverage high-quality AI predictions in automation workflows, delivering faster time to value. GenAI Activities provide access to a growing collection of GenAI use cases, such as text completion for emails, categorization, image detection, language translation, and the ability to filter out personally identifiable information (PII), enabling enterprises to do more with GenAI. With GenAI Activities, enterprises can reduce build time and achieve a competitive edge, using GenAI to help customize the customer experience, optimize supply chains, forecast demand, and make informed decisions.

The post UiPath Unveils New Family of LLMs at AI Summit to Empower Enterprises to Harness Full Capabilities of GenAI appeared first on ELE Times.

Expanded Semiconductor Assembly and Test Facility Database Tracks OSAT and Integrated Device Manufacturers in 670 Facilities, SEMI and TechSearch International Report

Wed, 03/20/2024 - 07:58

New edition of database tracks 33% more facilities and highlights advanced packaging and factory certifications

The new edition of the Worldwide Assembly & Test Facility Database expands coverage to 670 facilities, 33% more than the previous release, including 500 outsourced semiconductor assembly and test (OSAT) service providers and 170 integrated device manufacturer (IDM) facilities, SEMI and TechSearch International announced today. The database is the only commercially available listing of assembly and test suppliers that provides comprehensive updates on packaging and testing services offered by the semiconductor industry.

The updated database includes factory certifications in critical areas such as quality, environmental, security and safety as well as data reflecting automotive quality certifications obtained by each site. The new edition also highlights advanced packaging offerings by each factory, defined as flip chip bumping and assembly, fan-out and fan-in wafer-level packaging (WLP), through silicon via (TSV), 2.5D and 3D capability.

“Understanding the location of legacy packaging as well as advanced packaging and test is essential to effective supply-base management,” said Jan Vardaman, President at TechSearch International. “The updated Worldwide Assembly & Test Facility Database is an invaluable tool in tracking the packaging and assembly ecosystem.”

“The database increases its focus on advanced packaging while highlighting conventional packaging capabilities and new test capabilities to support innovations in key end markets including automotive,” said Clark Tseng, Senior Director of SEMI Market Intelligence.

Combining the semiconductor industry expertise of SEMI and TechSearch International, the Worldwide Assembly & Test Facility Database update also lists revenues of the world’s top 20 OSAT companies and captures changes in technology capabilities and service offerings.

Covering facilities in the Americas, China, Europe, Japan, Southeast Asia, South Korea and Taiwan, the database highlights new and emerging packaging offerings by manufacturing locations and companies. Details tracked include:

  • Plant site location, technology, and capability: Packaging, test, and other product specializations, such as sensor, automotive and power devices
  • Packaging assembly service offerings: ball grid array (BGA), specific leadframe types such as quad flat package (QFP), quad flat no-leads (QFN), and small outline (SO), flip-chip bumping, WLP, modules/system-in-package (SiP), and sensors
  • New manufacturing sites announced, planned or under construction

Key Report Highlights

  • The world’s top 20 OSAT companies in 2022 with financial comparisons to 2021, as well as preliminary comparisons to 2023
  • 150-plus facility additions compared to the 2022 report
  • 200-plus companies and more than 670 total back-end facilities
  • 325-plus facilities with test capabilities
  • 100-plus facilities offering QFN
  • 85-plus bumping facilities, including more than 65 with 300mm wafer bumping capacity
  • 90-plus facilities offering WLCSP technology
  • 130-plus OSAT facilities in Taiwan, more than 150 in China, and more than 60 in Southeast Asia
  • 50-plus IDM assembly and test facilities in Southeast Asia, about 45 in China, nearly 20 in the Americas, and more than 12 in Europe
  • More than 30% of global factories offering advanced packaging capabilities in one of the following areas: flip chip bumping and assembly, fan-out and fan-in WLP, TSV, 2.5D and 3D

Worldwide Assembly & Test Facility Database licenses are available for single and multiple users. SEMI members save up to 25% on licenses. Download a sample of the report and see pricing and ordering details.

For more information on the database or to subscribe to SEMI market data, visit SEMI Market Data or contact the SEMI Market Intelligence Team (MIT) at mktstats@semi.org.

The post Expanded Semiconductor Assembly and Test Facility Database Tracks OSAT and Integrated Device Manufacturers in 670 Facilities, SEMI and TechSearch International Report appeared first on ELE Times.

STM32 Summit: 3 important embedded systems trends for 2024

Wed, 03/20/2024 - 07:36

Author: STMicroelectronics

Where are embedded systems heading in 2024, and how can makers stay ahead of the curve? Few people asked these questions a decade ago; today, the answers can make or break entire companies. Indeed, once relegated to a few niche applications, embedded systems are now everywhere. From factories to home appliances, and from expensive medical devices in hospitals to ubiquitous wearables, an embedded system is usually at the heart of every innovation that makes us more connected or more sustainable. ST will thus hold the STM32 Summit on March 19 to introduce our community to the latest technologies shaping our industry. In the meantime, let’s step back to see where 2024 is taking us.

Computational efficiency or doing more with less

Avid readers of the ST Blog know that greater efficiency is often a key driver of our innovations. However, we may need to broaden our understanding of “efficiency”. In essence, efficiency is the ratio of work done per amount of energy spent. In the microcontroller world, it refers to electrical efficiency. Hence, improving efficiency means lowering the power consumption while offering the same or more computational throughput. However, as embedded systems applications become vastly more optimized, a new efficiency ratio shapes the industry: application complexity for a given computational throughput.

To illustrate this point, let’s use a simple thought experiment. Imagine bringing one of today’s high-performance MCUs back in time just five years. That device could not run the neural networks or rich UIs it runs today, because frameworks and machine learning algorithms were far cruder. Embedded systems aren’t just more powerful; new application-level optimizations have made them more capable. Consequently, the same amount of computational power yields far greater results today.

Trained vs. pruned and quantized with TAO Toolkit

For instance, the quantization of neural networks enabled more powerful edge AI systems. In the case of a recent demo with Schneider Electric, a deeply quantized neural network meant that a people-counting application ran on an STM32H7. And NVIDIA featured the same MCU when running a network optimized with its TAO Toolkit and STM32Cube.AI. Similarly, new motor control algorithms, like ZeST, mean MCUs drive motors more accurately and efficiently, and new UI framework optimizations mean richer graphics while needing less memory. For instance, the latest version of TouchGFX supports vector fonts, and our latest STM32U5 has an IP accelerating vector graphics, which wouldn’t have been as impressive without the graphical framework to help developers take advantage of it.
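
As a rough illustration of the technique (this is generic PyTorch post-training dynamic quantization, shown only to make the concept concrete; it is not the STM32Cube.AI or TAO Toolkit workflow itself, and the model is a placeholder):

```python
# Post-training dynamic quantization: weights are stored as int8 instead of
# float32, shrinking the model and speeding up inference on supported targets.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized)  # Linear layers are replaced by their quantized counterparts
```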

Consequently, engineers must not only ensure that their embedded processing solution reduces power consumption but also that it runs the latest optimizations. In many instances, a real-time application is no longer just basic code running in a while loop. Developers must find new ways to leverage the cloud, machine learning, sensor fusion, or graphical interfaces. Hence, it is critical to find the right MCU supported by an entire ecosystem that can bring these new optimizations to them. Engineers must ask not only how fast a device runs but also how well it can support the complexity and richness of the application.

Multiple wireless protocol support or talking more with the world

Figure: A wireless utility metering system

The idea that an embedded system connects to a network is far from new. The industry even coined the term “Internet of Things” because so many applications rely on the network of networks. However, until now, applications have primarily chosen one mode of communication, either wired or wireless. And if the latter, it used to settle on one wireless protocol, such as cellular, Wi-Fi, or Bluetooth. Over the years, the industry has seen the multiplication of wireless protocols. From 6LoWPAN to LoRaWAN, Zigbee, Thread, NB-IoT, and more, there’s no shortage of new protocols. Interestingly, there has also been the absence of a clear winner. Instead of a traditional consolidation, many technologies seem to prosper concomitantly.

Let’s take the 2.4 GHz spectrum as an example. While Bluetooth is still dominant, Zigbee and Thread have grown in popularity. Many companies also work on a custom IEEE 802.15.4 protocol for competitive or regulatory reasons. In fact, the proliferation of network protocols is so rampant that Matter, the latest initiative unifying home automation under one standard, runs over multiple wireless technologies like Wi-Fi, Thread, and Bluetooth and supports many 2.4 GHz bridges, including Zigbee and Z-Wave instead of settling on just one wireless technology.

As a result, engineers face a relatively new challenge: creating a system that supports multiple wireless protocols to stay competitive. Indeed, by adopting a device that supports multiple technologies, a company can qualify one MCU and adapt to the needs of the market. For instance, a developer could work on a proprietary IEEE 802.15.4 protocol in one region and then adopt Thread in another while keeping the exact same hardware; only the code base would change. Engineers would thus reduce their time to market and enjoy far greater flexibility. Put simply, embedded systems developers in 2024 must design with multi-protocol support in mind and choose devices that will meet current and future needs.

Security or protecting future investments

Figure: Security must be a top priority for smart home products

One positive trend in embedded systems has been the recognition that security is not optional. For the longest time, many joked that IoT stood for “Internet of Threats”. Today, developers know it is imperative to protect servers, code, end-user data, and even physical devices from attacks. In a nutshell, a failure to secure an embedded system could have catastrophic effects on the product and its brand. However, a new security challenge has emerged in the form of regulatory interventions. The European Union, the United States, and many other countries and standards bodies have enacted new rules mandating features and protections. The problem is that these rules aren’t always clear or final, as some are still being worked on.

The industry has been answering this new challenge with more formal security standards. For instance, the Platform Security Architecture (PSA) and the Security Evaluation Standard for IoT Platforms (SESIP) certifications offer an extensive methodology to help engineers secure their embedded systems. These certifications thus provide a path to future-proof designs and ensure they meet any stringent requirements. However, it also means that developers can’t treat security as an afterthought or work toward those certifications after designing their system. It is becoming critical to think of security as soon as the first proof of concept and adopt a microcontroller that can meet the proper certification level.

Let’s take the example of a smart home application that shares private and sensitive data with a cloud. Increasingly, governments require encrypted communications, protections against physical attacks, safeguards against software intrusions, the ability to securely update a system over-the-air, and monitoring capabilities to detect a breach. In many instances, a SESIP Level 3 certification would help guarantee that a system could meet those requirements. Unfortunately, engineers who fail to choose an MCU capable of targeting such a certification could end up compromising the entire project. As there are hardware and platform considerations that ensure a product can meet a certain security certification, developers must adopt a new mindset when choosing an MCU.

See what the future holds at the STM32 Summit

Figure: See how the STM32 Summit can help you anticipate upcoming trends

As we look at the trends that will shape 2024 and beyond, we see that it is critical to find a device maker with a complete ecosystem. Computational efficiency depends on the MCU as well as the framework, middleware, and algorithms that run on it. Similarly, supporting multiple wireless protocols demands new development tools, and securing embedded systems requires practical software solutions on top of hardware IPs. That’s why we are excited to host the STM32 Summit on March 19. Join us as we showcase how ST is bringing solutions to help teams stay ahead of upcoming trends.

Viewers will learn more about exciting devices that are shaping new trends while also discovering entirely new products. Attendees will also be able to put questions to ST experts and receive answers live. Registering for this event thus grants unique access to our teams. Moreover, the STM32 Summit will feature some of our customers, who will share real-world experiences. Instead of ST telling the industry how to meet the challenges ahead, we wanted our partners to show viewers how they do it. Put simply, the STM32 Summit isn’t only here to inform but to inspire.

The post STM32 Summit: 3 important embedded systems trends for 2024 appeared first on ELE Times.

u-blox launches new GNSS platform for enhanced positioning accuracy in urban environments

Tue, 03/19/2024 - 14:25

The u-blox F10 platform increases positioning accuracy by reducing multipath effects, simplifying the process of promptly locating a vehicle.

u-blox, a global provider of leading positioning and wireless communication technologies and services, has announced F10, the company’s first dual-band GNSS (Global Navigation Satellite Systems) platform combining L1 and L5 bands to offer enhanced multipath resistance and meter-level positioning accuracy. The platform caters to urban mobility applications, such as aftermarket telematics and micromobility.

Applications that use GNSS receivers for accurate positioning are on the rise, yet current receivers do not perform at their best in urban areas. Accurate and reliable positioning in dense urban environments, where buildings or tree foliage can reflect satellite signals, requires GNSS receivers to mitigate multipath effects. The L5 band’s resilience to these effects significantly improves positioning accuracy. Combined with the well-established L1 band, an L1/L5 dual-band GNSS receiver can deliver < 2 m positioning accuracy (CEP50), against about 4 m with the L1 band only. The u-blox team has conducted driving tests in several urban areas, confirming a significant improvement over GNSS L1 receivers.

The F10’s firmware algorithm prioritizes L5 band signals in weak signal environments, ensuring reliable positioning accuracy even when paired with small antennas. The platform is also equipped with protection-level technology that provides a real-time trustworthy positioning accuracy estimate.

When a cellular modem is extremely close to a GNSS receiver, it can interfere with the receiver’s reception. Some F10 module models (NEO-F10N, MAX-F10S, and MIA-F10Q) are equipped with a robust RF circuit that allows the GNSS and the cellular modem to operate without interference.

The u-blox F10 platform is pin-to-pin compatible with the previous u-blox M10 generation for easy migration. It also supports u-blox AssistNow, which offers real-time online A-GNSS service with global availability to reduce GNSS time-to-first-fix and power consumption.

The u-blox EVK-F101 evaluation kit will be available in April 2024.

The post u-blox launches new GNSS platform for enhanced positioning accuracy in urban environments appeared first on ELE Times.

Looking into CDN Traffic in the Network

Tue, 03/19/2024 - 14:03

A CDN, or content delivery network, is a geographically distributed network of interconnected servers. CDNs are a crucial part of modern internet infrastructure: they solve the problem of latency (the delay before data transfer begins from a web server) by speeding up webpage loading times for data-heavy (like multimedia) web applications.

The usage of CDNs has significantly increased with the rise of data volumes in web applications over the last few years. As per the Sandvine Global Internet Phenomena Report 2023, several popular CDN providers appear in the list of top 10 video applications for the APAC region due to their increased volume of application traffic.

Figure 1: Without CDN and with CDN scenario

Network Traffic Analysis

The ATI team at Keysight has analyzed the network traffic of different popular CDNs like Amazon CloudFront, Cloudflare, Akamai, and Fastly, and has found some interesting information in the decrypted traffic that can be useful for other researchers.

Inside HTTP Request Header:

When a website decides to use a CDN, it sometimes integrates the CDN service name (like CloudFront, Cloudflare, or Akamai) at the DNS level, which changes DNS records such as CNAME records to point to the CDN’s domain. The same behavior is also seen inside the “Host” or “:authority” header of the HTTP request. For example, if the original website is “www.popularOTT.com”, then after the CDN name integration the URL looks like www.popularOTT.cdnprovider.com, as shown below –

Figure 2: Sample CDN request header
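
As a hedged illustration, a CNAME lookup can surface this kind of CDN integration (generic Python using the third-party dnspython package; the domain and the provider substrings are placeholder assumptions, not a definitive detection method):

```python
# Check whether a hostname's CNAME record points at a known CDN domain.
import dns.resolver  # pip install dnspython

try:
    answer = dns.resolver.resolve("www.example.com", "CNAME")
except dns.resolver.NoAnswer:
    answer = []

for record in answer:
    target = str(record.target).lower()
    if any(cdn in target for cdn in ("cloudfront", "akamai", "fastly", "cdn")):
        print(f"CNAME points to a CDN: {target}")
```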

Inside HTTP Response Header:

When a response is sent from a CDN server, it often includes specific headers in the HTTP response packet that provide information about the CDN, as shown below (a short script after this list shows how these headers can be inspected programmatically) –

  • X-Cache: This header indicates whether a request was a hit, miss, or bypass in the CDN cache. If its value is set to “HIT” (“HIT from cloudfront” in CloudFront’s case) in the HTTP response, the request was served from the CDN cache rather than the origin server.
Figure 3: Sample response header from a CDN server containing the X-Cache header
  • X-Cache-Status: Similar to the “X-Cache” header, this provides more detailed information about the caching process. Sometimes the CDN provider’s name appears in the header name itself; for example, responses from the Cloudflare CDN sometimes carry a “cf-cache-status” header (here “cf” refers to Cloudflare).
Figure 4: Sample response header from a CDN server containing the X-Cache-Status header
  • Via: This response header indicates any intermediate proxies or CDNs through which the request has passed. For example, when a request has passed through the Amazon CloudFront CDN, we sometimes see information like “1.1 2b14bcf8de4af74db0f6562ceac643f8.cloudfront.net (CloudFront)” inside the “Via” response header.
Figure 5: Sample response header from a CDN server containing the Via header
  • Server: In some cases, the CDN server name appears in the “Server” header of the HTTP response packet, as shown below –
Figure 6: Sample response header from a CDN server containing the Server header
  • Custom headers: Sometimes we see other custom headers like “x-akamai-request-id” or “x-bdcdn-cache-status” inside the HTTP response, which indicate that the response was sent from a CDN server.
Figure 7: Sample response header from a CDN server containing other CDN-related headers
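
The sketch below shows one way to inspect these headers programmatically (generic Python with the third-party requests library; the URL and header list are illustrative assumptions, not Keysight tooling):

```python
# Inspect an HTTP response for common CDN-identifying headers.
# Illustrative only: header coverage varies by CDN and configuration.
import requests

CDN_HEADERS = ["X-Cache", "X-Cache-Status", "cf-cache-status", "Via", "Server"]

resp = requests.get("https://www.example.com", timeout=10)
for name in CDN_HEADERS:
    if name in resp.headers:  # requests header lookup is case-insensitive
        print(f"{name}: {resp.headers[name]}")
```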

CDN in Keysight ATI

At Keysight Technologies, our Application and Threat Intelligence (ATI) team has examined the traffic patterns of various leading CDN service providers, based on application traffic from the world’s top 50 most popular websites, and has published the network traffic patterns of two popular CDNs (Amazon CloudFront and Cloudflare) in the ATI-2024-03 Strike Pack released on February 15, 2024. Please stay tuned for other popular CDN application traffic in upcoming ATI releases.

 

The post Looking into CDN Traffic in the Network appeared first on ELE Times.

Digital Twins and AI Acceleration Are Transforming System Design

Tue, 03/19/2024 - 13:48

We are at a global inflection point as we cope with the limitations of energy supply and the consequences of climate change. Regional conflicts are elevating risks in the traditional crude oil supply chain. Changes in rainfall patterns and disputes over water use priorities are limiting hydroelectric power generation. Moreover, extreme weather events have intensified the threat to lives and property. These challenges are compelling us to focus on energy efficiency requirements in almost everything we do. As a result, there is a significant trend towards designing more energy-efficient transportation and generation equipment.

Designing Energy-Efficient Machinery

Each industry has its own goals in responding to these trends. The automotive industry is investing in electric vehicles and enhancing the aerodynamic efficiency of all its vehicles. The aerospace industry aims to reduce the cost and time required to design new aircraft models that are efficient and durable. In the same vein, the turbomachinery industry benefits significantly from every improvement in efficiency and product lifecycle extension.

Figure 1: OEM Design Goals

Automotive Design

The automotive industry must comply with the new CAFE (Corporate Average Fuel Economy) standards for 2028 and 2032. These standards will have an impact on automakers’ fleets, meaning they will need to build electric vehicles and improve the average fuel efficiency of their internal combustion engine models. A 10% reduction in the aerodynamic drag coefficient can lead to a 5% improvement in fuel economy. Simulation is a crucial tool to ensure that the design will perform well once manufactured and tested in the wind tunnel.

Figure 2: Automotive Design for Fuel Efficiency

To achieve this kind of leap forward, the industry must be able to do the following:

  • Simulate turbulent air in fine detail
  • Evaluate hundreds of precise aerodynamic design changes
  • Simulate the entire car design for net impact
Aircraft Design

The commercial aircraft industry is highly regulated with a focus on safety and environmental impact. The process of designing a new aircraft involves several steps that must meet requirements for safe function, performance, and operation, and the aircraft must be certified for the entire flight envelope. Simulation is the only way to ensure the aircraft will perform as intended before building and flight-testing a prototype.

Figure 3: Aerospace Flight Envelope Performance

To simulate all operating conditions, designers must:

  • Simulate lift in turbulent air in fine detail
  • Simulate the entire aircraft design for net impact
  • Evaluate all operating conditions (see chart)
Turbomachinery Design

Turbomachinery includes energy generators, large turbine aircraft engines, marine engines, and other machines with rotating motion. Improving energy efficiency can yield significant returns because of the scaled impact of the machine over its lifetime. Similarly, designing machines to last longer and require less maintenance can have a significant economic impact. Simulation is the best way to analyze various design changes to optimize the final design outcome.

Figure 4: Turbomachinery Design for Efficiency and Durability

To achieve this kind of leap forward, the industry must be able to:

  • Evaluate multiple design optimization tradeoffs
  • Simulate combustion dynamics in fine detail
  • Simulate a full engine design for net impact
Announcing the Millennium Enterprise Multiphysics Platform

To address these needs, we are announcing the world’s first accelerated digital twin, delivering unprecedented performance and energy efficiency—the Cadence Millennium Enterprise Multiphysics Platform. Targeted at one of the biggest opportunities for greater performance and efficiency, the first-generation Cadence Millennium M1 CFD Supercomputer accelerates high-fidelity computational fluid dynamics (CFD) simulations. Available in the cloud or on-premises, this turnkey solution includes graphics processing units (GPUs) from leading providers, extremely fast interconnections, and an enhanced Cadence high-fidelity CFD software stack optimized for GPU acceleration and generative AI. By fusing Millennium M1 instances into a unified cluster, customers can achieve an unprecedented same-day turnaround time and near-linear scalability when simulating complex mechanical systems.

The Millennium Platform addresses the performance and efficiency needs of the automotive, aerospace and defense (A&D), energy, and turbomachinery industries with critical advances in multiphysics simulation technology. Performance, accuracy, capacity, and accelerated computing are all essential to enabling digital twin simulations that explore more design innovations, providing confidence that they will function as intended before undertaking prototype development and testing.

Highlights and benefits include:

  • Performance: Combines best-in-class GPU-resident CFD solvers with dedicated GPU hardware to provide supercomputer-equivalent throughput, with each GPU delivering the equivalent of up to 1,000 CPU cores
  • Efficiency: Reduces turnaround time from weeks to hours with 20X better energy efficiency compared to its CPU equivalent
  • Accuracy: Leverages Cadence Fidelity CFD solvers to provide unmatched accuracy to address complex simulation challenges
  • High-Performance Computing: Built with extensible architecture and massively scalable Fidelity solvers to provide near-linear scalability on multiple GPU nodes
  • AI Digital Twin: Rapid generation of high-quality multiphysics data enables generative AI to create fast and reliable digital twin visualizations of the optimal system design solution
  • Turnkey Solution: The industry’s first solution that couples GPU compute with modern and scalable CFD solvers, providing an optimized environment for accelerated CFD and multidisciplinary design and optimization

  • Flexibility: Available with GPUs from leading vendors, in the cloud with a minimum 8-GPU configuration or on-premises with a minimum 32-GPU configuration—providing a flexible and scalable solution to fit each customer’s deployment needs

The post Digital Twins and AI Acceleration Are Transforming System Design appeared first on ELE Times.

The Critical Role of Constraint-Based PCB Design in Modern Electronics (PCB Design)

Tue, 03/19/2024 - 13:30

Welcome to the intricate realm of PCB (Printed Circuit Board) design, where what begins as a simple circuit board evolves into a sophisticated masterpiece of electronic engineering. As the backbone of modern electronics, PCBs breathe life into our everyday devices, from smartphones to laptops. Crafting a reliable and functional PCB extends beyond merely connecting components. It demands a meticulous understanding of various aspects to achieve optimal performance and manufacturability. Central to this endeavor is constraint-based PCB design—a strategic methodology that meticulously governs the physical and electrical characteristics of a PCB. Such constraints not only safeguard against manufacturing pitfalls but also ensure electrical prowess, culminating in a product that doesn’t just meet the mark but sets new standards. In this post, we explore PCB constraints and how they play a crucial role in ensuring a successful design.

Grasping Constraint-Based PCB Design


Constraint-based design involves defining parameters that dictate how a PCB should be constructed. These constraints encompass multiple aspects, including electrical, physical, and manufacturing considerations. Considering constraints early in the design process is crucial, as it sets the groundwork for a successful design that aligns with the project requirements and end goals.

Constraint-based PCB design is akin to a maestro orchestrating a symphony. It balances numerous requirements to shape the overall design process, ensuring a harmonious outcome. These constraints can vary:

Electrical Constraints:
  • Trace Width and Spacing: Defines the width and spacing of traces to ensure proper current-carrying capacity and avoid short circuits.
  • Via Sizes and Types: Specifies dimensions and types of vias, based on design requirements and manufacturing capabilities.
  • Impedance Control: Ensures traces are designed to have specific impedance values, crucial for high-speed designs.
  • Clearance: Defines the minimum distance between different electrical entities (like traces, pads, and vias) to avoid short circuits.
  • High-speed Constraints: Rules related to the design of high-speed circuits, including length matching, differential pair routing, and phase control.
Physical Constraints:
  • Board Dimensions: Specifies the size and shape of the PCB.
  • Layer Stackup: Defines the number and arrangement of copper and insulating layers in the PCB.
  • Component Placement: Provides guidelines for placing components on the board, ensuring they don’t interfere with each other and adhere to thermal and mechanical considerations.
  • Thermal Constraints: Ensures areas generating high heat have sufficient thermal relief, including the use of heat sinks or thermal vias.
Manufacturability Constraints (Design for Manufacturability – DFM):
  • Solder Mask Clearance: Ensures that solder masks are appropriately applied to avoid short circuits during the soldering process.
  • Silkscreen Overlap: Ensures that component labels or other silkscreen elements do not overlap with pads or vias.
  • Hole Sizes: Specifies the minimum and maximum sizes for drilled holes based on manufacturing capabilities.
  • Annular Ring Size: Defines the minimum width of the copper ring around a drilled hole.
  • Copper-to-Edge Clearance: Defines the minimum distance required between the edge of the PCB and any copper feature.
Assembly Constraints (Design for Assembly – DFA):
  • Component Orientation: Ensures components are correctly oriented for automated assembly.
  • Component-to-Component Clearance: Ensures sufficient space between components to allow for assembly and avoid interference.
  • Polarity and Pin 1 Indicators: Guidelines for marking components to ensure they are placed correctly during assembly.
Reliability Constraints:
  • Flex and Bend: Defines regions that can and cannot be bent in flex PCBs.
  • Vibration and Shock: Constraints to ensure components can withstand specific vibration and shock levels, especially in rugged applications.
Testing Constraints (Design for Test – DFT):
  • Test Point Requirements: Specifies the number and placement of test points for in-circuit testing.
  • Access for Probing: Ensures test equipment can access critical nodes during testing.
Environmental and Regulatory Constraints:
  • RoHS/Lead-Free Design: Ensures PCBs are designed to adhere to environmental regulations, like the Restriction of Hazardous Substances (RoHS) directive.
  • Electromagnetic Compatibility (EMC): Ensures designs adhere to electromagnetic interference (EMI) and susceptibility requirements.
Advantages of Constraint-Based PCB Design

A. Enhanced Signal Integrity and Reliability

In the world of electronics, signal integrity is paramount. Constraint-based design minimizes electromagnetic interference (EMI) and ensures proper trace routing for impedance control. By optimizing ground and power planes, noise is reduced, leading to improved signal reliability.

B. Improved Thermal Management

Efficient heat dissipation is a challenge in compact electronics. Constraint-based design tackles this by strategically placing components, utilizing thermal relief, and integrating sensors for real-time temperature monitoring. This ensures that devices maintain optimal operating temperatures.

C. Streamlined Manufacturing and Assembly

Designing for manufacturability (DFM) is a key concept. Constraint-based design includes component placement rules that facilitate automated assembly, reducing errors. By considering various soldering and assembly techniques, manufacturing becomes more seamless.

D. Faster Time-to-Market

Time is of the essence in the competitive electronics market. Constraint-based design reduces the need for countless design iterations by identifying flaws early through simulations. Collaborative design involving cross-functional teams also expedites the process.

E. Cost Savings

Design re-spins are expensive and time-consuming. Constraint-based design minimizes these by ensuring the initial design aligns with requirements. Efficient layouts optimize material usage and eliminate the need for costly post-production modifications.

F. Compliance and Standards

Electronic products must adhere to regulatory standards. Constraint-based design aids in designing with EMC, safety, and other industry standards in mind. This simplifies the certification process and ensures products meet legal requirements.

Implementing the Methodology

Design Rule Check (DRC) is a fundamental step in the PCB design process. It involves checking the design against a set of predefined rules to ensure the PCB will be functional, manufacturable, and reliable. Implementing DRC in your PCB design process helps catch errors before manufacturing, reducing costly re-spins and potential functional issues.
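
To make the idea concrete, here is a deliberately tiny, hypothetical sketch of what a DRC pass does; the rule values and data model are illustrative assumptions, not from any particular EDA tool:

```python
# Toy DRC: compare each trace against minimum-width and minimum-clearance rules.
from dataclasses import dataclass

@dataclass
class Trace:
    net: str
    width_mm: float
    clearance_mm: float

RULES = {"min_width_mm": 0.15, "min_clearance_mm": 0.20}  # from fab capabilities

def run_drc(traces):
    violations = []
    for t in traces:
        if t.width_mm < RULES["min_width_mm"]:
            violations.append(f"{t.net}: width {t.width_mm} mm below minimum")
        if t.clearance_mm < RULES["min_clearance_mm"]:
            violations.append(f"{t.net}: clearance {t.clearance_mm} mm below minimum")
    return violations

print(run_drc([Trace("VBUS", 0.10, 0.25), Trace("GND", 0.30, 0.15)]))
```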

Here’s a step-by-step guide on how to implement DRC in PCB design:
  1. Understand Manufacturing Capabilities:
    • Begin by gathering the capabilities and constraints from your PCB manufacturer. This might include rules related to trace width and spacing, via sizes, hole sizes, annular ring sizes, and any other parameters needed to set your design up for success.
  2. Set Up the Design Rules in Your PCB Design Software:
    • Most modern PCB design tools include a design rules setup or configuration section;
    • Enter the manufacturer’s constraints and any additional rules you need for your specific design. This might include electrical rules, high-speed rules, thermal rules, etc.
  3. Layer-specific Rules:
    • Some rules are specific to certain layers. For example, the top and bottom layers might have different trace width and spacing rules compared to inner layers. Make sure to define these layer-specific rules.
  4. Run the DRC:
    • Once your rules are set up, you can run the DRC. This will usually generate a list of violations or errors based on the rules you’ve set;
    • Some common violations might include trace width violations, clearance violations, unconnected nets, and overlapping components.
  5. Review and Address Violations:
    • For each violation, the PCB design software typically provides a description and a visual indication of where the issue is on the board;
    • Go through each violation and correct the issue in the design. This might involve moving components, rerouting traces, or adjusting the design rules if they were set up incorrectly.
  6. Iterative Process:
    • After correcting known violations, run the DRC again to ensure that no new issues have been introduced and all previous ones have been resolved;
    • This might need to be repeated several times until no violations are found.
  7. Additional Checks:
    • Beyond standard DRC, consider running other checks like Electrical Rule Check (ERC) to catch logical and connectivity errors, or a Differential Pair Routing Check for high-speed designs.
  8. Document Any Deliberate Violations:
    • In some cases, you might choose to violate a rule deliberately for a specific design requirement. In such cases, it’s essential to document this decision, explaining the rationale and ensuring the manufacturer is aware of it.
  9. Collaborate with the Manufacturer:
    • Before finalizing the design, it can be beneficial to send the design files to the manufacturer for review. They might run their own DRC and provide feedback based on their specific manufacturing processes.
  10. Stay Updated:
    • Manufacturing capabilities and standards can change over time. Periodically review and update your design rules to ensure they align with the latest capabilities and industry best practices.
Wrapping Up

The world of electronics is in perpetual flux, with innovations emerging at breakneck speeds. Amidst this, constraint-based PCB design emerges as a beacon, illuminating the path for designers. By meticulously defining, applying, and validating constraints, designers can craft PCBs that aren’t just functional but also efficient, cost-effective, and superior in quality. In an age where precision and speed are paramount, can you afford to design any other way?

David
Sr. Technical Marketing Engineer
Altium

The post The Critical Role of Constraint-Based PCB Design in Modern Electronics (PCB Design) appeared first on ELE Times.

What is an NPU? And why is it key to unlocking on-device generative AI?

Tue, 03/19/2024 - 13:15

The generative artificial intelligence (AI) revolution is here. With the growing demand for generative AI use cases across verticals with diverse requirements and computational demands, there is a clear need for a refreshed computing architecture custom-designed for AI. It starts with a neural processing unit (NPU) designed from the ground up for generative AI, while leveraging a heterogeneous mix of processors, such as the central processing unit (CPU) and graphics processing unit (GPU). By using an appropriate processor in conjunction with an NPU, heterogeneous computing maximizes application performance, thermal efficiency, and battery life to enable new and enhanced generative AI experiences.

Figure 1: Choosing the right processor, like choosing the right tool in a toolbox, depends on many factors and enhances generative AI experiences.

Why is heterogeneous computing important?

Because of the diverse requirements and computational demands of generative AI, different processors are needed. A heterogeneous computing architecture with processing diversity gives the opportunity to use each processor’s strengths, namely an AI-centric custom-designed NPU, along with the CPU and GPU, each excelling in different task domains. For example, the CPU for sequential control and immediacy, the GPU for streaming parallel data, and the NPU for core AI workloads with scalar, vector and tensor math.

Heterogeneous computing maximizes application performance, device thermal efficiency and battery life to maximize generative AI end-user experiences.

Figure 2: NPUs have evolved with the changing AI use cases and models for high performance at low power.

What is an NPU?

The NPU is built from the ground up for accelerating AI inference at low power, and its architecture has evolved along with the development of new AI algorithms, models and use cases. AI workloads primarily consist of calculating neural network layers comprised of scalar, vector, and tensor math followed by a non-linear activation function. A superior NPU design makes the right design choices to handle these AI workloads and is tightly aligned with the direction of the AI industry.
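
As a rough sketch of what that layer math involves (generic NumPy, shown only to make the terms concrete; this is not Qualcomm's NPU programming model, and the shapes are arbitrary):

```python
# One neural-network layer: tensor/vector math followed by a non-linear activation.
import numpy as np

x = np.random.rand(128)       # input vector
W = np.random.rand(64, 128)   # weight matrix (tensor math: W @ x)
b = np.random.rand(64)        # bias vector

y = np.maximum(0.0, W @ x + b)  # ReLU, the non-linear activation
print(y.shape)  # (64,)
```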

Figure 3: The Qualcomm AI Engine consists of the Qualcomm Hexagon NPU, Qualcomm Adreno GPU, Qualcomm Kryo or Qualcomm Oryon CPU, Qualcomm Sensing Hub, and memory subsystem.

Our leading NPU and heterogeneous computing solution

Qualcomm is enabling intelligent computing everywhere. Our industry-leading Qualcomm Hexagon NPU is designed for sustained, high-performance AI inference at low power. What differentiates our NPU is our system approach, custom design and fast innovation. By custom-designing the NPU and controlling the instruction set architecture (ISA), we can quickly evolve and extend the design to address bottlenecks and optimize performance.

The Hexagon NPU is a key processor in our best-in-class heterogeneous computing architecture, the Qualcomm AI Engine, which also includes the Qualcomm Adreno GPU, Qualcomm Kryo or Qualcomm Oryon CPU, Qualcomm Sensing Hub, and memory subsystem. These processors are engineered to work together and run AI applications quickly and efficiently on device.

Our industry-leading performance in AI benchmarks and real generative AI applications exemplifies this. Read the whitepaper for a deeper dive on our NPU, our other heterogeneous processors, and our industry-leading AI performance on Snapdragon 8 Gen 3 and Snapdragon X Elite.

Figure 4: The Qualcomm AI Stack aims to help developers write once and run everywhere, achieving scale.

Enabling developers to accelerate generative AI applications

We enable developers by focusing on ease of development and deployment across the billions of devices worldwide powered by Qualcomm and Snapdragon platforms. Using the Qualcomm AI Stack, developers can create, optimize and deploy their AI applications on our hardware, writing once and deploying across different products and segments using our chipset solutions.

The combination of technology leadership, custom silicon designs, full-stack AI optimization and ecosystem enablement sets Qualcomm Technologies apart to drive the development and adoption of on-device generative AI. Qualcomm Technologies is enabling on-device generative AI at scale.

DURGA MALLADI
SVP & GM, Technology Planning & Edge Solutions,
Qualcomm Technologies, Inc.

PAT LAWLOR
Director, Technical Marketing,
Qualcomm Technologies, Inc.

The post What is an NPU? And why is it key to unlocking on-device generative AI? appeared first on ELE Times.

Boost AI Projects on Google Cloud Platform using Intel Cloud Optimization Modules

Tue, 03/19/2024 - 12:53

Courtesy: Intel

Applications powered by artificial intelligence are some of the most popular pieces of software being developed, especially on cloud computing platforms, which can provide easy access to specified hardware and accelerators at a low startup cost with the option to scale effortlessly. A popular cloud service provider, Google Cloud Platform* (GCP), contains a suite of cloud computing services that provide a variety of tools to develop, analyze, and manage data and applications. GCP also includes tools specific to AI and machine learning development, such as the AI Platform, the Video Intelligence API, and the Natural Language API. Using a platform like GCP for your AI projects can simplify your development while gaining access to powerful hardware that meets your specific needs.

Further enhancements to model efficiency can be accomplished with pre-built software optimizations tailored for diverse applications. By implementing these software optimizations, developers can see models deploy and infer faster and with fewer resources. However, the process of discovering and integrating these optimizations into workflows can be time-consuming and demanding. Accessing comprehensive guides and documentation packaged in an open-source environment empowers developers to overcome challenges by incorporating new optimizing architectures, facilitating the effortless enhancement of their models’ performance.

What are Intel Cloud Optimization Modules?

The Intel Cloud Optimization Modules consist of open-source codebases that feature codified Intel AI software optimizations designed specifically for AI developers working in production environments. These modules provide a set of cloud-native reference architectures to enhance the capabilities of AI-integrated cloud solutions. By incorporating these optimization solutions, developers can boost the efficiency of their workloads and ensure optimal performance on Intel CPU and GPU technologies.

These cloud optimization modules are available on several highly popular cloud platforms, including GCP. The modules utilize specifically built tools and end-to-end AI software and optimizations that enhance workloads on GCP and increase performance. These optimizations can increase machine learning models for a variety of use cases, such as Natural Language Processing (NLP), transfer learning, and computer vision.


Within each module’s content package is an open-source GitHub repository that includes all the relevant documentation: a whitepaper with more information on the module and what it relates to, a cheat sheet that highlights the most relevant code for each module, and a video series with hands-on walkthroughs on how to implement the architectures. There is also an option to attend office hours for specific implementation questions.

Intel Cloud Optimization Modules for GCP

Intel Cloud Optimization Modules are available for GCP, including optimizations for generative pre-trained transformer (GPT) models and Kubeflow pipelines. You can learn more about these optimization modules available for GCP below:

nanoGPT Distributed Training

Large Language Models (LLMs) are becoming popular in Generative AI (GenAI) applications, but in many use cases it is sufficient to use a smaller LLM. Using a GPT model such as nanoGPT (124M parameters) can result in better model performance, as smaller models are quicker to build and easier to deploy. This module teaches developers how to fine-tune a nanoGPT model on a cluster of Intel Xeon CPUs on GCP and demonstrates how to transform a standard single-node PyTorch training scenario into a high-performance distributed training scenario. The module also integrates software optimizations and frameworks like the Intel Extension for PyTorch* and the oneAPI Collective Communications Library (oneCCL) to accelerate the fine-tuning process and boost model performance in an efficient multi-node training environment. This training results in an optimized LLM on a GCP cluster that can efficiently generate words or tokens suitable for your specific task and dataset.
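
A minimal sketch of that pattern, assuming the publicly documented Intel Extension for PyTorch and oneCCL bindings APIs (the model below is a stand-in, not the module's actual nanoGPT training code, and the process group must be launched via a distributed launcher such as mpirun or torchrun):

```python
# Single-node PyTorch training adapted for Intel-optimized distributed training.
import torch
import torch.distributed as dist
import intel_extension_for_pytorch as ipex   # Intel Extension for PyTorch
import oneccl_bindings_for_pytorch           # registers the "ccl" backend (oneCCL)

dist.init_process_group(backend="ccl")       # collective communication across nodes

model = torch.nn.Linear(768, 768)            # placeholder for the real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model, optimizer = ipex.optimize(model, optimizer=optimizer)  # CPU-side optimizations

model = torch.nn.parallel.DistributedDataParallel(model)      # multi-node training
```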

XGBoost on Kubeflow Pipeline

Kubeflow is a popular open-source project that helps make deployments of machine learning workflows on Kubernetes simple and scalable. This module guides you through the setup of Kubeflow on GCP and provides optimized training and models to predict the probability of client loan default. By completing this module, you will learn how to enable Intel Optimization for XGBoost and Intel daal4py in a Kubeflow pipeline. You’ll also learn to set up and deploy a Kubeflow cluster using Intel Xeon CPUs on GCP with built-in AI acceleration through Intel AMX. Developers also have the option to bring and build their own Kubeflow pipelines and learn how these optimizations can help improve the pipeline workflow.
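
A minimal sketch of the XGBoost-plus-daal4py acceleration pattern the module covers (the toy dataset below is an illustrative assumption, not the module's loan-default pipeline):

```python
# Train with XGBoost, then convert the model to daal4py for accelerated inference.
import daal4py as d4p
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
booster = xgb.train({"objective": "binary:logistic"},
                    xgb.DMatrix(X, label=y), num_boost_round=50)

daal_model = d4p.get_gbt_model_from_xgboost(booster)       # one-time conversion
predictor = d4p.gbt_classification_prediction(nClasses=2)
preds = predictor.compute(X, daal_model).prediction        # oneDAL-accelerated inference
print(preds[:5].ravel())
```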

Elevate your AI initiatives on GCP with Intel Cloud Optimization Modules. These modules can help you leverage Intel software optimizations and containers for popular tools to develop accelerated AI models seamlessly with your preferred GCP services and enhance the capabilities of your projects. See how you can take AI to the next level through these modules, and sign up for office hours if you have any questions about your implementation!

We encourage you to check out Intel’s other AI Tools and Framework optimizations and learn about the unified, open, standards-based oneAPI programming model that forms the foundation of Intel’s AI Software Portfolio. Also, check out the Intel Developer Cloud to try out the latest AI hardware and optimized software to help develop and deploy your next innovative AI projects!

The post Boost AI Projects on Google Cloud Platform using Intel Cloud Optimization Modules appeared first on ELE Times.

Meeting the Demand for Higher Voltage Power Electronics

Tue, 03/19/2024 - 12:38

Courtesy: Onsemi

The ongoing search for efficiency is impacting the design of electronic applications across multiple sectors, including both the automotive and renewables industries. Greater efficiency for an Electric Vehicle (EV) translates into increased range between battery charges and, in renewables, more efficient generation converts more natural energy from the sun or wind into usable electricity.

Figure 1: The quest for efficiency is driving designs in EVs and renewables.

Both applications use switching electronic devices extensively, and the push for increased efficiency is driving demand for higher voltage devices. The link between higher voltage and higher efficiency follows from Joule's law, which states that the power, or loss, dissipated in a circuit's resistance increases with the square of the current. The same physics also tells us that, for a given power delivered, doubling the voltage halves the current flowing in the circuit – reducing losses by a factor of four. Electricity companies demonstrate this principle, operating their grids at very high voltages – 275,000 or 400,000 volts in the UK – to reduce transmission losses.
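A quick numeric sketch makes the scaling concrete; the delivered power and line resistance below are arbitrary illustrative values, not figures from the article.

    # Joule-loss scaling: delivering the same power at twice the voltage
    # halves the current and quarters the I^2 * R loss. Values are illustrative.
    P, R = 30_000.0, 0.5            # delivered power (W), line resistance (ohms)
    for V in (400.0, 800.0):
        I = P / V                   # current needed at this voltage
        print(f"V={V:.0f} V  I={I:.1f} A  loss={I**2 * R:.0f} W")
    # 400 V -> 75.0 A, 2812 W of loss; 800 V -> 37.5 A, 703 W (4x lower)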

While the electricity utilities rely on components such as heavy-duty transformers to handle high transmission voltages, it’s a bit more complicated in automotive and renewables applications, both of which make extensive use of electronic devices.

High Voltage Challenges for Semiconductors

Converters and inverters, based on switching power electronic devices, are key components in both alternative energy plants and EVs. Although both MOSFETs and IGBTs are used in these systems, the low gate-drive power, fast switching speeds and high efficiency at low voltages of the MOSFET have led to its dominance, and it is deployed in a wide range of power electronic applications.

Power MOSFETs have three main roles – blocking, switching, and conducting (figure 2) – and the device must meet the requirements of each phase.

MOSFETs are required to block large voltages between their drain and source during switching.

During the blocking phase the MOSFET must withstand the full rated voltage of the application, while during the conduction and switching phases, losses and switching frequency are important. Conduction and switching losses both impact overall efficiency while higher switching frequencies enable smaller and lighter systems, a key attribute in both EVs and industrial applications.

The trend towards higher voltage is pushing the limits of the traditional silicon MOSFET, in which it is harder and costlier to achieve the low RDS(on) and low gate charge values required for reduced conduction losses and fast switching times. Power electronics designers are consequently turning to silicon carbide (SiC) to achieve higher efficiencies. SiC, a wide bandgap technology, has several advantages over silicon, including high thermal conductivity, a low thermal expansion coefficient, and a high maximum current density. Additionally, SiC's higher critical breakdown field means that a thinner device can support a given voltage rating, leading to significant size reduction.

SiC MOSFETs are now available which can withstand voltage thresholds up to almost 10 kV, compared with 1500 V for the silicon variants. Also, the low switching losses and high operating frequencies of SiC devices enable them to achieve superior efficiencies, particularly in higher-power applications requiring high current, high temperatures, and high thermal conductivity.

onsemi Addresses the Need for Higher Voltages

In response to the growing demand for devices with high breakdown voltages, onsemi has built an end-to-end in-house SiC manufacturing capability including a range of products such as SiC diodes, SiC MOSFETs, and SiC modules.

This product family includes the NTBG028N170M1, a high-breakdown voltage SiC MOSFET, figure 3. This N-channel, planar device is optimized for fast switching applications at high voltages, with a VDSS of 1700 V, and an extended VGS of ‑15/+25 V.

onsemi's NTBG028N170M1

The NTBG028N170M1 supports drain currents (ID) up to 71 A continuous and 195 A pulsed, and its low RDS(on) – typically 28 mΩ – mitigates conduction losses. The ultra-low total gate charge (QG(tot)), at just 222 nC, ensures low losses during high-frequency operation, and the device is housed in a surface-mountable D2PAK-7L package, which reduces parasitic effects during operation.

The onsemi EliteSiC family also includes a range of 1700 V-rated SiC Schottky diodes, which complement MOSFETs in power electronics systems such as rectifiers. The high maximum repetitive peak reverse voltage (VRRM) of these diodes, along with their low peak forward voltage (VFM) and very low reverse leakage currents, equips design engineers to achieve stable, high-voltage operation at elevated temperatures.

EliteSiC Supports Efficient Power Electronics Designs

The quest for efficiency is relentless in applications which depend on power electronics devices. The trend towards higher system voltages is challenging the traditional Si-MOSFET and SiC devices offer a way forward, enhancing efficiencies while reducing form factors. The 1700 V NTBG028N170M1 from onsemi enables higher voltage designs for key power electronics systems.

The post Meeting the Demand for Higher Voltage Power Electronics appeared first on ELE Times.

Circuit to Success: Navigating a Career in ESDM as a New Grad

Tue, 03/19/2024 - 11:57

Author: Dr Abhilasha Gaur, Chief Operating Officer, Electronics Sector Skills Council of India (ESSCI)

As the last pages of the Class 12 exam papers are turned, a new chapter eagerly awaits the young minds of India. Amidst the excitement and anticipation of what lies ahead, many students find themselves pondering the age-old question: “What next?” For those with a passion for technology, creativity, and innovation, the ESDM sector beckons as a realm of boundless opportunities. In this article, we embark on a journey through the vibrant landscape of India’s ESDM industry, exploring the diverse career avenues that await aspiring professionals after the Class 12 exams.

Dr Abhilasha Gaur, Chief Operating Officer, ESSCI

Exploring the Landscape:

The Electronics System Design and Manufacturing (ESDM) sector in India is experiencing phenomenal growth. Supported by government initiatives like “Make in India,” it’s rapidly becoming a major hub for electronics manufacturing and innovation. As per an Invest India report, India is projected to become a $1 trillion digital economy by fiscal year 2026. Presently, the electronics market in India is valued at $155 billion, with domestic production contributing 65% of this figure.

If you’ve completed your 12th standard and have a passion for technology, a career in the ESDM sector holds immense potential. ESDM encompasses the entire spectrum of electronics activities, including:

  • Design: Designing integrated circuits (ICs), printed circuit boards (PCBs), electronic systems, and embedded software.
  • Manufacturing: Production and assembly of electronic components, devices, and end-products. This includes semiconductor fabrication.
  • Testing and Validation: Ensuring product quality, reliability, and compliance with industry standards.
  • Repair and Maintenance: Servicing, troubleshooting, and repairing electronic products and systems.

Why is ESDM a Lucrative Career Path?

  • Government Initiatives: The Government of India is heavily invested in developing the ESDM sector. Several policies and schemes aim to boost domestic manufacturing, attract foreign investment, and create a skilled workforce.
  • Rapid Growth: India’s ESDM market is experiencing substantial growth, projected to reach trillions of rupees in value over the next few years. This growth fuels the demand for skilled professionals.
  • Skill Development Focus: Programs focusing on skilling and training the ESDM workforce are a priority, ensuring you have ample opportunities to acquire the required skills.
  • Diverse Ecosystem: India’s ESDM sector is diverse, offering opportunities in consumer electronics, telecommunications, defence, healthcare, automotive, and many other industries.
  • Global Requirements: In the global market, the requirements of the ESDM industry are multifaceted and continually evolving. First and foremost, there is a persistent need for innovation and technological advancement to stay competitive. Companies are investing in research and development to create cutting-edge products that meet the ever-changing demands of consumers worldwide.

Next Level Options in ESDM for 12th Pass Students

Here’s how you can embark on a rewarding ESDM career after completing your 12th standard:

  1. Diploma Programs
  • The ESDM sector offers exciting career opportunities for 12th graders through diploma programs. Options like Electronics and Communication Engineering (ECE) provide a strong foundation in electronics principles, communication systems, and embedded systems, opening doors to technician, engineer, and quality control roles – including maintenance positions in the fast-growing telecom infrastructure and equipment segment. Consider your interests and career goals when choosing a diploma program to launch your journey in the dynamic ESDM sector.
  2. Skill Development and Certification Courses
  • The booming ESDM sector demands skilled professionals. Skill development and certification courses offer a fast-track entry point, equipping 12th-pass students with industry-relevant skills. Government initiatives and industry collaborations provide various affordable options, empowering individuals to join the electronics revolution and contribute to India’s technological advancement. You can learn both basic and advanced skills with ESSCI – the Electronics Sector Skills Council of India, a non-profit organisation that works under the aegis of the Ministry of Skill Development and Entrepreneurship (MSDE).
  3. Bachelor’s Degree Options

If you desire advanced positions and specialization, consider bachelor’s degree courses. Some popular undergraduate courses are – B.Tech /B.E. in Electronics and Communication Engineering, Electrical and Electronics Engineering, Instrumentation and Control Engineering, Computer Science and Engineering, Mechatronics, Automation and Robotics. These engineering programs provide extensive training in electronics hardware design, testing, manufacturing processes, software skills, and embedded systems. Reputed institutes like IITs, NITs, IIITs and private colleges offer ESDM-focused bachelor’s degree courses for students interested in building careers in the electronics industry. The programs aim to develop competent engineering graduates equipped for upcoming technology shifts like IoT, AI and Industry 4.0.

Target Industries Within the ESDM Ecosystem

  • Consumer Electronics Manufacturing: Contribute to the production of smartphones, laptops, televisions, home appliances, and other consumer goods.
  • Semiconductors: Play a part in the design and fabrication of the integrated circuits that power these electronics.
  • Telecom Infrastructure: Work on the networks and equipment that form the backbone of communication.
  • Medical Devices and Healthcare: Develop life-saving medical electronics, diagnostic equipment, and healthcare technology.
  • Defence and Aerospace: Be involved with electronics for military and space applications.
  • Mechatronics: Design control systems, sensors, and smart features for vehicles.

Essential Skills for the ESDM Sector

  • Technical Knowledge: Strong foundation in electronics fundamentals.
  • Problem Solving: Analytical thinking and troubleshooting ability.
  • Attention to Detail: Precision is critical when working with electronics.
  • Adaptability: Keeping up with advancements in technology.


The post Circuit to Success: Navigating a Career in ESDM as a New Grad appeared first on ELE Times.

OMNIVISION Announces Automotive Image Sensor with TheiaCel Technology Now Compatible with NVIDIA Omniverse for Autonomous Driving Development

Tue, 03/19/2024 - 11:01
At GTC 2024, OMNIVISION will demonstrate its OX08D10 image sensor on NVIDIA Omniverse, a platform of APIs, SDKs and services for 3D applications such as autonomous vehicle simulation
OMNIVISION, a leading global developer of semiconductor solutions, including advanced digital imaging, analog and touch and display technology, has announced that its OX08D10 8-megapixel CMOS image sensor with TheiaCel technology is now compatible with the NVIDIA Omniverse development platform. OMNIVISION is demonstrating the solution at booth 636 during NVIDIA GTC, taking place through March 21 at the San Jose Convention Center.
NVIDIA Omniverse is a platform of application programming interfaces (APIs), software development kits (SDKs) and services that enables developers to easily integrate Universal Scene Description (OpenUSD) and RTX rendering technologies into their 3D applications and services. Such applications include high-fidelity, physically based simulation for accelerated autonomous vehicle (AV) development.
The recently announced OX08D10 is the first image sensor to feature OMNIVISION’s new 2.1-micron (µm) TheiaCel technology, which harnesses next-generation lateral overflow integration capacitors (LOFIC) together with OMNIVISION’s DCG high dynamic range (HDR) technology to accurately capture LED lights without flicker artifacts in nearly all driving conditions. TheiaCel enables the OX08D10 to achieve HDR image capture at up to 200 meters. This range is the sweet spot for delivering the best balance between SNR1 and dynamic range, and is optimal for automotive exterior camera applications. The OX08D10 features industry-leading low-light performance and low power consumption in a compact size that is 50% smaller than other exterior cabin sensors in its class.
“The OX08D10 is OMNIVISION’s flagship image sensor that features our TheiaCel technology, ushering in a new era of low-light sensitivity in an easy-to-implement solution that yields dramatic improvements in image quality,” said Dr. Paul Wu, head of automotive product marketing, OMNIVISION. “We are proud to be part of the ecosystem of NVIDIA partners who are working together to accelerate AV development. Today, we are excited to announce that the OX08D10 is now compatible with NVIDIA Omniverse, a powerful platform for high-fidelity sensor simulation capabilities, reducing automotive OEM development efforts and cost.”

The post OMNIVISION Announces Automotive Image Sensor with TheiaCel Technology Now Compatible with NVIDIA Omniverse for Autonomous Driving Development appeared first on ELE Times.

Separating the Signal from the Noise: Combining Advanced Imaging with AI for Chip Defect Review

Mon, 03/18/2024 - 14:22

As the semiconductor industry moves to next-generation 3D architectures, the need intensifies for process control solutions that can reduce the time to ramp a technology to production-level yields. Gate-All-Around (GAA) transistors, EUV lithography, and scaled memory devices all present challenging requirements for detection of defects buried within 3D structures. As critical dimensions shrink, these defects can approach single-digit nanometers in size, or only a few atoms thick.

Chipmakers use two tools to find and control manufacturing defects: optical inspection to detect potential defects on the wafer, followed by eBeam review to characterize these defects in more precise detail. Optical inspection and eBeam review are complementary – together they deliver an actionable pareto that engineers can use to optimize yield and ensure faster time-to-market.

A key challenge facing eBeam defect review at the most advanced nodes is the ability to differentiate true defects from the false alarms presented by optical inspection systems, while maintaining the high throughput necessary for volume production.

The eBeam review process has become much more challenging as transistors have moved from planar to FinFET and now GAA. The “false rate” – when optical inspection flags something that is not a true defect – more than doubles with the GAA structures. Defects are smaller and killer defects are more difficult to distinguish from noise with GAA and advanced memories. The defect maps created after optical inspection become denser, with a large amount of nuisance (>90%), in order to capture the required defects of interest (DOIs). With such a high nuisance rate, it becomes nearly impossible to deliver an actionable pareto with enough DOIs to achieve statistically significant process control. To compensate for the high number of candidates in inspection, process control engineers need defect review systems that can deliver far more samples than today’s typical benchmark of several hundreds of DOI candidates.


Deep Learning for Defect Classification

Applied Materials is the leading provider of eBeam defect review systems. In 2022, we introduced our “cold field emission” (CFE) technology, a breakthrough in eBeam imaging that enables chipmakers to better detect and image nanometer-scale, buried defects. We are now extending this technology to address the increased sampling requirements of the high false alarm rates (“High FAR”) of advanced logic and memory.

When combined with the use of back-scattered electrons that enable high-resolution imaging of deep structures, CFE technology allows better throughput while maintaining high sensitivity and resolution compared with previous-generation thermal field emission (TFE) sources – enabling sub-nanometer resolution for detecting the smallest buried defects.


Applied is now combining CFE with deep learning AI technology for automatic extraction of true DOIs from the false “nuisance” defects. In many cases, the actual DOIs represent only 5 percent or less of the review candidates. The deep learning network is continuously trained with data from the fab and sorts the defects into a distribution that includes voids, residues, scratches, particles and dozens of other defect types, with extraction accuracy approaching 100 percent.
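As an illustration of this filtering step, here is a minimal sketch of a convolutional classifier that sorts review images into defect classes plus a “nuisance” class. It is not Applied’s actual network; the class list, patch size, and architecture are all assumptions for illustration.

    # Illustrative only: a toy CNN for sorting eBeam review patches into
    # defect classes; a production network would be far more sophisticated.
    import torch
    import torch.nn as nn

    CLASSES = ["void", "residue", "scratch", "particle", "nuisance"]  # assumed labels

    class DefectClassifier(nn.Module):
        def __init__(self, n_classes=len(CLASSES)):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * 16 * 16, n_classes)

        def forward(self, x):  # x: (batch, 1, 64, 64) grayscale review patches
            return self.head(self.features(x).flatten(1))

    model = DefectClassifier()
    logits = model(torch.randn(4, 1, 64, 64))                 # candidate patches
    keep = logits.argmax(dim=1) != CLASSES.index("nuisance")  # drop false alarms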

3D Devices Need 3D Process Control

The use of Applied-developed AI to enable automatic DOI extraction and classification is a new application. In one use case, the eBeam system considered roughly 10,000 defect candidates of a GAA device. While traditional defect review might be able to sample this many candidates, the new CFE with AI defect review system delivers much greater sensitivity and higher throughput, handling 10,000 candidates in less than an hour. Moreover, the AI-enabled in-line detection, filtering and classification system can classify 4X as many DOIs into specific types. Combining CFE technology with a full envelope of AI solutions makes it possible to deal with the high false alarm rates for 3D structures presented by the wafer inspection systems. CFE offers the required sensitivity to image the challenging defects, at higher throughputs compared with traditional TFE systems. Subsequently, with the help of AI, the required DOIs are captured with high accuracy, filtering out nuisances. 


As 3D devices are being deployed in production, Applied has developed defect review technology that can sample 10,000 – 20,000 locations per hour, handle false-alarm rates exceeding 90 percent, and classify the defect types presented to statistical process control solutions. This innovative defect review approach is being successfully demonstrated at leading logic and memory chipmakers. Based on the feedback so far, we see a strong pull from customers as they address the High FAR challenge.

Sarvesh Mundra
Senior Product Marketing Manager, Applied Materials

The post Separating the Signal from the Noise: Combining Advanced Imaging with AI for Chip Defect Review appeared first on ELE Times.

Battery monitor maximizes performance of electric vehicle batteries

Mon, 03/18/2024 - 13:30

Courtesy: Arrow Electronics

Lithium-ion (Li-Ion) batteries are a common energy storage method for electric vehicles, offering very high energy density compared to all existing battery technologies. However, to maximize performance, it is essential to use a battery management system (BMS) to safely manage the charging and discharging cycles, thereby extending the battery’s lifespan. This article introduces the architecture and operating modes of a BMS, as well as the features and advantages of the BMS devices introduced by ADI.

BMS can enhance the operational efficiency of electric vehicle batteries


Advanced BMS can help electric vehicles extract a significant amount of charge from the battery pack during operation. It can accurately measure the battery’s state of charge (SOC) to extend battery runtime or reduce weight, and it enhances battery safety by avoiding electrical overloads in the form of deep discharge, overcharging, overcurrent, and thermal overstress.

The primary function of the BMS is to monitor the physical parameters during battery operation, ensuring that each individual cell within the battery pack stays within its safe operating area (SOA). It monitors the charging and discharging currents, individual cell voltages, and the overall battery pack temperature. Based on these values, it not only ensures the safe operation of the battery but also facilitates SOC and state of health (SOH) calculations.

Another crucial function provided by the BMS is cell balancing. In a battery pack, individual cells may be connected in parallel or series to achieve the desired capacity and operating voltage (up to 1 kV or higher). Battery manufacturers attempt to provide identical cells for the battery pack, but achieving perfect uniformity is not physically realistic. Even small differences can lead to variations in charging or discharging levels, and the weakest cell in the battery pack can significantly impact the overall performance. Precise cell balancing is a vital feature of the BMS, ensuring the safe operation of the battery system at its maximum capacity.

Wireless BMS removes communication wiring, reducing complexity

Electric vehicle batteries are composed of several cells connected in series. A typical battery pack, with 96 cells in series, generates over 400 V when charged at 4.2 V. The more cells in the battery pack, the higher the voltage achieved. While the charging and discharging currents are the same for all cells, it is necessary to monitor the voltage on each cell.

To accommodate the large number of batteries required for high-power automotive systems, multiple battery cells are often divided into several modules and distributed throughout the entire available space in the vehicle. A typical module consists of 10 to 24 cells and can be assembled in different configurations to fit various vehicle platforms. Modular design serves as the foundation for large battery packs, allowing the battery pack to be distributed over a larger area, thus optimizing space utilization more effectively.

In order to support a distributed modular topology in the high EMI environment of electric/hybrid vehicles, a robust communication system is essential. An isolated CAN bus is suitable for interconnecting modules in this environment. While the CAN bus provides a comprehensive network for interconnecting battery modules in automotive applications, it requires many additional components, leading to increased costs and circuit board space. Moreover, wired connections in a modern battery management system come with significant drawbacks: routing wires to the different modules adds weight and complexity, and the wires are prone to picking up noise, requiring additional filtering.

Wireless BMS is a novel architecture that eliminates the need for communication wiring. In a wireless BMS, interconnection between each module is achieved through wireless connections. The wireless connection in large battery packs with multiple cells reduces wiring complexity, lowers weight, decreases costs, and enhances safety and reliability. However, wireless communication faces challenges in harsh EMI environments and signal propagation obstacles caused by RF-shielding metal components.


Embedded wireless networks can improve reliability and precision

The SmartMesh embedded wireless network, introduced by ADI, has undergone on-site validation in Industrial Internet of Things (IoT) applications. It achieves redundancy through the use of path and frequency diversity, providing connections with reliability exceeding 99.999% in challenging environments such as industrial and automotive settings.

In addition to enhancing reliability by creating multiple redundant connection points, wireless mesh networks also extend the functionalities of BMS. The SmartMesh wireless network enables flexible placement of battery modules and improves the calculation of battery SOC and SOH. This is achieved by collecting more data from sensors installed in locations previously unsuitable for wiring. SmartMesh also provides time-correlated measurement results from each node, enabling more precise data collection.

ADI has integrated the LTC6811 battery stack monitor with ADI SmartMesh network technology, representing a significant breakthrough. This integration holds the potential to enhance the reliability of large multi-cell battery packs in electric and hybrid vehicles while reducing costs, weight, and wiring complexity.

The LTC6811 is a battery stack monitor designed for multi-cell battery applications. It can measure the voltage of up to 12 series-connected cells with a total measurement error of less than 1.2mV. The measurement of all 12 cells can be completed within 290μs, and a lower data acquisition rate can be selected for higher noise reduction. The LTC6811 has a battery measurement range of 0V to 5V, suitable for most battery chemistries. Multiple devices can be daisy-chained to monitor very long high-voltage battery stacks simultaneously. The device includes passive balancing for each cell, and measurement data is exchanged across an isolation barrier and compiled by the system controller, which calculates SOC, controls battery balancing, checks SOH, and ensures the entire system stays within safe limits.

Moreover, multiple LTC6811 devices can be daisy-chained, allowing simultaneous monitoring of long high-voltage battery stacks. Each LTC6811 has an isoSPI interface for high-speed and RF-resistant remote communication. When using LTC6811-1, multiple devices are connected in a daisy-chain, and all devices share one host processor connection. When using LTC6811-2, multiple devices are connected in parallel to the host processor, and each device is individually addressed.

The LTC6811 can be powered directly from the battery pack or an isolated power source and features passive balancing for each battery cell, along with individual PWM duty cycle control for each cell. Other features include a built-in 5V regulator, 5 general-purpose I/O lines, and a sleep mode (where current consumption is reduced to 4μA).


Cell balancing is employed to optimize battery capacity and performance

Cell balancing has a significant impact on the performance of batteries because even with precise manufacturing and selection, subtle differences can emerge between them. Any capacity mismatch between cells can lead to a reduction in the overall capacity of the battery pack. Clearly, the weakest cell in the stack will dominate the performance of the entire battery pack. Cell balancing is a technique that helps overcome this issue by equalizing the voltage and SOC between cells when the battery is fully charged.

Cell balancing technology can be divided into passive and active types. When using passive balancing, if one cell is overcharged, the excess charge is dissipated into a resistor. Typically, a shunt circuit is employed, consisting of a resistor and a power MOSFET used as a switch. When the cell is overcharged, the MOSFET is closed, dissipating the excess energy into the resistor. LTC6811 uses a built-in MOSFET to control the charging current for each monitored cell, thus balancing each cell being monitored. The integrated MOSFET allows for a compact design and can meet a 60 mA current requirement. For higher charging currents, an external MOSFET can be used. The device also provides a timer to adjust the balancing time.
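The shunt decision itself is simple. The sketch below shows the idea in Python; the 10 mV threshold, the 12-cell snapshot, and the helper name are illustrative assumptions, not ADI’s algorithm or API.

    # Illustrative passive-balancing logic: shunt any cell that sits more than
    # a threshold above the weakest cell; values and threshold are assumptions.
    def balance_flags(cell_voltages, threshold_v=0.010):
        """Return one flag per cell: True = switch in the discharge MOSFET."""
        v_min = min(cell_voltages)
        return [v - v_min > threshold_v for v in cell_voltages]

    cells = [4.182, 4.175, 4.190, 4.171] + [4.176] * 8   # 12-cell module snapshot
    print(balance_flags(cells))   # flags cells 0 and 2 for discharge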

On the other hand, active balancing involves redistributing excess energy among other cells in the module. This approach allows for energy recovery and lower heat generation, but the disadvantage is that it requires a more complex hardware design.

ADI has introduced an architecture using LT8584 to achieve active balancing of batteries. This architecture actively shunts charging current and returns energy to the battery pack, addressing the issues associated with passive shunt balancers. Energy is not dissipated as heat but is instead reused to recharge the remaining batteries in the stack. The architecture of this device also tackles a problem where one or more cells in the stack reach a low safe voltage threshold before the entire stack’s capacity is depleted, resulting in reduced runtime. Only active balancing can redistribute charge from stronger cells to weaker ones, allowing weaker cells to continue supplying power to the load and extracting a higher percentage of energy from the battery pack. The flyback topology enables charge to move back and forth between any two points in the battery pack. In most applications, the charge is returned to the battery module (12 cells or more), while in other applications, the charge is returned to the entire battery stack or auxiliary power rails.

The LT8584 is a monolithic flyback DC/DC converter designed specifically for active balancing of high-voltage battery packs. The high efficiency of the switch-mode regulator significantly increases the achievable balancing current while reducing heat dissipation. Additionally, active balancing allows for capacity recovery in stacks of mismatched batteries, a feature not attainable with passive balancing systems. In typical systems, over 99% of the total battery capacity can be achieved.

The LT8584 features an integrated 6A, 50V power switch, reducing the design complexity of the application circuit. The device runs entirely from the cell it is discharging, eliminating the need for the complex biasing schemes typically required when using external power switches. The enable pin (DIN) is designed to coordinate seamlessly with the LTC680x series battery stack monitor ICs. Additionally, when used in conjunction with LTC680x series devices, the LT8584 provides system telemetry functions, including current and temperature monitoring. When disabled, the LT8584 typically draws less than 20nA of total static current from the battery.

Conclusion

The key to low-emission vehicles lies in electrification, but it also requires smart management of energy sources (such as lithium-ion batteries). Improper management could render the battery pack unreliable, significantly reducing the safety of the vehicle. Both active and passive battery balancing contribute to safe and efficient battery management. Distributed battery modules are easy to support, and they can reliably transmit data to the BMS controller, whether through wired or wireless means, enabling dependable SOC and SOH calculations. ADI offers a comprehensive range of BMS components that can assist customers in accelerating BMS development, ensuring more efficient management of the operational efficiency and safety of electric vehicle batteries.

The post Battery monitor maximizes performance of electric vehicle batteries appeared first on ELE Times.

Passive components in EV chargers should be selected carefully (EV Charging)

Mon, 03/18/2024 - 13:11

Courtesy: Avnet

When selecting components for an EV charger design, semiconductors are the usual focus of attention. Newer power switching technologies, Silicon Carbide in particular, promise very low losses and overall cost savings. Passive components cannot be forgotten. The use of wide bandgap (WBG) switches such as SiC MOSFETs presents additional opportunities for optimization. Passive components in the power train can be smaller in size and lower in weight, which comes with reduced cost. These developments bring passive technologies into play that would otherwise be unsuitable. The main passives to consider are DC-link capacitors, filter inductors, and transformers.

The DC-link capacitor

All on- and off-board EV chargers have similar power chains. They start with a power factor correction (PFC) stage followed by an isolated DC-DC conversion stage. The output power level does not change this basic architecture, as the fastest 400kW+ roadside chargers will still typically comprise lower-power modules in a stacked configuration. Each module will deliver around 30kW, to reduce stress and provide redundancy. Each stage may be bi-directional in modern designs, and the overall chain resembles any high-power AC-DC converter.

A typical EV charger outline with critical passive components highlighted. Passives play an important role in EV charging topologies. Their selection will depend on the type of converter used, which will help indicate the efficiencies that can be achieved through optimal component selection.

One of the main differences between a generic converter and an EV battery charger design is the sizing of the DC-link capacitor. This capacitor is positioned on the DC rail, or link, between the PFC and DC-DC conversion stages. The voltage here will range from around 650V up to 1000V. In a general-purpose AC-DC converter, this capacitor is usually sized for ‘hold-up’ time, maintaining the rail for typically 18-20ms after a mains failure. At 30kW, this would need around 8,000 µF, occupying about 80 cubic inches (1300cm3). At this capacitance, aluminum electrolytics are the most economically viable option.

Hold-up capacitance is calculated by equating the hold-up energy required (hold-up time x output power/efficiency) with the energy released as the capacitor voltage drops after AC failure from its normal level to a drop-out level, perhaps from 650V to 500V. That is, 30kW x 20ms/0.90 = (0.5 x C x 650²) – (0.5 x C x 500²), giving C = 7.7 mF.
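The same sizing can be checked in a few lines; this snippet simply reproduces the article’s numbers.

    # Hold-up sizing check: energy drawn during hold-up equals the energy
    # released as the DC link falls from 650 V to the 500 V drop-out level.
    P_out, t_hold, eff = 30e3, 20e-3, 0.90      # watts, seconds, efficiency
    V_nom, V_drop = 650.0, 500.0                # volts
    C = (P_out * t_hold / eff) / (0.5 * (V_nom**2 - V_drop**2))
    print(f"C = {C*1e3:.1f} mF")                # ~7.7 mF, matching the text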

In an EV charger application, hold-up is not an issue. The size of the DC-link capacitor is based on its ability to source high-frequency ripple current for the DC-DC stage and sink ripple current from the PFC stage. The total ripple voltage and temperature rise will also be factors.

The most suitable part is determined by the Equivalent Series Resistance (ESR) and Equivalent Series Inductance (ESL) of the capacitor, as well as its capacitance. Although high capacitance for hold-up is not necessary it is still common to select AL-electrolytics. Often engineers will use large capacitors in parallel, to achieve the desired ESR and ESL. Because of the capacitors’ size, it can be difficult to keep total connection resistance low with good ripple current sharing between the components.

The total impedance of an AL-electrolytic will typically reach its minimum at around 10kHz. That frequency is due to the capacitance, ESL, and variation in ESR. This low frequency is not a good match when using WBG devices, which switch better at several hundred kHz. The ESR of AL-electrolytics also rises strongly at low temperatures which could be problematic at start-up, especially in a battery charger application located outside. At the other extreme, 105°C is usually the maximum rating for an AL electrolytic.

Transfer curve of an AL electrolytic: the impedance of a large AL-electrolytic capacitor is typically at a minimum around 10kHz, which is not a good fit when using wide bandgap power transistors.

For an alternative to AL-electrolytics, look at film and multilayer ceramic capacitors (MLCCs). MLCCs have very low ESR and ESL, so the low impedance point occurs at a higher frequency. This higher frequency is more suitable when using WBG devices. The MLCC also has a longer lifetime than AL-electrolytics, perhaps 10x under the same conditions.

It is now common to see film capacitors used in the DC-link position. Film types are available rated to high voltages and operate at temperatures of at least 135°C. Their common PCB-mount ‘box’ format makes them easy to assemble with good packing density, and they can self-heal after over-voltage stress, unlike AL-electrolytics.

However, MLCCs are relatively high in cost and low in capacitance per package. Achieving high capacitance requires using many in parallel, and some MLCCs are relatively fragile and susceptible to substrate flexing. That said, MLCCs designed specifically for DC-link applications are now available, with metal frames fitted around paralleled parts. This eases assembly and provides some mechanical flexibility in the terminations.

Quantifying ripple current

Ripple current for a DC-link capacitor is difficult to quantify. The value depends on operating conditions, and summing the total value sunk from the PFC stage and sourced to the DC-DC stage is not simple. If the stages are not synchronized or if either stage is variable frequency, it is harder still to identify.

Simulation and bench measurements can be used, but as an approximation, for a DC link at 650V and 30kW load, the average current is about 50A allowing for inefficiencies. For a DC-DC duty cycle of 80%, this is about 25A rms sourced from the capacitor assuming a square wave. At a switching frequency of 100kHz and 10V rms ripple, only about 4µF would be needed if capacitive impedance dominates. If the capacitor ESR were 10 milliohms, this would add an extra 0.25V rms of ripple. We could guess that the ripple from the PFC stage is of the same order.
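As a cross-check on those numbers, the impedance-based estimate can be reproduced directly; the snippet below uses only the figures quoted above and ignores ESL.

    import math

    # Ripple-based DC-link sizing: 25 A rms at 100 kHz with 10 V rms of ripple
    # allowed, assuming capacitive impedance dominates (ESR and ESL neglected).
    I_ripple, V_ripple, f_sw = 25.0, 10.0, 100e3
    Xc = V_ripple / I_ripple                    # required impedance, ohms
    C = 1 / (2 * math.pi * f_sw * Xc)
    print(f"C = {C*1e6:.1f} uF")                # ~4 uF, matching the text
    # A 10 milliohm ESR adds I_ripple * ESR = 0.25 V rms of extra ripple.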

Despite these gross assumptions, it indicates that only a few tens of µF would be needed and film capacitors become practical if several are paralleled to achieve the ripple current capability. For example, four paralleled 20µF/700V metalized polypropylene capacitors can handle 62.5A rms total ripple with an overall ESR of less than one milliohm, giving less than 4 W total dissipation at 50A rms ripple current. The overall volume is 8.5 cubic inches (139 cm3).

An AL-electrolytic solution, for similar ripple current capability could be assembled from 10x 2700µF/400V parts, in a 5-parallel 2-series arrangement, with about 85A ripple current rating (10kHz) and an ESR of about 8 milliohms total. At 50A rms ripple current, this would give about 20W of dissipation overall.

Ripple voltage is much lower than the film capacitor solution, because of lower capacitive impedance, but the overall volume would be 125 cubic inches (2060 cm3) or nearly 15x larger. Further advantages of film capacitors include a particularly low ESL of a few tens of nH, adding only a volt or so to the ripple voltage waveform.

Comparing a typical MLCC solution, three in parallel could achieve a 50A rms ripple rating and adequate capacitance for less than 10V rms ripple. ESR would be around 2 milliohms total and dissipation around 3W overall. Low ESR and ESL are maintained up to a frequency of at least 1MHz, which makes MLCCs a good candidate for ultra-fast switching where capacitance value is less important. ESR and capacitance do, however, vary quite strongly with temperature and bias voltage. Typically, three modules would occupy just 0.8 cubic inches (13.25 cm3).

Indicative volume pricing shows four of the film parts would cost around one quarter the price of ten AL-electrolytics, while three MLCC modules would be about half the price of the ten AL-electrolytics. In practice, derating will be applied to capacitors of any type, requiring further parallel parts. That may apply more so for the electrolytics. In this case, the difference becomes even more striking. The table shows the difference in headline performance of film, MLCC, and AL-electrolytic capacitors.

A comparison of capacitor technologies for typical industrial-grade parts, including the figures of merit important in an EV charging application.

Magnetics in EV chargers

Magnetic components in EV chargers are like any found in AC-DC converters, but the fast charger environment and the trend toward WBG semiconductors influences the choice of fabrication technique. The main components to consider are the input EMI filter, PFC inductor, DC-DC transformer, output choke, and any additional resonant inductors, depending on the converter topology being used.

The EMI filter will comprise at least one common-mode choke in the AC input lines with windings phased so that flux from line currents cancel. This allows high inductance to be used without risk of saturation. High permeability ferrite cores are normally used but nano-crystalline material is sometimes seen for maximum inductance.

Windings are spaced to achieve voltage isolation and ideally in just a single layer, to keep self-capacitance low and self-resonance high. Differential-mode chokes are also usually necessary, and these see flux from the full line current. To avoid saturation, they are typically low inductance, wound on iron power core toroids. Some common-mode choke designs add separation to their windings to deliberately introduce leakage flux, which acts as an integrated differential mode choke. Both common-mode and differential-mode chokes are wound with magnet wire on bobbins or headers for PCB mounting.

The operating environment will influence the choice of common-mode (T1) and differential-mode (L1, L2, L3) chokes in the EMI filter stage of an EV charger, based on the materials used and manufacturing process.

Magnetic component selection in EV chargers

The PFC choke operates at high frequency and its inductance value is chosen to match the operating mode of the stage: continuous, discontinuous, or ‘boundary’. These modes trade off semiconductor stress against potential EMI and choke size, and with the high peak currents present, a low effective core permeability is needed to avoid saturation.

A powder core would produce excessive core losses, so the preference is a gapped ferrite. This should offer minimum loss at the working flux density and frequency, and at the expected operating temperature. The component could be of bobbin construction, but a planar approach can be practical with PCB traces used as windings, giving low losses and a large surface area to help dissipate heat.

The DC-DC converter topology will invariably be a version of a forward converter, typically a full-bridge, and often a resonant type at the power levels involved. Planar transformer designs are popular as they are consistent and easy to integrate with the power switches operating at high frequency. However, safety isolation is required and the appropriate creepage and clearance distances can be difficult to achieve with this construction.

In most cases, high primary inductance is needed, achieved with an ungapped high-permeability core, and, like the PFC choke, the material is chosen for the lowest core losses. Resonant converters use an extra inductor that can be formed from the leakage inductance of the main transformer. This can be difficult to control and can limit overall performance, so normally the inductor is a separate component. The value can be very low so it could conceivably be air-cored but is more likely to utilize a core to constrain the magnetic field and reduce interactions.

An output choke, if necessary for the topology, is chosen in a similar way to the PFC choke. A desired ripple current is specified, which sets inductance for a given output voltage, duty cycle, and frequency. The DC output current flows through the choke, so a gapped ferrite is the normal core solution. The component again could be a planar construction in modern designs.
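The paragraph above maps to a one-line calculation for a buck-derived output stage; the voltage, duty cycle, frequency, and ripple target below are illustrative assumptions, not values from the article.

    # Output-choke sizing from the standard buck-derived ripple relation
    # dI = Vout * (1 - D) / (L * f); all input values are assumptions.
    V_out, D, f_sw = 400.0, 0.8, 100e3     # output volts, duty cycle, Hz
    dI = 5.0                               # allowed peak-to-peak ripple, amps
    L = V_out * (1 - D) / (f_sw * dI)
    print(f"L = {L*1e6:.0f} uH")           # 160 uH for these assumptions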

Conclusion

Passive components can become a limit to the performance achieved in EV charger designs. There are choices of components however which can leverage the characteristics of the latest semiconductor technologies to minimize losses and contribute to overall reduction in size, weight, and cost.

As a global leader in IP&E solutions, Avnet has a robust supplier line card in all regions as well as extensive design support and demand creation services. Our dedicated IP&E experts can help with everything from supply chain needs to service organization requirements.

The post Passive components in EV chargers should be selected carefully (EV Charging) appeared first on ELE Times.

Servotech Power Systems to Build 20 EV Charging Stations for Nashik Municipal Corporation

Mon, 03/18/2024 - 12:47

Servotech Power Systems Ltd., a prominent player in the EV charging and solar industry, has secured a substantial contract from the Nashik Municipal Corporation (NMC). This contract involves Servotech supplying, commissioning, and constructing 20 electric vehicle (EV) charging stations throughout the Nashik Municipal Corporation area.

The objective of this contract is to meet the increasing need for convenient and accessible charging facilities for electric vehicles, thus facilitating the state’s shift towards sustainable transportation solutions. As the demand for EV mobility grows, there is a corresponding need for enhanced EV charging infrastructure and these charging stations will enable EV owners to recharge their vehicles conveniently while on the move.

Servotech will oversee the installation, supply, commissioning, construction and maintenance of EV charging stations, catering to various vehicles and substantially improving Nashik’s EV charging network. This positions Servotech as a frontrunner in India’s growing EV infrastructure market and aligns with the government’s vision to create a robust EV ecosystem nationwide. Additionally, this initiative reflects Servotech’s commitment to sustainability by facilitating Nashik’s transition to cleaner transportation, aligning with its environmental responsibility goals, and reducing carbon emissions in one of the key cities of India.

Sarika Bhatia, Director of Servotech Power Systems Ltd., said, “This contract represents a major milestone for Servotech Power Systems; we are deeply committed to advancing India’s electric vehicle revolution and fostering sustainable transportation solutions. We are already a leader in the EV charger market and, through this initiative, we are set to become a leader in the EV charging infrastructure market as well. This collaboration with the Nashik Municipal Corporation underscores our capability to provide cutting-edge and reliable EV charging solutions for cities across the nation. By expanding the accessibility of EV charging infrastructure, we aim to support the widespread adoption of electric vehicles, reducing carbon emissions and promoting a cleaner, greener future for generations to come. We are excited to contribute to Nashik’s transition to cleaner transportation and look forward to delivering high-quality charging solutions that meet the city’s evolving needs. As a premier EV charger manufacturer, we aim to transform India into a nation where EVs are not just a vision but a reality. With a shared vision and unwavering dedication, we believe in making this dream come true, driving a seamless shift toward a greener, more sustainable transportation landscape.”

The post Servotech Power Systems to Build 20 EV Charging Stations for Nashik Municipal Corporation appeared first on ELE Times.

Powering the Next Wave of Smart Industrial Sensors with NuMicro M091 Series Microcontrollers

Mon, 03/18/2024 - 12:10

In the era of Industry 5.0, where intelligence, sensing capabilities, and automation are paramount, the demand for precise, compact sensors continues to soar across various fields of industrial automation and IoT applications. Enter the NuMicro M091 series, a line of 32-bit high-integration analog microcontrollers designed to elevate the accuracy of analog functions and digital controls within a small package.

Key Features of High-Integration Analog Microcontrollers

Based on the Arm Cortex-M0 core, the NuMicro M091 series operates at frequencies up to 72 MHz, with Flash memory ranging from 32 KB to 64 KB, 8 KB of SRAM, and a working voltage of 2.7V to 3.6V. Breaking new ground in performance, this series offers rich analog peripherals, including 4 sets of 12-bit DACs and up to 16 channels of 12-bit 2 MSPS ADCs. Additionally, it supports up to four precision rail-to-rail operational amplifiers (op amps), delivering exceptional specifications to enhance output signal accuracy: an input offset voltage as low as 50 µV, a temperature drift as low as 0.05 µV/°C, a high slew rate of up to 6V/µs, and a broad gain bandwidth of 8 MHz, ensuring the integrity of amplified signals. It also includes a built-in temperature sensor with a ±2°C deviation.

Rich Peripheral Modules and Applications

With the addition of up to 6 sets of 32-bit timers, 1 UART, 1 SPI, 2 I²C, and 6-channel 16-bit BPWM peripheral modules, the NuMicro M091 series ensures seamless adaptation to various application scenarios, providing a more comprehensive solution. To meet the growing demand for small-sized sensors, this series offers QFN33 (4 x 4 mm) and QFN48 (5 x 5 mm) compact package sizes, facilitating easy integration of sensing technology into diverse application scenarios.

Ease of Development

With the NuMaker-M091YD development board and Nu-Link debugger, the M091 series is backed by powerful tools for product evaluation and development. Moreover, it supports third-party IDEs such as Keil MDK and IAR EWARM, as well as Nuvoton’s own NuEclipse IDE, giving developers more choices and convenience.

Experience the Future of Industrial Sensing with NuMicro M091 Series Microcontrollers, Redefining Precision and Integration in the World of Smart Sensors.

The post Powering the Next Wave of Smart Industrial Sensors with NuMicro M091 Series Microcontrollers appeared first on ELE Times.

Infineon partner Thistle Technologies integrates its Verified Boot technology with Infineon’s OPTIGA Trust M for enhanced device security

Fri, 03/15/2024 - 14:01

Infineon Technologies AG has announced the integration of its OPTIGA Trust M security controller – tamper-resistant hardware certified to Common Criteria EAL6+ – with the Verified Boot technology from Thistle Technologies, a pioneer of advanced security solutions for connected devices. This integration enables designers to easily defend their devices against firmware tampering and protect software supply chain integrity. The result is improved end-user security, which is particularly important in industries with high security requirements such as healthcare, automotive and device manufacturing.

Thistle Technologies’ Verified Boot provides a secured boot process for IoT devices. Enhanced integrity checks cryptographically verify that the device firmware has not been tampered with. The solution supports the needs of a wide range of IoT devices for smart homes, smart cities and smart buildings, among others, enabling easy implementation with minimal development time. By leveraging the robust security features of Infineon’s OPTIGA Trust M, including its hardware-based root of trust, the technology offers a high level of protection against unauthorized firmware modifications and sophisticated cyberattacks.

“Since the start of our partnership in January 2023, Thistle has developed a software integration for our OPTIGA Trust M within Linux to extend our hardware capability into the application software domain for Linux-based system architectures,” said Vijayaraghavan Narayanan, Senior Director and Head of Edge Identity & Authentication at Infineon. “The new solution enables our shared customers to quickly enhance the security of their development.”

“Integrating our Verified Boot technology with Infineon’s OPTIGA Trust M is a significant step forward in making it easy to incorporate sophisticated security capabilities into devices quickly,” said Window Snyder, CEO of Thistle Technologies.

The post Infineon partner Thistle Technologies integrates its Verified Boot technology with Infineon’s OPTIGA Trust M for enhanced device security appeared first on ELE Times.

Infineon sues Innoscience for Patent Infringement

Fri, 03/15/2024 - 13:26

Infineon Technologies AG has filed a lawsuit, through its subsidiary Infineon Technologies Austria AG, against Innoscience (Zhuhai) Technology Company, Ltd., and Innoscience America, Inc. and affiliates (hereinafter: Innoscience). Infineon is seeking a permanent injunction for infringement of a United States patent relating to gallium nitride (GaN) technology owned by Infineon. The patent claims cover core aspects of GaN power semiconductors encompassing innovations that enable the reliability and performance of Infineon’s proprietary GaN devices. The lawsuit was filed in the district court of the Central District of California.

Infineon alleges that Innoscience infringes the patent mentioned above by making, using, selling, offering to sell and/or importing into the United States various products, including GaN transistors for numerous applications across automotive, data centres, solar, motor drives and consumer electronics, as well as related products used in automotive, industrial, and commercial settings.

“The production of gallium nitride power transistors requires completely new semiconductor designs and processes”, said Adam White, President of Infineon’s Power & Sensor Systems Division. “With nearly two decades of GaN experience, Infineon can guarantee the outstanding quality required for the highest performance in the respective end products. We vigorously protect our intellectual property and thus act in the interest of all customers and end users.” Infineon has been investing in R&D, product development and manufacturing expertise related to GaN technology for decades. Infineon continues to defend its intellectual property and protect its investments.

On 24 October 2023, Infineon announced the closing of the acquisition of GaN Systems Inc., becoming a leading GaN powerhouse and further expanding its leading position in power semiconductors. Infineon leads the industry with its GaN patent portfolio, comprising around 350 patent families. Market analysts expect the GaN revenue for power applications to grow by 49% CAGR to approximately US$2 billion by 2028 (source: Yole, Power SiC and GaN Compound Semiconductor Market Monitor Q4 2023). Gallium nitride is a wide bandgap semiconductor with superior switching performance that allows smaller size, higher efficiency and lower-cost power systems.

The post Infineon sues Innoscience for Patent Infringement appeared first on ELE Times.
