Microelectronics world news

System performs one-pass wafer test up to 3 kV

EDN Network - Thu, 10/10/2024 - 23:50

Keysight’s 4881HV wafer test system enables parametric tests up to 3 kV, accommodating both high and low voltage in a single pass. Its high-voltage switching matrix facilitates this one-pass operation, boosting productivity and efficiency.

The system’s switching matrix scales up to 29 pins and integrates with precision source measure units, allowing flexible measurements from low current down to sub-pA resolution at up to 3 kV on any pin. High-voltage capacitance measurements with up to 1-kV DC bias are also possible. This switching matrix enables a single 4881HV to replace separate high-voltage and low-voltage test systems, increasing efficiency while reducing the required footprint and testing time.

Power semiconductor manufacturers can use the 4881HV to perform process control monitoring and wafer acceptance testing up to 3 kV, meeting the future requirements of automotive and other advanced applications. To safeguard operators and equipment during tests, the system features built-in protection circuitry and machine control, ensuring they are not affected by high-voltage surges. Additionally, it complies with safety regulations, including SEMI S2 standards.

To request a price quote for the 4881HV test system, click the product page link below.

4881HV product page

Keysight Technologies 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post System performs one-pass wafer test up to 3 kV appeared first on EDN.

Sink controllers ease shift to USB-C PD

EDN Network - Thu, 10/10/2024 - 23:50

Diodes’ AP33771C and AP33772S sink controllers enable designers to transition from proprietary charging ports, legacy USB ports, and barrel-jack ports to a standard USB Type-C PD 3.1 port. These controllers can be embedded into battery-powered devices and other types of equipment using a USB Type-C socket as a power source.

Both ICs manage DC power requests for devices with USB Type-C connectors, supporting the PD 3.1 extended power range (EPR) of up to 140 W and adjustable voltage supply (AVS) of up to 28 V. The AP33771C provides multiple power profiles for systems without an MCU, featuring eight resistor-settable output voltage levels and eight output current options. In contrast, the AP33772S uses an I2C communications interface for systems equipped with a host MCU.

The sink controllers’ built-in firmware supports LED light indication, cable voltage-drop compensation, and moisture detection. It also offers safety protection schemes for overvoltage, undervoltage, overcurrent, and overtemperature. No programming is required to activate the firmware in the AP33771C, while designers have the option of using I2C commands to configure the AP33772S.
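To illustrate how a host MCU might drive the I2C-configurable AP33772S, here is a minimal sketch of composing a voltage/current request. The register addresses and encodings below are hypothetical placeholders, not taken from the Diodes datasheet; consult the actual register map before use.

```python
# Hedged sketch: requesting a PD output voltage from an AP33772S-style
# sink controller over I2C. VOLTAGE_REQ_REG, CURRENT_REQ_REG, and the
# LSB scalings are HYPOTHETICAL, not from the Diodes datasheet.

VOLTAGE_REQ_REG = 0x04   # hypothetical voltage-request register
CURRENT_REQ_REG = 0x05   # hypothetical current-request register

def encode_request(voltage_mv, current_ma, mv_per_lsb=100, ma_per_lsb=50):
    """Pack a voltage/current request into register code values."""
    return voltage_mv // mv_per_lsb, current_ma // ma_per_lsb

def build_i2c_writes(voltage_mv, current_ma):
    """Return (register, value) pairs a host MCU would write over I2C."""
    v_code, i_code = encode_request(voltage_mv, current_ma)
    return [(VOLTAGE_REQ_REG, v_code), (CURRENT_REQ_REG, i_code)]

# Example: ask the source for 20 V at 3 A.
writes = build_i2c_writes(20000, 3000)
```

A real driver would issue these pairs through the platform's I2C API and then poll a status register for the source's accept/reject response.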

Housed in a 3×3-mm, 14-pin DFN package, the AP33771C is priced at $0.79 each in 1000-unit quantities. The AP33772S, in a 4×4-mm, 24-pin QFN package, costs $0.84 each in like quantities.

AP33771C product page 

AP33772S product page 

Diodes



The post Sink controllers ease shift to USB-C PD appeared first on EDN.

Platform advances 800G Ethernet AN/LT validation

EDN Network - Thu, 10/10/2024 - 23:50

Teledyne LeCroy has announced an integrated platform for validating the limits of auto-negotiation and link training (AN/LT) in 800-Gbps Ethernet. As an extension to the existing LinkExpert software, the new functionality leverages the Xena Z800 Freya Ethernet traffic generator and the SierraNet M1288 protocol analyzer to test Ethernet’s AN/LT specifications.

The fully automated test platform simplifies interoperability testing across various Ethernet switches and network interface cards. It tests each equalizer tap to its maximum limit to verify protocol compliance and ensure links re-establish if limits are exceeded. The system also verifies the stability of the SerDes interface and validates that all speeds can be negotiated to support backward compatibility.

The SierraNet M1288 protocol analyzer provides full stack capture, deep packet inspection, and analysis for links up to 800 Gbps. It also offers jamming capabilities for direct error injection on the link at wire speed. The Xena Z800 Freya Ethernet traffic generator can test up to 800G Ethernet using PAM4 112G SerDes, achieving the best possible signal integrity and bit error rate performance.

LinkExpert with the AN/LT test functionality is now shipping as part of the SierraNet Net Protocol Suite software.

SierraNet M1288 product page 

Xena Z800 Freya product page 

Teledyne LeCroy 



The post Platform advances 800G Ethernet AN/LT validation appeared first on EDN.

Tulip antenna delivers 360° stability for UWB

EDN Network - Thu, 10/10/2024 - 23:50

A tulip-shaped antenna from Kyocera AVX is designed for ultra-wideband (UWB) applications, covering a frequency range of 6.0 GHz to 8.5 GHz. The surface-mount antenna is manufactured using laser direct structuring (LDS) technology, which enables a 3D pattern design. LDS allows the antenna to operate both on-board and on-ground, offering an omnidirectional radiation pattern with consistent 360° phase stability.

The antenna’s enhanced phase stability, constant group delay, and linear polarization are crucial for signal reconstruction, improving the accuracy of low-energy, short-range, and high-bandwidth UWB systems. It can be placed anywhere on a PCB, including the middle of the board and over metal. This design flexibility surpasses that of off-ground antennas, which require ground clearance and are typically positioned along the perimeters of PCBs.

The tulip antenna is 6.40×6.40×5.58 mm and weighs less than 0.1 g. It is compatible with SMT pick-and-place assembly equipment and complies with RoHS and REACH regulations. When installed on a 40×40-mm PCB, the antenna typically exhibits a maximum group delay of 2 ns, a peak gain of 4.3 dBi, CW power handling of 2 W, and an average efficiency of 61%. 

Designated P/N 9002305L0-L01K, the tulip antenna is produced in South Korea and is now available through distributors Mouser and DigiKey.

9002305L0-L01K antenna product page

Kyocera AVX 



The post Tulip antenna delivers 360° stability for UWB appeared first on EDN.

Analog Devices’ approach to heterogeneous debug

EDN Network - Thu, 10/10/2024 - 16:35
Creating a software-defined version of ADI

Embedded World this year has had a quite clear focus on the massive growth in the application space of edge computing and the convergence of IoT, AI, security, and underlying sensor technologies. A stop by the Analog Devices Inc. (ADI) booth reveals quite clearly that the company aims to address the challenges of heterogeneous debugging and security compliance in embedded systems. “For some companies, an intelligent edge refers to the network edge or the cloud edge; for us it means right down to sensing, taking raw data off sensors and converting it into insights,” says Jason Griffin, managing director of software engineering and security solutions at ADI. “So bridging the physical and digital worlds, that’s our sweet spot.” ADI aims to bolster embedded solutions and security software where, “as the signal chain becomes more digital, we add on complex security layers.” As an established leader in the semiconductor industry, ADI’s foundational components now require more software enablement: “So as we move up the stack, moving closer to our customer’s application layer, we’re starting to add an awful lot more software, where our overall goal is to create a software-defined version of ADI and meet customers at their software interface.” The company is focusing its efforts on open-source development: “We’re open sourcing all our tools because we truly believe that software developers should own their own pipeline.”

Enter CodeFusion Studio

This is where CodeFusion Studio comes into play: the software development environment was built to help develop applications on all ADI digital technologies. “In the future, it will include analog too,” notes Jason. “There are three main components to CodeFusion Studio: the software development kit that includes usage guides and reference guides to get up and running; the modern Visual Studio Code IDE, so customers can go to the Microsoft marketplace and download it to facilitate heterogeneous debug and application development; and a series of configuration and productivity tools where we will continue to expand CodeFusion Studio.” The initial release of this software includes a pin config tool, ELF file explorer, and a heterogeneous debug tool.

Config tools

Kevin Townsend, embedded systems engineer, offered a deeper dive into the open-source platform, starting with the config tools. “There’s not a ton of differentiation in the config tools themselves; every vendor is going to give you options to configure pin mux, pin config, and generate code to take those config choices and set up your device to solve your business problem.” The config tools themselves are more or less standard: “In reality, you have two problems with pin mux and pin config: you’ve got to capture your config choices, and every tool will do that for you. For example, I could want ADC6 on Pin B4, or UART TXD and RXD on C7 and D9. The problem with most of those tools today is that they lock you into a very opinionated sense of what that code should look like. So most vendors will generate code for you like everybody else, but it will be based on vendor-specific choices of RTOSes (real-time operating systems) and SDKs; and if I’m a moderately complex-to-higher-end customer, I don’t want to have anything to do with those; I need to generate code for my own scheduler, my own APIs in-house.”

What CodeFusion Studio’s tool has done is decouple config-choice capture from code generation: “We save all of the config choices for your system into a JSON file, which is human- and machine-readable, and rather than just generating opinionated code for you right away, we have an extensible command-line utility that takes this JSON file and will generate code for you based upon the platform that you want.” The choices can include MSDK (microcontrollers SDK), Zephyr 3.7, ThreadX, or an in-house scheduler that may be used by a larger tech company. “I can take this utility, and we have a plug-in-based architecture where it’s relatively trivial for me to write my own export engine.” This gives people the freedom to generate the code they need.
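The decoupling described above can be sketched in a few lines: config choices are captured in a platform-neutral JSON file, and pluggable export engines turn the same choices into code for different targets. The JSON schema and engine outputs here are illustrative assumptions, not CodeFusion Studio’s actual format.

```python
import json

# Sketch of "decouple config capture from code generation": the JSON
# schema and both export engines below are illustrative assumptions,
# not CodeFusion Studio's real file format or output.

config_json = '''
{
  "pins": [
    {"pin": "B4", "function": "ADC6"},
    {"pin": "C7", "function": "UART_TXD"},
    {"pin": "D9", "function": "UART_RXD"}
  ]
}
'''

def export_zephyr(cfg):
    # One hypothetical engine: emit devicetree-style comment stubs.
    return "\n".join(f"/* {p['pin']} -> {p['function']} */" for p in cfg["pins"])

def export_inhouse(cfg):
    # Another hypothetical engine targeting an imaginary in-house HAL.
    return "\n".join(f'hal_pin_set("{p["pin"]}", "{p["function"]}");'
                     for p in cfg["pins"])

# Plug-in registry: adding a new target means registering one function.
ENGINES = {"zephyr": export_zephyr, "inhouse": export_inhouse}

def generate(cfg_text, target):
    """Generate target-specific code from the shared JSON config."""
    return ENGINES[target](json.loads(cfg_text))
```

The same `config_json` feeds every engine, so switching RTOS or scheduler changes only the `target` argument, not the captured configuration.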

Figure 1: CodeFusion Studio Config tool demo at the ADI booth at Embedded World 2024.

ELF file explorer

Kevin bemoaned that half of a software developer’s day is spent in meetings while the other half is spent on productive work, and much of that already-reduced time must go to debug, profiling, and instrumentation. “That’s kind of the bread and butter of doing software development, but traditionally I don’t feel like embedded has tried to innovate on debug; it’s half of my working day, and yet we’re still using 37-year-old tools like gdb on the command line to debug the system we’re using.”

He says, “If I want to have all my professional profiling tools to understand where all my SRAM or flash is going, I have to buy an expensive proprietary tool.” Kevin strongly feels that making a difference for ADI customers does not involve selling another concrete tool, but rather an open platform that does not simply generate code, but generates code that enables customers to get the best possible usage of their resources. “MCUs are a commodity at the end of the day; people are going to buy them for a specific set of peripherals, but it’s a lot of work to get the best possible usage out of them.”

The ELF file explorer is one such example of improving quality of life for developers. The ELF file is analogous to the .exe file for a Windows desktop application: “It’s like an embedded .exe,” says Kevin. “It is kind of the ultimate source of truth for the code that I am running on my device, but it is a black box.” The ELF file explorer attempts to take these opaque systems and build “windows” into them to see what is going on in the final binary. “There’s nothing in this tool that I couldn’t do on the command line, but I would need 14 different tools, an Excel spreadsheet, and a piece of paper and pencil, and it would take me three hours.” Instead, developers can finalize debugging in a fraction of the time, potentially speeding up time-to-market.

“So for example, I can see the 10 largest symbols inside my image where I’m running out of flash; where is it all going?” In this case, the tool allows the user to readily see the 10 largest functions (Figure 2) with the ability to right click on it to go to the symbol source and get a view directly into the source code. 

Figure 2: CodeFusion Studio ELF file explorer tool demo.
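The “largest symbols” view boils down to sorting symbols by size, much as command-line tools such as nm can. A toy illustration of that underlying idea, with made-up symbol names and sizes rather than real ELF parsing:

```python
# Toy illustration of the "largest symbols" view: the symbol names and
# sizes are invented for the example; a real tool would parse them out
# of the ELF symbol table.

symbols = {
    "sensor_dsp_filter": 18432,  # bytes
    "tls_handshake": 12288,
    "printf": 4096,
    "main": 512,
}

def largest_symbols(syms, n=10):
    """Return the n largest (name, size) pairs, biggest first."""
    return sorted(syms.items(), key=lambda kv: kv[1], reverse=True)[:n]

top = largest_symbols(symbols, 2)
```

The value of the IDE tool is not the sort itself but the jump from each entry straight to its source, which the command-line equivalents do not provide.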

Heterogeneous Debug

The heterogeneous debug tool is aimed at simplifying multi-core architecture debugging; this is quickly becoming a necessity in modern embedded development environments, where implementing two or more cores is commonplace. Kevin Townsend explains: “Almost all the debug tools that exist in the MCU space today are designed for one core at a time; they’re designed to solve one and analyze one thread of data on one architecture. You could have a design with an Arm core, a RISC-V core, an Xtensa DSP core, and maybe some proprietary instruction set from a vendor, all on the same chip; and I need to trace data as it moves through the system in a very complex way.” One example is an analog front end that feeds a DSP for processing, then an Arm core for further processing, and finally a RISC-V core that might control a BLE radio to send a signal out to a mobile device.

“It breaks down the ability to debug multiple cores in parallel inside the same IDE, in the same moment in time.” This diverges from the traditional approach of separate IDEs, pipelines, and debuggers, where the developer has to set a breakpoint on one core and switch over to the next processor’s tools to continue the debug process. That process is inherently cumbersome and fraught with human error and oversight; quite often, different cores might be controlled through different JTAG connectors, forcing the developer to manually switch connections while also switching (alt-tabbing) between tools.

In the heterogeneous debug tool, users with multiple debuggers connected to multiple cores can readily visualize code for all the cores (there is no limit to the number) and set breakpoints on each (Figure 3). Once the offending line of code is found and fixed, the application can be rebuilt with the change and run to ensure that it works.

Figure 3: The heterogeneous debug tool demo showing how a user can debug a system using a RISC-V and an Arm core to play the game 2048.

Trusted Edge Security Architecture

“We have our Trusted Edge Architecture, which we’re embedding into our SDK, as well as security tooling within CodeFusion Studio itself; it’s all SDK-driven APIs, so customers can use it pretty easily,” said Jason Griffin. Kevin Townsend adds: “Traditional embedded engineers haven’t had to deal with the complexities of security, but now, with all this legislation coming down the pipeline, there is a pressure that has never existed before for software developers to deliver secure solutions, and they often don’t have the expertise.” The Trusted Edge Security Architecture is a program that offers access to crypto libraries embedded within the software that can be used on an MCU (Figure 4). The secure hardware solutions include tamper-resistant key storage, root of trust (RoT), and more to provide a secure foundation for embedded devices. “How do we give you, out of the box, something that complies with 90% of the requirements in cybersecurity?” says Kevin. “We can’t solve everything, but we really need to raise the bar in the software libraries and software components that we integrate into our chips to really reduce the pressure on software developers.”

Figure 4: ADI Trusted Edge Security Architecture demo.

Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for six years. She holds a Bachelor’s degree in electrical engineering from the Rochester Institute of Technology and has published work in major EE journals as well as trade publications.

Related Content


The post Analog Devices’ approach to heterogeneous debug appeared first on EDN.

High-precision evaluation of an electric vehicle’s motors

ELE Times - Thu, 10/10/2024 - 15:16

The evaluation of electric vehicle motors is based on many factors, chiefly three: the durability, performance, and reliability of the motor.

The evaluation system for the assessment of an electric vehicle’s motor simulates the real-world driving conditions. It measures and records vital parameters such as speed, torque, power, and temperature.

Thereafter, the data collected by the evaluation system is used to identify key areas of improvement in the performance of an electric vehicle’s motor.

The key parameters on which an electric vehicle’s motor is evaluated are as follows:

First, constant load testing. In this testing method, a fixed load, which can be set to a predetermined value, is applied to an electric vehicle’s motor for a constant time period. The system then records the motor’s performance in terms of its temperature, torque, speed, and power consumption.

Second, real-time DC. A real-time DC is built to assess the effectiveness of different energy-management controllers under a variety of road conditions.

Third, EVREVO. This testing method allows evaluation software to be exercised without waiting for the other vehicle parts to be installed.

Fourth, EV motor test systems. This type of test system simulates the driving conditions prevailing on roads, helping the driver hone his or her driving skills across different parameters, such as acceleration, deceleration, and other road and load conditions.

Thereafter, the data collected from these sources is used to identify areas of improvement in the performance and design of an electric vehicle.

Fifth, input voltage control. The allowed voltage range should be selected before the test, and each test point should work as quickly as possible.

Sixth, efficiency maps. They are installed in an electric vehicle’s simulation model. It gives vital inputs about the performance of an electric vehicle’s motor, battery, and other components.

Seventh, simulation calculation. It can be used to derive the distribution of operating points under different conditions. This can help to match the motor system to the whole vehicle and get a more accurate efficiency performance.

Eighth, analytical hierarchy process (AHP). It is used to quantitatively measure the relative performance of different types of batteries, for example, lithium-ion batteries, fuel cells, and hybrid batteries.

Ninth, airgap eccentricity detection. It detects any major defect that may occur in a variable-speed induction motor.

Tenth, input voltage control. It maintains a specific voltage range at each point of an electric vehicle’s motor.

Eleventh, step load testing. In this method, a specific load is applied in incremental steps to an electric vehicle’s motor, and the motor’s response is monitored at each load level. This makes it possible to assess the motor’s ability to handle increasing loads without a decrease in performance or overheating, and helps determine its load capacity and performance under different conditions.

Twelfth, peak load testing. It is undertaken to evaluate an electric vehicle’s motor’s ability to withstand any sudden high loads or short bursts of power demand. In this method, that motor is subjected to its maximum rated load or slightly higher than that limit for a brief period of time. This helps to assess that motor’s ability to withstand any such spurt in peak load without overheating or failing.

Thirteenth, endurance testing. In this method, a load is continuously applied to an electric vehicle’s motor for a prolonged period. This type of testing helps to evaluate that motor’s reliability, durability, and thermal performance under prolonged operating conditions.

Fourteenth, regenerative load testing. In this method, an electric vehicle’s motor is subjected to variable loads in regenerative mode. This helps to evaluate its efficiency in recovering kinetic energy and converting it into electrical energy during deceleration.

Fifteenth, thermal load testing. It is aimed at assessing an electric vehicle’s motor’s thermal management system. In this method, a motor is operated under heavy loads or at maximum power for an extended period. During this process, its change in temperature is monitored. Hence, this testing helps in determining the effectiveness of that motor’s cooling system so that it can maintain an optimal operating range of temperature.
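The step-load procedure above can be sketched as a simple sweep: apply the load in increments, record the response at each level, and stop if a limit is exceeded. The motor model here is a trivial stand-in with assumed coefficients, not a real drive interface.

```python
# Hedged sketch of step-load testing: sweep load in increments and log
# the response at each level. motor_response() is a stub with ASSUMED
# coefficients, standing in for real dynamometer measurements.

def motor_response(load_nm):
    """Stub motor model: temperature and speed as simple functions of load."""
    temperature_c = 25 + 0.8 * load_nm   # assumed thermal coefficient
    speed_rpm = 3000 - 5 * load_nm       # assumed speed droop with load
    return temperature_c, speed_rpm

def step_load_test(start_nm, stop_nm, step_nm, temp_limit_c=120):
    """Sweep load in steps; stop early if the temperature limit is hit."""
    log = []
    load = start_nm
    while load <= stop_nm:
        temp, speed = motor_response(load)
        log.append({"load_nm": load, "temp_c": temp, "speed_rpm": speed})
        if temp > temp_limit_c:
            break  # motor exceeded its thermal limit at this load level
        load += step_nm
    return log

results = step_load_test(0, 100, 20)
```

The same loop structure extends to peak-load testing (one brief step above rated load) and endurance testing (one load held over many iterations).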

The post High-precision evaluation of an electric vehicle’s motors appeared first on ELE Times.

In-vehicle passenger detection: Wi-Fi sensing a ‘just right’ solution

EDN Network - Thu, 10/10/2024 - 10:28

Every year during extreme weather, infants, toddlers, and disabled adults are sickened or die overlooked in vehicles. While the numbers are not huge, each case is a tragedy for a family and community. Accordingly, regulators are moving toward requiring that new vehicles be able to detect the presence of a human left in an otherwise empty vehicle. New regulations are not a question of if, but of when and of how.

This presents vehicle manufacturers with a classic Goldilocks problem. There are three primary techniques for human-presence detection in an enclosed environment, presenting a range of cost points and capabilities.

The first alternative is infrared detection: simply looking for a change in the infrared signature of the back-seat region—a change that might indicate the presence of a warm body or of motion. Infrared technology is, to say the least, mature. And it is inexpensive. But it has proved extraordinarily difficult to derive accurate detection from infrared signatures, especially over a wide range of ambient temperatures and with heat sources moving around outside the vehicle.

In an application where frequent false positives will cause the owner to disable the system, and a steady false negative can cause tragedy, infrared is too little.

Then there are radars, cameras

Radar is the second alternative. Small, low-power radar modules already exist for a variety of industrial and security applications. And short-wavelength radar can be superbly informative—detecting not only the direction and range of objects, but even the shapes of surfaces and subtle motions, such as respiration or even heartbeat. If anything, radar offers the system developer too much data.

Radar is also expensive. At today’s prices it would be impractical to deploy it in any but luxury vehicles. Perhaps if infrared is too little, radar is a bit too much.

A closely related approach uses optical cameras instead of radar transceivers. But again, cameras produce a great flood of data that requires object-recognition processing. Also, they are sensitive to ambient light and outside interference, and they are powerless to detect a human outside their field of view or concealed by, say, a blanket.

Furthermore, the fact that cameras produce recognizable images of places and persons creates a host of privacy issues that must be addressed. So, camera-based approaches are also too much.

Looking for just right

Is there something in between? In principle there is. Nearly all new passenger vehicles today offer some sort of in-vehicle Wi-Fi. That means the interior of the vehicle, and its near surroundings, will be bathed from time to time in Wi-Fi signals, spanning multiple frequency channels.

For its own internal purposes, a modern Wi-Fi transceiver monitors the signal quality on each of its channels. The receiver records what it observes as a set of data called Channel State Information, or CSI. This CSI data comes in the form of a matrix of complex numbers. Each number represents the amplitude and phase on a particular channel at a particular sample moment.

The sampling rate is generally low enough that the receiver continuously collects CSI data without interfering with the normal operation of the Wi-Fi (Figure 1). In principle it should be possible to extract from the CSI data stream an inference on whether or not a human is present in the back seat of a vehicle.

Figure 1 To detect a human presence using Wi-Fi, a receiver records what it observes as a set of data called CSI, which can be done without interfering with the normal operation of the Wi-Fi. Even small changes in the physical environment around the Wi-Fi host and client will result in a change of the amplitude and state information on the various channels. Wi-Fi signals take multiple paths to reach a target, and by looking at CSI at different times and comparing them, we can understand how the environment is changing over time. Source: Synaptics

And since the Wi-Fi system is already in the vehicle, continuously gathering CSI data, the incremental cost to extract the inference could be quite modest. The hardware system would require only adding a second Wi-Fi transceiver at the back of the vehicle to serve as a client on the Wi-Fi channels. This might just be the middle ground we seek.
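The CSI representation described above can be made concrete in a few lines: each entry is a complex number whose magnitude and angle give per-channel amplitude and phase, and comparing snapshots over time exposes environmental change. The numbers here are hand-picked for illustration, not real Wi-Fi captures.

```python
import cmath

# Sketch of CSI as a matrix of complex numbers: magnitude = amplitude,
# angle = phase. The two snapshots below are invented toy values, not
# data from a real Wi-Fi receiver.

csi_t0 = [complex(1.0, 0.0), complex(0.0, 1.0)]   # snapshot at time t0
csi_t1 = [complex(0.9, 0.1), complex(0.1, 1.1)]   # snapshot at time t1

def amp_phase(csi):
    """Split one CSI snapshot into (amplitude, phase) pairs per channel."""
    return [(abs(h), cmath.phase(h)) for h in csi]

def channel_change(a, b):
    """Total magnitude of per-channel difference between two snapshots."""
    return sum(abs(x - y) for x, y in zip(a, b))
```

A nonzero `channel_change` between snapshots is the raw signal that something in the environment moved; the hard part, as the article notes, is inferring *what* from a stream that looks like noise.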

A difficult puzzle

The problem is that there is no obvious way to extract such an inference from the CSI data. To the human eye, the data stream looks completely opaque (Figure 2). There is no nice, simple stream of bearing, range, and amplitude data. There may not even be the gross changes in signature upon which infrared detectors depend. The data stream looks like white noise. But it is not.

Figure 2 Making accurate inferencing of what the CSI data is sensing in real-world scenarios is a key challenge as much of it looks the same. Using a multi-stage analysis pipeline, the Synaptics team combined spectral analysis, a set of compact, very specialized deep-learning networks, and a post-processing algorithm to continuously process the CSI data stream. Source: Synaptics

Complicating the challenge is the issue of interference. In the real world, the vehicle will not be locked in a laboratory. It will be in a parking lot, with people walking by, perhaps peering at the windows. Given the nature of young humans, if they were to discover that they could set off the alarm, they would attempt to do so by waving, jumping about, or climbing onto the vehicle.

All this activity will be well within the range of the Wi-Fi signals. Making accurate inferences in the presence of this sort of interference, or of intentional baiting, is a compounding problem.

But the problem has proven to be solvable. Recently, researchers at Synaptics have reported impressive results. Using a multi-stage analysis pipeline, the team combined spectral analysis, a set of compact, very specialized deep-learning networks, and a post-processing algorithm to continuously process the CSI data stream. The resulting algorithm is compact enough for implementation in a modestly priced system-on-chip (SoC), yet it has proved highly accurate.

Measured results

The Synaptics developers produced CSI data using Wi-Fi devices in an actual car. They performed tests with and without an infant doll and with babies, in both forward- and rear-facing infant seats. The team also tested with children and a live adult, either still or moving about. In addition to tests in isolation, they performed tests with various kinds of interference from humans outside the car, including tests in which the humans attempted to tease the system.

Overall, the system achieved 99% accuracy across the range of tests. In the absence of human interference, the system was 100% accurate, recording no false positives or false negatives at all. Given that a false negative caused by outside interference will almost certainly be transient, the data suggest that the system would be very powerful at saving human passengers from harm.

Using the CSI data streams from existing in-vehicle Wi-Fi devices as a means of detecting human presence is inexpensive enough to deploy in even entry-level cars. Our research indicates that a modestly priced SoC is capable, given the right AI-assisted algorithm, of achieving an excellent error rate, even in the presence of casual or intentional interference from outside the vehicle.

This combination of thrift and accuracy makes CSI-based detection a just-right solution to the Goldilocks problem of in-vehicle human presence detection.

Karthik Shanmuga Vadivel is principal computer vision architect at Synaptics.



The post In-vehicle passenger detection: Wi-Fi sensing a ‘just right’ solution appeared first on EDN.

UKRI grants £5.5m for new Responsible Electronics and Circular Technologies Centre

Semiconductor today - Thu, 10/10/2024 - 08:36
As part of its ‘Building a Green Future’ strategic theme (which aims to accelerate the green economy by supporting research and innovation that unlocks solutions essential to achieving net zero in the UK by 2050), UK Research and Innovation (UKRI) is granting £25m under its ‘Accelerating the Green Economy’ program to five new green industry centers across the UK...

Navitas adds TOLT package to GaNSafe family

Semiconductor today - Wed, 10/09/2024 - 21:00
Gallium nitride (GaN) power IC and silicon carbide (SiC) technology firm Navitas Semiconductor Corp of Torrance, CA, USA says that its high-power GaNSafe family is now available in a TOLT (Transistor Outline Leaded Top-side cooling) package...

PhotonDelta launches engineering contest to drive photonic chip applications

Semiconductor today - Wed, 10/09/2024 - 20:05
Photonic chips industry accelerator PhotonDelta of Eindhoven, the Netherlands (which connects and collaborates with an ecosystem of photonic chip technology organizations worldwide) is launching a global engineering contest in collaboration with engineering community and knowledge platform Wevolver to stimulate the creation of new applications for photonic chips that tackle global challenges...

Latest issue of Semiconductor Today now available

Semiconductor today - Wed, 10/09/2024 - 19:08
For coverage of all the key business and technology developments in compound semiconductors and advanced silicon materials and devices over the last month, subscribe to Semiconductor Today magazine...

Implementing enhanced wear-leveling on standalone EEPROM

EDN Network - Wed, 10/09/2024 - 16:43
Introduction/Problem

A longer useful life and improved reliability are becoming more desirable traits in products. Consumers expect higher-quality, more reliable electronics, appliances, and other devices on a tighter budget. Many of these applications include embedded electronics containing on-board memory such as Flash or EEPROM. As system designers know, Flash and EEPROM do not have unlimited erase/write endurance, yet these memories are necessary for storing data during operation and when the system is powered off. It has therefore become common to use wear-reduction techniques, which can greatly increase embedded memory longevity. One common method of wear reduction is called wear-leveling.

Wear-leveling

When using EEPROM in a design, it’s crucial to consider its endurance, typically rated at 100,000 cycles for MCU-embedded EEPROM and 1 million cycles for standalone EEPROM at room temperature. Designers must account for this by estimating the number of erase/write cycles over the typical lifetime of the application (sometimes called the mission profile) to determine what size of an EEPROM they need and how to allocate data within the memory.

For instance, in a commercial water metering system with four sensors for different areas of a building, each sensor generates a data packet per usage session, recording water volume, session duration, and timestamps. The data packets stored in the EEPROM are appended with updated data each time a new session occurs until the packet becomes full. Data is stored in the EEPROM until a central server requests a data pull. The system is designed to pull data frequently enough to avoid overwriting existing data within each packet. Assuming a 10-year application lifespan and an average of 400 daily packets per sensor, the total cycles per sensor will reach 1.46 million, surpassing the typical EEPROM endurance rating. To address this, you can create a software routine to spread wear out across the additional blocks (assuming you have excess space). This is called wear-leveling.
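The endurance arithmetic above is easy to verify in a few lines (a quick check of the example's numbers, not part of any product code):

```python
# Sanity check of the water-metering example's endurance math.
PACKETS_PER_DAY = 400          # average packets per sensor per day (from the example)
LIFESPAN_YEARS = 10
EEPROM_ENDURANCE = 1_000_000   # typical standalone EEPROM rating at room temperature

total_cycles = PACKETS_PER_DAY * 365 * LIFESPAN_YEARS
print(total_cycles)                      # 1460000 cycles per sensor
print(total_cycles > EEPROM_ENDURANCE)   # True: one block per sensor wears out early
```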

So, how is this implemented?

To implement wear-leveling for this application, you can purchase an EEPROM twice as large, allowing you to now allocate 2 blocks for each sensor (for a total of 2 million available cycles per sensor). This provides a buffer of additional cycles if needed (an extra 540 thousand cycles for each sensor in this example).

You will then need some way to know where to write new data to spread the wear. While you could write each block to its 1-million-cycle limit before proceeding to the next, this approach may lead to premature wear if some sensors generate more data than others. If you spread the wear evenly across the EEPROM, the overall application will last longer. Figure 1 illustrates the example explained above, with four water meters sending data packets (in purple) back to the MCU across the communication bus. The data is stored in blocks within the EEPROM. Each block has a counter in the top left indicating the number of erase/write cycles it has experienced.

Figure 1 A commercial water metering system storing data packets in an EEPROM with twice the required space. Source: Microchip Technology

There are two major types of wear-leveling: dynamic and static. Dynamic is more basic and is best for spreading wear over a small space in the EEPROM. It will spread wear over the memory blocks whose data changes most often. It is easier to implement and requires less overhead but can result in uneven wear, which may be problematic as illustrated in Figure 2.

Figure 2 Dynamic wear-leveling will spread wear over the memory blocks whose data changes most often leading to a failure to spread wear evenly. Source: Microchip Technology

Static wear-leveling spreads wear over the entire EEPROM, extending the life of the entire device. It is recommended if the application can use the entire memory as storage (e.g., if you do not need some of the space to store vital, unchanging data) and will produce the highest endurance for the life of the application. However, it is more complex to implement and requires more CPU overhead.

Wear-leveling requires monitoring each memory block’s erase/write cycles and its allocation status, which can itself cause wear in non-volatile memory (NVM). There are many clever ways to handle this, but to keep things simple, let’s assume you store this information in your MCU’s RAM, which does not wear out. RAM loses data on power loss, so you will need to design a circuit around your MCU to detect the beginnings of power loss so that you will have time to transfer current register states to NVM.

The software approach to wear-leveling

In a software approach to wear-leveling, the general idea is to create an algorithm that directs the next write to the block with the fewest writes to spread the wear. In static wear-leveling, each write stores data in the least-used location that is not currently allocated for anything else. It will also swap data to a new, unused location if the gap in cycle counts between the most-used and least-used blocks grows too large. The number of cycles each block has been through is tracked with a counter, and when the counter reaches the maximum endurance rating, that block is assumed to have reached its expected lifetime and is retired.
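A minimal host-side sketch of this policy follows. The block count, endurance limit, and swap threshold are illustrative values; a real implementation would persist the counters and actually relocate data between blocks.

```python
# Host-side simulation of the static wear-leveling policy described above.
ENDURANCE_LIMIT = 1_000_000   # datasheet erase/write rating per block
SWAP_THRESHOLD = 1_000        # max allowed spread before static data is moved

class WearLeveler:
    def __init__(self, num_blocks):
        self.cycles = [0] * num_blocks        # erase/write counter per block
        self.retired = [False] * num_blocks   # blocks past their rated endurance
        self.allocated = [False] * num_blocks # blocks holding data that must stay put

    def next_block(self):
        """Return the least-used block that is free and not retired."""
        free = [i for i in range(len(self.cycles))
                if not self.retired[i] and not self.allocated[i]]
        if not free:
            raise RuntimeError("EEPROM exhausted: all blocks retired or in use")
        return min(free, key=lambda i: self.cycles[i])

    def write(self):
        """Record a write to the chosen block; retire it at the endurance limit."""
        blk = self.next_block()
        self.cycles[blk] += 1
        if self.cycles[blk] >= ENDURANCE_LIMIT:
            self.retired[blk] = True          # counter-based retirement
        return blk

    def needs_swap(self):
        """True when static data should be relocated to even out wear."""
        live = [c for c, r in zip(self.cycles, self.retired) if not r]
        return max(live) - min(live) > SWAP_THRESHOLD
```

In a real system the counters and allocation map would live in the MCU's RAM (as the article assumes) and be flushed to NVM when the power-loss circuit trips.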

Wear-leveling is an effective method for reducing wear and improving reliability. As seen in Figure 3, it allows the entire EEPROM to reach its maximum specified endurance rating as per the datasheet. Even so, there are a few possibilities for improvement. The erase/write count of each block does not represent the actual physical health of the memory but rather a rough indicator of the remaining life of that block. This means the application will not detect failures that occur before the count reaches its maximum allowable value. The application also cannot make use of 100% of the true life of each memory block.

Figure 3 Wear-leveling extending the life of EEPROM in application, including blocks of memory that have been retired (Red ‘X’s). Source: Microchip Technology

Because there is no way to detect physical wear-out, the software needs additional checks if high reliability is required. One method is to read back the block you just wrote and compare it to the original data. This costs time on the bus, CPU overhead, and additional RAM. To catch early-life failures, this readback must occur on every write, at least for some period after the application enters service. To catch cell wear-out failures, readbacks must occur on every write once the write count begins to approach the endurance specification. Whenever a readback is skipped, wear-out goes undetected and corrupted data may be used. The following software flowchart illustrates an example of static wear-leveling, including the readback and comparison necessary to ensure high reliability.

Figure 4 Software flowchart illustrating static wear-leveling, including readbacks and comparisons of memory to ensure high reliability. Source: Microchip Technology
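The readback-and-compare step at the heart of the flowchart can be sketched as follows. The `eeprom_write`/`eeprom_read` helpers are hypothetical stand-ins for real I2C/SPI transactions, with the memory simulated by a dictionary:

```python
# Sketch of the high-reliability write path: write, read back, compare.
eeprom = {}   # simulated memory: block index -> stored bytes

def eeprom_write(block, data):
    eeprom[block] = bytes(data)       # stand-in for the real bus write

def eeprom_read(block):
    return eeprom.get(block, b"")     # stand-in for the real bus read

def verified_write(block, data, retries=2):
    """Write a block and confirm it by readback; retry, then report failure."""
    for _ in range(retries + 1):
        eeprom_write(block, data)
        if eeprom_read(block) == bytes(data):
            return True    # data landed intact
    return False           # persistent mismatch: caller should retire the block
```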

The need to read back and compare the memory after each write can create severe limitations in performance and use of system resources. Some solutions to this exist on the market. For example, some EEPROMs include error correction, which can typically correct a single-bit error within every specified group of bytes (e.g., 4 bytes). Different error correction schemes are used in embedded memory, the most common being Hamming codes. Error correction works by storing additional bits, called parity bits, which are calculated from the data stored in the memory. When data is read back, the internal circuit recalculates the parity bits and compares them to the stored parity bits. A discrepancy indicates that an error has occurred, and the pattern of the discrepancy pinpoints the exact location of the error. The circuit can then automatically correct the single-bit error by flipping its value, restoring the integrity of the data. This helps extend the life of a memory block. However, many EEPROMs give no indication that this correction took place, so it still doesn't solve the problem of detecting a failure before the data is lost.
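To illustrate the Hamming scheme just described, here is a minimal (7,4) single-error-correcting code in Python. Production EEPROM ECC protects wider words (e.g., one correctable bit per 4 bytes), but the parity/syndrome mechanics are the same:

```python
# Minimal Hamming(7,4): 4 data bits protected by 3 parity bits.
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4            # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(codeword):
    """Recompute parity; a nonzero syndrome is the 1-based index of the bad bit."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:                 # single-bit error detected: flip it back
        c[syndrome - 1] ^= 1
    return c, syndrome
```

Flipping any single bit of an encoded word yields a nonzero syndrome that both flags and locates the error; a device-level ECS bit exposes exactly this "correction occurred" event to the host.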

A data-driven solution to wear-leveling software

To detect true physical wear out, certain EEPROMs include a bit flag which can be read when a single-bit error in a block has been detected and corrected. This allows you to readback and check a single status register to see if ECC was invoked during the last operation. This reduces the need for readbacks of entire memory blocks to double-check results (Figure 5). When an error is determined to have occurred within the block, you can assume the block is degraded and can no longer be used, and then retire it. Because of this, you can rely on data-based feedback to know when the memory is actually worn out instead of relying on a blind counter. This essentially eliminates the need for estimating the expected lifetime of memory in your designs. This is great for systems which see vast shifts in their environments over the lifetime of the end application, like dramatic temperature and voltage variations which are common in the manufacturing, automotive and utilities industries. You can now extend the life of the memory cells all the way to true failure, potentially allowing you to use the device even longer than the datasheet endurance specification.

Figure 5 Wear-leveling with an EEPROM with ECC and status bit enables maximization of memory lifespan by running cells to failure, potentially increasing lifespan beyond datasheet endurance specification. Source: Microchip Technology

Microchip Technology, a semiconductor manufacturer with over 30 years of experience producing EEPROM, now offers multiple devices that provide a flag to tell the user when error correction has occurred, in turn alerting the application that a particular block of memory must be retired.

  • I2C EEPROMs: 24CSM01 (1 Mbit), 24CS512 (512 Kbit), 24CS256 (256 Kbit)
  • SPI EEPROMs: 25CSM04 (4 Mbit), 25CS640 (64 Kbit)

This is a data-driven approach to wear-leveling which can extend the life of the memory further than standard wear-leveling can. It is also more reliable than classic wear-leveling because it uses actual data instead of arbitrary counts: if one block lasts longer than another, you can continue using that block until its cells wear out. This reduces time on the bus, CPU overhead, and required RAM, which in turn can lower power consumption and improve overall system performance. As shown in Figure 6, the software flow can be updated to accommodate this new status indicator.

Figure 6 Software flowchart illustrating a simplified static wear-leveling routine using an error correction status indicator. Source: Microchip Technology

As illustrated in the flowchart, using an error correction status (ECS) bit eliminates the need to read back data, store it in RAM, and perform a complete comparison against the data just written, freeing up resources and creating a conceptually simpler software flow. A data readback is still required (the status bit is only evaluated on reads), but the data itself can be discarded before reading the status bit, eliminating the need for additional RAM and CPU comparison overhead. The number of times the software checks the status bit varies with the size of the defined blocks, which in turn depends on the smallest file size the software handles.
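The simplified flow can be sketched as below. The `read_ecs_bit` helper is a hypothetical stand-in for reading the device's error correction status register, not an actual vendor API:

```python
# Simplified static wear-leveling write using an ECS status bit in place of a
# full readback-and-compare. All device-access functions are illustrative stubs.
ecs_flags = {}          # simulated device state: block -> True if ECC fired
retired_blocks = set()

def dev_write(block, data):
    pass                # stand-in for the real bus write transaction

def dev_read(block):
    return b""          # data is discarded; the read only refreshes the ECS bit

def read_ecs_bit(block):
    return ecs_flags.get(block, False)   # hypothetical status-register read

def ecs_write(block, data):
    """Write, dummy-read to evaluate ECC, then check a single status bit."""
    dev_write(block, data)
    dev_read(block)                  # the ECS bit is only evaluated on reads
    if read_ecs_bit(block):          # a bit was corrected: the block is degrading
        retired_blocks.add(block)
        return False                 # caller should rewrite to a fresh block
    return True
```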

The following are some advantages of the ECS bit:

  • Maximize EEPROM block lifespan by running cells to failure
  • Option to remove full block reads to check for data corruption, freeing up time on the communication bus
  • If wear-leveling is not necessary or too burdensome to the application, the ECS bit serves as a quick check of memory health, facilitating the extension of EEPROM block lifespan and helping to avoid tracking erase/write cycles

Reliability improvements with an ECS bit

Error correction implemented with a status indicator is a powerful tool for enhancing reliability and extending device life, especially when used in a wear-leveling scheme. Any improvements in reliability are highly desired in automotive, medical, and other functional safety type applications, and are welcomed by any designer seeking to create the best possible system for their application.

Eric Moser is a senior product marketing engineer for Microchip Technology Inc. and is responsible for guiding the business strategy and marketing of multiple EEPROM and Real Time Clock product lines. Moser has 8 years of experience at Microchip, spending five years as a test engineer in the 8-bit microcontroller group. Before Microchip, Moser worked as an embedded systems engineer in various roles involving automated testbed development, electronic/mechanical prognostics, and unmanned aerial systems. Moser holds a bachelor’s degree in systems engineering from the University of Arizona.

Related Content


The post Implementing enhanced wear-leveling on standalone EEPROM appeared first on EDN.

OMNIVISION Partners with Philips on Industry’s First In-Cabin Driver Health Monitoring Automotive Solution

ELE Times - Wed, 10/09/2024 - 14:57

First-ever demo of connected in-cabin vital signs monitoring will debut at AutoSens Europe, featuring OMNIVISION’s state-of-the-art CMOS image sensor and Philips’ vital signs camera for automotive software

OMNIVISION, a leading global developer of semiconductor solutions, including advanced digital imaging, analog and touch & display technology, and Philips, a global technology company focused on improving people’s health and well-being through meaningful innovation in healthcare and consumer lifestyle, today announced they will jointly demonstrate a prototype of the world’s first in-cabin connected well-being monitoring solution at AutoSens Europe, taking place October 8-10, 2024, at Palau de Congressos, Barcelona, Spain.
The in-cabin health and well-being system monitors vital signs such as pulse and breathing rate. The data may enable customization of comfort settings while driving, such as intelligently adapting media, climate, lighting, seating, engine modes, scent and more. It will also help enable timed delivery of vehicle notifications or make adaptive route and break suggestions.
“Automotive OEMs are continuously looking to add value and differentiate their brands by adding novel features that increase the comfort level in cars,” said Ritesh Agarwal, senior automotive marketing manager, OMNIVISION. “As a leading supplier of image sensors for the automotive market, we have partnered with Philips, a renowned health and well-being technology software provider, to develop a vital signs monitoring solution particularly tailored to the automotive industry, which has the potential to be connected to the comfort and safety settings of the car. This in-cabin solution will bring added value to automotive consumers and shorten time to market for tier-one automotive OEMs.”
“By collaborating with OMNIVISION, we have demonstrated that camera sensors already available in the automotive industry are capable of accurately measuring vital signs such as pulse rate and breathing rate,” said Laurens Pronk, business development manager EMEA, Philips. “Philips has over 20 years of experience in developing and clinically validating patented vital signs monitoring algorithms for various sensor technologies. We have partnered with OMNIVISION, an industry leader in in-cabin automotive image sensors, to leverage our joint capabilities and demonstrate this state-of-the-art technology during AutoSens Europe.”
The demonstrated prototype combines Philips’ vital signs camera for automotive software with OMNIVISION’s state-of-the-art OX05B1S CMOS image sensor, a 5-megapixel (MP) RGB-IR backside illuminated (BSI) global shutter sensor for in-cabin monitoring systems. The image sensor features Nyxel technology, which uses novel silicon semiconductor architectures and processes to achieve industry-leading quantum efficiency at the 940nm near-infrared (NIR) wavelength. This enables the OX05B1S to detect and recognize objects that other image sensors would miss under extremely low lighting conditions, providing higher-performance in-cabin camera capabilities. An advanced artificial intelligence (AI)-enabled OAX4600 image signal processor seamlessly processes the data from the image sensor for the med-tech system.

The post OMNIVISION Partners with Philips on Industry’s First In-Cabin Driver Health Monitoring Automotive Solution appeared first on ELE Times.

Lucid Motors Selects Everspin’s PERSYST MRAM for Gravity Electric SUV

ELE Times - Wed, 10/09/2024 - 14:39

Lucid has integrated Everspin’s MRAM Across Multiple High-Performance EV Models, Strengthening Data Reliability and System Performance

Everspin Technologies, Inc., the world’s leading developer and manufacturer of Magnetoresistive Random Access Memory (MRAM) persistent memory solutions, today announced that Lucid Motors will use its PERSYST MRAM in its recently released Gravity SUV. The MR25H256A, a 256Kb serial MRAM, was selected because it meets the AEC-Q100 Grade 1 specification of -40°C to +125°C operating temperature. The MRAM is used in the Gravity to handle data logging and parameter storage to assist in the efficient operation of the all-electric powertrain.
“The selection of our PERSYST product family for both the Lucid Gravity SUV and the Lucid Air is a testament to the reliability and performance our MRAM products provide in demanding environments,” said Sanjeev Aggarwal, President and CEO of Everspin Technologies. “As the automotive industry evolves, the ability to ensure data integrity and system resilience is critical, and our memory solutions continue to play an essential role in meeting those challenges.”
Everspin’s PERSYST family of MRAMs represents the highest-performing persistent memory in the industry. With virtually infinite write endurance, customers can adopt MRAM and be confident that the product performance will not degrade over the life of the system. Combined with very fast write speed and low read latency, PERSYST MRAM can handle the most stringent memory workloads and protect vital user data, even in the event of power loss or interruption.

The post Lucid Motors Selects Everspin’s PERSYST MRAM for Gravity Electric SUV appeared first on ELE Times.

STMicroelectronics Announces Timing for Third Quarter 2024 Earnings Release and Conference Call and Capital Markets Day Webcast

ELE Times - Wed, 10/09/2024 - 14:29

STMicroelectronics, a global semiconductor leader serving customers across the spectrum of electronics applications, announced that it will release third quarter 2024 earnings before the opening of trading on the European Stock Exchanges on October 31, 2024.

STMicroelectronics will conduct a conference call with analysts, investors and reporters to discuss its third quarter 2024 financial results and current business outlook on October 31, 2024 at 9:30 a.m. Central European Time (CET) / 3:30 a.m. U.S. Eastern Time (ET).

A live webcast (listen-only mode) of the conference call will be accessible at ST’s website, https://investors.st.com, and will be available for replay until November 15, 2024.

The Company will webcast live its 2024 Capital Markets Day meeting from Paris, France, on Wednesday, November 20, from 9:00 a.m. to 1:15 p.m. Central European Time (CET) / 3:00 a.m. to 7:15 a.m. U.S. Eastern Time (ET).

The post STMicroelectronics Announces Timing for Third Quarter 2024 Earnings Release and Conference Call and Capital Markets Day Webcast appeared first on ELE Times.

Optimizing Storage Controller Chips to Meet Edge AI Demands

ELE Times - Wed, 10/09/2024 - 13:40

As AI technologies advance, they are placing unprecedented demands on personal computing devices and smartphones. These edge devices, which are becoming increasingly untethered from cloud data centers, must handle substantial computing loads, driven by AI models that often contain billions of parameters. With AI integration predicted to skyrocket, storage controller chips are facing growing pressure to deliver optimized performance to keep pace with these evolving workloads.

According to industry forecasts, by 2025 nearly half of all new personal computers will run AI models, including generative AI, locally. This shift is transforming edge computing, enabling devices like PCs and smartphones to process AI tasks without relying on cloud infrastructure. However, this advancement brings with it significant challenges for hardware, particularly in terms of memory, interconnect, and storage.

Key Challenges for Storage in AI-Driven Systems

Storage systems in edge devices must excel in four critical areas to effectively support AI workloads: capacity, power efficiency, data efficiency, and security.

  1. Capacity:

The massive datasets required by generative AI models demand extensive storage capacity. Applications such as image generation tools or AI-driven content creation software may require gigabytes, if not terabytes, of storage. For example, Microsoft’s Phi-3 language model, despite being compact, has 3.8 billion parameters and requires between 7 and 15 gigabytes of storage. As multiple AI applications coexist on a single device, storage needs will quickly surpass a terabyte.
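The quoted storage range follows directly from the parameter count and numeric precision; a quick check (the fp16/fp32 precisions are standard assumptions, not stated in the source):

```python
# Approximate on-disk size of a 3.8-billion-parameter model at common precisions.
params = 3.8e9
gb_fp16 = params * 2 / 1e9   # 2 bytes per 16-bit weight
gb_fp32 = params * 4 / 1e9   # 4 bytes per 32-bit weight
print(round(gb_fp16, 1), round(gb_fp32, 1))   # 7.6 15.2, matching the 7-15 GB range
```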

  2. Power Efficiency:

While often overlooked, power efficiency is critical for edge devices, particularly mobile platforms where battery life is a priority. Storage components contribute significantly to power consumption, accounting for about 10% of a laptop’s power usage and roughly 5% in smartphones. As AI models and workloads expand, power-efficient storage solutions are essential to maintain extended operating hours without compromising performance.

  3. Data Efficiency:

Efficient use of storage space not only improves performance but also impacts access latency and the longevity of NAND flash storage. Storage controllers must manage how data is placed and retrieved from NAND flash to minimize latency and optimize flash endurance. Techniques like zoned namespaces (ZNS) and flexible data placement (FDP) can help ensure that data is stored in a way that optimizes both power and data efficiency, which is crucial for AI applications.

  4. Security:

As AI models often represent years of research and development, their parameter files are highly valuable and must be protected. Developers require robust security protocols to safeguard these files from tampering or theft. Additionally, with more data processing occurring locally rather than in the cloud, users are increasingly storing sensitive personal information on their devices, further heightening the need for secure storage systems.

Designing Storage Controllers for AI at the Edge

To meet these evolving demands, storage controllers must be specifically designed to handle the unique requirements of AI workloads on edge devices. A new generation of storage controllers is now available, optimized for AI-ready PCs and smartphones, each offering performance and efficiency enhancements tailored to their respective platforms.

Case Study: AI-Ready PCs

For AI-enabled personal computers, raw storage performance and capacity are critical to support large AI models and multitasking environments. One example is Silicon Motion’s SM2508 controller, designed for high-performance AI workloads in PCs. The SM2508 controller features four PCIe Gen5 lanes for data transfer to the host and eight NAND channels, enabling sequential read speeds of up to 14.5 Gbytes per second. This high throughput ensures smooth operation even with complex, multi-tasking AI applications.

In addition to speed, the SM2508 can manage up to 8 terabytes of NAND flash, providing ample capacity for AI workloads that rely on vast amounts of data. To support this, system designers are leveraging the latest quad-level-cell (QLC) 3D NAND flash, which allows for dense storage. However, QLC chips are prone to unique error patterns as they age, requiring advanced error-correction algorithms to maintain reliability. Silicon Motion has developed a machine-learning-based error correction code (ECC) that adapts to these patterns over time, reducing latency and extending the lifespan of the storage system.

Power Efficiency and Data Management

Power efficiency is also a significant concern in AI-ready PCs, especially given the intense computational loads AI models impose. The SM2508 controller is manufactured using TSMC’s 6 nm process, which allows for more efficient power management compared to previous generations built on 12 nm technology. By organizing the functional blocks within the chip and incorporating sophisticated power management features, Silicon Motion has managed to reduce power consumption by half.

Data management plays a crucial role in both power efficiency and overall performance. By optimizing how data is placed and managed within NAND flash, the SM2508 controller can reduce power usage by up to 70% compared to competing solutions. These enhancements ensure that AI workloads can run efficiently without draining battery life or reducing system performance.

Security for AI-Driven PCs

Security is another essential pillar for AI-based systems. The SM2508 controller features a tamper-resistant design and uses a secure boot process to authenticate firmware, ensuring that the system remains protected from unauthorized access. The controller also complies with Opal full-disk encryption standards and supports AES 128/256 and SHA 256/384 encryption, securing data without compromising performance.

Case Study: AI-Enabled Smartphones

While the requirements for AI smartphones are similar to those of AI PCs—capacity, power efficiency, data efficiency, and security—mobile devices face additional constraints in size, weight, and battery life. For this market, Silicon Motion has developed the SM2756 controller, optimized for the mobile-oriented Universal Flash Storage (UFS) 4 specification.

UFS 4 offers significant performance improvements over UFS 3.1, and the SM2756 controller takes full advantage of these enhancements. With a 2-lane HS-Gear-5 interface and MPHY 5.0 technology, the controller achieves sequential read speeds of up to 4.3 Gbytes per second, allowing smartphones to load multi-billion-parameter AI models in under half a second. This fast-loading capability is crucial for AI applications to provide a seamless user experience.
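A rough back-of-envelope check of the load-time claim (the 4-bit quantization level is an illustrative assumption, not from the source):

```python
# How much model data fits in the claimed half-second load window at UFS 4 speeds.
read_speed_gb_s = 4.3             # sequential read speed cited for the SM2756
load_budget_s = 0.5
budget_gb = read_speed_gb_s * load_budget_s
print(budget_gb)                  # 2.15 GB readable in half a second
model_gb = 3.8e9 * 0.5 / 1e9      # a 3.8B-parameter model at 4-bit weights (~1.9 GB)
print(model_gb <= budget_gb)      # True: a quantized multi-billion-parameter model fits
```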

To meet the capacity requirements of AI smartphones, the SM2756 controller supports tri-level and QLC 3D flash, managing up to 2 terabytes of storage. Power efficiency is another critical aspect, with the SM2756 achieving nearly 60% power savings when loading large AI parameter files compared to UFS 3 controllers.

Like its counterpart for PCs, the SM2756 leverages sophisticated firmware algorithms to optimize data placement and improve performance. Additionally, it includes anti-hacking measures to prevent unauthorized access during boot-up, ensuring data integrity and security on mobile devices.

Conclusion

As AI continues to evolve, pushing more workloads to edge devices like PCs and smartphones, the demands on storage systems will only intensify. Storage controller chips will play a pivotal role in ensuring that devices can handle the performance, capacity, power efficiency, and security requirements necessary to support AI applications. By developing controllers like the SM2508 and SM2756, Silicon Motion is paving the way for a new generation of AI-enabled devices, equipped to meet the challenges of the edge AI revolution.

Citations from Silicon Motion

The post Optimizing Storage Controller Chips to Meet Edge AI Demands appeared first on ELE Times.

The Role of Wide-Bandgap Semiconductors in Powering the Future of Software-Defined Vehicles

ELE Times - Wed, 10/09/2024 - 08:41

The automotive industry is undergoing a profound transformation, shifting from mechanical-driven vehicles to software-defined vehicles (SDVs). This transition is not just about enhancing features but also about creating platforms that can adapt and evolve. SDVs are capable of upgrading their functionalities via over-the-air updates, thanks to the increased reliance on software for managing many critical vehicle systems. A cornerstone of this shift is the incorporation of advanced semiconductor technologies, particularly wide-bandgap (WBG) semiconductors such as silicon carbide (SiC) and gallium nitride (GaN). These materials offer superior performance compared to traditional silicon-based components, making them pivotal in supporting the next generation of electric and autonomous vehicles.

Wide-Bandgap Semiconductors: An Overview

WBG semiconductors, primarily represented by SiC and GaN, are becoming essential in automotive innovation due to their exceptional electrical and thermal properties. What sets these semiconductors apart is their ability to operate at significantly higher voltages, temperatures, and frequencies than conventional silicon-based components. This is possible because of their larger bandgaps—SiC has a bandgap of 3.3 eV and GaN about 3.4 eV, which is much wider than silicon’s 1.1 eV bandgap. The wider bandgap allows these semiconductors to handle higher electric fields, dissipate heat more efficiently, and reduce energy losses, making them ideal for high-performance applications.

In automotive systems, these characteristics translate into several key advantages. WBG semiconductors enable higher electrical efficiency, reduce the size of cooling systems, and increase the reliability of power electronics—all of which are critical as vehicles become more electrified and software-defined. Moreover, these semiconductors’ ability to function in extreme conditions makes them well-suited for next-generation automotive platforms.

Automotive Applications of WBG Semiconductors

The adoption of SiC and GaN in vehicles is revolutionizing various key systems, including power electronics, electric drivetrains, and charging infrastructure. WBG semiconductors are already playing a central role in enhancing the performance, efficiency, and longevity of electric vehicles (EVs).

  1. Power Electronics: WBG semiconductors are increasingly being utilized in inverters, which are essential components in EVs. Inverters transform the direct current (DC) from the battery into alternating current (AC), which is necessary to drive the electric motor. SiC and GaN components enable inverters to operate at higher voltages and temperatures, significantly improving power conversion efficiency. This not only leads to better energy utilization but also extends the range of EVs by reducing energy losses.
  2. Electric Drivetrains: The use of SiC in drivetrain systems allows EVs to handle higher power loads with greater efficiency. SiC components can manage faster switching speeds and higher temperatures, which enhances the overall performance of the electric motor. This means that vehicles can achieve better acceleration, longer driving ranges, and increased battery life—all critical for the next generation of electric vehicles.
  3. Charging Systems: Fast charging has become a major focus area for EVs, and WBG semiconductors are enabling significant advancements in this space. SiC and GaN components allow for faster switching speeds in power electronics, which supports ultra-fast charging stations. These components can handle higher voltages and currents without overheating, allowing vehicles to recharge in a fraction of the time required by traditional charging systems. This is a game-changer for EV owners, as it addresses one of the major pain points—long charging times.

Impact on Vehicle Performance and Efficiency

The integration of WBG semiconductors into EV systems fundamentally improves several key performance metrics, including vehicle efficiency, charging capabilities, and component longevity. These enhancements are critical as automakers strive to make EVs more appealing to mainstream consumers.

  1. Improved Electrical Efficiency: SiC and GaN semiconductors have lower electrical losses compared to traditional silicon components. In power electronics systems, such as inverters, this means that less energy is lost as heat during the conversion of electricity from the battery to the motor. Studies show that SiC inverters can improve efficiency by up to 3%, which translates into more of the battery’s energy being used for propulsion rather than being wasted. This improvement plays a direct role in extending the range of EVs.
  2. Extended EV Range: As WBG semiconductors improve the efficiency of critical systems like inverters and drivetrains, they also directly impact the vehicle’s range. Vehicles using SiC and GaN components can travel longer distances on a single charge, a feature that helps alleviate “range anxiety”—a common concern among potential EV buyers. The increased efficiency means that EVs can compete more effectively with traditional internal combustion engine vehicles in terms of range.
  3. Faster Charging Times: The use of WBG semiconductors in charging systems not only allows for faster charging speeds but also supports the development of higher-powered charging stations. SiC and GaN’s ability to operate at higher voltages and currents without overheating means that EVs can charge to 80% capacity in as little as 20 minutes. This reduction in downtime makes EVs more practical for long-distance travel and enhances their overall convenience.
  4. Longer Component Lifespan: WBG semiconductors are more durable and capable of withstanding extreme temperatures and voltages, which makes them less prone to degradation over time. This resilience leads to longer lifespans for critical components like inverters and chargers, reducing maintenance costs and increasing the overall lifecycle of the vehicle. For manufacturers, this means fewer warranty claims, while for consumers, it means lower repair costs over the vehicle’s lifetime.
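Two of the figures cited above (a ~3% SiC inverter efficiency gain, and charging to 80% in 20 minutes) are easy to sanity-check with back-of-envelope arithmetic. The pack size and consumption below are hypothetical round numbers, not data for any real vehicle.

```python
battery_kwh = 75.0      # usable pack energy (assumed)
wh_per_km = 160.0       # baseline consumption (assumed)

base_range_km = battery_kwh * 1000 / wh_per_km
# A 3% efficiency gain delivers ~3% more of the pack's energy to the
# wheels, i.e. roughly 3% more range, all else being equal.
sic_range_km = base_range_km * 1.03

# Average charging power needed to put back 80% of the pack in 20 minutes.
avg_charge_kw = 0.8 * battery_kwh / (20 / 60)

print(f"Baseline range:          {base_range_km:.0f} km")
print(f"With 3% SiC gain:        {sic_range_km:.0f} km")
print(f"80% in 20 min requires ~{avg_charge_kw:.0f} kW average")
```

The last line illustrates why fast charging drives WBG adoption: sustaining roughly 180 kW into a 75 kWh pack implies high bus voltages (typically 800 V) and currents where silicon switches would run into thermal limits.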
Challenges and Limitations of WBG Semiconductors

Despite their many advantages, the adoption of WBG semiconductors in the automotive industry faces some challenges. One of the most significant is the cost. SiC and GaN materials are considerably more expensive than traditional silicon, and their production involves complex fabrication techniques. As a result, vehicles equipped with WBG components may have higher upfront costs, potentially limiting their market penetration in the short term.

Another challenge is the integration of these advanced materials into existing vehicle architectures. Automotive standards are stringent, and new technologies must undergo rigorous validation to ensure they can perform reliably under diverse and often harsh conditions. The need for extensive testing and validation may slow down the adoption of WBG semiconductors in mass-market vehicles.

The Road Ahead: Future Trends in WBG Technology

Looking forward, ongoing research and development in WBG semiconductor technology aim to overcome these challenges and further enhance their performance. Researchers are exploring ways to improve the efficiency and durability of SiC and GaN components while reducing production costs. Additionally, advancements in material science could lead to the development of new composite materials that combine the best properties of WBG semiconductors with other elements.

As WBG technology matures, it is expected to have a profound impact on vehicle design and functionality. The enhanced power-handling capabilities of SiC and GaN could lead to more compact and efficient vehicle architectures, freeing up space for other innovations. Furthermore, these technologies will play a key role in enabling more advanced software-defined features, such as autonomous driving systems and adaptive performance tuning.

Conclusion

Wide-bandgap semiconductors represent a critical enabler for the future of software-defined vehicles. Their superior electrical and thermal properties position them as indispensable components in next-generation EVs, offering enhanced efficiency, faster charging, and greater durability. However, realizing their full potential will require continued research, collaboration between automakers and semiconductor manufacturers, and innovations that address cost and integration challenges. As these obstacles are overcome, WBG semiconductors will play a transformative role in shaping the future of the automotive industry, driving more sustainable, efficient, and intelligent transportation solutions.

Source: based on an article by Infineon Technologies

The post The Role of Wide-Bandgap Semiconductors in Powering the Future of Software-Defined Vehicles appeared first on ELE Times.

STMicroelectronics showcases Sustainable and Innovative Technologies at electronica India 2024

ELE Times - Wed, 10/09/2024 - 08:23

Driving Innovation in Efficiency, Precision, and AI-Enabled Solutions

STMicroelectronics has introduced a series of cutting-edge innovations, empowering developers in motor control, edge AI, sensor fusion, human presence detection, and ultra-low-power radio solutions. These advancements are set to transform industries like home appliances, industrial automation, robotics, and smart sensing, reinforcing ST’s leadership in embedded technologies.

Rashi Bajpai, Sub-Editor at ELE Times, engaged with ST’s leadership during electronica India 2024 to explore emerging technologies.

  1. Motor Control + Edge AI for Washing Machines by Mohammed Zeya WASE

ST’s all-in-one kit for washing-machine and motor-drive developers combines Motor Control FOC Sensorless technology with NanoEdge AI to significantly boost energy and water efficiency. With cloth-weight measurement accurate to 100 g and double-digit improvements in energy consumption, the integration of ST’s SLLIMM IPM ensures superior motor performance.

Additionally, developers can create advanced user interfaces using the TouchGFX graphical framework, allowing for swift deployment of interactive washing machine designs. This solution marks a major leap forward in the creation of eco-friendly, intelligent home appliances.

  2. ST MEMS Sensor with Orientation Tracking by Hong Shao Chen

ST’s new generation of Inertial Measurement Units (IMUs) with built-in sensor-fusion algorithms enables real-time orientation tracking in robotic and vehicle applications. The sensor fusion processes data from the accelerometer, gyroscope, and (optionally) magnetometer to deliver a quaternion output that tracks an object’s orientation in 3D space.

This feature is available through the STM32 MotionFX API or directly within ST’s IMUs, such as the LSM6DSV family for consumer applications and the ISM330BX for industrial use cases. By embedding these algorithms directly into the sensors, developers can accelerate their innovation in robotics, drones, and other motion-sensitive applications.
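To make the quaternion output concrete, the sketch below shows the gyroscope-integration core of such a fusion engine. This is NOT the STM32 MotionFX API; it is a minimal, generic illustration (function and variable names are invented), and a real 6-axis or 9-axis filter would additionally correct drift using the accelerometer and magnetometer.

```python
import math

def integrate_gyro(q, wx, wy, wz, dt):
    """Advance orientation quaternion q = (w, x, y, z) by body angular
    rates (rad/s) over dt seconds, then renormalize to unit length."""
    qw, qx, qy, qz = q
    # Quaternion kinematics: q_dot = 0.5 * q ⊗ (0, wx, wy, wz)
    dqw = 0.5 * (-qx * wx - qy * wy - qz * wz)
    dqx = 0.5 * ( qw * wx + qy * wz - qz * wy)
    dqy = 0.5 * ( qw * wy - qx * wz + qz * wx)
    dqz = 0.5 * ( qw * wz + qx * wy - qy * wx)
    qw, qx, qy, qz = (qw + dqw * dt, qx + dqx * dt,
                      qy + dqy * dt, qz + dqz * dt)
    n = math.sqrt(qw * qw + qx * qx + qy * qy + qz * qz)
    return (qw / n, qx / n, qy / n, qz / n)

# Rotate at 90°/s about Z for 1 s in 1 ms steps -> ~90° of yaw.
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(1000):
    q = integrate_gyro(q, 0.0, 0.0, math.radians(90), 1e-3)
yaw_deg = math.degrees(2 * math.atan2(q[3], q[0]))
print(f"Yaw after 1 s: {yaw_deg:.1f} deg")
```

Running this at the sensor's output data rate is exactly the kind of fixed, per-sample workload that is cheap enough to embed in the IMU itself, which is the point of the in-sensor fusion described above.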

  3. ST BrightSense – Imaging sensors for computer vision by Vincent Lin

The ST BrightSense portfolio leverages cutting-edge pixel technologies to offer a tiny form factor and ultra-low power consumption. Combining global shutter, 3D stacking, backside illumination (BSI), and capacitive deep trench isolation (CDTI) technologies, ST BrightSense camera sensors provide superior image quality for smart, accurate, and reactive camera-based systems. Their rich set of on-chip features allows faster and more efficient processing to support the next generation of smart devices.

  4. STM32WL33 with ULP Wake Up Radio by Pradyumna Kumar JENA

The STM32WL33 delivers an ultra-low-power Sub-GHz System-on-Chip (SoC) with integrated radio capabilities, optimized for IoT and industrial applications. One of its standout features is its ULP Wake Up Radio, which consumes just 4.2 µA in always-on mode, allowing remote activation of devices with minimal power consumption.

Boasting 20 dBm transmission power and an internal PA, this SoC is built for energy-efficient operation, with a receive current of 5.6 mA and a transmit current of 10 mA at 10 dBm. Available for evaluation via the NUCLEO-WL33CC1 board, this solution opens the door to ultra-low-power IoT devices that can remain on standby without draining energy resources.
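The current figures quoted above translate directly into battery life for a duty-cycled node. The sketch below uses the article's 4.2 µA always-on wake-up listening, 5.6 mA receive, and 10 mA transmit figures; the event duty cycle and battery capacity are hypothetical assumptions for illustration.

```python
I_SLEEP_UA = 4.2    # wake-up radio always-on listening (from the article)
I_RX_MA = 5.6       # receive current (from the article)
I_TX_MA = 10.0      # transmit current at 10 dBm (from the article)

# Assume one hourly event: 50 ms of RX plus 50 ms of TX (hypothetical).
event_ua_s = 0.05 * (I_RX_MA + I_TX_MA) * 1000   # charge per event, µA·s
avg_ua = I_SLEEP_UA + event_ua_s / 3600          # hourly average current

battery_mah = 220.0  # e.g. a CR2032-class coin cell (assumed)
life_years = battery_mah * 1000 / avg_ua / 24 / 365

print(f"Average current: {avg_ua:.2f} µA")
print(f"Estimated life on {battery_mah:.0f} mAh: {life_years:.1f} years")
```

Under these assumptions the always-on wake-up receiver dominates the budget, yet the average stays under 5 µA, which is what makes multi-year coin-cell operation plausible for such nodes.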

Conclusion:

STMicroelectronics continues to push the boundaries of innovation, providing developers with cutting-edge tools for creating smarter, more efficient, and highly integrated products. From washing machines to industrial sensors, these new solutions underscore ST’s commitment to energy efficiency, advanced functionality, and seamless integration in next-generation applications, supporting a sustainable future.

 

The post STMicroelectronics showcases Sustainable and Innovative Technologies at electronica India 2024 appeared first on ELE Times.

QPT wins APC project grant to develop high-frequency GaN inverter demonstrator for automotive

Semiconductor today - Wed, 10/09/2024 - 00:25
Quantum Power Transformation (QPT) Ltd of Cambridge, UK, an independent power electronics company founded in 2019 that develops gallium nitride (GaN)-based electric motor controls, has won a grant for project VERDE to build a high-frequency 400 V/60 kW GaN inverter demonstrator for automotive use, aiming to show that GaN is now superior to silicon carbide (SiC) or silicon...

Keysight unveils 3kV high-voltage wafer test system for power semiconductors

Semiconductor today - Wed, 10/09/2024 - 00:19
Keysight Technologies Inc of Santa Rosa, CA, USA has expanded its semiconductor test portfolio with the 4881HV high-voltage wafer test system, which is said to improve the productivity of power semiconductor manufacturers by enabling parametric tests up to 3 kV, supporting high- and low-voltage testing in a single pass...
