News from the World of Micro- and Nanoelectronics

Weekly discussion, complaint, and rant thread

Reddit:Electronics - Sat, 09/10/2022 - 17:00

Open to anything, including discussions, complaints, and rants.

Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.

Reddit-wide rules do apply.

To see the newest posts, sort the comments by "new" (instead of "best" or "top").

submitted by /u/AutoModerator

Cooking up Next-gen Semiconductors

ELE Times - Fri, 09/09/2022 - 15:10

A household microwave oven modified by a Cornell engineering professor is helping to cook up the next generation of cellphones, computers, and other electronics, after the invention was shown to overcome a major challenge faced by the semiconductor industry.

As microchips continue to shrink, silicon must be doped, or mixed, with higher concentrations of phosphorus to produce the desired current. Semiconductor manufacturers are now approaching a critical limit in which heating the highly doped materials using traditional methods no longer produces consistently functional semiconductors.

The Taiwan Semiconductor Manufacturing Company (TSMC) theorized that microwaves could be used to activate the excess dopants, but just like with household microwave ovens that sometimes heat food unevenly, previous microwave annealers produced “standing waves” that prevented consistent dopant activation.

TSMC partnered with the Cornell professor, James Hwang, who modified a microwave oven to selectively control where the standing waves occur. Such precision allows for the proper activation of the dopants without excessive heating or damage to the silicon crystal.

This discovery could be used to produce semiconductor materials and electronics appearing around the year 2025, said Hwang, who has filed two patents for the prototype.

“A few manufacturers are currently producing semiconductor materials that are 3 nanometers,” Hwang said. “This new microwave approach can potentially enable leading manufacturers such as TSMC and Samsung to scale down to just 2 nanometers.”

The breakthrough could change the geometry of transistors used in microchips. For more than 20 years, transistors have been made to stand up like dorsal fins so that more can be packed on each microchip, but manufacturers have recently begun to experiment with a new architecture in which transistors are stacked horizontally. The excessively doped materials enabled by microwave annealing would be key to the new architecture.

Intelligent Quantum Sensor of Light Waves

ELE Times - Fri, 09/09/2022 - 15:10

Physicists at The University of Texas at Dallas and their collaborators at Yale University have demonstrated an atomically thin, intelligent quantum sensor that can simultaneously detect all the fundamental properties of an incoming light wave.

The device is based on a new concept rooted in quantum geometry and could find use in health care, deep-space exploration, and remote-sensing applications.

“We are excited about this work because typically, when you want to characterize a wave of light, you have to use different instruments to gather information, such as the intensity, wavelength, and polarization state of the light. Those instruments are bulky and can occupy a significant area on an optical table,” said Dr. Fan Zhang, a corresponding author of the study and associate professor of physics in the School of Natural Sciences and Mathematics.

“Now we have a single device—just a tiny and thin chip—that can determine all these properties simultaneously in a very short time,” he said.

The device exploits the unique physical properties of a novel family of two-dimensional materials called moiré metamaterials. Zhang, a theoretical physicist, published a review article on these materials.

2D materials have periodic structures and are atomically thin. If two layers of such a material are overlaid with a small rotational twist, a moiré pattern with an emergent, orders-of-magnitude larger periodicity can form. The resulting moiré metamaterial yields electronic properties that differ significantly from those exhibited by a single layer alone or by two naturally aligned layers.
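
As a rough guide to how a small twist produces a dramatically larger period (a standard geometric approximation, not a figure from this study), the moiré superlattice constant for two identical lattices with lattice constant a rotated by a small angle θ is:

```latex
L_{\mathrm{moire}} \;=\; \frac{a}{2\sin(\theta/2)} \;\approx\; \frac{a}{\theta}
```

For graphene (a ≈ 0.246 nm) twisted by about 1°, this gives a moiré period of roughly 14 nm, around 60 times the atomic lattice constant, consistent with the orders-of-magnitude larger periodicity described above.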

The sensing device that Zhang and his colleagues chose to demonstrate their new idea incorporates two layers of relatively twisted, naturally occurring bilayer graphene, for a total of four atomic layers.

“The moiré metamaterial exhibits what’s called a bulk photovoltaic effect, which is unusual,” said Patrick Cheung, a physics doctoral student at UT Dallas. “Normally, you have to apply a voltage bias to produce any current in a material. But here, there is no bias at all; we simply shine a light on the moiré metamaterial, and the light generates a current via this bulk photovoltaic effect. Both the magnitude and phase of the photovoltage are strongly dependent on the light intensity, wavelength, and polarization state.”

By tuning the moiré metamaterial, the photovoltage generated by a given incoming light wave creates a 2D map that is unique to that wave—like a fingerprint—and from which the wave’s properties might be inferred, although doing so is challenging, Zhang said.

Researchers in Dr. Fengnian Xia’s lab at Yale University, who constructed and tested the device, placed two metal plates, or gates, on top and underneath the moiré metamaterial. The two gates allowed the researchers to tune the quantum geometric properties of the material to encode the infrared light waves’ properties into “fingerprints.”

The team then used a convolutional neural network—an artificial intelligence algorithm that is widely used for image recognition—to decode the fingerprints.

“We start with light for which we know the intensity, wavelength, and polarization, shine it through the device, and tune it in different ways to generate different fingerprints,” Cheung said. “After training the neural network with a data set of about 10,000 examples, the network is able to recognize the patterns associated with these fingerprints. Once it learns enough, it can characterize an unknown light.”
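
The article does not describe the network architecture, but the general approach of regressing a light wave’s intensity, wavelength, and polarization from a 2D photovoltage “fingerprint” can be sketched roughly as follows. This is a minimal illustration only; the 64 x 64 input size, layer widths, and the choice of three regression targets are assumptions, not details from the study.

```python
# Minimal sketch (not the authors' code): a small CNN that maps a 2D
# photovoltage "fingerprint" to the light-wave properties it encodes.
# The 64x64 input size, layer widths, and three regression targets
# (intensity, wavelength, polarization angle) are illustrative assumptions.
import torch
import torch.nn as nn

class FingerprintCNN(nn.Module):
    def __init__(self, n_outputs: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, n_outputs),  # e.g. intensity, wavelength, polarization
        )

    def forward(self, x):
        return self.head(self.features(x))

model = FingerprintCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(fingerprints, targets):
    # fingerprints: (batch, 1, 64, 64) photovoltage maps; targets: (batch, 3)
    optimizer.zero_grad()
    loss = loss_fn(model(fingerprints), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```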

Cheung performed theoretical calculations and analysis using the resources of the Texas Advanced Computing Center, a supercomputer facility on the UT Austin campus.

“Patrick has been good at pencil-and-paper analytical calculations—that is my style—but now he has become an expert in using a supercomputer, which is required for this work,” Zhang said. “On the one hand, our job as researchers is to discover new science. On the other hand, we advisors want to help our students discover what they are best at. I’m very happy that Patrick and I figured out both.”

2D Materials in Piezoelectric Nanogenerators (PENGs)

ELE Times - Fri, 09/09/2022 - 15:08

As technology advances, so does the technology that is powering it. Over the years, a number of energy creation, energy storage, and energy harvesting devices have been developed to provide power to large electronic systems and individual electronic devices in a range of ways. As society moves towards Industry 4.0 and the Internet of Things (IoT), there’s an opportunity to create many types of ultra-small devices that can be used for automation, remote monitoring, and telemedicine applications.

Powering small-scale devices—especially in remote applications—requires unconventional self-powering mechanisms if the devices are to be self-sufficient. In recent years, a number of small-scale energy harvesters, known as nanogenerators, have gathered interest for powering devices in medical, remote monitoring, and IoT applications. Their small size means they add little bulk to the devices they power, yet they can still provide enough electricity for many devices to self-charge from their natural operating environment. In some cases, it might also be possible to use nanogenerators for large-scale harvesting—if many individual devices are integrated into a single harvesting system—although this has not yet been widely researched.

While there are a number of different nanogenerators, they are all used in different operating environments because the generation of an electrical charge is often governed by the stimuli in the surrounding environment. One of the more promising, widely talked about, and widely researched nanogenerators is the piezoelectric nanogenerator—often referred to in shorthand as a PENG.

Using the Piezoelectric Effect

PENGs use the piezoelectric effect to generate an electric charge. The piezoelectric effect is the generation of an electrical charge in a material under an applied stress or load. The effect is reversible, so once the stress is removed, the electrical charge stops. It also works in the opposite direction: an electrical voltage applied to the material deforms its atomic structure and induces a mechanical strain.
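
In standard textbook notation (a general formulation, not specific to the 2D materials discussed later in this article), the direct and converse effects are usually written as a pair of linear constitutive relations:

```latex
\begin{aligned}
D &= d\,T + \varepsilon^{T} E && \text{(direct effect: stress } T \text{ produces electric displacement } D\text{)}\\
S &= s^{E} T + d^{t} E && \text{(converse effect: field } E \text{ produces strain } S\text{)}
\end{aligned}
```

Here d is the matrix of piezoelectric coefficients, s^E is the mechanical compliance at constant electric field, and ε^T is the permittivity at constant stress; the same coefficients govern both directions of the effect.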

In terms of the specific mechanisms, it is the rearrangement of ions at the atomic level—within the solid-state lattice—that generates the piezoelectricity. Most piezoelectric materials are inorganic, and even those that are not have some form of crystal structure. This means that the piezoelectric material generally has a regular, repeating array of well-ordered cations and anions within its atomic lattice, and it is the displacement of the ions within this regularly patterned lattice that generates an electrical charge. The material retains an overall neutral charge; only the localized distribution of charges at the atomic level changes.

So, when a stress or load is applied to the piezoelectric material, the oppositely charged ions move from their original positions within the lattice to points where they lie closer to each other. This rearrangement alters the charge balance within the lattice and induces an electric field, and the effects of the charge imbalance permeate throughout the material. The result is the appearance of a net charge—either positive or negative—on one of the outer faces of the crystal, which creates a voltage across the opposite crystal faces. This piezoelectric charge can be harnessed, but when the stress stimulus is removed, the crystal lattice returns to its natural state and the voltage ceases.

In certain scenarios—such as the movement of a limb in wearable electronics, the movement of internal organs in implantable electronics, or the movement of the local surrounding environment in remote sensing/monitoring applications, to name a few—movements can create stresses across the piezoelectric material at the atomic scale that can then be harnessed.

In many cases where a PENG is used, the electrical charge generated from the induced stress is harnessed to power a small device to which the PENG is attached. However, in certain situations—mainly sensing—the nanogenerator can act as both the power source and the sensing element, because the generated electrical charge provides a usable and detectable output for the sensor in load-bearing/stress-sensing situations.

Why 2D Materials Are Showing Promise for PENG Energy Harvesters

2D materials show promise for PENG energy harvesters for a number of reasons. First, the inherent thinness and small size of 2D materials enable ultra-small harvesting devices that are small enough to power the very small nodes in IoT systems, power very small sensors in remote monitoring applications, and charge small-scale implantable or wearable medical devices. By contrast, bulkier materials would create harvesting/power systems that are too large and unfeasible for these types of applications. This is why nanomaterials are often touted for wearable/implantable electronics, IoT, and remote sensing applications.

Another aspect is the mechanical strength and flexibility of many 2D materials. Because the piezoelectric effect is induced by mechanical deformation, the materials generating the electric charge need to be robust enough to withstand many bending cycles. The inherent thinness of 2D materials gives them a very high degree of flexibility: while graphene has the highest flexibility, inorganic 2D materials are also considerably more flexible than their bulk counterparts and than inorganic materials in general. When this flexibility is coupled with high mechanical strength, 2D materials can withstand a great deal of mechanical stress, leading to PENGs that survive many bending cycles and, in turn, can produce an electrical charge for longer periods than devices built from other materials.

Then, there’s also the ability to exhibit piezoelectric properties in the first place. Traditionally, piezoelectric properties are seen in a range of inorganic materials, including natural and synthetic crystals, synthetic ceramics, group III-V and II-VI semiconductors, and various metal oxide complexes. Many different 2D materials are also known to exhibit piezoelectric properties, some of which are semiconducting. In terms of the materials of interest for PENGs, the current go-to choices are hexagonal boron nitride (h-BN), various semiconducting transition metal dichalcogenides, group III and IV monochalcogenides, and chemically modified graphene—modified so that it becomes semiconducting rather than fully conducting, since pristine graphene has no electronic bandgap.

Factors to Be Aware of with 2D Material PENGs

While the potential for creating PENGs using 2D materials exists, they, like any material, need to be used in the right way. In many cases, piezoelectricity is only seen in single- and few-layer 2D materials; beyond this, the level of piezoelectricity generated is insufficient to power devices. The piezoelectric response arises from strain-induced lattice distortion and the consequent charge polarization in the crystal. The more layers there are, the less flexible the 2D material is, so the lower the induced strain and, therefore, the lower the degree of crystal polarization and the generated electrical charge.

Another interesting phenomenon discovered in some 2D materials is known as the layer-dependence effect. While it does not apply to all 2D materials, it is not only the number of layers that can influence the piezoelectric properties, but also whether that number is odd or even. In some cases, an odd number of layers exhibits piezoelectric properties, but once the number of layers becomes even, the layers counterbalance each other and the material instead shows piezoresistive behavior. Piezoelectric behavior returns when another layer is added, and so on, until the stack becomes too thick to exhibit piezoelectric properties at all.

Nevertheless, provided the 2D materials are used in the correct manner, there are several that can be harnessed, including a few whose bulkier 3D inorganic counterparts show no piezoelectric properties at all. There are now also many ways to create single- and few-layer 2D materials at a commercial level, so these challenges are not as significant as they would have been even a few years ago. There is, therefore, a chance to break away from traditional piezoelectric materials when creating these small-scale nanogenerators.

Conclusion

The piezoelectric effect is a common phenomenon in a range of bulk inorganic materials, but it is also observed in a range of 2D materials. 2D materials that generate a piezoelectric charge can be used in a range of PENGs for powering small-scale devices. There are many benefits to using 2D materials in PENGs, including high flexibility and mechanical strength as well as inherent thinness, and PENGs offer a lot of potential for small-scale energy harvesting in remote applications—be it IoT, monitoring, or medical applications.

Liam Critchley, Mouser Electronics

Keysight World: Innovate to Spotlight Emerging Technologies and Trends

ELE Times - Fri, 09/09/2022 - 13:14

Keysight Technologies, Inc. will showcase emerging technology trends and provide actionable insights for innovators looking to advance their engineering innovation in 5G and 6G, electric and autonomous vehicles, quantum computing and systems, and digital twins and artificial intelligence (AI). This four-day online vision conference will be held in regions around the world throughout October, November and December 2022.

“Every day, technological advancements are reshaping the human experience – from how we live and work, to how we move through the world. There is no doubt that the rapid pace of technology innovation is only going to accelerate and present new challenges and opportunities for all of us,” stated Jeff Harris, vice president of Portfolio and Global Marketing at Keysight Technologies. “At Keysight World: Innovate, industry leaders, experts, and even a ‘mad scientist’ or two will share their expertise and predictions to give technology leaders, engineers and innovators a head start on near-term and future developments in technology innovation.”

Rapidly evolving technologies, including quantum computing, digital twins, artificial intelligence, electric and autonomous vehicles, as well as 5G and 6G, are powering endless imagination and innovation across all industries. Each day of Keysight World: Innovate will feature an industry expert keynote on near-term trends, a moderated panel discussion on key industry challenges, an industry luminary vision keynote and relevant solution demonstrations. Specific sessions include:

The Next Tier of Deployment, Evolving to 6G: Global 5G deployments are accelerating and scaling digital transformation across sectors. This session explores how to capture the full potential of 5G private networks, the use cases driving their development and implementation and how global 5G deployments are helping to move digital transformation beyond manufacturing and shaping research into 6G.

  • Keynote address: Pablo Tomasi, Principal Analyst and Lead Researcher on 5G Private Networks, Omdia Informa
  • Panel moderator: Jessy Cavazos, 5G Solutions Manager, Keysight Technologies
  • Vision keynote: Maurice Conti, Futurist and Founder, Applied Intelligence

Building the Foundation for Quantum: Decades-long hype has centered on quantum systems and how they will revolutionize computing. This session looks at the state of quantum technology today, its near and next-term potential and the kinds of problems it will solve.

  • Keynote address: Shohini Ghose, Quantum Physicist, Wilfrid Laurier University
  • Panel moderator: Pamela Mallette, Head of Americas Marketing at Keysight Technologies
  • Vision keynote: Patrick Moorhead,  Founder, CEO and Chief Analyst, Moor Insights and Strategy

Advancing Development with Digital Twins and Artificial Intelligence: Digital twins and artificial intelligence are transformative technologies promising to dramatically alter the world. This session examines how digital twins are transforming product development and the changes on the horizon from the growing role of AI in design and manufacturing.

  • Keynote address: Michael Grieves, Executive Director, Digital Twin Institute
  • Panel moderator: Cheryl Ajluni, Director of Industry Solutions, Keysight Technologies
  • Vision keynote: Vivienne Ming, Mad Scientist and Co-Founder, Socos Labs

Accelerating the Automotive Revolution: The automotive revolution is reshaping our world, with innovations in both electric vehicles (EVs) and autonomous vehicles (AVs) continuing at a feverish pace. This session examines the challenges to wide-scale adoption that still lie ahead and explores what the next decade will hold as the industry progresses down the path to full vehicle autonomy.

  • Keynote address: Javier Verdura, Global Director of Product Design, Tesla
  • Panel moderator: Janet Ooi, Automotive Solutions Manager at Keysight Technologies
  • Vision keynote: Chris Gerdes, Professor and Co-Director, Center for Automotive Research at Stanford (CARS)

Keysight World Virtual Events

Americas
October 4 – 7, 2022

Europe, Middle East, Africa and India
October 11 – 14, 2022

South Asia Pacific
October 18 – 21, 2022

Japan
October 25 – 28, 2022

Korea
October 25 – 28, 2022

Shanghai
November 29 – December 2, 2022

Taipei
November 29 – December 2, 2022

Two Domains Are Better Than One: Oscilloscopes and Signal Analyzers

ELE Times - Fri, 09/09/2022 - 12:13

The spectrum analyzer has been a staple of the RF engineering community for over sixty years. Its evolution has kept pace with technological innovations such as cellular networks, wireless protocols, and complex modulation techniques. As analyzers adopted new functionalities and capabilities, the traditional terminology and classification of these devices became more diverse, sometimes leading to confusion. On top of this, analysis in the frequency domain has grown beyond traditional spectrum analysis to include additional types of instruments such as RF recorders, vector network analyzers, and – as I’ll highlight in this blog – oscilloscopes. Let’s look at how the spectrum analyzer has evolved into the signal analyzer and how it relates to the modern oscilloscope.

SWEPT-TUNED SPECTRUM ANALYZERS

The swept-tuned spectrum analyzer operates on the principle of adjusting (or tuning) a resolution bandwidth filter through a range of frequencies and measuring the amplitude of the signal at each step. The block diagram seen in Figure 1 illustrates the path from input to display. You may notice it looks very similar to the super-heterodyne technique of an AM radio, except the sweeping is automated instead of manually done with a dial, and the output is a display instead of a speaker. This type of design leverages a traditional analog RF architecture and has a relatively low manufacturing cost. However, due to its nature of “sweeping” through the spectrum, this type of analyzer is only suitable for stable, unchanging signals.

Figure 1: Swept-tuned spectrum analyzers use a super-heterodyne design to capture and display the amplitude of signals at various frequencies within the range of the instrument.

FAST FOURIER TRANSFORMS AND THE JOURNEY TO REAL-TIME

The first attempt to cover a wider spectrum in a single acquisition involved designing multiple filters into the instrument, each covering a unique frequency range. The instrument performed parallel analyses over the range of the filters, removing the variability caused by signal changes over the course of a sweep. While this technique was effective, the cost of installing many parallel filters warranted a more feasible approach.

Over time, many components used in a spectrum analyzer shifted from the analog to the digital realm. With the incorporation of analog-to-digital converters (ADCs) and digital processors, spectrum analyzers could now perform operations on digitized data, providing more freedom and flexibility. What once required multiple acquisitions now required only one: sampling signal data in the time domain in a single acquisition and applying a fast Fourier transform (FFT) to the data. This led the industry to call these instruments “FFT analyzers”. The new method also enabled an additional capability – the ability to capture both magnitude and phase data (vectors), which is essential for analyzing complex, modulated RF signals. This vector analysis capability led some manufacturers to refer to these instruments as “vector signal analyzers,” though when we say “VSA” at Keysight, we’re referring to our Vector Signal Analysis software (more on that later).
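
As a minimal illustration of the FFT step itself (a generic NumPy sketch with assumed signal parameters, not the internals of any particular analyzer), a block of digitized time-domain samples can be converted into magnitude and phase versus frequency like this:

```python
# Sketch of FFT-based spectrum analysis: window a block of digitized samples,
# then compute magnitude and phase versus frequency. Sample rate and test tone
# are assumed values for illustration.
import numpy as np

fs = 1_000_000                              # sample rate in Hz (assumed)
t = np.arange(4096) / fs
x = 0.5 * np.sin(2 * np.pi * 100_000 * t)   # 100 kHz test tone (assumed)

window = np.hanning(len(x))                 # reduce spectral leakage
X = np.fft.rfft(x * window)
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

magnitude_db = 20 * np.log10(np.abs(X) + 1e-12)
phase_rad = np.angle(X)                     # the phase data that enables vector analysis

peak = freqs[np.argmax(magnitude_db)]
print(f"Strongest component near {peak / 1e3:.1f} kHz")
```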

Despite the significant advancements made by incorporating digital FFTs into the spectrum analyzer, it still suffered from “blind time” between acquisitions due to the time it took to post-process the FFTs after collecting the data. With the advent of efficient digital signal processing (DSP) in the 1990s, the FFT analyzer evolved into the “real-time spectrum analyzer,” which got its name due to its rapid, continuous spectrum display updates. Figure 2 illustrates an example architecture incorporating digital processing.

Figure 2: The availability of processors and advanced digital signal processing expanded the capabilities of FFT-based spectrum analyzers.

Today, modern analyzers fall into two categories: devices that only measure magnitude against frequency (spectrum analyzers or basic spectrum analyzers), and devices that can measure magnitude and phase information against frequency (signal analyzers), allowing analysis and demodulation of complex signals. Either type of device may include swept-tuned and FFT-based modes of operation, and if an instrument can show a continuous analysis of the spectrum without any gaps in time, such as Keysight’s PXA signal analyzer in Figure 3, it’s considered to have “real-time spectrum analysis” (RTSA) capability.

Figure 3: Keysight’s N9030B PXA signal analyzer features a 50 GHz frequency range and up to 510 MHz of real-time analysis bandwidth.

IT’S A SCOPE! IT’S AN ANALYZER! IT’S…  BOTH!

Modern oscilloscopes use an architecture that conditions, samples, processes, and displays a signal in the time domain. Typically, scope users are concerned with time-based measurements such as rise time, eye height, and overshoot, or they might want to correlate analog and digital signals. So, why have oscilloscopes traditionally been limited to the time domain?

The data obtained right after the ADC is basically the same whether it’s digitized in an oscilloscope or in a signal analyzer — data samples are in the time domain and await further processing. An oscilloscope traditionally continues to operate in the time domain, while a signal analyzer converts the data to the frequency domain. Because of the similarity in techniques, an opportunity presented itself for us to add functionality to the oscilloscope that converts the data to the frequency domain in much the same manner as a signal analyzer.

Keysight is proud to offer oscilloscopes with spectrum analysis functionalities, but before you throw out your signal analyzer, it is best to understand that the frequency-domain capabilities of an oscilloscope will vary from model to model. When you’re considering a scope for spectral analysis, be sure to evaluate these important characteristics:

  • FFT methodology – how is the FFT being performed? Almost every scope has an FFT function living within its list of “math” functions. However, it generally takes dedicated hardware to process the incoming data fast enough to continuously monitor a spectrum.
  • Frequency range – what range of center frequencies is the oscilloscope capable of analyzing? Some oscilloscopes may feature center frequency ranges beyond the bandwidth of the oscilloscope.
  • Analysis bandwidth – what frequency span is available around the center frequency, and how does that vary with the number of channels being used?
  • Real-time spectrum analysis – does the instrument offer gapless, real-time analysis with a 100% point of intercept (POI), ensuring transient events are captured?
  • RF characteristics – how does the front end perform in terms of sensitivity, amplitude accuracy, phase noise, error vector magnitude (EVM), and spurious-free dynamic range (SFDR)?
  • Triggering – does the instrument support time-domain triggers and/or frequency-domain triggers?
  • Analysis tools – does the instrument have the capability to perform vector and digital demodulation? Can it perform advanced analysis on cellular, wireless, and digital video signals? Can data be exported to external software for further analysis?

Keysight’s MXR-Series real-time oscilloscope (Figure 4, left) includes options for spectrum analysis through digital down-conversion (DDC) and real-time spectral analysis (RTSA). In DDC mode, the scope will capture spectral data in the form of IQ streams, allowing you to save, export, and analyze the data with advanced tools such as Keysight’s 89600 Vector Signal Analysis software (Figure 4, right). In RTSA mode, the data is presented visually, but not saved, to ensure gapless monitoring and 100% probability of intercept – a great way to catch rare, intermittent events on the screen.
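
Conceptually, digital down-conversion mixes the real ADC samples with a complex local oscillator at the tuned center frequency, low-pass filters the product, and decimates it to produce the IQ stream. The sketch below (NumPy/SciPy) illustrates the idea only; the sample rate, center frequency, filter length, and decimation factor are assumptions, not Keysight’s implementation.

```python
# Rough sketch of digital down-conversion (DDC) to baseband IQ data.
# All parameters are illustrative assumptions, not tied to any instrument.
import numpy as np
from scipy.signal import firwin, lfilter

fs = 100e6          # ADC sample rate (assumed)
f_center = 20e6     # tuned center frequency (assumed)
decimation = 10     # output IQ rate = fs / decimation

def ddc(samples: np.ndarray) -> np.ndarray:
    n = np.arange(len(samples))
    lo = np.exp(-2j * np.pi * f_center * n / fs)              # complex local oscillator
    mixed = samples * lo                                      # shift band of interest to 0 Hz
    taps = firwin(128, cutoff=fs / (2 * decimation), fs=fs)   # anti-alias low-pass filter
    filtered = lfilter(taps, 1.0, mixed)
    return filtered[::decimation]                             # decimated complex IQ stream

# Example: a tone 1 MHz above the center frequency appears at +1 MHz in the IQ data.
t = np.arange(100_000) / fs
iq = ddc(np.cos(2 * np.pi * (f_center + 1e6) * t))
```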

Figure 4: Keysight’s MXR Series oscilloscope (left) features spectrum analysis up to a 6 GHz center frequency with both real-time spectrum analysis of up to 320 MHz of bandwidth and IQ data streaming up to 2 GHz of bandwidth. IQ data can be exported to 89600 VSA software (right) for advanced demodulation, vector analysis, and standards-specific measurements.

Rick Clark, PRODUCT MARKETING MANAGER, Keysight Technologies

How to Use High Accuracy Digital Temperature Sensors in Health Monitoring Wearables

ELE Times - Fri, 09/09/2022 - 12:12

Accurate digital temperature measurements are important in a range of applications including wearables, medical monitoring devices, health and fitness trackers, cold chain and environmental monitoring, and industrial computing systems. While widely applied, the implementation of highly accurate digital temperature measurements frequently involves temperature sensor calibration or linearization, as well as higher power consumption, which can be an issue for compact, ultra-low power applications with multiple acquisition modes. The design challenges can quickly mount, causing cost overruns and delayed schedules.

Complicating the matter, some applications involve multiple temperature sensors sharing a single communication bus. In addition, some production test setups need to be calibrated according to the U.S. National Institute of Standards and Technology (NIST), while verification equipment needs to be calibrated by an ISO/IEC-17025 accredited laboratory. Suddenly, what seemed a straightforward function becomes both intimidating and costly.

This article briefly describes the requirements for high-accuracy temperature measurements in mobile and battery-powered health monitoring applications. It then introduces a low-power, high-accuracy digital temperature sensor IC from ams OSRAM that doesn’t require calibration or linearization. It finishes with integration recommendations, an evaluation board, and a Bluetooth-enabled demo kit with a companion app that makes it possible to modify sensor settings and observe the impact on power consumption.

Requirements for high-accuracy temperature monitoring

Accuracy is mandatory in health monitoring applications. As manufactured, digital temperature sensors exhibit part-to-part variations in performance that need to be addressed. As in-house calibration is expensive and using uncalibrated sensors increases the cost of achieving the desired accuracy, designers should consider sensors that are fully calibrated and linearized. It is, however, important to ensure that the sensor maker uses calibration instruments traceable to NIST standards. Using instruments with traceable calibration ensures an unbroken chain back to the basic NIST standards, with the uncertainties at each link in the chain identified and documented so they can be addressed in the device maker’s quality assurance system.

The primary standard for testing and calibration laboratories is ISO/IEC 17025 “General requirements for the competence of testing and calibration laboratories.” ISO/IEC 17025 is based on technical principles focused specifically on calibration and testing laboratories, is used for their accreditation, and provides the basis for developing continuous improvement plans.

Digital temperature sensor with NIST-traceable production testing

To meet the many design and certification requirements, designers can turn to the AS6221 digital temperature sensor from ams OSRAM, which provides accuracy up to ±0.09°C and requires no calibration or linearization. Designed for use in healthcare devices, wearables and other applications that require high-performance thermal information, the AS6221’s production testing is calibrated by an ISO/IEC-17025 accredited laboratory according to NIST standards. The calibrated production testing speeds the process of gaining certification to EN 12470-3, which is required for medical thermometers in the European Union.

The AS6221 is a complete digital temperature sensor in a six-pin, 1.5 x 1.0-millimeter (mm) wafer level chip scale package (WLCSP), ready for system integration. An orderable part number example, the AS6221-AWLT-S, is delivered in lots of 500 pieces on tape & reel. The AS6221’s measurements are delivered through a standard I²C interface, and it supports eight I²C addresses, thereby eliminating concerns about bus conflicts in multi-sensor designs.

High accuracy plus low power

The AS6221 delivers high accuracy with low power consumption over its full supply range from 1.71 to 3.6 volts DC, which is especially important in applications powered by a single battery cell. It includes a sensitive and accurate silicon (Si) bandgap temperature sensor, an analog-to-digital converter, and a digital signal processor with associated registers and control logic. The integrated alert function can trigger an interrupt at a specific temperature threshold, which is programmed by setting a register value.

The AS6221 consumes 6 microamperes (µA) when making four measurements per second, and in standby mode, power consumption is only 0.1 µA. The use of the integrated alarm function to wake up the application processor only when a temperature threshold has been reached can reduce system power consumption even more.
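
For reference, reading the sensor and programming the alert threshold over I²C follows the usual register read/write pattern. The sketch below uses Python with the smbus2 library on a Linux host; the 0x48 bus address, the register offsets (temperature value at 0x00, high-temperature threshold at 0x03) and the 0.0078125°C/LSB scaling reflect the author’s reading of the AS6221 datasheet and should be verified against the current documentation.

```python
# Hedged sketch of reading an AS6221-class temperature sensor over I2C with
# smbus2. Bus address, register offsets and LSB scaling are assumptions taken
# from the datasheet; confirm them before use.
from smbus2 import SMBus

ADDR = 0x48          # assumed I2C address (selected by the ADD0 pin)
REG_TVAL = 0x00      # temperature value register (assumed)
REG_THIGH = 0x03     # high-temperature alert threshold register (assumed)
LSB_C = 0.0078125    # degrees C per LSB of the 16-bit two's-complement value (assumed)

def read_temperature(bus: SMBus) -> float:
    hi, lo = bus.read_i2c_block_data(ADDR, REG_TVAL, 2)
    raw = (hi << 8) | lo
    if raw & 0x8000:                  # sign-extend negative temperatures
        raw -= 1 << 16
    return raw * LSB_C

def set_alert_threshold(bus: SMBus, temp_c: float) -> None:
    raw = int(round(temp_c / LSB_C)) & 0xFFFF
    bus.write_i2c_block_data(ADDR, REG_THIGH, [raw >> 8, raw & 0xFF])

with SMBus(1) as bus:
    set_alert_threshold(bus, 38.0)    # e.g. wake the host processor above 38 degrees C
    print(f"Skin temperature: {read_temperature(bus):.2f} degrees C")
```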

Wearables integration options

In wearable applications, the better the thermal connection between the sensor and the skin, the more accurate the temperature measurement. Designers have several options for optimizing the thermal connection. One way is to put a thermally conductive pin between the skin and the sensor (Figure 1). To achieve reliable results, the pin needs to be isolated from any external sources of thermal energy, such as the device case, and a thermal paste or adhesive should be used between the pin and the AS6221. This approach benefits from using a flexible (flex) printed circuit board (PCB) to carry the AS6221, enabling more freedom in locating the sensor.

Figure 1: A flex PCB and thermal adhesive can be used to provide a low thermal impedance path between the skin and the sensor.

In designs that benefit from having the sensor on the main PCB, the thermal connection can be made using a contact spring or a thermal pad. If the sensor is mounted on the bottom of the PCB, a contact spring can be used to make a thermal connection between the contact pin and thermal vias on the PCB that are connected to the sensor (Figure 2). This approach can result in a cost-effective device that supports longer distances between the sensor and the skin, but it requires careful consideration of the several thermal interfaces to achieve high levels of sensitivity.

Figure 2: When the sensor is mounted on the bottom of a PCB, thermal vias and a contact spring can be used to connect to the contact pin.

A third option is to use a thermal pad to connect the pin to a sensor mounted on the top of the PCB (Figure 3). Compared with using a spring contact or flex PCB, this approach requires a pad with high thermal conductivity and careful mechanical design to ensure minimum thermal impedance between the contact pin and the sensor. This can result in a simpler assembly while still delivering high levels of performance.

Figure 3: A thermal pad can connect a top-mounted sensor to the contact pin. This provides simpler assembly, while still delivering high performance.

Improving thermal response time

In order to obtain fast thermal response times, it’s important to minimize the external influences on the measurement, especially by the portion of PCB directly adjacent to the sensor. Two viable design suggestions are to use cutouts to minimize any copper planes in the vicinity of the sensor on the top of the PCB (Figure 4, top), and to reduce thermal loading from the bottom of the PCB by using a cutout area below the sensor to reduce overall PCB mass (Figure 4, bottom).

Figure 4: Cutouts on the top and bottom of the PCB can minimize the PCB mass around the sensor and improve its response time.

In addition to minimizing PCB effects, other techniques that can help improve measurement speed and performance include:

  • Maximizing the contact area with the skin to increase the heat available to the sensor.
  • Using thin copper traces and minimizing the size of power and ground planes.
  • Using batteries and other components such as displays that are as small as possible to achieve the device performance requirements.
  • Designing the package to thermally isolate the sensor on the PCB from the surrounding components and the outside environment.

Sensing environmental temperature

Additional considerations apply when using multiple temperature sensors, such as in designs that use both skin temperature and the temperature of the surrounding environment. A separate sensor should be used for each measurement. The thermal design of the device should maximize the thermal impedance between the two sensors (Figure 5). A higher intervening thermal impedance provides better isolation between the sensors and ensures that the measurements will not interfere with one another. The device package should be fabricated with materials that have low thermal conductivities, and a thermal isolation barrier should be inserted between the two sensor sections.

Figure 5: For accurate environmental temperature sensing, there should be a high thermal resistance between the skin and environmental temperature sensors.

Eval kit kickstarts AS6221 development

To speed application development and time to market, ams OSRAM offers designers both an eval kit and a demo kit. The AS62xx Eval Kit can be used to quickly set up the AS6221 digital temperature sensor, enabling rapid evaluation of its capabilities. This eval kit connects directly to an external microcontroller (MCU) that can be used to access temperature measurements.

Figure 6: The AS62xx eval kit can be used to set up and evaluate the AS6221.

Demo kit for the AS6221

Once the basic evaluation is completed, designers can turn to the AS6221 demo kit as an application development platform. The demo kit includes an AS6221 temperature button and a CR2032 coin cell battery. The companion app, available from the App Store or Google Play Store, supports connection to up to three sensor buttons at a time (Figure 7). The app communicates with the sensor buttons over Bluetooth, making it possible to modify all of the sensor settings, including the measurement frequency, and observe the impact on power consumption. The app can record measurement sequences, thereby enabling comparisons of the performance of various temperature sensor settings. Designers can also use the demo kit to experiment with the alert mode and learn how it can be used to improve solution performance.

Figure 7: The AS6221 demo kit serves as a temperature sensor application development platform for the AS6221.

Conclusion

Designing high-accuracy digital temperature sensing systems for healthcare, fitness, and other wearables is a complex process with respect to design, test, and certification. To simplify the process, lower cost, and get to market more quickly, designers can use highly integrated, low-power, high-accuracy sensors.

Jeff Shepard, Digi-Key Electronics

Can Green Hydrogen Become the World’s Most Sustainable Source of Energy?

ELE Times - Fri, 09/09/2022 - 12:11

Energy, as the laws of physics state, can be neither created nor destroyed; it can only be transformed. This makes the term “renewable energy” something of a misnomer and it should, perhaps, be renamed inexhaustible energy. That, again, needs some context. Conceptually, the continuous transformation of an inexhaustible source of energy into useful work seems somehow more achievable – and entirely more sustainable – than the idea of using energy that constantly renews itself.

Renewable energy tends to be synonymous with what are, for all practical reasons, inexhaustible sources of energy. These comprise the sun, the wind, and the tides of the seas. As transforming energy requires even more energy, it makes sustainable sense to use a source of energy that is effectively inexhaustible, because it doesn’t really matter if the efficiency of transforming energy into work is low.

In a world that could moderate its energy consumption based on supply, transforming natural energy sources into work would be the most sustainable way of living. But, long ago, human ingenuity successfully harnessed other forms of exhaustible energy. That created a dependence on energy that has only increased over time. The supply of energy transformed from inexhaustible sources cannot currently meet this demand, but things are changing.

The trouble with inexhaustible energy

The reason why energy sourced from solar, wind or wave is still not our only source of energy is complex. It would be fair to say that the technology isn’t mature enough, or that the efficiency is too low. Other observations, like cloud cover and the uncontrollable nature of the wind, are valid, too.

Cost is a major factor and it cannot be ignored. But if we do ignore it for a moment, the problems don’t just go away. The energy infrastructure has developed in relation to the demand. That includes every stage between the turbines (however they are turned) and the socket in your wall. Demand is the pertinent word here because people expect power to be available on demand and in abundance.

This focuses the lens on the real issue with inexhaustible power, which is storing it until it is needed. There are numerous ways of transforming these sources of energy into electricity as a continuous process. We have yet to develop an effective way of storing the raw sources of energy, so any electricity generated must be used or stored.

Battery technology is the obvious solution to this problem. There is a lot of research going into developing batteries that provide higher energy density, lower leakage and faster operation. The challenges associated with this are well understood and include the need for rare earth metals or the use of toxic chemicals. Mining these materials is also becoming a challenge.

In 2021 the U.S. Department of Energy announced its Energy Earthshots initiative which, to date, comprises three programs. These are the Hydrogen Shot (announced on June 7, 2021), Long Duration Shot (announced on July 14, 2021), and Carbon Negative Shot™ (announced on November 5, 2021).

When combined in a fuel cell, hydrogen and oxygen generate electricity and water, which makes the electrolyzer a critical part of the system.

In reverse order, the Carbon Negative Shot encourages new concepts around reducing the cost of capturing and storing carbon, at scale, at a cost of less than $100/net metric ton of CO2-equivalent (CO2e). The Long Duration Shot is aimed at achieving a 90% reduction in the cost of storing clean energy for 10+ hours (using lithium-ion batteries as a baseline), within a decade. This shot directly addresses the challenge outlined above, of storing unused electricity generated from an inexhaustible source of energy.

The first of the three shots, Hydrogen Shot, is focused on reducing the cost of generating hydrogen to $1 per kilogram within a decade. Currently, the DOE says the cost of generating hydrogen from renewables is around $5 per kg. This initiative comes under the Hydrogen and Fuel Cell Technologies Office (HFTO) within the Office of Energy Efficiency and Renewable Energy (EERE). It covers all ways of generating hydrogen, not just electrolysis, using all sources of energy, including renewables and nuclear.

In gas form, hydrogen is relatively easy to store and transport. In recent times, most of the hydrogen generated has been used in industrial applications and not as a fuel. If it is to become a dominant source of energy, production will need to increase. In today’s hydrogen economy, its generation isn’t particularly environmentally friendly, so increasing production using existing methods could compound the problem of greenhouse gas emissions.

Moving dependency onto electricity generated from stored hydrogen will only move the problem to hydrogen creation. The cart that must come before that particular horse is mastering hydrogen generation, storage and transportation.

As well as generating clean electricity (and water) using a fuel cell, hydrogen can also be burned in much the same way other gases are used in turbines. There is also potential for hydrogen to be used as fuel in an internal combustion engine, but this is a less efficient way of using hydrogen, and it produces nitrogen oxides when burned.

The color of hydrogen

Most hydrogen today is extracted from methane, which also releases carbon. This is known as grey hydrogen, but if the carbon is captured and stored it is referred to as blue hydrogen. Hydrogen that uses fossil fuels in the process is generally known as black or brown hydrogen.

If the primary energy source used is nuclear based, the hydrogen is commonly and, apparently, interchangeably referred to as pink, purple or red hydrogen. A relative newcomer, turquoise hydrogen, is made using methane pyrolysis, which creates hydrogen and solid carbon. This has the potential to be a low-emission process if the energy for the heat needed comes from a renewable source. It could also be possible to capture and store the solid carbon more economically.

Naturally occurring hydrogen is sometimes called gold or white hydrogen, but it isn’t believed to occur in large amounts or be easy to find and mine. That leaves hydrogen created through electrolysis using the inexhaustible sources as the primary energy. This includes using only solar power, known as yellow hydrogen, or using any combination of renewable sources producing no greenhouse gases in the process, which is what we now call green hydrogen.

COLOR: MANUFACTURING PROCESS
White: Naturally occurring hydrogen (rare).
Yellow: The electricity used for the electrolyzer comes only from solar power.
Green: The electricity used for the electrolyzer comes from any renewable source, and no carbon is produced during manufacturing.
Pink/Purple/Red: The electricity used for the electrolyzer comes from nuclear power.
Turquoise: The manufacturing process is based on methane pyrolysis, which uses high temperatures rather than high voltages. The source energy can be, but is not limited to, renewable energy.
Grey: The manufacturing process is called steam methane reforming and is the most widely used method of creating hydrogen. It generates greenhouse gases, which are not captured.
Blue: Similar to grey hydrogen, but the carbon created is captured and stored.
Brown/Black: Hydrogen is extracted from fossil fuels using gasification, which involves raising the temperature of the source material without igniting it.

Despite being invisible, hydrogen comes in various colors.

The tide is turning on green hydrogen

There are many examples of companies developing technology to generate or use hydrogen as a clean energy source. There is potential, under the term of green hydrogen, to use renewable energy to power the electrolysis process. The hydrogen could then be stored and combined with oxygen to produce electricity on demand, whenever needed.
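
The chemistry of that round trip is simple to state (these are the textbook overall reactions, independent of the electrolyzer type):

```latex
\underbrace{2\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{H_2} + \mathrm{O_2}}_{\text{electrolysis (energy in)}}
\qquad\qquad
\underbrace{2\,\mathrm{H_2} + \mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{H_2O}}_{\text{fuel cell (energy out)}}
```

In practice, electrolyzers consume very roughly 50 kWh of electricity per kilogram of hydrogen produced (the thermodynamic minimum is about 39 kWh/kg), which is why the cost of the input electricity dominates the cost of green hydrogen and why the Hydrogen Shot target of $1 per kilogram is so ambitious.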

Conceptually, this could result in a self-contained system that harnesses natural resources to generate electricity. The electrolyzer will be one of the many critical parts in the system, and electrolyzers come in various forms, including polymer electrolyte membrane (PEM), alkaline and solid oxide designs.

With an abundant source of water available, using tidal energy to generate hydrogen from sea water makes a lot of sense. The Tidal Hydrogen Production Storage and Offtake project (THyPSO) does just that. Two companies, Tocardo and HydroWing, have worked together to prove the concept. The platform stores the hydrogen it produces using its bidirectional tidal turbines. Up to two weeks’ worth of hydrogen can be stored onboard until it is transferred to an offtake vessel through a pressurized hose.

The THyPSO is a proof of concept for green hydrogen generation in a closed system using tidal power and sea water to generate hydrogen, which is stored on board before being taken off by a support vessel.

Technology for electrolyzers

Many innovators are working on new types of hydrogen electrolyzers, but the process generally needs a high DC current.

If grid power is being used to power the electrolyzer, then the AC supply needs to be rectified to DC. The options here are many, but Infineon recommends one of the simplest, the thyristor. A thyristor solution is lower cost and less complex than using an IGBT module, and it also offers high power density. As a line-commutated device, the same solution can work with any line frequency, but its output will contain harmonics.
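
For context, the idealized average DC output of a line-commutated, six-pulse thyristor bridge (a standard textbook result, not an Infineon specification) shows how the firing angle α provides control over the electrolyzer supply:

```latex
V_{dc} \;=\; \frac{3\sqrt{2}}{\pi}\,V_{LL}\cos\alpha \;\approx\; 1.35\,V_{LL}\cos\alpha
```

Here V_LL is the RMS line-to-line supply voltage; delaying the firing angle reduces the average DC voltage, and hence the current delivered to the electrolyzer stack, but the commutation process is also the source of the harmonics mentioned above.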

An alternative solution would be a diode rectifier coupled to an IGBT circuit for DC/DC conversion. This approach would reduce harmonics in the DC used to power the electrolyzer. It would also provide a more flexible driver, to accommodate different modes of operation or operating conditions. However, for high current rectifiers of 20 MW or above, thyristors are still the suggested approach.

Avnet carries the full range of power management solutions from Infineon, covering low and high power. This includes the OptiMOS™ power MOSFETs offered in a PQFN package using Infineon’s Source-Down technology. This involves flipping the silicon inside the package, so the source is connected to the PCB over the thermal pad rather than the drain. Other low-power solutions include the StrongIRFET and OptiMOS 5 and CoolMOS power MOSFETs, as well as the CoolGaN gallium nitride power MOSFETs.

High-power devices include Infineon’s IGBT EconoDUAL, EasyPIM and EasyPACK modules, supported by the EiceDRIVER family of gate driver ICs. The modules benefit from Infineon’s PressFIT mounting technology developed to offer reliable, solder-less power module mounting.

A hydrogen electrolyzer requires high power DC, which can be generated from an AC source or DC-DC coupled from a renewable energy supply.

Thyristors provide a simple but effective way of rectifying AC to DC in a hydrogen electrolyzer, but they offer little flexibility and harmonics will be present at the output.

A diode rectifier feeding into an IGBT-based DC/DC converter gives greater flexibility but is limited in the amount of power it can provide to a hydrogen electrolyzer.

Conclusion

Green hydrogen holds a lot of promise, but there are many hurdles ahead. The hydrogen ecosystem is not currently geared toward its use as a green fuel, but it could be. It would require investment at every stage, and this is where the real challenges come.

Alongside this, innovators large and small continue to move forward, developing and improving the underlying technology that will be needed as and when the rest of the ecosystem catches up. At the current rate of change, that should happen soon.

Philip Ling, AVNET

PCIM Asia 2022 to be Moved to October with Major Brands and Concurrent Events Confirmed

ELE Times - Fri, 09/09/2022 - 12:08

In light of recent developments concerning the pandemic in Shanghai, PCIM Asia will now be held from 26 – 28 October 2022 in Hall E1 at the Shanghai New International Expo Centre. The show will continue to provide a platform for exchanging ideas, showcasing technologies and expanding business networks. This will be achieved through gathering the latest power electronics suppliers at the fairground and by exploring the latest trends through the fair’s concurrent programme.

On behalf of the organisers, Mr Louis Leung, Deputy General Manager of Guangzhou Guangya Messe Frankfurt Co Ltd expressed: “We are pleased to have confirmed suitable dates for the rescheduling of the fair, which meet the needs of our stakeholders. Meanwhile, we are in contact with our local and international exhibitors and supporters to provide them with the necessary support to facilitate their participation. We are also prepared to put the necessary measures in place to ensure the wellbeing of all fairgoers during the course of the October fair.”

PCIM Asia exhibitors to showcase the latest power electronics solutions from across different sectors

PCIM Asia is a specialised event for power electronics, intelligent motion, renewable energy and energy management. The 2022 edition has gathered some of the biggest names in the industry, showcasing the most innovative products and solutions in the field of power electronics. These include CRRC, Fuji Electric, GaNext, Infineon, Innoscience, MacMic, Mitsubishi Electric, onsemi, Power Integrations, ROHM, Semikron, Silan, Sunking, Tektronix and Toshiba, who have confirmed their participation and are ready to showcase their products and services in the power electronics sector. Covering a range of power electronics solutions, semiconductors, power devices, bus bars, capacitors and more, the fair expects to play host to some 100 brands from around the world across 10,000 sqm of exhibition space.

Get inspired by the value-added concurrent programme

Apart from the diversity in the products on display, the fair also serves as a pivotal marketing and networking platform for all participants. Held alongside the exhibition, the PCIM Asia Conference is one of the most important events for power electronics in Asia. Led by industry experts, the 2022 conference promises to once again provide a platform for industry professionals in power electronics, intelligent motion, renewable energy and energy management from around the world to connect and share ideas. Experts from these sectors have contributed 52 conference papers on a wide range of topics from within the industry. Key topics covered at this year’s conference will include:

  • Power semiconductor devices / WBG devices
  • Passive components
  • Packaging and reliability
  • Motion control
  • Power conversion
  • AI for power electronics

Additionally, the fair will carry out a programme of industry forums covering hot topics in power electronics, intelligent motion, renewable energy, and energy management. Attendees will hear from notable experts, academics and top executives from leading companies in the industry.

Details of highlighted events are as follows:

  • Industry Forum: Electrical transportation technology in the Industry 4.0 era
    Highlighting China’s transportation electrification transformation, invited speakers will unveil the latest developments in this space and lead interactive discussions on the country’s railway, electric vehicles, intelligent mission planning and tracking control of autonomous surface vehicles (ASVs), unmanned aerial vehicles (UAV) and more.
  • Industry Forum: New energy development under China’s ‘Carbon peak’ and ‘Carbon neutrality’ goals
    Experts from renowned universities and research institutes, as well as key industry specialists, will actively discuss the prospects and key technologies of renewable energy development and utilisation under the ‘Dual Carbon’ policy. They will cover the latest renewable energy trends and share innovative insights on energy conversion, energy storage and power management technologies.

Interrelated concurrent shows create strong synergy for the power electronics industry in East China

Held on the same dates and at the same venue are Messe Frankfurt’s Shanghai Intelligent Building Technology, Shanghai Smart Home Technology and Parking China. Participants of PCIM Asia 2022 will be able to take advantage of synergies between the different fairs. There are also many potential applications of power electronics across a wide range of sectors, such as in smart home appliances with remote control systems, voice recognition and smart sensors, as well as within building technology applications including heating, air conditioning and refrigeration. With over 20 years of experience in the East China region, PCIM Asia is a recognised platform which will bring the industry together and showcase the latest innovations in power electronics.

The post PCIM Asia 2022 to be Moved to October with Major Brands and Concurrent Events Confirmed appeared first on ELE Times.

Uncovering the Hidden Flaws of Solid-state Batteries

ELE Times - Fri, 09/09/2022 - 08:25

Solid-state batteries could play a key role in electric vehicles, promising faster charging, greater range, and a longer lifespan than conventional lithium-ion batteries. But current manufacturing and materials processing techniques leave solid-state batteries prone to failure. Now, researchers have uncovered a hidden flaw behind the failures. The next step is to design materials and techniques that account for these flaws and produce next-generation batteries.

In a solid-state battery, charged particles called ions move through the battery within a solid material, in contrast to traditional lithium-ion batteries, in which ions move in a liquid. Solid state cells offer advantages, but local variations or tiny flaws in the solid material can cause the battery to wear out or short, according to the new findings.

“A uniform material is important,” said lead researcher Kelsey Hatzell, assistant professor of mechanical and aerospace engineering and of the Andlinger Center for Energy and the Environment. “You want ions moving at the same speed at every point in space.”

The researchers used high-tech tools at Argonne National Laboratory to examine and track nano-scale material changes within a battery while actually charging and discharging it. The research team, representing Princeton Engineering, Vanderbilt, Argonne, and Oak Ridge National Labs, examined the crystalline grains that make up the battery’s solid electrolyte, the core part of the battery through which electrical charge moves. The researchers concluded that irregularities between grains can accelerate battery failure by moving ions faster toward one region of the battery than another. Adjusting material processing and manufacturing approaches could help solve the batteries’ reliability problems.

Batteries store electrical energy in materials that make up their electrodes: the anode (the end of a battery marked with the minus sign) and the cathode (the end of the battery marked with the plus sign). When the battery discharges energy to power a car or a smartphone, the charged particles (called ions) move across the battery to the cathode (the + end). The electrolyte, solid or liquid, is the path the ions take between the anode and cathode. Without an electrolyte, ions cannot move and store energy in the anode and cathode.

In a solid-state battery, the electrolyte is typically either a ceramic or a dense glass. Solid-state batteries with solid electrolytes may enable more energy-dense materials (e.g. lithium metal) and make batteries lighter and smaller. Weight, volume, and charge capacity are key factors for transportation applications such as electric vehicles. Solid-state batteries should also be safer and less susceptible to fires than other battery types.

Engineers have known that solid-state batteries are prone to fail at the electrolyte, but the failures seemed to occur at random. Hatzell and co-researchers suspected that the failures might not be random but actually caused by changes in the crystalline structure of the electrolyte. To explore this hypothesis, the researchers used the synchrotron at the Argonne National Lab to produce powerful X-rays that allowed them to look into the battery during operation. They combined X-ray imaging and high-energy diffraction techniques to study the crystalline structure of a garnet electrolyte at the angstrom scale, roughly the size of a single atom. This allowed the researchers to study changes in the garnet at the crystal level.

A garnet electrolyte is composed of an ensemble of building blocks known as grains. In a single electrolyte (1 mm diameter) there are almost 30,000 different grains. The researchers found that across the 30,000 grains, there were two predominant structural arrangements. These two structures move ions at different speeds. In addition, these different forms or structures “can lead to stress gradients that lead to ions moving in different directions and ions avoiding parts of the cell,” Hatzell said.

She likened the movement of charged ions through the battery to water flowing down a river and encountering a rock that redirects the flow. Regions where large numbers of ions pass through tend to experience higher stress levels.
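To make that picture concrete, here is a deliberately simplified toy sketch (not the team’s model): it treats the two grain structures as parallel ionic-conduction paths with different conductivities and shows how the better-conducting path carries a disproportionate share of the current, the kind of local flux concentration the researchers associate with stress and early failure. The conductivity values below are invented purely for illustration.

```python
# Toy illustration (not the Princeton/Argonne model): two grain populations
# acting as parallel ionic-conduction paths through a solid electrolyte.
# The numbers are invented to show how a conductivity mismatch concentrates
# ionic current in one population.

def current_split(sigma_a, sigma_b, frac_a, total_current=1.0):
    """Split a total ionic current between two parallel grain populations.

    sigma_a, sigma_b : relative ionic conductivities of the two structures
    frac_a           : fraction of the cross-section occupied by population A
    """
    g_a = sigma_a * frac_a          # conductance of path A (arbitrary units)
    g_b = sigma_b * (1.0 - frac_a)  # conductance of path B
    i_a = total_current * g_a / (g_a + g_b)
    return i_a, total_current - i_a

# Hypothetical numbers: structure A conducts 3x faster and fills half the cell.
i_fast, i_slow = current_split(sigma_a=3.0, sigma_b=1.0, frac_a=0.5)
print(f"Fast-conducting grains carry {i_fast:.0%} of the current")   # ~75%
print(f"Slow-conducting grains carry {i_slow:.0%} of the current")   # ~25%
```

Even a modest conductivity mismatch pushes most of the current through one grain population, which is why uniformity of the grain structure matters so much.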

“If you have all the ions going to one location, it is going to cause rapid failure,” Hatzell said. “We need to have control over where and how ions move in electrolytes in order to build batteries that will last for thousands of charging cycles.”

Hatzell said it should be possible to control the uniformity of grains through manufacturing techniques and by adding small amounts of different chemicals called dopants to stabilize the crystal forms in the electrolytes.

“We have a lot of hypotheses that are untested of how you would avoid these heterogeneities,” she said. “It is certainly going to be challenging, but not impossible.”

The post Uncovering the Hidden Flaws of Solid-state Batteries appeared first on ELE Times.

Energy harvesting PMIC provides wireless connectivity

EDN Network - Thu, 09/08/2022 - 19:25

Telink’s TLSR8273-M-EH provides low-power wireless connectivity, energy autonomy, and power management in a compact 23×21-mm module. The part combines Telink’s multiprotocol wireless connectivity SoC with Nowi’s energy harvesting PMIC to make batteryless operation possible for a wide range of IoT applications.

With its multiprotocol radio, 32-bit MCU, and embedded memory, the TLSR8273-M-EH supports various standards in the 2.4-GHz ISM band, including Bluetooth LE 5.1, Bluetooth mesh, IEEE 802.15.4, Zigbee, RF4CE, 6LoWPAN, and Thread. It also has a PTA hardware interface for Wi-Fi coexistence. The module works with various types of rechargeable batteries using an onboard USB charger or the energy harvester. Alternatively, the energy harvester can be used with supercapacitors to make end devices batteryless.
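As a rough illustration of why a supercapacitor can stand in for a battery in this kind of duty cycle, the back-of-envelope sketch below estimates how many low-duty-cycle wake-ups a small supercapacitor could support between harvesting top-ups. Every component value and per-event energy figure is an assumption for illustration, not a Telink or Nowi specification.

```python
# Back-of-envelope energy budget for a batteryless BLE sensor node.
# All numbers below are illustrative assumptions, not figures from the
# TLSR8273-M-EH or Nowi datasheets.

C = 0.47             # supercapacitor value, farads (assumed)
V_START = 3.0        # voltage after the harvester has topped up the cap (V)
V_MIN = 1.8          # minimum operating voltage of the module (assumed, V)
E_PER_EVENT = 50e-6  # energy per wake-up/advertise/sleep cycle, joules (assumed)

# Usable energy between the two voltage levels: E = 1/2 * C * (V1^2 - V2^2)
usable_energy = 0.5 * C * (V_START**2 - V_MIN**2)
events = usable_energy / E_PER_EVENT

print(f"Usable energy: {usable_energy * 1e3:.1f} mJ")
print(f"Roughly {events:.0f} advertise cycles before the harvester must recharge the cap")
```

With these assumed numbers the capacitor holds on the order of a joule of usable energy, tens of thousands of short advertising events, which is why intermittent harvesting from indoor light can be sufficient.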

The module is compatible with Telink’s software development kits, allowing customers to upgrade their products with minimal effort. One application for the TLSR8273-M-EH is in television remote controls that harvest energy from indoor lighting using a PV cell. A reference design based on the TLSR8273-M-EH is available for such remote control products.

Samples of the TLSR8273-M-EH wireless connectivity module are available now.

TLSR8273-M-EH datasheet

Telink Semiconductor

Nowi

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Energy harvesting PMIC provides wireless connectivity appeared first on EDN.

MCUs ensure smooth shift to software-defined EVs

EDN Network - Thu, 09/08/2022 - 19:23

Stellar P automotive MCUs from ST target electrified drivetrains, as well as domain-oriented, over-the-air updateable systems for EVs. According to the manufacturer, the Stellar P MCUs are the industry’s first qualifiable devices for model year 2024 vehicles to integrate the CAN-XL in-car communication standard. CAN-XL will enable new vehicle platforms to handle the growing data flows so that cars can operate at peak performance.

The microcontrollers provide real-time and deterministic processing leveraging up to six 32-bit Arm Cortex-R52 cores—some operating in Lockstep mode and others in Split-lock mode. Up to 20 Mbytes of embedded nonvolatile phase-change memory enable the MCUs to deliver fast read access times. This memory also offers a dual-image storage function (2×20 Mbytes) for over-the-air reprogramming purposes.
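The dual-image storage is what makes safe over-the-air reprogramming practical: one bank keeps running while the other receives and validates the new firmware, and the device switches over only once the download checks out. ST does not publish its exact mechanism here, so the sketch below is only a generic A/B-update flow with hypothetical names and a plain CRC standing in for whatever integrity check a real bootloader would use.

```python
# Generic A/B (dual-image) firmware-update flow, sketched in Python for
# clarity. This is NOT ST's implementation; the bank names, metadata layout,
# and CRC check are hypothetical stand-ins.
import zlib

banks = {"A": {"image": b"firmware v1", "valid": True},
         "B": {"image": b"", "valid": False}}
active = "A"

def download_to_inactive(new_image: bytes, expected_crc: int) -> bool:
    """Write the OTA payload into the inactive bank and validate it."""
    inactive = "B" if active == "A" else "A"
    banks[inactive]["image"] = new_image
    ok = zlib.crc32(new_image) == expected_crc
    banks[inactive]["valid"] = ok
    return ok

def switch_banks() -> str:
    """Swap the active bank only if the freshly written image validated."""
    global active
    inactive = "B" if active == "A" else "A"
    if banks[inactive]["valid"]:
        active = inactive   # on a real MCU this would update a boot-select flag
    return active

new_fw = b"firmware v2"
if download_to_inactive(new_fw, zlib.crc32(new_fw)):
    print("Now booting from bank", switch_banks())
```

The point of the pattern is that a failed or corrupted download never touches the image the car is currently running from.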

Built on 28-nm FD-SOI process technology, the parts provide quasi-immunity to radiation. Stellar 6 MCUs integrate advanced timers, including GMT-4, with advanced signal processing, dedicated sensor/actuator interfaces, and high-temperature support up to a junction temperature of 165°C.

Samples of the Stellar P microcontrollers are available now for model year 2024 vehicles.

Stellar P series product page

STMicroelectronics

 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post MCUs ensure smooth shift to software-defined EVs appeared first on EDN.

Temperature sensor withstands automotive use

EDN Network - Thu, 09/08/2022 - 19:17

An automotive-grade digital temperature sensor, the STS4xA from Sensirion, operates over a temperature range of -40°C to +125°C with typical accuracy of ±0.3°C. The 16-bit sensor provides a wide supply voltage range of 2.3 V to 5.5 V and communicates via an I2C interface. An integrated on-chip heater enables advanced onboard diagnostics, instead of relying on simple presence checks.

With dimensions of just 1.5×1.5×0.5 mm, the STS4xA’s 4-pin DFN package comes with optional wettable flanks to permit automated optical inspection. Thanks to its small size, the sensor can be easily integrated into a variety of automotive applications. Additionally, it meets AEC-Q100 reliability standards, including 85°C/85% RH accelerated life tests.

The STS4xA temperature sensor is based on Sensirion’s CMOSens technology, which enables the fusion of the sensor element and evaluation electronics on a tiny CMOS silicon chip. Sensors are fully calibrated, eliminating the need for costly and time-consuming calibration procedures.
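For a feel of how simple the digital interface is, here is a hedged host-side sketch. It assumes the Sensirion SHT4x-family convention of a single-byte high-precision measurement command (0xFD), the default 0x44 I2C address, and the usual −45 + 175·ticks/65535 conversion; the exact command set, address, and response layout should be confirmed against the STS4xA datasheet before use.

```python
# Host-side sketch for reading a Sensirion STS4x-class temperature sensor
# over I2C. The command byte (0xFD), I2C address (0x44), response layout,
# and conversion formula follow the SHT4x-family convention and are
# assumptions here -- verify against the STS4xA datasheet.
import time
from smbus2 import SMBus, i2c_msg

ADDR = 0x44             # typical Sensirion default address (assumed)
MEASURE_HI_PREC = 0xFD  # high-precision single-shot measurement (assumed)

def read_temperature(bus: SMBus) -> float:
    bus.write_byte(ADDR, MEASURE_HI_PREC)
    time.sleep(0.01)                        # allow the conversion to complete
    reply = i2c_msg.read(ADDR, 3)           # temp MSB, temp LSB, CRC (assumed layout)
    bus.i2c_rdwr(reply)
    msb, lsb, _crc = list(reply)
    ticks = (msb << 8) | lsb
    return -45.0 + 175.0 * ticks / 65535.0  # Sensirion-style conversion

with SMBus(1) as bus:
    print(f"Temperature: {read_temperature(bus):.2f} °C")
```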

STS4xA product page

Sensirion 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Temperature sensor withstands automotive use appeared first on EDN.

Apple’s (first) fall 2022 announcement event: half-steps provide only prior-gen augment (vs fuller advancement)

EDN Network - Thu, 09/08/2022 - 19:10

Even before Tim Cook and other Apple execs hit the stage this Wednesday (quick aside: Apple events are normally on Tuesdays; I assume the reschedule this time was due to Labor Day proximity, but I have no idea why they held the event so early this month this year), the media was already damping the anticipation of what would be unveiled at the “Far Out” soiree, instead focusing on next month’s (or whenever it’ll actually be) subsequent event. And those lowered expectations were admittedly (and unfortunately) in at least some cases fulfilled:

Still, although the company’s rumored AR/VR headset (or, if the latest scuttlebutt ends up being true, headset series) remains under wraps (versus being the “one more thing” that some folks thought would be unveiled this week…the AR “easter egg” was an effective feint), and although many of the products were spitting images of their predecessors (at least judged by exterior appearances), there was still notable news, particularly related to SoCs and connectivity. What follows are the details that caught my attention, in the order in which they were announced.

Apple Watch

I’m going to begin this section by reiterating the disappointment that I first voiced back in June. The Apple Watch Series 3, first introduced five years ago, is admittedly getting quite “long in the tooth” at this point, powered by a now-geriatric SoC and with limited on-board memory that hampers performance and makes software updates difficult at best. As such, it wasn’t surprising that the company announced at WWDC that Series 3 (two of which I own) wouldn’t be supported by upcoming WatchOS 9 (which got its likely final public beta release this week). But after WWDC, Apple continued to sell the Series 3 in both brand-new and refurbished variants. And even though it’s now no longer available new from the Apple Store as of this week, you still can buy them direct from Apple refurbished. See for yourself, courtesy of a screenshot taken from my also-soon-to-be-obsolete iPad mini 4:

That Apple’s still happily extracting money from unknowing consumers’ wallets, direct-hawking products that it knows full well won’t be supported going forward, resulting in eventual obsolescence from unpatched security flaws and the like, is flat-out unconscionable.

Off soapbox. All three new Apple Watch families launched this week have the same SoC at their nexus: the S8 SiP. Not much is known about it yet beyond the fact that it’s a dual-core processor architecture, including whether it actually is a new design (the S6 and S7 were identical, despite their moniker-driven supposed differentiation). But the sensors, displays and other circuitry surrounding it are perhaps the more interesting bits. First off, there’s the second-generation Apple Watch SE, the Apple-anointed successor to the Series 3:

Upgraded accelerometer and three-axis gyro sensors support Crash Detection, where watch data combines with the barometer, GPS, and microphone on a tethered iPhone to determine whether a severe vehicle crash has taken place. In such a situation, quoting from the press release, “the device will check in with the user and dial emergency services if they are unresponsive after a 10-second countdown. Emergency responders will receive the user’s device location, which is also shared with the user’s emergency contacts.”
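Apple has not published its Crash Detection algorithm, so purely to illustrate the detect-impact, check-in, 10-second-countdown flow described above, here is a toy sketch. The 100 g threshold and single-sensor logic are invented for illustration; the real feature fuses accelerometer, gyro, barometer, GPS, and microphone data.

```python
# Toy illustration of the "detect impact, check in, 10-second countdown"
# flow described above. This is NOT Apple's algorithm; the threshold and
# single-sensor logic are invented for illustration only.
import time

IMPACT_THRESHOLD_G = 100.0   # hypothetical severe-crash acceleration
COUNTDOWN_S = 10             # per Apple's description of the feature

def detect_impact(samples_g):
    """Return True if any acceleration-magnitude sample exceeds the threshold."""
    return any(g >= IMPACT_THRESHOLD_G for g in samples_g)

def crash_response(user_responded) -> str:
    """Check in with the user; escalate if there is no response in time."""
    deadline = time.monotonic() + COUNTDOWN_S
    while time.monotonic() < deadline:
        if user_responded():
            return "dismissed by user"
        time.sleep(0.1)
    return "dialing emergency services and sharing location"

if detect_impact([2.1, 3.0, 140.0]):        # simulated accelerometer trace, in g
    print(crash_response(user_responded=lambda: False))
```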

The new Series 8 watches are also able to measure body temperature at the wrist, a capability usable by both men and women but specifically highlighted by the company as being useful for determining ovulation and enabling more general menstrual period predictions:

And then there’s the beefier new Apple Watch Ultra, whose enhanced water resistance (making it feasible to wear when diving, for example) and more general ruggedness, along with a one-touch Action button for quick access during gloved and other activities, put competitors such as Garmin (my current preference) and Suunto (which I also used to own) in its sights:

AirPods

The first-generation high-end AirPods Pro earbuds are three years old at this point, so you might be thinking they’re due for an update. If so, you’d be right.

These second-generation successors are powered by an equally next-generation SoC, the H2. Its added processing power enables, Apple claims, both more robust active noise cancellation and Transparency (the ability to hear your surroundings while also listening to what’s coming over the Bluetooth link) capabilities, as well as the means to more accurately tailor the earbuds’ spatial (virtual surround) and other sound specifics to each listener’s unique head dimensions, ear canal structures and other auditory details. Touch controls are a welcome improvement (says this first-generation owner) over the predecessor’s more rudimentary “stalk” switches. And these added and enhanced capabilities don’t come with a requisite battery life decrease, quite the contrary: up to 1.5 hours of added estimated listening time between charges.

Speaking of charges, the charging case has been similarly enhanced; like the earbuds themselves, it’s now trackable using Apple’s Find My facilities, and an integrated speaker aids in locating it between the cushions of your sofa or wherever else you’ve inadvertently stashed it. The case can now be charged using a (proprietary, don’t forget) Apple Watch wireless charger, along with legacy MagSafe and Qi wireless chargers and a wired connection.

Speaking of wired connections, a more modest (and not mentioned at the event) evolution also came to the third-generation conventional AirPods. Both they and their second-generation precursors are still sold (here’s a summary of the differences, as well as one for the second- vs first-gen predecessor), and the third-gen offering now comes optionally bundled with a charging case that supports only wired charging, at a slight discount. Here’s the curious bit: both it and the case for the second-gen AirPods Pro still only comprehend Lightning wired connections. Given the increasing regulatory pressure in Europe and elsewhere to instead adopt industry-standard USB-C, I wonder how much longer Apple will stubbornly cling to its proprietary legacy port.

And speaking of Lightning…

iPhones

Historically, at least to the limits of my admittedly imperfect memory, the entirety of a new iPhone series’ products has been based on an equally new SoC (albeit sometimes selectively using variants with different functional CPU and GPU core counts, running at different clock speeds, etc.). Then again, historically Apple reserved full-generation product number increases for meaningful redesigns and “S” interim upgrades for half-step updates…but I digress.

Any initial confusion you might therefore be feeling when you look at the iPhone 14 versus its iPhone 13 forebear would be understandable, because they’re based on the same A15 Bionic SoC. Not exactly the same, mind you…with the iPhone 13 generation, the full five-functional-GPU-core version of the A15 SoC was reserved for “Pro” versions, with the standard iPhone 13 instead using the four-GPU-core A15 SoC variant. Apparently TSMC has improved its yield on the now-mature 5 nm process, because the five-GPU-core A15 is now mainstream. And, of course, Apple never talks about other specs such as the amount, type and speed of DRAM, so there may be evolution here in the iPhone 13-to-14 transition as well (that said, rumor has it that the LPDDR4 allocation has grown from 4 to 6 GBytes…we’ll need to wait for independent developer confirmation here). Still, the core hardware differences are modest at best.

They include tweaks to the cameras:

  • The main rear camera now has a ƒ/1.5 max aperture, versus ƒ/1.6 on the iPhone 13, and although the image sensor pixel count is the same (12 Mpixels), the pixel size has grown to 1.9-micron, which will also improve low light performance
  • The front camera’s maximum aperture has also widened, from ƒ/2.2 to ƒ/1.9, and now supports autofocus functionality

Battery capacity (therefore between-charges operating life) has also grown, reportedly from 3,240mAh to 3,279mAh on the base 6.1” iPhone 14. The iPhone 14 (following in the footsteps of Android predecessors) also natively supports Crash Detection, courtesy of upgrades to its own accelerometer and gyro. As you may have noticed from the earlier photo, there’s no 5.4” screen “mini” variant this time (although the iPhone 13 mini is still being sold…speaking of “mini”, reiterating a point I made back in June, my first-generation iPhone SEs will be made obsolete by the looming iOS 16 update, as well); instead, Apple’s gone the other direction, adding a 6.7” “Plus” version.

Neither phone supports physical SIMs any longer, at least in the U.S.; instead, they can concurrently comprehend multiple carriers’ eSIM credentials. And last but definitely not least, Apple’s (on the heels of both SpaceX-plus-T-Mobile and Huawei) added limited satellite connectivity; not web browsing, email or even texting, only SOS messaging. Apple didn’t announce its satellite partner, but it’s reportedly Globalstar which, if true, is a curious selection. Globalstar’s satellites orbit at a substantially higher altitude than the close-proximity LEO constellation employed by SpaceX (for example), and that longer link distance seemingly requires much higher iPhone 14 transmit power.

Then there are the “Pro” variants of the iPhone 14 family, also available in base and larger (this time “Max” versus “Plus”) size options:

Here the generational changes (building on the foundation of the earlier documented iPhone 13-to-14 improvements) are more substantial, starting with a processing nexus switch to the 4 nm-fabricated A16 Bionic SoC. Does this iPhone generation’s more limited processor evolution reflect constrained latest-generation TSMC process capacity (and/or yield), is it simply a reflection of Apple’s desire to inject more feature set differentiation between standard and “Pro” product variants, or are both (and/or other) factors at play? Speculation aside, Apple continues in its role as a lithography-adoption trendsetter. Although the CPU (two high performance, four “efficiency”, i.e., low power) and GPU core counts remain unchanged from the A15 Bionic precursor, all three cores’ architectures have purportedly advanced.

The total transistor count now exceeds 16 billion (believe it or not), the next-generation 16-core neural engine is claimed capable of performing up to 17 trillion operations per second, memory bandwidth to and from the graphics subsystem has increased by 50%, and the design reflects ongoing focus on reducing power consumption: the A16 Bionic’s high-performance cores supposedly use 20% less power compared to those of the A15 Bionic, while the efficiency cores use a third of the power (of unnamed competitor chips, mind you). Apple’s focus on power consumption vs raw performance makes sense in light of subsequently leaked initial Geekbench results, which suggest scant-at-best synthetic benchmark improvements versus the A15 Bionic. Perhaps Geekbench isn’t fully exploiting the A16 Bionic’s various architectural enhancements…and/or perhaps Apple is downclocking the A16 Bionic versus its A15 Bionic predecessor in pursuit of reduced power consumption with comparable performance.

Speaking of memory, the phones purportedly include 6 GBytes of latest-generation LPDDR5 DRAM this time around. The camera subsystem improvements are also more substantial, including:

  • A 48 Mpixel main rear image sensor, albeit with each 2×2 quad-pixel cluster combined (binned) at the camera level into a 12 Mpixel output (see the sketch after this list)
  • A new 12MP ultra-wide rear camera with 1.4 µm pixels
  • An improved rear telephoto camera now offering 3x optical zoom, and
  • A new front TrueDepth camera with an ƒ/1.9 aperture and autofocus
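To make the quad-pixel point concrete, the sketch below averages each 2×2 cluster of a simulated 48 Mpixel-scale array into one output pixel, which is how a 48 Mpixel sensor yields a 12 Mpixel image with better effective sensitivity. The array here is random stand-in data; Apple’s actual processing pipeline is of course far more sophisticated than a plain mean.

```python
# Illustration of 2x2 quad-pixel binning: a 48 Mpixel-scale raw array is
# reduced to a quarter-resolution image by averaging each 2x2 cluster.
# Random data stands in for real sensor output.
import numpy as np

rng = np.random.default_rng(0)
h, w = 8000, 6000                       # 48 million samples
raw = rng.integers(0, 4096, size=(h, w), dtype=np.uint16)   # 12-bit-style values

# Group pixels into 2x2 blocks and average each block.
binned = raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(raw.shape, "->", binned.shape)    # (8000, 6000) -> (4000, 3000), i.e. 12 Mpixel
```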

Speaking of the front camera cluster, Apple has migrated from the “notch” at the top of the screen to a “pill” within the main OLED (which for the first time offers an “Always-On mode”). The company refers to it as the Dynamic Island:

 

and you can simplistically think of it as a mini virtual display (plus physical cameras) within a display. Clever.

A few final notes on wireless and wired connectivity, which apply equally to all four iPhone 14 variants discussed here:

With that, and with the 2,000-word threshold looming, it’s a wrap. Over to you for your impressions in the comments!

Brian Dipert is Editor-in-Chief of the Embedded Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related articles:


The post Apple’s (first) fall 2022 announcement event: half-steps provide only prior-gen augment (vs fuller advancement) appeared first on EDN.

Advanced Collaboration Between People and Machines

ELE Times - Thu, 09/08/2022 - 15:21

As industries continue to grow and mature, businesses are increasingly turning to industrial automation to boost productivity in the workplace and address the challenges they face. Most companies have found that human-machine collaboration and industrial automation solutions can effectively take over labour-intensive, repetitive and hazardous tasks, keeping human workers fresh and healthy while also speeding up the time it takes to accomplish those tasks.

Industry 4.0 is still at a nascent stage if we take the industries in India into account; however, the acceptability of automation solutions is growing almost every day. People in India also realise the value addition that these industrial automation solutions provide to their operations. From pick-and-place to surgical-precision inspection of product quality, industrial automation is being welcomed by the industry with open arms, and when these solutions are coupled with artificial intelligence and machine learning software, the potential of automation grows manifold.

What follows is an extremely insightful conversation with Sameer Gandhi, MD, Omron Automation, India. Sameer was well equipped to portray a discernible landscape of industrial automation scenarios in the world and in India, its key drivers, and Omron’s latest technology offerings in this regard. Excerpts:

Machines with Sameer Gandhi, MD, Omron Automation, India

ELE Times: Please tell our readers about the new technologies and solutions that your company has displayed recently at the Automation Expo 2022.

Trying a different approach to showcase our strong intent to address the real challenges & issues of manufacturers on the shop floor, this time we displayed more solutions rather than products at our booth at the Automation Expo.

It ranged from Pick-n-place robotic solutions displaying the dexterity of picking up small components – a very relevant offering for the digital industry- to Predictive Maintenance solutions which help to reduce unscheduled downtime. The predictive maintenance solutions aim to answer one of the biggest challenges in the industry which is the availability of skilled and experienced labour for predictive maintenance.

We also showcased our Cobot solutions for palletizing capabilities and also highlighted their uniqueness in working alongside humans safely without the need for fencing or an enclosure.

Traceability was another solution displayed by us at the booth. This underlined the utility of OMRON laser markers to mark bar codes or text on components, Omron verifiers to read and verify the code on components, as well as the Flying Trigger technology that enables bar code reading on the fly without having to stop the component, and finally the NX PLCs for sending the data to the cloud for post-despatch verification and records.

ELE Times: Is there a significant difference between using a cobot and a traditional robot for end-of-line packaging or palletizing? If yes, then please elaborate for our readers.

Yes, for tasks like palletizing or end-of-line packaging which usually needs to be performed over a small footprint and where the layout and workflow are such that the human operator/worker must move in that area, preventing the incorporation of a fenced robot, it is ideal to use a Cobot. Cobots have a versatile application and can be used across industries and sizes of operations.

ELE Times: We are now standing in the midst of the fourth industrial revolution, and today’s factory production lines bristle with automated control systems, software, computer panels and robots. What are your views on, and response to, the classic hypothesis that manpower requirements get snowed under by industrial automation?

The fourth industrial revolution signifies collaboration and harmony between people and machines to enable manufacturing to achieve world-class standards. It’s no longer machines replacing humans; it’s all about humans and machines working together to achieve newer, better and more creative ways of doing things.

OMRON aims to develop new manufacturing that allows people to maximize their potential and experience growth and motivation by giving people the leading role while advancing the substitution from people to machines. Leaving heavy labour and simple repetitive work to robots, machines support human proficiency. The aim is to realize a manufacturing site where workers can enjoy their work creatively and enjoy manufacturing, while at the same time achieving high productivity, by embodying the concept of “advanced collaboration between people and machines.”

ELE Times: What are your thoughts on the level of industrial automation in India in contrast to the rest of the world?

In India, bigger manufacturers in the arena of industries like automotive, secondary packaging, FMCG, and consumer electronics have made considerable progress in adopting automation however the path remains a little perplexing for the SMEs. They have a dire need for better levels of flexibility, quality, and consistency amidst challenges like little or no scope of extension or modification of the shop floor space, shortage of skilled manpower and most importantly meagre capital infusion capabilities.

One of the key requisites for expanding the penetration of automation in the country’s manufacturing sector is to strengthen the availability of a partner ecosystem. Most automation requirements are unique and need to be run like a small project. These require some mechanical additions or modifications to the customers’ existing setup. While larger organisations may be able to run some of these projects on their own, many MSME-sector organisations usually don’t have the required skill sets to execute such projects. This is where the role of system integrators (SIs) comes into the picture. They marry the robot/cobot with the required mechanical design and implement it at customer locations. While India has many such SIs, we require a substantial increase in their number. However, given the availability of technological skill sets and an entrepreneurial mindset, it is only a matter of time before this ecosystem also develops rapidly.

ELE Times: How is OMRON leveraging AI and machine learning to strengthen the Automation services?

OMRON has been earnestly trying to integrate AI and machine learning into its portfolio because we feel the integration is needed to bring in the best of human-machine harmony. The last few years have manifested many notable things that indicate the transformation the manufacturing world is undergoing. With the concept of glocalization replacing globalization, mass customization is giving way to personalized manufacturing mandates. With climate change and unprecedented situations like Covid, consumption orientation is getting replaced with sustenance orientation. Also, post-pandemic, makers are now pondering social needs, more than ever, rather than only industry- and production-based needs. All this demands that I4.0-based, data-driven systems do more and fetch more sustainable results by transitioning to knowledge-driven systems. Hence there is a need to do more, and that can be achieved when we integrate AI and machine learning to achieve harmony at all levels: human-human, human-machine, machine-machine and vice versa.

ELE Times: Please tell our readers about Omron’s current and future plans for the packaging automation sector in India. 

There has been rapid adoption of more advanced automation in the packaging machinery manufacturing industry in India in the last two years as demand for speed and efficiency has grown post-Covid. OMRON works very closely with some of India’s leading packaging machine manufacturers especially in the vertical form fill seal (VFFS) and horizontal form fill seal (HFFS) segments and will continue to make deeper and wider progress in the segment.

The acceptance of robotics in machine manufacturing has also increased in the last few years. Covid made manufacturers realize that people are not going to be available every time. This gave a boost to the adoption of movements like zero touch. OMRON has done projects involving high-speed pick and place where complete lines have been automated by multiple robots and this would also remain one of our prime focus areas in the future too.

Looking at the future of the industry per se, I feel there has been a move from making slower machines to faster ones and also from making simpler machines to more complex machines. This evolution will continue to take place in the Indian packaging machine manufacturing space.

Mayank Vashisht | Sub Editor | ELE Times

The post Advanced Collaboration Between People and Machines appeared first on ELE Times.

Infineon Introduces Next-Generation OptiMOS Integrated POL DC-DC Regulators

ELE Times - Thu, 09/08/2022 - 14:20

As artificial intelligence (AI) technology is integrated into massive data centers, the demand for higher performance will continue to increase. Following this trend, high power density and energy-efficient solutions for smart enterprise systems have also become challenging. As a result, Infineon Technologies AG today introduced a new family of OptiMOS 5 IPOL buck regulators with the VR14-compliant SVID standard and I2C/PMBus digital interfaces for Intel/AMD server CPUs and network ASICs/FPGAs. Housed in a 5 × 6 mm² PQFN package, these devices are an easy-to-use, fully integrated, and highly efficient solution for next-generation server, storage, telecom, and datacom applications, as well as distributed power systems.

The OptiMOS IPOL single-voltage synchronous buck regulator TDA38640 supports up to 40 A output current. The device comes with Intel SVID and I2C/PMBus digital interfaces and can be used for Intel VR12, VR12.5, VR13, VR14, and IMVP8 designs, as well as DDR memory, without significant changes to the bill of materials (BOM). Infineon’s TDA38740 and TDA38725 digital IPOL buck regulators support up to 40 A and 25 A output current, respectively, and come with a PMBus interface. All three new devices use Infineon’s proprietary fast constant on time (COT) PWM engine to deliver industry-leading transient performance while simplifying design development.

The onboard PWM controller and OptiMOS FETs with integrated bootstrap diode make these new devices a small footprint solution with highly-efficient power delivery. In addition, they provide the required versatility by operating in a broad input and output voltage range while offering programmable switching frequencies from 400 kHz to 2 MHz. A multiple-time programming (MTP) memory allows customization during design and high-volume manufacturing, significantly reducing design cycles and time-to-market. They also offer a digitally programmable load line that can be set via configuration registers without external components, resulting in a simplified BOM. The device configuration can be easily defined using Infineon’s XDP Designer GUI and is stored in the on-chip memory.
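Because the digital parts expose a PMBus interface, a host can read telemetry with standard PMBus commands. The sketch below uses generic PMBus command codes (VOUT_MODE 0x20, READ_VOUT 0x8B, READ_IOUT 0x8C) and standard LINEAR16/LINEAR11 decoding; the I2C address and anything device-specific are assumptions, so consult the TDA386xx datasheet and the PMBus specification before relying on it.

```python
# Generic PMBus telemetry read, sketched with smbus2. The command codes are
# standard PMBus; the I2C address below and any device-specific behavior are
# assumptions -- check the regulator's datasheet.
from smbus2 import SMBus

ADDR = 0x40                 # hypothetical PMBus address of the regulator
VOUT_MODE, READ_VOUT, READ_IOUT = 0x20, 0x8B, 0x8C

def twos_complement(value, bits):
    return value - (1 << bits) if value & (1 << (bits - 1)) else value

def read_vout(bus):
    """Decode LINEAR16: mantissa from READ_VOUT, exponent from VOUT_MODE."""
    exp = twos_complement(bus.read_byte_data(ADDR, VOUT_MODE) & 0x1F, 5)
    mantissa = bus.read_word_data(ADDR, READ_VOUT)
    return mantissa * 2.0 ** exp

def read_iout(bus):
    """Decode LINEAR11: 5-bit signed exponent and 11-bit signed mantissa."""
    raw = bus.read_word_data(ADDR, READ_IOUT)
    exp = twos_complement(raw >> 11, 5)
    mantissa = twos_complement(raw & 0x7FF, 11)
    return mantissa * 2.0 ** exp

with SMBus(1) as bus:
    print(f"VOUT = {read_vout(bus):.3f} V, IOUT = {read_iout(bus):.2f} A")
```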

For more information, www.infineon.com/ipol-digital

The post Infineon Introduces Next-Generation OptiMOS Integrated POL DC-DC Regulators appeared first on ELE Times.

A New Generation of Hearing Aids

ELE Times - Thu, 09/08/2022 - 13:35

A new system capable of reading lips with remarkable accuracy even when speakers are wearing face masks could help create a new generation of hearing aids.

An international team of engineers and computing scientists developed the technology, which pairs radio-frequency sensing with artificial intelligence for the first time to identify lip movements.

The system, when integrated with conventional hearing aid technology, could help tackle the “cocktail party effect,” a common shortcoming of traditional hearing aids.

Currently, hearing aids assist hearing-impaired people by amplifying all ambient sounds around them, which can be helpful in many aspects of everyday life.

However, in noisy situations such as cocktail parties, hearing aids’ broad spectrum of amplification can make it difficult for users to focus on specific sounds, like a conversation with a particular person.

One potential solution to the cocktail party effect is to make “smart” hearing aids, which combine conventional audio amplification with a second device to collect additional data for improved performance.

While other researchers have had success in using cameras to aid with lip reading, collecting video footage of people without their explicit consent raises concerns for individual privacy. Cameras are also unable to read lips through masks, an everyday challenge for people who wear face coverings for cultural or religious purposes and a broader issue in the age of COVID-19.

The University of Glasgow-led team outlined how they set out to harness cutting-edge sensing technology to read lips. Their system preserves privacy by collecting only radio-frequency data, with no accompanying video footage.

To develop the system, the researchers asked male and female volunteers to repeat the five vowel sounds (A, E, I, O, and U) first while unmasked and then while wearing a surgical mask.

As the volunteers repeated the vowel sounds, their faces were scanned using radio-frequency signals from both a dedicated radar sensor and a Wi-Fi transmitter. Their faces were also scanned while their lips remained still.

Then, the 3,600 samples of data collected during the scans were used to “teach” machine learning and deep learning algorithms how to recognize the characteristic lip and mouth movements associated with each vowel sound.
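The paper itself evaluates several machine- and deep-learning models; as a rough illustration of the training step described above (RF-derived feature vectors labeled with one of five vowels), here is a minimal scikit-learn sketch on synthetic stand-in data. The feature dimensions, classifier choice, and data are assumptions for illustration, not the Glasgow team’s actual pipeline.

```python
# Minimal stand-in for the training step described above: feature vectors
# derived from radar/Wi-Fi scans, labeled with one of five vowels, fed to a
# classifier. Synthetic data and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
vowels = ["A", "E", "I", "O", "U"]

# 3,600 synthetic "scans": 64 RF-derived features each, with a per-vowel
# offset so the classes are separable in this toy example.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(720, 64)) for i in range(5)])
y = np.repeat(vowels, 720)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"Toy accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2%}")
```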

Because the radio-frequency signals can easily pass through the volunteers’ masks, the algorithms could also learn to read masked users’ vowel formation.

The system proved to be capable of correctly reading the volunteers’ lips most of the time. Wi-Fi data was correctly interpreted by the learning algorithms up to 95% of the time for unmasked lips, and 80% for masked. Meanwhile, the radar data was interpreted correctly up to 91% of the time without a mask, and 83% of the time with a mask.

Dr. Qammer Abbasi, of the University of Glasgow’s James Watt School of Engineering, is the paper’s lead author. He said, “Around 5% of the world’s population—about 430 million people—have some kind of hearing impairment.

“Hearing aids have provided transformative benefits for many hearing-impaired people. A new generation of technology which collects a wide spectrum of data to augment and enhance the amplification of sound could be another major step in improving hearing-impaired people’s quality of life.

“With this research, we have shown that radio-frequency signals can be used to accurately read vowel sounds on people’s lips, even when their mouths are covered. While the results of lip-reading with radar signals are slightly more accurate, the Wi-Fi signals also demonstrated impressive accuracy.

“Given the ubiquity and affordability of Wi-Fi technologies, the results are highly encouraging which suggests that this technique has value both as a standalone technology and as a component in future multimodal hearing aids.”

Professor Muhammad Imran, head of the University of Glasgow’s Communications, Sensing and Imaging research group and a co-author of the paper, added, “This technology is an outcome from two research projects funded by the Engineering and Physical Sciences Research Council (EPSRC), called COG-MHEAR and QUEST.

“Both aim to find new methods of creating the next generation of health care devices, and this development will play a major role in supporting that goal.”

The post A New Generation of Hearing Aids appeared first on ELE Times.
