EDN Network

Voice of the Engineer

Portable test gear enables mmWave signal analysis

Thu, 11/14/2024 - 19:41

Select Keysight FieldFox handheld analyzers now cover frequencies up to 170 GHz for mmWave signal analysis. Through a collaboration with Virginia Diodes Inc. (VDI), Keysight’s FieldFox A- and B-Series analyzers (18 GHz and up) can pair with VDI’s PSAX frequency extenders to reach sub-THz frequencies.

Precise mmWave measurements are essential for testing wireless communications and radar systems, particularly in 5G, 6G, aerospace, defense, and automotive radar applications. Because mmWave signals are sensitive to obstacles, weather, and interference, understanding their propagation characteristics helps engineers design more efficient networks and radar systems.

FieldFox with PSAX allows users to capture accurate mmWave measurements in a lightweight, portable package. It supports in-band signal analysis through selectable spectrum analyzer, IQ analyzer, and real-time spectrum analyzer modes, achieving a typical sensitivity of -155 dBm/Hz.
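For context (a back-of-the-envelope illustration, not a figure from the announcement), a noise density specified in dBm/Hz translates to a displayed noise floor that scales with the chosen resolution bandwidth:

$$ P_{noise} \approx -155\ \mathrm{dBm/Hz} + 10\log_{10}\!\left(\frac{\mathrm{RBW}}{1\ \mathrm{Hz}}\right), $$

or roughly -125 dBm in a 1-kHz resolution bandwidth.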

The PSAX module connects directly to the RF ports on the FieldFox analyzer. Its adjustable IF connector aligns with the LO and IF port spacings on all FieldFox models. VDI also offers the PSGX module, which, when paired with a FieldFox equipped with Option 357, enables mmWave signal generation up to 170 GHz.

FieldFox product page

PSAX module product page

Keysight Technologies 


Renesas expands line of programmable mixed-signal ICs

Thu, 11/14/2024 - 19:40

Renesas has launched the AnalogPAK series of programmable mixed-signal ICs, including a 14-bit SAR ADC with a programmable gain amplifier. According to the company, this industry-first device combines a rich set of digital and analog features to support measurement, data processing, logic control, and data output.

AnalogPAK devices, a subset of the GreenPAK family, are NVM-programmable ICs that enable designers to integrate multiple system functions. ICs in both groups minimize component count, board space, and power and can replace standard mixed-signal products and discrete circuits. They also provide reliable hardware supervisory functions for SoCs and microcontrollers.

The SLG47011 multichannel SAR ADC offers user-defined power-saving modes for all macrocells. Designers can switch off some blocks in sleep mode to reduce power consumption to the microamp level. Key features include:

  • VDD range of 1.71 V to 3.6 V
  • SAR ADC: up to 14-bit, up to 2.35 Msps in 8-bit mode
  • PGA: six amplifier configurations, rail-to-rail I/O, 1x to 64x gain
  • DAC: 12-bit, 333 ksps
  • Hardware math block for multiplication, addition, subtraction, and division
  • 4096-word memory table block
  • Oscillators: 2/10 kHz and 20/40 MHz
  • Analog temperature sensor
  • Configurable counter/delay blocks
  • I2C and SPI communication interfaces
  • Available in a 16-pin, 2.0×2.0×0.55-mm QFN package

In addition to the SLG47011, Renesas announced three other AnalogPAK devices. The compact SLG47001 and SLG47003 enable precise, cost-effective measurement systems for applications like gas sensors, power meters, servers, and wearables. The SLG47004-A is an automotive Grade 1 qualified device for infotainment, navigation, chassis and body electronics, and automotive display clusters.

The AnalogPAK devices are available now from Renesas and authorized distributors.

Renesas Electronics 


RS232 meets VFC

Thu, 11/14/2024 - 18:18

In the early days of small (e.g., personal) computers, incorporating one or two (or more) RS232 serial ports as general-purpose I/O adaptors was common practice. This “vintage” standard (it is, after all, 64 years old) has since been largely replaced by faster and more power-thrifty serial interface technologies (e.g., USB, I2C, SPI). Nevertheless, RS232 hardware is still widely and inexpensively available, and its bipolar signaling levels remain robustly resistant to noise and cable-length effects. Another useful feature is the bipolar supply voltages (usually ±6 V) generated by typical RS232 adaptors. These can be conveniently tapped via standard RS232 output signals (e.g., RTS and TXD) and used to power attached analog and digital circuitry.

Wow the engineering world with your unique design: Design Ideas Submission Guide

This design idea (DI) does exactly that by using asynchronous RS232 to power and count pulses from a simple 10 kHz voltage-to-frequency converter (VFC). Getting only one bit of info from each 10-bit serial character may seem inefficient (because it is), but in this case it’s a convenient ploy to add a simple analog input that can be located remotely from the computer with less fear of noise pickup.

See Figure 1 for the mind meld of RS232 with VFC.

Figure 1 A 10-kHz VFC works with and is powered by a generic RS232 port.

Much of the core of Figure 1 was previously described in “Voltage inverter design idea transmogrifies into a 1MHz VFC.”

One difference between that older DI and this one, other than the 100x lower maximum frequency, is the use of a metal-gate CMOS device (CD4053B) for U1 instead of a silicon-gate HC4053. That change is necessitated by the higher operating voltage (12 V versus 5 V) used here. Other design elements remain (roughly) similar.

Input current (Vin/R1) charges C3, which causes transconductance amplifier Q1/Q2 to sink an increasing current from Schmitt-trigger oscillator capacitor C1. This raises the U1c oscillator frequency and, with it, the current pumped by U1a, U1b, and C2. Because the pump current has negative polarity, it closes a feedback loop that continuously forces the pump current to equal the input current:
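Expressed as a charge balance (an inference from the description above rather than the DI’s own expression, with Vref denoting the charge-pump reference voltage, nominally the 5 V from U2), the loop settles where

$$ \frac{V_{in}}{R_1} = F\,C_2\,V_{ref} \quad\Rightarrow\quad F = \frac{V_{in}}{R_1\,C_2\,V_{ref}} $$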

Note that R1 can be chosen to implement almost any desired Vin full-scale factor.

D3 provides the ramp reset pulse that initiates each oscillator cycle and also sets the duration of the RS232 ST start pulse to ~10 µs as illustrated in Figure 2. Note that this combination of time constants and baud rate gives ~11% overrange headroom.

Figure 2 Each VFC pulse generates a properly formatted, but empty, RS232 character.

The ratio of R5/R3 is chosen to balance Q2/Q1 collector currents when Vin and Fpump equal zero, thus minimizing Vin zero offset. Consequently, linearity and zero offset errors are less than 1% of full-scale.

However, this leaves open the possibility of unacceptable scale-factor error if the +6-V logic power rail isn’t accurate enough, which it’s very unlikely to be. If we want a precision voltage reference that’s independent of +6-V instability, the inexpensive, accurate 5 V provided by U2, C5, and R7 will fill the bill.

However, if the application involves conversion of a ratiometric signal proportional to +6 V, such as that provided by a resistive sensor (e.g., a thermistor), then U2 and friends should be omitted, U1 pin 2 connected to -6 V, and C2 reduced to 1.6 nF. Then:
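Under the same charge-balance reasoning (an assumed extension, since the pump reference is now presumably the -6-V rail itself), the scale factor becomes

$$ F = \frac{V_{in}}{R_1\,C_2\,V_{ee}}, \qquad V_{in} \propto V_{ee}, $$

so the supply-voltage term cancels and the conversion is ratiometric.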

 Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.


Applying AI to RF design

Thu, 11/14/2024 - 17:34
Introduction: Engineered systems

 

Human inventions, namely engineered systems, have long relied on applying fundamental discoveries in physics and mathematics (e.g., Maxwell’s equations, quantum mechanics, information theory) to achieve a particular goal. However, engineered systems are rapidly growing in complexity and size; the functionality of their subcomponents may be nonlinear, and starting from first principles becomes restrictive. MathWorks has steadily laid a foundation in modeling and simulation with MATLAB/Simulink for over four decades and now assists designers of these complex, multivariate systems with AI.

Houman Zarrinkoub, MathWorks principal product manager for wireless communications, discussed with EDN the growing role AI plays in the design of next-generation wireless systems.

MATLAB’s toolboxes for wireless design

“So you’re building a wireless system and, at a basic level, you have a transmission back and forth between, for example, a base station and a cell phone,” said Houman. “This is known as a link.”

To begin, Houman explains that, at a very basic level, engineers are building the two subsystems (transmitter and receiver) that “talk” to each other over this link. There are the digital components that will sample, quantize, and encode the data, and the RF components that will generate the RF signal, upconvert, downconvert, mix, amplify, filter, etc. MATLAB has an array of established toolboxes, such as the 5G Toolbox, LTE Toolbox, WLAN Toolbox, and Satellite Communications Toolbox, that already assist with the design, simulation, and verification of all types of wireless signals, from 5G NR and LTE to DVB-S2/S2X/RCS2 and GPS waveforms. This extends to subcomponents with tools including (but not limited to) the RF Toolbox, Antenna Toolbox, and Phased Array System Toolbox.

Now, with AI, two main design approaches are used, leveraging the Deep Learning Toolbox, Reinforcement Learning Toolbox, and Statistics and Machine Learning Toolbox.

AI workflow 

The workflow includes four basic steps, further highlighted in Figure 1:

  1. Data generation
  2. AI training
  3. Integration, simulation, and testing
  4. Deployment and implementation

These basic steps are necessary for implementing any deep learning model in an application, but how does it assist with RF and wireless design? 

Figure 1 MATLAB workflow for implementing AI in wireless system design. Source: MathWorks

 Data generation: Making a representative dataset 

It goes without saying that data generation is necessary to properly train the neural network. For wireless systems, data can either be obtained from a real system by capturing signals with an antenna or generated synthetically on a computer.

The robustness of this data is critical. “The keyword is making a representative dataset; if we’re designing for a wireless system that’s operating at 5 GHz and we have data at 2.4 GHz, it’s useless.” To ensure the system is well designed, the data must be varied, covering signal performance in both normal operating conditions and more extreme ones. “You usually don’t have data for outliers that are 2 or 3 standard deviations from the mean, but if you don’t have this data your system will fail when things shift out of the comfort zone,” explains Houman.

Houman expands on this by saying it is best for designers to have the best of both worlds: a real-world, hardware-generated dataset as well as a synthetic dataset to include some of those outliers. “With hardware, there are severe limitations where you don’t have time to create all that data. So, we have the Wireless Waveform Generator App that allows you to generate, verify, and analyze your synthetic data so that you can augment your dataset for training.” As shown in Figure 2, the app allows designers to select waveform types and introduce impairments for more realistic signal scenarios.

Figure 2 Wireless Waveform Generator application allows users to generate wireless signals and introduce impairments. Source: MathWorks

Transfer learning: Signal discrimination 

AI training is then performed either to train a model built from scratch or to fine-tune an established model (e.g., AlexNet, GoogLeNet) for your particular task; the latter is known as transfer learning. As shown in Figure 3, pretrained networks can be reused in a particular wireless application by adding new layers that allow the model to be fine-tuned to the specific dataset. “You take the wireless signal and, in a one-to-one manner, transform it into an image,” said Houman when discussing how this concept was applied to wireless design.

Figure 3 Pretrained networks can be reused in a particular wireless application by adding new layers that allow the model to be more fine-tuned towards the specific dataset. Source: MathWorks

“Every wireless signal is IQ samples; we can transform them into an image by taking a spectrogram, which is a representation of the signal in time and frequency,” said Houman. “We have applied this concept to wireless to discriminate between friend or foe, or between 5G and 4G signals.” Figure 4 shows the test of a trained system that used an established semantic segmentation network (e.g., ResNet-18, MobileNet-v2, or ResNet-50). The test used over-the-air (OTA) signal captures with a software-defined radio (SDR). Houman elaborated, “So you send a signal and you classify, and based on that classification, you have multiple binary decisions. For example, if it’s 4G, do this; if it’s 5G, do this; if it’s none of the above, do this. So the system is optimized by the reliable classification of the type of signal the system is encountering.”

Figure 4 Labeled spectrogram outputted by a trained wireless system to discriminate between LTE and 5G signals. Source: MathWorks
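A minimal sketch of that IQ-to-spectrogram step, in Python rather than MATLAB (the sample rate, FFT sizes, and the random placeholder capture are assumptions for illustration), might look like this:

```python
import numpy as np
from scipy import signal

fs = 61.44e6                      # sample rate in Hz (hypothetical SDR setting)
n = 1_000_000                     # number of captured IQ samples

# Placeholder IQ data: complex noise standing in for a real capture
iq = (np.random.randn(n) + 1j * np.random.randn(n)).astype(np.complex64)

# Time-frequency representation of the complex baseband signal
f, t, sxx = signal.spectrogram(
    iq, fs=fs, nperseg=1024, noverlap=512,
    return_onesided=False,        # keep negative frequencies for complex input
)

# Convert to a dB-scaled "image" (frequency x time) that a pretrained CNN can ingest
img = 10 * np.log10(np.fft.fftshift(np.abs(sxx), axes=0) + 1e-12)
print(img.shape)                  # (frequency bins, time frames)
```

The resulting matrix can then be rescaled to whatever input size the chosen pretrained image network expects before transfer learning.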

Building deep learning models from scratch

Supervised learning: Modulation classification with a built CNN

Modulation classification can also be accomplished with the Deep Learning Toolbox where users generate synthetic, channel-impaired waveforms for a dataset. This dataset is used to train a convolutional neural network (CNN) and tested with hardware such as SDR with OTA signals (Figure 5). 

Figure 5 Output confusion matrix of a CNN trained to classify signals by modulation type with test data using SDR. Source: MathWorks

“With signal discrimination, you’re using more classical classification, so you don’t need to do a lot of work developing those trained networks. However, since modulation and encoding are not found on the spectrogram, most people will then choose to develop their models from scratch,” said Houman. “In this approach, designers will use MATLAB with Python and implement classical building blocks such as the rectified linear unit (ReLU) to build out layers in their neural network.” He continues, “Ultimately a neural network is built on components; you either connect them in parallel or serially, and you have a network. Each network element has a gain, and training will adjust the gain of each network element until you converge on the right answer.” He mentions that, while a less direct path is taken to obtain the modulation type, systems that combine these approaches gain a much deeper understanding of the signals they encounter and can make much more informed decisions.
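As a hedged illustration of the from-scratch route (written in PyTorch rather than MATLAB; the class count, frame length, and layer sizes are assumptions, not MathWorks’ reference design), a small 1-D CNN built from Conv/ReLU blocks can classify raw IQ frames by modulation type:

```python
import torch
import torch.nn as nn

NUM_CLASSES = 8          # hypothetical label set: BPSK, QPSK, 8PSK, 16QAM, ...
FRAME_LEN = 1024         # IQ samples per training frame

class ModClassifier(nn.Module):
    """Small 1-D CNN over raw IQ frames (2 input channels: I and Q)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, NUM_CLASSES)

    def forward(self, x):                        # x: (batch, 2, FRAME_LEN)
        return self.classifier(self.features(x).flatten(1))

model = ModClassifier()
frames = torch.randn(16, 2, FRAME_LEN)           # stand-in for channel-impaired IQ data
logits = model(frames)                           # (16, NUM_CLASSES); train with nn.CrossEntropyLoss
```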

Beam selection and DPD with NN

Using the same principles, neural networks (NNs) can be customized within the MATLAB environment to solve inherently nonlinear problems, such as applying digital predistortion (DPD) to offset the nonlinearities in power amplifiers (PAs). “DPD is a classical example of a nonlinear problem. In wireless communications, you send a signal, and the signal deteriorates in strength as it leaves the source. Now, you have to amplify the signal so that it can be received, but no amplifier is linear, or has constant gain across its bandwidth.” DPD attempts to deal with the inevitable signal distortions that occur when a PA operates within its compression region by observing the PA’s output signal and using it as feedback to alter the input signal so that the PA output is closer to ideal. “So the problem is inherently nonlinear, and many solutions have been proposed, but AI comes along and produces superior performance compared to other solutions for this amplification process,” said Houman. The MATLAB approach trains a fully connected NN as the inverse of the PA and uses it for DPD (NN-DPD); the NN-DPD is then tested using a real PA and compared with a cross-term memory polynomial DPD.
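The inverse-model idea can be sketched as follows (an illustrative Python/PyTorch toy, not the MATLAB NN-DPD reference design; the polynomial PA model, layer sizes, and training settings are assumptions): a small fully connected network learns to map the PA’s output back to its input, then sits ahead of the PA as the predistorter:

```python
import torch
import torch.nn as nn

def pa(x):
    """Toy memoryless PA: gain compression proportional to instantaneous power (assumed model)."""
    power = (x ** 2).sum(dim=1, keepdim=True)    # |I + jQ|^2 per sample
    return x * (1.0 - 0.1 * power)

x = torch.randn(10_000, 2) * 0.5                 # PA input samples as (I, Q) pairs
y = pa(x)                                        # observed (distorted) PA output

# Learn the PA's inverse (output -> input), then reuse it as a predistorter
dpd = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                    nn.Linear(32, 32), nn.ReLU(),
                    nn.Linear(32, 2))
opt = torch.optim.Adam(dpd.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(dpd(y), x)     # indirect-learning step
    loss.backward()
    opt.step()

desired = torch.randn(1_000, 2) * 0.5
linearized = pa(dpd(desired))                    # predistort, then amplify
print(nn.functional.mse_loss(linearized, desired).item(),
      nn.functional.mse_loss(pa(desired), desired).item())  # NN-DPD error vs. raw PA error
```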

Houman goes on to describe another application for NN-based wireless design (Figure 6): “Deep learning also has a lot of applications in 5G and 6G where it combines sensing and communications. We have a lot of deep learning examples where different algorithms are used to position and localize users so you can send data that is dedicated to the user.” The use case mentioned in particular related to integrated sensing and communication (ISAC). “When I was young and programming 2G and 3G systems, the philosophy of communication was that I would send the signal in all directions, and if your receiver got that information, good for it; it can now decode the transmission. If the receiver couldn’t do that, tough luck,” said Houman. “With 5G and especially 6G, the expectations have risen; you have to have knowledge of where your users are and beamform towards them. If your beamwidth is too big, you lose energy. But if your beamwidth is too narrow and your users move their head, you miss them. So you have to constantly adapt.” In this solution, instead of using GPS signals, lidar, or roadside camera images, the base station essentially becomes the GPS locator, sending signals to locate users and, based upon the returned signals, directing communications to them.

Figure 6 The training phase and testing phase of a beam management solution that uses the 3D coordinates of the receiver. Source: MathWorks

Unsupervised learning: The autoencoder path for FEC 

Alternatively, engineers can follow the autoencoder path to help build a system from the ground up. These deep learning networks consist of an encoder and a decoder and are trained to replicate their input data to, for instance, remove noise and detect anomalies in signal data. The benefit of this approach is that it is unsupervised and does not require labeled input data for training. 

“One of the major aspects of 5G and 6G is forward error correction (FEC) where, when I send something to you, whether it’s voice or video, and whether the channel is clean or noisy, the receiver should be able to handle it,” said Houman. FEC is a technique that adds redundant data to a message to minimize the number of errors in the received information for a given channel (Figure 7). “With the wireless autoencoder, you can automatically add redundancy and redo modulation and channel coding based on estimations of the channel condition, all unsupervised.”

Figure 7 A wireless autoencoder system ultimately restricts the encoded symbols to an effective coding rate for the channel. Source: MathWorks

Reinforcement learning: Cybersecurity and cognitive radar 

“With deep learning and machine learning, the process of giving inputs and receiving an output will all be performed offline,” explained Houman. “With deep learning, you’ve come up with a solution and you simply apply that solution in a real system.” He goes on to explain how reinforcement learning must be applied to a real system at the start. “Give me the data and I will update that brain constantly.”

Customers in the defense industry will leverage Reinforcement Learning Toolbox to, for example, assess all the vulnerabilities of their 5G systems and update their cybersecurity accordingly. “Based upon the vulnerability, they will devise techniques to overcome the accessibility of the unfriendly agent to the system.” Other applications might include cognitive radar where cognitive spectrum management (CSM) would use reinforcement learning to analyze patterns in the spectrum in real-time and predict future spectrum usage based upon previous and real-time data. 

Integration, simulation, and testing

As shown in many of these examples, the key to the third step in the workflow is to create a unique dataset to test the effectiveness of the wireless system. “If you use the same dataset to train and test, you’re cheating! Of course it will match. You have to take a set that’s never been seen during training but is still viable and representative and use that for testing,” explains Houman, “That way, there is a confidence that different environments can be handled by the system with the training we did in the first step of data gathering and training.” The Wireless Waveform Generator App is meant to assist with both these stages. 

Deployment and implementation

The MathWorks approach to deployment works with engineers at the language level, with a more vendor-agnostic approach to hardware. “We have a lot of products that turn popular languages into MATLAB code, to train and test the algorithm, and then turn that back into the code that will go into the hardware. For FPGAs and ASICs, for example, the language is Verilog or VHDL. We have a tool called HDL Coder that will take the MATLAB and Simulink model and turn that into low-level VHDL code to go into any hardware.”

Addressing the downsides of AI with the digital twin

The natural conclusion of the interview was understanding the “catch” of using AI to improve wireless systems. “AI takes the input, trains the model, and produces an output. In that process, it merges all the system components into one. All those gains, they change together, so it becomes an opaque system and you lose insight into how the system is working,” said Houman. While this process has considerable benefit, troubleshooting issues can be much more challenging than with solutions that leverage the traditional, iterative approach, where isolating problems might be simpler. “So, in MathWorks, we are working on creating a digital twin of every engineered system, be it a car, an airplane, a spacecraft, or a base station.” Houman describes this as striking a balance between the traditional engineered-system approach and an AI-based engineering solution: “Any engineer can compare their design to the all-encompassing digital twin and quickly identify where their problem is. That way, we have the optimization of AI, plus the explainability of model-based systems. You build a system completely in your computer before one molecule goes into the real world.”

Aalyia Shaukat, associate editor at EDN, has worked in the engineering publishing industry for over 8 years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.


Shift in electronic systems design reshaping EDA tools integration

Thu, 11/14/2024 - 16:48

A new systems design toolset aims to create a unified user experience for board designers while adding cloud connectivity and artificial intelligence (AI) capabilities, which will enable engineers to adapt to rapidly changing design and manufacturing environments.

To create this highly integrated and multidisciplinary tool, Siemens EDA has combined its Xpedition system design software with Hyperlynx verification software and PADS Professional software for integrated PCB design. The solution also includes Siemens’ Teamcenter software for product lifecycle management and NX software for product engineering.

Figure 1 The new systems design toolset enhances integration by combining multiple design tools. Source: Siemens EDA

Evolution of systems design

Systems design—spanning from IC design and manufacturing to IC packaging and board design to embedded software—has been constantly evolving over the past decades, and so have toolsets that serve these vital tenets of electronics.

Take IC design, for instance, which is now carried out by multiple outfits. Then, there are PCBs that unified early on with large vendors offering front-to-back solutions. PCBs then entered an era of multidiscipline design, needing more design automation. Finally, we entered the modern era that encompasses cloud computing and AI.

Siemens EDA’s next-generation electronic systems design software takes an integrated and multidisciplinary approach to cater to this changing landscape. David Wiens, project manager for Xpedition at Siemens EDA, told EDN that this solution took five years to develop through extensive beta cycles with designers to validate generational shifts in technologies. It’s built on five pillars: Intuition, AI, cloud, integration, and security.

Figure 2 The next-generation electronic system design aims to deliver an intuitive, AI-enhanced, cloud-connected, integrated, and secure solution to empower engineers and organizations in today’s dynamic environment. Source: Siemens EDA

But before explaining these fundamental tenets of electronic systems design, he told EDN what drove this initiative in the first place.

  1. Workforce in transition

A lot of engineers are retiring, and their expertise is going with them, creating a large gap for young engineers. Then there is this notion that companies haven’t been hiring for a decade or so and that there is a shortage of new engineers. “The highly intuitive tools in this systems design solution aim to overcome talent shortages and enable engineers to quickly adapt with minimal learning curves,” Wiens said.

  2. Mass electrification

Mass electrification leads to a higher number of design starts, faster design cycles, and increased product complexity. “This new toolset adds predictive engineering and new support assistance using AI to streamline and optimize the design workflows,” said Wiens.

  3. Geopolitical and supply chain volatility

Wiens said that the COVID era introduced some supply chain challenges while some challenges existed before that due to geopolitical tensions. “COVID just magnified them.”

The new electronic systems design solution aims to address these challenges head-on by providing a seamless flow of data and information throughout the product lifecycle using digital threads. It facilitates a unified user experience that combines cloud connectivity and AI capabilities to drive innovation in electronic systems design.

Below is a closer look at the key building blocks of this unified solution and how it can help engineers to tackle challenges head-on.

  1. Intuitive

The new toolset boosts productivity with a modern user experience; design engineers can start with a simple user interface and switch to a complex user interface later. “We have taken technologies from multiple acquisitions and heritages,” said Wiens. “Each of those had a unique user experience, which made it difficult for engineers to move from one environment to the next.” So, Siemens unified that under a common platform, which allows engineers to move seamlessly from one tool to another.

Figure 3 The new toolset allows engineers to seamlessly move from one tool to another without rework. Source: Siemens EDA

  2. AI infusion

AI infusion accelerates design optimization and automation. For instance, with predictive AI, design engineers can leverage simulation engines from a broader Siemens portfolio. “The goal is to expand engineering resources without necessarily expanding the human capital and compute power,” Wiens said.

Figure 4 The infusion of AI improves design process efficiency and leverages the knowledge of experienced engineers in the systems design environment. Source: Siemens EDA

Here, features like chat assistance systems allow engineers to ask natural language questions. “We have a natural language datasheet query, which returns the results in natural language, making it much simpler to research components,” he added.

  3. Cloud connected

While cloud-connected tools enable engineers to collaborate seamlessly across the ecosystem, PCB tools remain largely desktop-based. In small- to mid-sized enterprises, some engineers are shifting to cloud-based tools, but large enterprises have been reluctant to move to the cloud due to perceived shortcomings in security and performance.

Figure 5 Cloud connectivity facilitates collaboration across the value chain and provides access to specialized services and resources. Source: Siemens EDA

“Our desktop tools are primary offerings in a simulation environment, but we can perform managed cloud deployment for design engineers,” said Wiens. “When designers are collaborating with outside engineering teams, they often struggle collaborating with partners. We offer a common viewing environment residing in the cloud.”

  4. Integration

Integration helps break down silos between different teams and tools in systems design. Otherwise, design engineers must spend a lot of time in rework to create the full model when moving from one design tool to another. The same thing happens between design and manufacturing cycles; engineers must rebuild the model in the manufacturing phase.

The new systems design toolset leverages digital threads across multiple domains. “We have enhanced integration with this release to optimize the flow between tools so engineers can control the ins and outs of data,” Wiens said.

  5. Security

Siemens, which maintains partnerships with leading cloud providers to ensure robust security measures, manages access control based on user role, permission, and location in this systems design toolset. The next-generation systems design offers rigid data access restrictions that can be configured and geo-located.

“It provides engineers with visibility on how data is managed at any stage in design,” said Wiens. “It also ensures the protection of critical design IP.” More importantly, security aspects like monitoring and reporting behavior and anomalies lower the entry barriers for tools being placed in cloud environments.

Need for highly integrated toolsets

The electronics design landscape is constantly changing, and complexity is on the rise. This calls for more integrated solutions that make collaboration between engineering teams easier and safer. These new toolsets must also take advantage of new technologies like AI and cloud computing.

As the electronics design landscape evolves, that’s how toolsets can adapt to changing realities such as organizational flexibility and time to productivity.


Taking a peek inside an infrared thermometer

Wed, 11/13/2024 - 17:46

Back in September, within the introduction to my teardown of a pulse oximeter, I wrote:

One upside, for lack of a better word, to my health setback [editor note: a recent, and to the best of my knowledge first-time, COVID infection over the July 4th holidays] is that it finally prompted me to put into motion a longstanding plan to do a few pandemic-themed teardowns.

That pulse oximeter piece was the kickoff to the series; this one, a dissection of an infrared thermometer, is the second (and the wrap-up, unless I subsequently think of something else!). These devices gained pervasive use during the peak period of the COVID-19 pandemic, courtesy of their non-contact subject measurement capabilities. As Wikipedia puts it:

At times of epidemics of diseases causing fever…infrared thermometers have been used to check arriving travelers for fever without causing harmful transmissions among the tested. In 2020 when [the] COVID-19 pandemic hit the world, infrared thermometers were used to measure people’s temperature and deny them entry to potential transmission sites if they showed signs of fever. Public health authorities such as the FDA in United States published rules to assure accuracy and consistency among the infrared thermometers.

And how do they work? Wikipedia again, with an introductory summary:

An infrared thermometer is a thermometer which infers temperature from a portion of the thermal radiation sometimes called black-body radiation emitted by the object being measured. They are sometimes called laser thermometers as a laser is used to help aim the thermometer, or non-contact thermometers or temperature guns, to describe the device’s ability to measure temperature from a distance. By knowing the amount of infrared energy emitted by the object and its emissivity, the object’s temperature can often be determined within a certain range of its actual temperature. Infrared thermometers are a subset of devices known as “thermal radiation thermometers”.

 Sometimes, especially near ambient temperatures, readings may be subject to error due to the reflection of radiation from a hotter body—even the person holding the instrument—rather than radiated by the object being measured, and to an incorrectly assumed emissivity. The design essentially consists of a lens to focus the infrared thermal radiation on to a detector, which converts the radiant power to an electrical signal that can be displayed in units of temperature after being compensated for ambient temperature. This permits temperature measurement from a distance without contact with the object to be measured. A non-contact infrared thermometer is useful for measuring temperature under circumstances where thermocouples or other probe-type sensors cannot be used or do not produce accurate data for a variety of reasons.
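In more quantitative terms (general radiometry, not from the article or the excerpt above), a thermopile detector responds to the net radiant exchange between the target and its own ambient temperature; for a gray body of emissivity ε viewed over area A, the Stefan-Boltzmann law gives approximately

$$ P_{net} \approx \varepsilon\,\sigma\,A\left(T_{obj}^{4} - T_{amb}^{4}\right), \qquad \sigma \approx 5.67\times10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}, $$

which the instrument inverts (using an assumed emissivity) to report the object temperature. This is why a wrong emissivity setting, or reflections from hotter surroundings, skews the reading.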

Today’s victim, like my replacement for the precursor pulse oximeter teardown subject, came to me via a May 2024 Meh promotion. A two-pack had set me back only $10, believe it or not (I wonder what they would have cost me in 2020?). One entered our home health care gear stable, while the other will be disassembled here. I’ll start with some stock photos:

Now for some as-usual teardown-opening box shots:

Speaking of opening:

The contents include our patient (of course), a set of AA batteries (which I’ll press into reuse service elsewhere):

and a couple of slivers of literature:

Now for the star of the show, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes (the Meh product page claims that the infrared thermometer is “small” and “lightweight” but isn’t any more specific than that). Front:

They really don’t think that sticker’s going to deter me, do they?

Back:

A closeup of the “LCD Backlit display with 32 record memory”, with a translucent usage-caution sticker from-factory stuck on top of it:

Right (as defined from the user’s perspective) side, showcasing the three UI control buttons:

Left:

revealing the product name (Safe-Mate LX-26E, also sold under the Visiomed brand name) and operating range (2-5 cm). The label also taught me something new; the batteries commonly referred to as “AAs” are officially known as “LR6s”:

Top:

Another sticker closeup:

And bottom, showcasing the aforementioned-batteries compartment “door”:

Flipping it open reveals a promising screw-head pathway inside:

although initial subsequent left-and-right half separation attempts were incomplete in results:

That said, they did prompt the battery-compartment door to fall out:

I decided to pause my unhelpful curses and search for other screw heads. Nothing here:

or here:

Here either, although I did gain a fuller look at the switches (complete with intriguing connections-to-insides traces) and their rubberized cover:

A-ha!

That’s more like it (complete with a trigger fly-away):

I was now able to remove the cap surrounding the infrared receiver module:

Followed by the module itself, along with the PCB it was (at the moment) connected to:

Some standalone shots of the module and its now-separated ribbon cable:

And of the other now-disconnected ribbon cable, this one leading to the trifecta of switches on the outside:

Here’s the front of the PCB, both in with-battery-compartment overview:

and closeup perspectives, the latter more clearly revealing its constituent components, such as the trigger switch toward the bottom, an IC from Chipsea Technologies labeled “2012p1a” toward the top, and another labeled:

CHIPSEA
18M88-LQ
2020C1A

at the top (reader insights into the identities of either or both of these ICs are greatly appreciated):

And here’s the piezo buzzer-dominant, comparatively bland (at least at first glance) backside:

which became much more interesting after I lifted away the “LCD Backlit display with 32 record memory”, revealing a more complex-PCB underside than I’d originally expected:

That’s all I’ve got for today. What did you find surprising, interesting and/or potentially underwhelming about the design? Let me (and your fellow readers) know in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


Simple 5-component oscillator works below 0.8V

Tue, 11/12/2024 - 17:15

Often, one needs a simple low voltage sinusoidal oscillator with good amplitude and frequency stability and low harmonic distortion; here, the Peltz oscillator becomes a viable candidate. Please see the Peltz oscillator Analog Devices Wiki page here and a discussion on my Peltz oscillator here.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Shown in Figure 1, the Peltz oscillator requires only two transistors, one capacitor, one inductor and one resistor. In this configuration, the output voltage is a ground referenced, direct coupled, low distortion sinewave, swinging above and below ground at ~1 Vbe, while operating from a low negative supply voltage (AAA battery).

Figure 1 Basic configuration of a Peltz oscillator with a low component count yielding a low distortion sinewave output.

The oscillating frequency is shown:
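This is presumably the standard parallel-LC resonance relation (consistent with the component values and the ~16-kHz output cited below):

$$ f_{osc} \approx \frac{1}{2\pi\sqrt{LC}} = \frac{1}{2\pi\sqrt{(470\ \mu\mathrm{H})(0.22\ \mu\mathrm{F})}} \approx 15.7\ \mathrm{kHz} $$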

A simplified analysis shows the minimum negative supply voltage (Vee) is:

Where Vt is the Thermal Voltage (kT/q), Z is the total impedance “seen” at the parallel resonant LC network, Vbe is the base emitter voltage of Q1 [Vt*ln(Ic/Is)], and Is is the transistor saturation current.

Here’s an example with a pair of 2N3904s, a 470 µH inductor, 0.22 µF capacitor, and a 510 Ω bias resistor, powered from a single AAA cell (the oscillator actually works at ~0.7 VDC), producing a stable, low noise ~16 kHz sinewave as shown in Figure 2, Figure 3, and Figure 4.

Figure 2 Peltz oscillator output with a clean 16 kHz sinewave.

Figure 3 Spectral view of sinewave showing fundamental as well as 2nd and 3rd harmonics.

Figure 4 Zoomed in view of ~16 kHz sinewave.

Note that the output frequency, peak-to-peak amplitude, and overall waveform quality are not bad for a 5-element oscillator!

Michael A Wyatt is a life member with IEEE and has continued to enjoy electronics ever since his childhood. Mike has a long career spanning Honeywell, Northrop Grumman, Insyte/ITT/Exelis/Harris, ViaSat and retiring (semi) with Wyatt Labs. During his career he accumulated 32 US Patents and in the past published a few EDN Articles including Best Idea of the Year in 1989.


The whole-house LAN: Achilles-heel alternatives, tradeoffs, and plans

Mon, 11/11/2024 - 17:28

I mentioned recently that for the third time in roughly a decade, a subset of the electronics suite in my residence had gotten zapped by a close-proximity lightning storm. Although this follow-up writeup, one of a planned series, was already proposed to (and approved by) Aalyia at the time, subsequent earlier-post comments exchanges with a couple of readers were equal parts informative and validating on this one’s topical relevance.

First off, here’s what reader Thinking_J had to say:

Only 3 times in a 10-year span, in the area SW of Colorado Springs?

Brian, you appear to be lucky.

https://www.cbsnews.com/colorado/news/lightning-dangers-traditional-beliefs-researchers-colorado-update-ideas-storms-denver/

My response:

Southwest of Golden (and Denver, for that matter), not Colorado Springs, but yes, the broader area is active each year’s “monsoon season”:

https://climate.colostate.edu/co_nam.html

The “monsoon season” I was referencing historically runs from mid-June through the end of September. Storms normally fire up beginning mid-afternoon and can continue overnight and into the next morning. As an example of what they look like, I grabbed a precipitation-plot screenshot during a subsequent storm this year; I live in Genesee, explicitly noted on the map:

Wild, huh?

Then there were the in-depth thoughts of reader “bdcst”, in a posting only the first half of which I’ve republished here for brevity (that said, I encourage you to read the post in its entirety at the original-published location):

Hi Brian,

Several things come to mind. First is, if you think it was EMP, then how will moving your copper indoors make a difference unless you live in a Faraday cage shielded home? The best way to prevent lightning induced surges from entering your equipment via your network connection, is to go to a fiber drop from your ISP, cable or telecom carrier. You could also change over to shielded CAT-6 Ethernet cable.

 At my broadcast tower sites, it’s the incoming copper, from the tower, or telephone system or from the power line itself that brings lighting induced current indoors. Even with decent suppressors on all incoming copper, the only way to dissipate most of the differential voltage from the large current spikes is with near zero ohms bonding between every piece of equipment and to a single very low impedance earth ground point. All metal surfaces in my buildings are grounded by large diameter flexible copper wire, even the metal entrance door is bonded to it bypassing the resistance of its hinges.

 When I built my home at the end of a long rural power line, I experienced odd failures during electrical storms. I built my own power line suppressor with the largest GE MOV’s I could find. That eliminated my lightning issues. Of course, surge suppressors must have very low resistance path to ground to be effective. If you can’t get a fiber drop for your data, then do install several layers of Ethernet suppressors between the incoming line and your home. And do install at least a small AC line suppressor in place of a two-pole circuit breaker in your main panel, preferably at the top of the panel where the main circuit breaker resides.

My response, several aspects of which I’ll elaborate on in this writeup:

Thanks everso for your detailed comments and suggestions. Unfortunately, fiber broadband isn’t an option here; I actually feel fortunate (given its rural status) to have Gbit coax courtesy of Comcast:

 https://www.edn.com/a-quest-for-faster-upstream-bandwidth/

 Regarding internal-vs-external wired Ethernet spans, I don’t know why, but the only times I’ve had Ethernet-connected devices fry (excluding coax and HDMI, which also have been problematic in the past) are related to those (multi-port switches, to be precise) on one or both ends of an external-traversed Ethernet span. Fully internal Ethernet connections appear to be immune. The home has cedar siding and of course there’s also insulation in the walls and ceiling, so perhaps that (along with incremental air gaps) in sum provides sufficient protection?

 Your question regarding Ethernet suppressors ties nicely into one of the themes of an upcoming planned blog post. I’ve done only rudimentary research so far, but from what I’ve uncovered to date, they tend to be either:

  1. Inexpensive but basically ineffective or
  2. Incredibly expensive, but then again, replacement plasma TVs and such are pricey too (http://www.edn.com/electronics-blogs/brians-brain/4435969/lightning-strike-becomes-emp-weapon-)

Plus, I’m always concerned about bandwidth degradation that may result from the added intermediary circuitry (same goes for coax). Any specific suggestions you have would be greatly appreciated.

 Thanks again for writing!

Before continuing, an overview of my home network will be first-time informative for some and act as a memory-refresher to long-time readers for whom I’ve already touched on various aspects. Mine’s a two-story home, with the furnace room, roughly in the middle of the lower level, acting as the networking nexus. Comcast-served coax enters there from the outside and, after routing through my cable modem and router, feeds into an eight-port GbE switch. From there, I’ve been predominantly leveraging Ethernet runs originally laid by the prior owner.

In one direction, Cat 5 (I’m assuming, given its age, versus a newer generation) first routes through the interstitial space between the two levels of the house to the far wall of the family room next to the furnace room, connecting to another 8-port GbE switch. At that point, another Ethernet span exits the house, is tacked to the cedar wood exterior and runs to the upper-level living room at one end of the house, where it re-enters and connects to another 8-port GbE switch. In the opposite direction, another Cat 5 span exits the house at the furnace room and routes outside to the upper-level master bedroom at the other end of the house, where it re-enters and connects to a five-port GbE switch. Although the internal-only Ethernet is seemingly comprised of conventional unshielded cable, judging from its flexibility, I was reminded via examination in prep for tackling this writeup that the external wiring is definitely shielded, not that this did me any protective good (unsurprisingly, sadly, given that externally-routed shielded coax cable spans from room to room have similarly still proven vulnerable in the past).

Normally, there are four Wi-Fi nodes in operation, in a mesh configuration comprised of Google Nest Wifi routers:

  1. The router, in the furnace room downstairs
  2. A mesh point in the master bedroom upstairs at one end of the house
  3. Another in the living room upstairs at the other end of the house
  4. And one more downstairs, in an office directly below the living room

Why routers in the latter three cases, versus less expensive access points? In the Google Nest Wifi generation, versus with the Google OnHub and Google Wifi precursors (as well as the Google Nest Wifi Pro successor, ironically), access points are only wirelessly accessible; they don’t offer Ethernet connectivity as an option for among other things creating a wired “mesh” backbone (you’ll soon see why such a backbone is desirable). Plus, Google Nest Wifi Routers’ Wi-Fi subsystems are more robust; AC2200 MU-MIMO with 4×4 on 5 GHz and 2×2 on 2.4GHz, versus only AC1200 MU-MIMO Wi-Fi 2×2 on both 2.4 GHz and 5 GHz for the Google Nest Wifi Point. And the Point’s inclusion of a speaker is a don’t-care (more accurate: a detriment) to me.

I’ve augmented the already-existing Ethernet wiring since we bought the house with two other notable additional spans, both internal-only. One runs from the furnace room to my office directly above it (I did end up replacing the original incomplete-cable addition with a fully GbE-compliant successor). The other goes through the wall between the family room and the earlier-mentioned office beyond it (and below the living room), providing it with robust Wi-Fi coverage. As you’ll soon see, this particular AP ended up being a key (albeit imperfect) player in my current monsoon-season workaround.

Speaking of workarounds, what are my solution options, given that the outdoor-routed Ethernet cable is already shielded? Perhaps the easiest option would be to try installing Ethernet surge protectors at each end of the two outdoors-dominant spans. Here, for example are some that sell for $9.99 a pair at Amazon (and were discounted to $7.99 a pair during the recent Prime Fall Days promotion; I actually placed an order but then canceled it after I read the fine print):

As the inset image shows and the following teardown image (conveniently supplied by the manufacturer) further details, they basically just consist of a bunch of diodes:

This one’s twice as expensive, albeit still quite inexpensive, and adds an earth ground strap:

Again, nothing but diodes (the cluster of four on each end are M7s; I can’t read the markings on the middle two), though:

Problem #1: diving into the fine print (hence my earlier-mentioned order cancellation), you’ll find that they only support passing 100-Mbit Ethernet through, not GbE. And problem #2: judging from the user comments published on both products, they don’t seem to work, at least at the atmospheric-electricity intensities my residence sees.

Ok, then, if my observation passes muster that internal-only Ethernet spans, even unshielded ones, are seemingly EMI-immune, why not run replacement cabling from the furnace room to both upper-level ends of the house through the interstitial space between the levels, as well as between the inner and outer walls? That may indeed be what I end up biting the bullet and doing, but the necessary navigation around (and/or through) enroute joists, ductwork and other obstacles is not something that I’m relishing, fiscally or otherwise. In-advance is always preferable to after-the-fact when it comes to such things, after all! Ironically, right before sitting down to start writing this post, I skimmed through the final print edition of Sound & Vision magazine, which included a great writeup by home installer (and long-time column contributor) John Sciacca. There’s a fundamentally solid reason why he wrote the following wise words!

A few of my biggest tips: Prewire for everything (a wire you aren’t using today might be a lifesaver tomorrow!), leave a conduit if possible…

What about MoCA (coax-based networking) or powerline networking? No thanks. As I’ve already mentioned, the existing external-routed coax wiring has proven vulnerable to close-proximity lightning, too. If I’m going to run internally routed cable instead, I’ll just do Ethernet. And after several decades’ worth of dealing with powerline’s unfulfilled promise due to its struggles to traverse multiple circuit breakers and phases, including at this house (which has two breaker boxes, believe it or not, the original one in the garage and a newer supplement in the furnace room), along with injected noise from furnaces, air conditioning units, hair dryers, innumerable wall warts and the like, I’ve frankly collected more than enough scars already. But speaking of breaker boxes, by the way, I’ve already implemented one of the earlier documented suggestions from reader “bdcst”, courtesy of an electrician visit a few years back:

The final option, which I did try (with interesting results), involved disconnecting both ends of the exterior-routed Cat 5 spans and instead relying solely on wireless backbones for the mesh access points upstairs at both ends of the house. As setup for the results to come, I’ll first share what the wired-only connectivity looks like between the furnace room and my office directly above it. I’m still relying predominantly on my legacy, now-obsolete (per Windows 8’s demise) Windows Media Center-based cable TV-distribution scheme, which has a convenient built-in Network Tuner facility accessible via any of the Xbox 360s acting as Windows Media Extenders:

In preparation for my external-Ethernet severing experiment, to maximize the robustness of the resultant wireless backbone connectivity to both ends of the house, I installed a fifth Google Nest Wifi router-as-access point in the office. It indeed resulted in reasonably robust, albeit more erratic, bandwidth between the router and the access point in the living room, first as reported in the Google Home app:

and then by Windows Media Center’s Network Tuner:

I occasionally experienced brief A/V dropouts and freezes with this specific configuration. More notably, the Windows Media Center UI was more sluggish than before, especially in its response to remote control button presses (fast-forward and -rewind attempts were particularly maddening). Most disconcerting, however, was the fact that my wife’s iPhone now frequently lost network connectivity after she traversed from one level of the house to the other, until she toggled it into and then back out of Airplane Mode.

One of the downsides of mesh networks is that, because all nodes broadcast the exact same SSID (in various Google Wifi product families’ case), or the same multi-SSID suite for other mesh setups that use different names for the 2.4 GHz, 5 GHz, and 6 GHz beacons, it’s difficult (especially with Google’s elementary Home utility) to figure out exactly what node you’re connected to at any point in time. I hypothesized that her iPhone was stubbornly clinging to the now-unusable Wi-Fi node she was using before versus switching to the now-stronger signal of a different node in her destination location. Regardless, once I re-disconnected the additional access point in my office, her phone’s robust roaming behavior returned:

But as the above screenshot alludes to, I ended up with other problems in exchange. Note, specifically, the now-weak backbone connectivity reported by the living room node (although, curiously, connectivity between the master bedroom and furnace room remained solid even now over Wi-Fi). The mesh access point in the living room was, I suspect, now wirelessly connected to the one in the office below it, ironically a shorter node-to-node distance than before, but passing through the interstitial space between the levels. And directly between the two nodes in that interstitial space is a big hunk of metal ductwork. Note, too, that the Google Nest Wifi system is based on Wi-Fi 5 (802.11ac) technology, and that the wireless backbone is specifically implemented using the 5 GHz band, which is higher-bandwidth than its 2.4 GHz counterpart but also inherently shorter-range. The result was predictable:

The experiment wasn’t a total waste, though. On a hunch, I tried using the Xfinity Stream app on my Roku to view Comcast-sourced content instead. The delivery mechanism here is completely different: streamed over the Internet and originating from Comcast’s server, versus solely over my LAN from the mini PC source (in all cases, whether live, time-shifted or fully pre-recorded, originating at my Comcast coax TV feed via a SiliconDust HDHomeRun Prime CableCARD intermediary). I wasn’t direct-connecting to premises Wi-Fi from the Roku; instead, I kept it wired Ethernet-connected to the multi-port switch as before, leveraging the now-wireless-backbone-connected access point also connected to the switch there instead. And, as a pleasant surprise to me, I consistently received solid streaming delivery.

What’s changed? Let’s look first at the video codec leveraged. The WTV “wrapper” (container) format now in use by Windows Media Center supersedes the DVR-MS precursor with expanded support for both legacy MPEG-2 and newer MPEG-4 video. And indeed, although a perusal of a recent recorded-show file in Windows Explorer’s File Properties option was fruitless (the audio and video codec sections were blank), pulling the file into VLC Media Player and examining it there proved more enlightening. There were two embedded audio tracks, one English and the other Spanish, both Dolby AC3-encoded. And the video was encoded using H.264, i.e., MPEG-4 AVC (Part 10). Interestingly, again according to VLC, it was formatted at 1280×720 pixel resolution and a 59.940060 fps frame rate. And the bitrate varied over time, confirming VBR encoding, with input and demuxed stream bitrates both spiking to >8,000 kb/sec peaks.

The good news here, from a Windows Media Center standpoint, is two-fold: it’s not still using archaic MPEG-2 as I’d feared beforehand might have been the case, and the MPEG-4 profile in use is reasonably advanced. The bad news, however, is that it’s only using AVC, and at a high frame rate (therefore bitrate) to boot. Conversely, Roku players also support the more advanced HEVC and VP9 video codec formats (alas, I have no idea what’s being used in this case). And, because the content is streamed directly from Comcast’s server, the Roku and server can communicate to adaptively adjust resolution, frame rate, compression level and other bitrate-related variables, maximizing playback quality as WAN and LAN bandwidth dynamically vary.

For now, given that monsoon season is (supposedly, at least) over until next summer, I’ve reconnected the external Cat 5 spans. And it’s nice to know that when the “thunderbolt and lightning, very, very frightening” return, I can always temporarily sever the external Ethernet again, relying on my Rokus’ Xfinity Stream apps instead. That said, I also plan to eventually try out newer Wi-Fi technology, to further test the hypothesis that “wires beat wireless every time”. Nearing 3,000 words, I’ll save more details on that for another post to come. And until then, I as-always welcome your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post The whole-house LAN: Achilles-heel alternatives, tradeoffs, and plans appeared first on EDN.

Integrating digital isolators in smart home devices

Mon, 11/11/2024 - 09:37

Smart home devices are becoming increasingly popular with many households adopting smart thermostats, lighting systems, security systems, and home entertainment systems. These devices provide automation and wireless control of household functions, allowing users to monitor and control their homes from a mobile app or digital interface.

But despite the advantages of smart home devices, users also face an increased risk of electrical malfunctions that may result in electric shock, fire, or direct damage to the device. This article discusses the importance of integrating digital isolators in smart home devices to ensure safety and reliability.

Definition of a digital isolator

A digital isolator is an electronic device that provides electrical isolation between two circuits while allowing digital signals to pass between the circuits. By using electromagnetic or capacitive coupling, the digital isolator transmits data across the isolation barrier without requiring a direct electrical connection.

Digital isolators are often used in applications where electrical isolation is necessary to protect sensitive circuitry from high voltages, noise, or other hazards. They can be used in power supplies, motor control, medical devices, industrial automation, and other applications where safety and reliability are critical. Figure 1 shows a capacitive isolation diagram.

Figure 1 The capacitive isolation diagram includes the top electrode, bottom electrode, and wire bonds. Source: Monolithic Power Systems

Understanding isolation rating

The required isolation voltage is an important consideration when choosing a digital isolator, since it impacts the total solution cost. Isolators generally have one of two isolation classifications: basic isolation or reinforced isolation.

  • Basic isolation: This provides sufficient insulation material to protect a person or device from electrical harm; however, the risk of electrical malfunctions is still present if the isolation barrier is broken. Some devices use two layers of basic isolation as a protective measure in the case of the first layer breaking; this is called double isolation.
  • Reinforced isolation: This is equivalent to dual basic isolation and is implemented by strengthening the isolation barrier to decrease the chances of the barrier breaking compared to basic isolation.

Figure 2 shows the three types of isolation: basic isolation, double isolation, and reinforced isolation.

Figure 2 The three types of isolation are basic isolation, double isolation, and reinforced isolation. Source: Monolithic Power Systems

Here, creepage distance is the shortest distance between two conductive elements on opposite sides of the isolation barrier and is measured along the isolation surface. Clearance distance is a common parameter that is similar to creepage distance but is measured along a direct path through the air.

As a result, creepage distance is always equal to or greater than clearance distance, but both are heavily dependent on the IC’s package structure. Parameters such as pin-to-pin distance and body width correlate strongly with the isolation voltage of isolated components. Wider pin-to-pin spacing and wider package bodies support higher isolation voltages, but they also take up more board space and increase overall system cost.

Depending on the system design and isolation voltage requirements, different isolation ratings are available, typically corresponding to the package type. Small outline integrated circuit (SOIC) packages often have 1.27-mm pin-to-pin spacing and are available in narrow body (3.9-mm package width) or wide body (7.5-mm package width) formats.

The wide-body package is commonly used to meet reinforced 5-kVRMS requirements, while the narrow-body package is used in applications where the maximum withstand isolation voltage is 3 kVRMS. In some cases, extra-wide-body packages with >14.5-mm creepage are used in certain 800-V+ systems to meet the creepage and clearance requirements.
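As a rough illustration of how these package tradeoffs can be captured early in a design flow, the short Python sketch below maps a required withstand voltage and creepage to a candidate SOIC body style using only the example figures quoted above; the function and its thresholds are illustrative assumptions, not vendor selection rules, and any real choice must be verified against the device datasheet and the governing safety standard:

def pick_soic_body(v_iso_kvrms, creepage_mm, reinforced=False):
    # Thresholds below are taken from the article's examples and are
    # illustrative only; verify against actual datasheet ratings.
    if creepage_mm > 8.0 or v_iso_kvrms > 5.0:
        return "extra-wide body (>14.5 mm creepage, 800 V+ systems)"
    if reinforced or v_iso_kvrms > 3.0:
        return "wide body (7.5 mm, reinforced 5 kVRMS class)"
    return "narrow body (3.9 mm, up to ~3 kVRMS withstand)"

# Example: a design needing reinforced isolation at 5 kVRMS
print(pick_soic_body(v_iso_kvrms=5.0, creepage_mm=8.0, reinforced=True))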

Figure 3 shows the clearance and creepage distances in an SOIC package.

Figure 3 Varying clearance and creepage distances are used in SOIC packages to meet design requirements. Source: Monolithic Power Systems

Safety regulations for digital isolators

Safety certifications such as UL 1577, VDE, CSA, and CQC play a pivotal role in ensuring the reliability and safety of digital isolators within various electronic systems. These certifications are described below:

  • UL 1577: This certification, established by Underwriters Laboratories, sets stringent standards to evaluate the insulation and isolation performance of digital isolators. Factors including voltage isolation, leakage current, and insulation resistance are examined to ensure compliance with safety requirements.
  • VDE: This certification is predominantly recognized in Europe and verifies the quality and safety of electrical products, including digital isolators, through rigorous testing methodologies. VDE certification indicates that the isolators meet the specified safety criteria and conform to European safety standards, ensuring their reliability and functionality in diverse applications.
  • Canadian Standards Association (CSA): This certification guarantees that digital isolators adhere to Canadian safety regulations and standards, ensuring their reliability and safety in electronic systems deployed across Canada.
  • China Quality Certification (CQC): This certification, based on the GB 4943.1-2022 standard, emphasizes conformity assessment and quality control for audio/video, information, and communication technology equipment.

These certifications collectively provide manufacturers, engineers, and consumers with the confidence that digital isolators have undergone comprehensive testing and comply with stringent safety measures, contributing to the overall safety and reliability of the electronic devices and systems in which they are utilized across global markets.

Features of digital isolators vs. optocouplers

Traditionally, the isolated transfer of digital signals has been carried out using optocouplers. These devices harness light to transfer signals through the isolation barrier, using an LED and a photosensitive device, typically a phototransistor. The signal on one side of the isolation barrier turns the LED on and off.

When the photons emitted by the LED impact the phototransistor’s base-collector junction, a current is formed in the base and becomes amplified by the transistor’s current gain, transmitting the same digital signal on the opposite side of the isolation barrier.

Digital isolators provide four key features that make them better than optocouplers in smart home devices:

  • Low power consumption: Digital isolators don’t need to drive a light source, and instead use more efficient coupling to transfer the signal. This makes digital isolators ideal for battery-powered devices such as smart thermostats and security sensors.
  • High-speed data transmission: Phototransistors have long response times, which limits the bandwidth of optocouplers. Digital isolators, on the other hand, can transfer signals much faster, enabling fast and reliable communication between smart home devices and control systems.
  • Low electromagnetic interference (EMI): EMI can interfere with electronic devices in the home. By adopting capacitive isolation technology, digital isolators are more immune to EMI.
  • Wide operating temperature range: This makes digital isolators suitable for a variety of robust environments, including outdoor applications.

Types of digital isolation

There are two types of digital isolation that can be implemented: magnetic isolation and capacitive isolation. Magnetic isolation relies on a transformer to transmit signals, while capacitive isolation uses a capacitor to transmit signals across the isolator, which creates an electrical barrier. This barrier prevents direct current flow and provides isolation between the input and output circuits.

Capacitive isolation is the most commonly used method due to several advantages:

  • Higher data rates: Compared to magnetic isolation, the higher data rates of capacitive isolation can be used for applications that require fast and reliable communication.
  • Lower power consumption: Compared to magnetic isolation or optical isolation, capacitive isolation typically consumes less power, making it a more energy-efficient choice for battery-powered devices.
  • Smaller size: Capacitive isolators are typically smaller than magnetic isolators or optical isolators, which eases their integration into small electronic devices.
  • Lower cost: Capacitive isolators are typically less expensive than optical isolators, which rely on expensive optoelectronic components like LEDs and photodiodes.
  • Higher immunity to EMI: Compared to magnetic isolation, capacitive isolation is less susceptible to EMI, resulting in capacitive isolation being a more reliable choice in noisy environments.

Figure 4 shows a comparison of traditional optical isolation compared to magnetic and capacitive isolation.

Figure 4 Capacitive isolation offers key advantages over optical isolation and magnetic isolation. Source: Monolithic Power Systems

The type of digital isolation used depends on the application specifications, such as the required data rate, temperature range, or the level of electrical noise in the environment. Figure 5 shows a block diagram of a smart refrigerator, which requires three digital isolators.

Figure 5 The block diagram of a smart refrigerator that requires three digital isolators. Source: Monolithic Power Systems

Applications of digital isolators in smart home devices

Providing electrical isolation between the control system and appliance circuitry is crucial to ensure user safety as well as to protect smart home devices from outside interference or hacking. Some examples of smart home devices that integrate digital isolators include smart lighting systems, smart security systems, smart thermostats and smart home entertainment systems, which are described in further detail below.

Smart lighting systems

In smart lighting systems, digital isolators provide isolation between the control system and the high-voltage lighting circuitry. This prevents the user from coming into contact with high-voltage electrical signals.

Smart security systems

In smart home security systems, digital isolators provide isolation between the control system and the sensors or cameras. Isolating the sensitive control circuitry from the outside world addresses concerns regarding outside interference to the security system.

Smart thermostats

In smart thermostats, digital isolators provide isolation between the control system and the heating or cooling circuits. This minimizes damage to the control system from high-voltage or high-current signals in the heating or cooling circuits.

Smart home entertainment systems

In smart home entertainment systems like smart speakers, digital isolators provide isolation between the control system and the audio or video circuits. This achieves high-quality playback by preventing interference or noise in the audio or video signals.

George Chen is product marketing manager at Monolithic Power Systems (MPS).

Tomas Hudson is applications engineer at Monolithic Power Systems (MPS).

Related Content


The post Integrating digital isolators in smart home devices appeared first on EDN.

Online tool programs smart sensors for AIoT

Fri, 11/08/2024 - 16:58

ST’s web-based tool, AIoT Craft, simplifies the development and provisioning of node-to-cloud AIoT projects that use the machine-learning core (MLC) of ST’s smart MEMS sensors. Intended for both beginners and seasoned developers, AIoT Craft helps program these sensors to run inference operations.

The MLC enables decision-tree learning models to run directly in the sensor. Operating autonomously without host system involvement, the MLC handles tasks that require AI skills, such as classification and pattern detection.

To ease the creation of decision-tree models, AIoT Craft includes AutoML, which automatically selects optimal attributes, filters, and window size for sensor datasets. This framework also trains the decision tree to run on the MLC and generates the configuration file to deploy the trained model. To provision the IoT project, the gateway can be programmed with the Data Sufficiency Module, intelligently filtering data points for transmission to the cloud.
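For readers unfamiliar with what an in-sensor decision-tree model looks like computationally, here is a generic sketch of training a small tree on windowed accelerometer statistics. It uses scikit-learn and synthetic data purely for illustration; it is not ST’s AutoML flow or the MLC configuration format, and the window length and feature choices are arbitrary assumptions:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def window_features(samples):
    # samples: (N, 3) array of accelerometer readings for one window;
    # per-axis mean, variance, and peak-to-peak serve as candidate attributes
    return np.concatenate([samples.mean(axis=0),
                           samples.var(axis=0),
                           np.ptp(samples, axis=0)])

rng = np.random.default_rng(0)
# Synthetic data: class 0 = "stationary", class 1 = "shaking"
windows = [rng.normal(0.0, 0.05 if label == 0 else 0.8, size=(52, 3))
           for label in (0, 1) for _ in range(50)]
labels = [0] * 50 + [1] * 50

X = np.array([window_features(w) for w in windows])
y = np.array(labels)

# Keep the tree shallow: in-sensor cores support only limited depth/node counts
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict(X[:5]))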

As part of the ST Edge AI Suite, AIoT Craft offers customizable example code for in-sensor AI and sensor-to-cloud solutions. Decision tree algorithms can be tested on a ready-to-use evaluation board connected to the gateway and cloud.

AIoT Craft product page

STMicroelectronics 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Online tool programs smart sensors for AIoT appeared first on EDN.

Cortex-M85 MCUs empower cost-sensitive designs

Fri, 11/08/2024 - 16:58

Renesas has added new devices to its RA8 series of MCUs, combining the same Arm Cortex-M85 core with a streamlined feature set to reduce costs. The RA8E1 and RA8E2 MCU groups are well suited for high-volume applications, including industrial and home automation, mid-end graphics, and consumer products. Both groups employ Arm’s Helium vector extension to boost ML and AI workloads, as well as Arm TrustZone for enhanced security.

The RA8E1 group’s Cortex-M85 core runs at 360 MHz. These microcontrollers provide 1 Mbyte of flash, 544 kbytes of SRAM, and 1 kbyte of standby SRAM. Peripherals include Ethernet, octal SPI, I2C, USB FS, CAN FD, 12-bit ADC, and 12-bit DAC. RA8E1 MCUs come in 100-pin and 144-pin LQFPs.

MCUs in the RA8E2 group boost clock speed to 480 MHz and increase SRAM to 672 kbytes. They also add a 16-bit external memory interface. RA8E2 MCUs are offered in BGA-224 packages.

The RA8E1 and RA8E2 MCUs are available now. Samples can be ordered on the Renesas website or through its distributor network.

RA8E1 product page

RA8E2 product page

Renesas Electronics 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Cortex-M85 MCUs empower cost-sensitive designs appeared first on EDN.

GaN flyback switcher handles 1700 V

Fri, 11/08/2024 - 16:58

With a breakdown voltage of 1700 V, Power Integrations’ IMX2353F GaN switcher easily supports a nominal input voltage of 1000 VDC in a flyback configuration. It also achieves over 90% efficiency, while supplying up to 70 W from three independently regulated outputs.

The IMX2353F, part of the InnoMux-2 family of power supply ICs, is fabricated using the company’s PowiGaN technology. Its high voltage rating makes it possible for GaN devices to replace costly SiC transistors in applications like automotive chargers, solar inverters, three-phase meters, and other industrial power systems.

Like other InnoMux-2 devices, the IMX2353F provides both primary and secondary-side controllers, zero voltage switching without an active clamp, and FluxLink, a safety-rated feedback mechanism. Each of the switcher IC’s three regulated outputs is accurate to within 1%. By independently regulating and protecting each output, the IMX2353F eliminates multiple downstream conversion stages. The device has a switching frequency of 100 kHz and operates over a temperature range of -40°C to +150°C.

Prices for the IMX2353F start at $4.90 each in lots of 10,000 units. Samples and evaluation boards are available from Power Integrations and its authorized distributors.

IMX2353F product page

Power Integrations 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post GaN flyback switcher handles 1700 V appeared first on EDN.

Data center power supply delivers 8.5 kW

Fri, 11/08/2024 - 16:58

Navitas has developed a power supply unit (PSU) that is capable of producing 8.5 kW of output power with 98% efficiency. Aimed at AI and hyperscale data centers, the PSU achieves a power density of 84.6 W/in.3 through the use of both GaN and SiC MOSFETs.

The PSU provides a 54-V output and complies with Open Compute Project (OCP) and Open Rack v3 (ORV3) specifications. It employs the company’s 650-V GaNSafe and 650-V Gen-3 Fast SiC MOSFETs configured in 3-phase interleaved PFC and LLC topologies.

According to Navitas, the shift to a 3-phase topology for both PFC and LLC enables the industry’s lowest ripple current and EMI. The power supply also reduces the number of GaN and SiC devices by 25% compared to the nearest competing system, reducing overall cost.

Specifications for the PSU include an input voltage range of 180 V to 264 V, a standby output voltage of 12 V, and an operating temperature range of -5°C to +45°C. Its hold-up time at 8.5 kW is 10 ms, with 20 ms possible through an extender.
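For a rough sense of what a 10-ms hold-up at full load implies for energy storage, here’s a quick back-of-the-envelope Python sketch; the 400-V nominal and 300-V minimum PFC bus voltages are assumed values for illustration only, not figures published by Navitas:

def holdup_capacitance(power_w, t_hold_s, v_bus, v_min):
    # Bulk capacitance needed to ride through t_hold_s at constant power
    # as the bus droops from v_bus to v_min: C = 2*E / (v_bus^2 - v_min^2)
    energy_j = power_w * t_hold_s
    return 2.0 * energy_j / (v_bus**2 - v_min**2)

c = holdup_capacitance(8500, 0.010, 400, 300)   # 8.5 kW for 10 ms, assumed bus droop
print("ride-through energy:", 8500 * 0.010, "J")        # 85 J
print(f"implied bulk capacitance: {c * 1e3:.1f} mF")    # ~2.4 mF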

Navitas will debut the 8.5-kW power supply design at electronica 2024.

Navitas Semiconductor 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Data center power supply delivers 8.5 kW appeared first on EDN.

Sensor powers AI detection in slim devices

Fri, 11/08/2024 - 16:57

The OV0TA1B CMOS image sensor from Omnivision fits 3-mm-high modules, ultrathin-bezel notebooks, webcams, and IoT devices. This low-power sensor is well-suited for AI-based human presence detection, facial authentication, and always-on devices. Additionally, it comes in monochrome and infrared versions to complement designs that include a separate RGB camera.

Featuring 2-µm pixels based on the company’s PureCel technology, the OV0TA1B sensor offers high sensitivity and modulation transfer function (MTF) for reliable detection and authentication. It delivers 30 frames/s at a resolution of 440×360 pixels in a compact 1/15.8-in. optical format.

In addition to the higher resolution, the sensor can also operate at a lower resolution of 220×180 pixels, consuming just 2.58 mW at 3 frames/s. The lower resolution and frame rate reduce power consumption, allowing it to meet the needs of energy-sensitive applications.

The OV0TA1B provides programmable controls for frame rate, mirroring, flipping, cropping, and windowing. It supports 10-bit RAW output in normal mode and 8-bit RAW output in always-on mode, along with static defect pixel correction and automatic black level calibration. 

Samples of the OV0TA1B image sensor are available now, with mass production to begin in Q1 2025.

OV0TA1B product page 

Omnivision

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Sensor powers AI detection in slim devices appeared first on EDN.

Laser party lights

Fri, 11/08/2024 - 16:56

This is about a festive event accessory for lots of happy people with good cheer all around, and, in my opinion, a public safety hazard.

We were at a gala party one day with several hundred people in attendance. There were all kinds of food, there was music, and there was a rotating orb in the center of the room that emitted decorative beams of light in constantly changing directions (Figure 1).

Figure 1 The party light at several different moments, emitting beams in several different directions.

Those beams of light were generated by moving lasers. They produced tightly confined light in whatever direction they were being aimed, just like the laser pointers you’ve undoubtedly seen being used in lecture settings.

I was not at ease with that (Figure 2).

Figure 2 A Google search of the potential dangers of a laser pointer.

I kept wondering: if the decorative light beams were to shine directly into someone’s eye, would that person be in danger of visual injury? Might the same question be raised with respect to laser-based price-checking kiosks in stores (Macy’s or King Kullen, for example), or for cashiers at their price-scanning checkout stations?

Everyone at the party went home happy.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

 Related Content


The post Laser party lights appeared first on EDN.

Investigating a vape device

Thu, 11/07/2024 - 16:54

The ever-increasing prevalence of lithium-based batteries in various shapes, sizes, and capacities is creating a so-called “virtuous circle”: lower unit costs and higher unit volumes encourage increased usage (both in brand-new applications and in existing ones, the latter as a replacement for precursor battery technologies), which translates into even lower unit costs and higher unit volumes, and…round and round it goes. Conceptually similarly, usage of e-cigarettes, aka so-called “vape” devices, is rapidly growing, both by new and existing users of cigarettes, cigars, pipes, and chewing tobacco. The latter are often striving to wean themselves off these conventional “nicotine delivery platforms” and away from their well-documented health risks, but aren’t yet able or ready to completely “kick the habit” due to nicotine’s potent addictive characteristics (“vaping” risks aren’t necessarily nonexistent, of course; being newer, however, they’re to date less thoroughly studied and documented).

What’s this all got to do with electronics? “Vapes” are powered by batteries, predominantly lithium-based ones nowadays. Originally, the devices were disposable, with discard-and-replacement tied to when they ran out of often (but not always) nicotine-laced, often-flavored “juice” (which is heated, converting it into an inhalable aerosol). That translated into lots of perfectly good lithium batteries ending up in landfills (unless, that is, the hardware-hacker community succeeded in intercepting and resurrecting them for reuse elsewhere first). Plus, the non-replaceable and inherently charge-“leaky” batteries were a retail shelf-life issue, too.

More recent higher-end “vape” devices’ batteries are capable of being user-recharged, at least. This characteristic, in combination with higher capacity “juice” tanks, allows each device to be used longer than was possible previously. But ultimately, specifically in the absence of a different sort of hardware hacking which I’ll further explore in the coming paragraphs, they’re destined for discard too…which is how I obtained today’s teardown victim (a conventional non-rechargeable “vape” device is also on my teardown pile, if I can figure out how to safely crack it open). Behold the Geek Bar Pulse, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

One side is bland:

The other is also seemingly so:

at least until you flip the “on” switch at the bottom, at which time it turns into something reminiscent of an arcade video game (thankfully not accompanied by sounds):

The two-digit number at the top indicates that the battery is still a bit more than halfway charged. Its two-digit counterpart at the bottom, however, reports that the “juice” tank is empty, therefore explaining why it was discarded and how it subsequently ended up in my hands (not exactly the result of “dumpster diving” on my part, but I did intercept it en route to the trash). To that latter point, and in one of those “in retrospect I shouldn’t have been surprised” moments, when researching the product prior to beginning my dissection, I came across numerous web pages, discussion group threads and videos about both it and alternatives:

with instructions on how to partially disassemble rechargeable “vape” devices, specifically to refill their “juice” tanks with comparatively inexpensive fluid and extend their usable life. Turns out, in fact, that this device’s manufacturer has even implemented a software “kill switch” to prevent such shenanigans, which the community has figured out how to circumvent by activating a hidden hardware switch.

Anyhoo, let’s conclude our series of overview shots with the top, containing the mouthpiece nozzle from which the “vape” aerosol emits:

and the bottom, encompassing the aforementioned power switch, along with the USB-C recharging connector:

That switch, you may have already noticed, is three-position. At one end is “off”. In the middle is normal “on” mode, indicated in part by a briefly visible green ring around the display:

And at the other end is “pulse” mode, which emits more aerosol at the tradeoffs of more quickly draining the battery and “juice” tank, and is differentiated by both a “rocket” symbol in the middle of the display and a briefly illuminated red ring around it:

By the way, the power-off display sequence is entertaining, too:

And now, let’s get inside this thing. No exposed screws, of course, but that transparent side panel seems to be a likely access candidate:

It wasn’t as easy as I’d thought it would be, but a suggestion from the first video shown earlier helped: pop off the switch cover so that the entire internal assembly can then move forward:

I finally got it off, complete with case scratches (and yes, a few minor curses) along the way:

Quick check: yep, still works!

Now to get those insides out. Again, my progress was initially stymied:

until I got the bright (?) idea of popping the mouthpiece off (again, kudos to the creator of that first video shown earlier for the to-do guidance):

That’s better (the tank is starting to come into view)…

Success!

Front view of the insides, which you’ve basically already seen:

Left side, with our first unobstructed view of the tank:

Back (and no, it wasn’t me who did that symbol scribble):

Right side:

Top, showing the aerosol exit port:

And bottom, again reminiscent of a prior perspective photo:

Next, let’s get that tank off:

One of those contacts is obviously, from the color, ground. I’m guessing that one of the others feeds the heating element (although it’s referred to on the manufacturer’s website as being a “dual mesh coil” design, I suspect that “pulse” mode just amps—pun intended—up the output versus actually switching on a second element) and the third routes to a moisture or other sensor to assess how “full” the “tank” is.

To clarify (or maybe not), let’s take the “tank” apart a bit more:

More (left, then right) side views of the remainder of the device, absent the tank:

And now let’s take a closer look at that rubber “foot”, complete with a sponge similar to the one earlier seen with the mouthpiece, that the tank formerly mated with:

Partway through, another check…does it still work?

Yep! Now continuing…

Next, let’s again use the metal “spudger”, this time to unclip the display cover from the chassis:

Note the ring of multicolor LEDs around the circumference of the display (which I’m guessing is OLED-fabricated: thoughts, readers?):

And now let’s strive to get the “guts” completely out of the chassis:

Still working?

Amazing! Let’s next remove the rest of the plastic covering for the three-position switch:

Bending back the little plastic tab at the bottom of each side was essential for further progress:

Mission accomplished!

A few perspectives on the no-longer-captive standalone “guts”:

It couldn’t still be working, after all this abuse, could it?

It could! Last, but not least, let’s get that taped-down battery out the way and see if there’s anything interesting behind it:

That IC at the top of the PCB that does double-duty as the back of the display is the Arm Cortex-M0+- and flash memory-based Puya F030K28. I found a great writeup on the chip, which I commend to your attention, with the following title and subtitle:

The cheapest flash microcontroller you can buy is actually an Arm Cortex-M0+

Puya’s 10-cent PY32 series is complicating the RISC-V narrative and has me doubting I’ll ever reach for an 8-bit part again.

“Clickbait” headlines are often annoying. This one, conversely, is both spot-on and entertaining. And given the ~$20 retail price point and ultimately disposable fate for the device that the SoC powers, $0.10 in volume is a profitability necessity! That said, one nitpick: I’m not sure where Geek Bar came up with the “dual core” claim on its website (not to mention I’m amazed that a “vape” device supplier even promotes its product’s semiconductor attributes at all!).

And with that, one final check; does it still work?

This is one rugged design! Over to you for your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content


The post Investigating a vape device appeared first on EDN.

Test solutions to confront silent data corruption in ICs

Thu, 11/07/2024 - 14:57

As semiconductor design engineers become more aware of silent data corruption (SDC), also called silent data errors (SDE), caused by aging, environmental factors, and other issues, embedded test solutions are emerging to address this subtle but critical challenge. One such solution applies embedded deterministic test patterns in-system via industry-standard APB or AXI bus interfaces.

Siemens EDA’s in-system test controller—designed specifically to work with the company’s Tessent Streaming Scan Network (SSN) software—performs deterministic testing throughout the silicon lifecycle. Tessent In-System Test is built on the success of Siemens’ Tessent MissionMode technology and Tessent SSN software.

Figure 1 The Tessent In-System Test software with embedded on-chip in-system test controller (ISTC) enables the test and diagnosis of semiconductor chips throughout the silicon lifecycle. Source: Siemens EDA

Tessent In-System Test integrates seamlessly with deterministic test patterns generated by Siemens’ Tessent TestKompress software, allowing chip designers to apply embedded deterministic patterns created with Tessent TestKompress and Tessent SSN directly to the in-system test controller.

The resulting deterministic test patterns are applied in-system to provide the highest test quality level within a pre-defined test window. They also offer the ability to change test content as devices mature or age through the silicon lifecycle.

Figure 2 Tessent In-System Test applies high-quality deterministic test patterns for in-system/in-field testing during the lifecycle of a chip. Source: Siemens EDA

These in-system tests with embedded deterministic patterns also support the reuse of existing test infrastructure. They allow IC designers to reuse existing IJTAG- and SSN-based patterns for in-system applications while improving overall chip planning and reducing test time.

“Tessent In-System Test technology allows us to reuse our extensive test infrastructure and patterns already utilized in our manufacturing tests for our data center fleet,” said Dan Trock, senior DFT manager at Amazon Web Services (AWS). “This enables high-quality in-field testing of our data centers. Continuous monitoring of silicon devices throughout their lifecycle helps to ensure AWS customers benefit from infrastructure and services of the highest quality and reliability.”

The availability of solutions like the Tessent In-System Test shows that silent data corruption in ICs is now on designers’ radar and that more solutions are likely to emerge to counter this issue caused by aging and environmental factors.

Related Content


The post Test solutions to confront silent data corruption in ICs appeared first on EDN.

Negative time-constant and PWM program a versatile ADC front end

Wed, 11/06/2024 - 15:57

A variety of analog front-end functions typically assist ADCs to do their jobs. These include instrumentation amplifiers (INA), digitally programmable gain (DPG), and sample and holds (S&H). The circuit in Figure 1 is a bit atypical in merging all three of these functions into a single topology controlled by the timing from a single (PWM) logic signal.

Figure 1 Two generic chips and five passives make a versatile and unconventional ADC front end

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1’s differential INA-style input starts off conventionally, consisting of tera-ohm-impedance, picoamp-bias CMOS followers U1a and U1b. The 916x-family op-amps are pretty good RRIO devices for this job, with sub-mV input offset, 110 dB CMR, 11 MHz gain-bandwidth, 33 V/µs slew rate, and sub-microsecond settling time. They’re also inexpensive. Turning this into a high-CMR differential input, however, is where the circuit starts to get unconventional. The ploy in play is the “flying capacitor”.

During the logic-0 interval of the PWM, both ends of capacitor C are driven, through switches U2a and U2b, by the unity-gain follower amplifiers, with CMR limited only by the amplifiers’ 110 dB (roughly 300,000:1). Unlike a typical precision differential INA input, no critical resistor matching is involved. A minimum interval of a microsecond or two is adequate to accurately capture and settle to the input signal. When the PWM input transitions to logic-1, one end of C is grounded (via U2b) while the other becomes the now single-ended input to U1c (via U2a). Then things get even less conventional.

The connection established from U1c’s output back to C through R1 creates positive feedback that causes the voltage captured on C to multiply exponentially with a (negative) time-constant of:

Tc = (R1 + R_ON(U2)) × C
= (14.3 kΩ + 130 Ω) × 0.001 µF = 14.43 µs
= 10 µs / ln(2)

Due to U1c’s gain of R3 / R2 + 1 = 2, the current through R1 from Vc is:

IR1 = (Vc – 2Vc) / R1 = Vc / (-R1)

Thus, R1 is made effectively negative, which makes the R1C time constant negative. For any time T after the 0-to-1 transition of PWM, the familiar exponential decay:

V(T) = V(0) e^(-T/RC)

becomes, with a negative R1:

V(T) = V(0) e^(-T/(-R1C)) = Vc(0) e^(T / 14.43 µs) = Vc(0) 2^(T / 10 µs)

Therefore, taking U1c’s gain of 2.00:

Vout = Vc(0) × 2^((T / 10 µs) + 1)

For example, if a 7-bit, 1-MHz PWM is used, then each 1-µs increment in the duration of the logic-1 period equates to a gain increment of 2^0.1 = 1.072, or 0.60 dB. So, a PWM logic-1 duration of 100 counts (T = 100 µs) would create a gain of 2^((100 µs / 10 µs) + 1) = 2048, or 66.2 dB. Having 100 available programmable gain settings is a useful and unusual feature.
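To make the gain-programming relationship concrete, here’s a small Python sketch that tabulates gain versus the PWM logic-1 duration using the equations above (the 10-µs doubling constant and the ×2 output-stage gain); it is purely a numerical illustration of the article’s formula:

import math

def gain(t_us, tau_us=10.0, output_gain=2.0):
    # Implements Vout/Vc(0) = output_gain * 2**(t_us / tau_us),
    # i.e., 2^((T / 10 us) + 1) with the article's values
    return output_gain * 2.0 ** (t_us / tau_us)

for t in (1, 10, 50, 100):  # counts of a 7-bit, 1-MHz PWM (1 us per count)
    g = gain(t)
    print(f"T = {t:3d} us   gain = {g:8.1f}   ({20 * math.log10(g):5.1f} dB)")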

Note that R1 and C should be precision, low-tempco types such as metal film and C0G so that the gain/time relationship will be accurate and stable. The 14.43-µs (11-kHz) roll-off of R1C interacts with the 11-MHz gain bandwidth of U1c to provide ~60 dB of closed-loop gain, which is adequate for 10-bit acquisition accuracy.

During this PWM = 1 exponential-gain phase, the U2c switch causes the output capacitor and U1d to track Vc, which is then captured and held as the input to the connected ADC during the subsequent PWM = 0 phase, while the front end of the circuit is acquiring the next sample.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content


The post Negative time-constant and PWM program a versatile ADC front end appeared first on EDN.

Smart TV power-ON aid

Tue, 11/05/2024 - 16:59

Editor’s note: This design idea offers a solution to the common issue of a TV automatically restarting after a power outage. Oftentimes, power is restored when the consumer is not present, and the TV is unknowingly left running. This could happen for several reasons, including the HDMI-CEC settings on the TV or simply an automatic-restore factory setting. While this is a useful setting to have enabled, it would be helpful to ensure the TV will not automatically turn on when power is restored after a power outage.

Introduction

Present-day TV designers take ample care in power-supply design such that the TV comes back “ON” automatically after a power shutdown and subsequent resumption if it was “ON” before the shutdown. If the TV was “OFF” before the power shutdown, it remains “OFF” even after power resumption. This is an excellent feature; one can continue watching TV after a brief shutdown and resumption without any manual intervention.

At times, this can lead to certain inconveniences in case of long power shutdowns. The last time this happened to us, we were watching TV, and the power suddenly went off. At some point during this power outage, we had to leave and came back home after two days. The power may have resumed a few hours after we left. However, as per its design, the TV turned “ON” automatically and was “ON” for two days. This caused discomfort to neighbors until we returned and switched the TV off. What a disturbance to others!

Wow the engineering world with your unique design: Design Ideas Submission Guide

TV power-ON aid

I designed the “TV Power-ON aid” gadget in Figure 1 to overcome this problem. Mains power is fed to the gadget, and power is fed to the TV through it. Once the SW1 switch/button is pushed, the TV receives power as long as mains power is present. If power goes “OFF” and resumes within, say, a half hour, the TV will receive mains power without any manual intervention, just like the original design. If the power resumes after more than a half hour, when it is likely that nobody is near the TV, power will not be extended to the TV automatically; instead, you will have to push button SW1 once to feed power to the TV. This gadget saves us from causing discomfort to the neighbors from an unattended TV blasting shows, a problem anybody can face during a long power outage while away from the house.

Figure 1 TV power-ON aid circuit. Connect mains power to J1. Take power to TV from J2. Connect power supply pins of U2, U3, and U4 to V1. The grounds of these ICs must be connected to the minus side of the battery. These connections are not shown in the image.

Circuit description

The first time, you will have to press momentary push button SW1 once. Relay RL2 is energized and its “NO” contact closes, shorting SW1. Hence, the relay stays “ON” and power is extended to the TV.

When mains power goes off, relay RL2 de-energizes. Through the “NC” contact of relay RL2, the battery (3× 1.5-V alkaline cells) becomes connected to the OFF-delay timer circuit formed by U1 (555), U2 (4011), U3 (4020), and U4 (4017). As soon as the battery is connected, this circuit switches “ON” relay RL1 through MOSFET Q1 (IRLZ44N), whose “NO” contact closes and shorts SW1.

The timer circuit holds this relay for approximately a half hour (the time can be adjusted by suitable selection of C2). If power resumes within this half-hour period, power is fed to the TV automatically, since SW1 is shorted by the RL1 contact. If power resumes after the half hour, RL1 has already de-energized due to the OFF-delay timer action, so its contact across SW1 is open and power is not extended to the TV. This is the safe condition. When you come back, you can push button SW1 to feed power to the TV. The RL1 coil voltage is 5 V, and the RL2 coil voltage is either 230 V AC or 110 V AC as needed.

The U1 circuit works as an oscillator, and the U3 circuit works as a frequency divider. The divided frequency is counted by the U4 circuit. When the time delay reaches around 30 minutes, its Q9 output goes high; hence the U2C output goes “LO” and RL1 de-energizes. Whenever power goes off, the timer circuit receives battery voltage through the “NC” contact of RL2. When power resumes, the battery is disconnected from the timer circuit, thus saving battery power. A lithium-ion battery and charger circuit can be substituted for the alkaline batteries, if desired.
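As a back-of-the-envelope check on the delay chain, the Python sketch below computes the 555 oscillator frequency needed for a given total delay, assuming the 4020 contributes a divide-by-2^14 and the 4017 counts nine divided pulses before Q9 goes high; the actual taps and counts depend on the wiring in Figure 1, so treat these numbers as assumptions rather than the design’s exact values:

def osc_freq_for_delay(delay_s, divider=2**14, counts=9):
    # 555 frequency (Hz) such that `counts` pulses from a divide-by-`divider`
    # stage span `delay_s` seconds; divider/counts are assumed 4020/4017 settings
    return counts * divider / delay_s

f = osc_freq_for_delay(30 * 60)   # ~30-minute OFF delay
print(f"required 555 frequency: {f:.1f} Hz")   # ~82 Hz
# C2 sets this frequency: increasing C2 lowers the frequency and lengthens the delay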

Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.

Related Content


The post Smart TV power-ON aid appeared first on EDN.

Apple’s fall 2024 announcements: SoC and memory upgrade abundance

Mon, 11/04/2024 - 17:33

Two years ago, Apple skipped the second of its typical second-half-of-year events, focusing solely on a September 2022 unveil of new iPhones, smartwatches, and earbuds. Last year I thought the company might pull the same disappearing act, but it ended up rolling out the three-member M3 SoC family, along with an M3-based subset of its systems suite. And this year? Mid-September predictably brought us new iPhones, smartwatches, and earbuds. But as October’s end drew near with nothing but silence from Cupertino (tempting leaks from Russian vloggers aside), I wondered if this year would be a repeat of the prior or a reversion to 2022 form.

Turns out, October 2024 ended up being a conceptual repeat of 2023 after all…well, sort of. Apple waited until last Thursday (as I write these words on Halloween) to cryptically “tweet” (or is that “X”?) an “exciting week of announcements ahead, starting on Monday morning”. And indeed, the company once again unveiled new SoCs and systems (plus software updates) this year. But the news virtually (of course) dribbled out over three days this time, versus dropping at one big (online) event. That said, there were in-depth (albeit, as usual, with a dollop of hype) videos accompanying many of the releases. Without further ado, and in chronological order:

The iPad mini 7

The first unveil in the October 2024 new-product sequence actually happened two weeks ago, when Apple rolled out its latest-generation tiny tablet. That the iPad mini 7 would sooner-or-later appear wasn’t a surprise, although I suppose Apple could have flat-out killed the entire iPad mini line instead, as it’s already done with the iPhone mini. The iPad mini 6 (an example of which I own) is more than four years old at this point, as I’d mentioned back in May. And specifically, it’s based on the archaic A15 Bionic SoC and only embeds 4 GBytes of RAM, both of which are showstoppers to the company’s widespread Apple Intelligence support aspirations.

SoC update (to the A17 Pro) and memory update (to 8 GBytes, reflective of deep learning model storage requirements) aside, the iPad mini 7 pretty much mirrors its predecessor, although it does now also support the Apple Pencil, and the USB-C bandwidth has been boosted to 10 Gbps. Claimed improvements to the “jelly scrolling” display behavior seen by some iPad mini 6 users (truth be told, I never noticed it, although I mostly hold mine in landscape orientation) are muddled by iFixit’s teardown, which suggests the display controller location is unchanged.

And by the way, don’t be overly impressed with the “Pro” qualifier in the SoC’s moniker. That’s the only version of the A17 that’s been announced to date, after all. And even though it’s named the same as the SoC in the iPhone 15 Pros, it’s actually a defeatured variant, with one fewer GPU core (five, to be precise), presumably for yield-maximization reasons.

O/S updates

Speaking of Apple Intelligence, on Monday the company rolled out “.1” updates to all of its devices’ operating systems, among other things adding initial “baby step” AI enhancements. That said, European users can’t access them at the moment, presumably due to the European Union’s privacy and other concerns, which the company hopes to have resolved by next April. And for the rest of us, “.2” versions with even more enabled AI capabilities are scheduled for release this December.

A couple of specific notes on MacOS: first off, in addition to iterating its latest-generation MacOS 15 “Sequoia” O/S, Apple has followed longstanding extended-support tradition by also releasing patches for the most recent two prior-generation operating system major versions, MacOS 13 (“Ventura”) and MacOS 14 (“Sonoma”). And in my case, that’s a great thing, because it turns out I won’t be updating to “Sequoia” any time soon, at least on my internal storage capacity-constrained Mac mini. I’d been excited when I read that MacOS 15.1 betas were enabling the ability to selectively download and install App Store software either to internal storage (as before) or to an external drive (as exists in my particular setup situation).

But as it turns out, that ability is only enabled for App Store-sourced programs 1 GByte or larger in size, which is only relevant to one app I use (Skylum’s Luminar AI which, I learned in the process of writing this piece, has been superseded by Luminar Neo anyway). Plus, the MacOS “Sequoia” upgrade from “Sonoma” is ~12 GBytes, and Apple historically requires twice that available spare capacity before it allows an update attempt to proceed (right now I have not 25 GBytes, but only 7.5 GBytes, free on the internal SSD). And the Apple Intelligence capabilities aren’t relevant to Intel-based systems, anyway. So…nah, at least for now.

By the way, before proceeding with your reading of my piece, I encourage you to watch at least the second promo video above from Apple, followed by the perusal of an analysis (or, if you prefer, take-down) of it by the always hilarious (and, I might add, courageous) Paul Kafasis, co-founder and CEO of longstanding Apple developer Rogue Amoeba Software, whose excellent audio and other applications I’ve mentioned many times before.

The 24” iMac

Apple rolled out some upgraded hardware on Monday, too. The company’s M1-based 24” iMac, unveiled in April 2021, was one of its first Apple Silicon-based systems. Apple then skipped the M2 SoC generation for this particular computing platform, holding out until last November (~2.5 years later), when the M3 successor finally appeared. But it appears that the company’s now picking up the pace, since the M4 version just showed up, less than a year after that. This is also the first M4-based computer from Apple, following in the footsteps of the iPad Pro tablet-based premier M4 hardware surprisingly (at least to me) released in early May. That said, as with the defeatured A17 Pro in the iPad mini 7 mentioned earlier in this writeup, the iMac’s M4 is also “binned”, with only eight-core CPU and GPU clusters in the “base” version, versus the 9- or 10-core CPU and 10-core GPU in the iPad Pros and other systems to come that I’ll mention next.

The M4 24” iMac comes first-time standard with 16 GBytes of base RAM (to my earlier note about the iPad mini’s AI-driven memory capacity update…and as a foreshadow, this won’t be the last time in this coverage that you encounter it!), and (also first-time) offers a nano-texture glass display option. Akin to the Lightning-to-USB-C updates that Apple made to its headphones back in mid-September, the company’s computer peripherals (mouse, keyboard and trackpad) now instead recharge over USB-C, too. The front camera is Center Stage-enhanced this time. And commensurate with the SoC update, the Thunderbolt ports are now gen4-supportive.

The Mac mini and M4 Pro SoC

Tuesday brought a more radical evolution. The latest iteration of the Mac mini is now shaped like a shrunk-down Mac Studio or, if you prefer, a somewhat bigger spin on the Apple TV. The linear dimensions and overall volume are notably altered versus its 2023 precursor, from:

  • Height: 1.41 inches (3.58 cm)
  • Width: 7.75 inches (19.70 cm)
  • Depth: 7.75 inches (19.70 cm)
  • Volume: 84.7 in3 (1,389.4 cm3)

to:

  • Height: 2.0 inches (5.0 cm)
  • Width: 5.0 inches (12.7 cm)
  • Depth: 5.0 inches (12.7 cm)
  • Volume: 50 in3 (806.5 cm3)

Said another way, the “footprint” area is less than half of what it was before, at the tradeoff of nearly 50% increased height. And the weight loss is notable too, from 2.6 pounds (1.18 kg) or 2.8 pounds (1.28 kg) before to 1.5 pounds (0.67 kg) or 1.6 pounds (0.73 kg) now. I also should note that, despite these size and weight decreases, the AC/DC conversion circuitry is still 100% within the computer; Apple hasn’t pulled the “trick” of moving it to a standalone PSU outside. That said, legacy-dimension “stacked” peripherals won’t work anymore:

And the new underside location of the power button is, in a word and IMHO, “weird”.

The two “or” qualifiers in the earlier weight-comparison sentence beg for clarification which will simultaneously be silicon-enlightening. Akin to the earlier iMac conversation, there’s been a SoC generation skip, from the M2 straight to the M4. The early-2023 version of the Mac mini came in both M2 and M2 Pro (which I own) SoC variants. Similarly, while this year’s baseline Mac mini is powered by the M4 (in this case the full 10 CPU/10 GPU core “spin”), a high-end variant containing the brand new M4 Pro SoC is also available. In this particular (Mac mini) case, the CPU and GPU core counts are, respectively, 12 and 16. Memory bandwidth is dramatically boosted, from 120 GBytes/sec with the M4 (where once again, the base memory configuration is 16 GBytes) to 273 GBytes/sec with the M4 Pro. And the M4 Pro variant is also Apple’s first (and only, at least for a day) system that supports latest-generation Thunderbolt 5. Speaking of connectors, by the way, integrated legacy USB-A is no more, though. Bring on the dongles.

MacBook Pros, the M4 Max SoC and a MacBook Air “one more thing”

Last (apparently, we shouldn’t take Apple literally when it promises an “exciting week of announcements ahead”) but definitely not least, we come to Wednesday and the unveil of upgraded 14” and 16” MacBook Pros. The smaller-screen version comes in variants based on M4, M4 Pro and brand-new M4 Max SoC flavors. This time, if you dive into the tech specs, you’ll notice that the M4 Pro is “binned” into two different silicon spins, one (as before in the Mac mini) with a 12-core CPU and 16-core GPU, and a higher-end variant with a 14-core CPU and 20-core GPU. Both M4 Pro versions deliver the same memory bandwidth—273 GBytes/sec—which definitely can’t be said about the high-end M4 Max. Here, at least on the 14” MacBook Pro, you’ll again find a 14-core CPU, although this time it’s mated to a 32-core GPU, and the memory bandwidth further upticks to 410 GBytes/sec.

If you think that’s impressive (or maybe just complicated), wait until you see the 16” MacBook Pro’s variability. There’s no baseline M4 option in this case, only two M4 Pro and two M4 Max variants. Both M4 Pro base “kits” come with the M4 Pro outfitted with a 14-core CPU and 20-core GPU. The third variant includes the aforementioned 14-core CPU/32-core GPU M4 Max. And as for the highest-end M4 Max 16” MacBook Pro? 16 CPU cores. 40 GPU cores. And 546 GBytes/sec of peak memory bandwidth. The mind boggles at the attempt at comprehension.

Speaking (one last time, I promise) of memory, what about that “one more thing” in this section’s subhead? Apple has bumped up (with one notable Walmart-only exception) the baseline memory of its existing MacBook Air mobile computers to 16 GBytes, too, at no price increase from the original 8-GByte MSRPs (or, said another way, delivering a price cut on the 16-GByte configurations), bringing the entire product line to 16 GBytes minimum. I draw two fundamental conclusions:

  • Apple is “betting the farm” on memory-demanding Apple Intelligence, and
  • If I were a DRAM supplier previously worried about filling available fab capacity, I’d be loving life right about now (although, that said, I’m sure that Apple’s purchasing department is putting the screws on your profit margins at the same time).

With that, closing in on 2,000 words, I’ll sign off for now and await your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Apple’s fall 2024 announcements: SoC and memory upgrade abundance appeared first on EDN.
