EDN Network

Voice of the Engineer

Alphawave demos 3-nm UCIe subsystem at 24 Gbps

Thu, 03/14/2024 - 20:44

Alphawave Semi announced the successful bring-up of its first chiplet-connectivity silicon platform on TSMC’s advanced 3-nm process. The silicon-proven Universal Chiplet Interconnect Express (UCIe) subsystem, capable of operating at 24 Gbps per lane, was demonstrated at the recent Chiplet Summit in Santa Clara, CA.

Combining PHY IP and interface controller IP, the UCIe 1.1-compliant subsystem delivers high bandwidth density at very low power and with low latency. Its configurable die-to-die (D2D) controller supports streaming, PCIe/CXL, AXI-4, AXI-S, CXS, and CHI protocols. In addition, the PHY can be configured for TSMC’s chip-on-wafer-on-substrate (CoWoS) and integrated fanout (InFO) packaging technologies. Built-in bit error rate (BER) health monitoring ensures reliable operation.

“Achieving 3nm silicon-proven status for our 24-Gbps UCIe subsystem is a key milestone for Alphawave Semi, as it is an essential piece of our chiplet connectivity platform tailored for hyperscaler and data-infrastructure applications,” said Letizia Giuliano, VP IP Product Marketing at Alphawave Semi. “We are thankful to our TSMC team for their outstanding support, and we look forward to accelerating our mutual customers’ high-performance chiplet-based designs on TSMC’s leading-edge 3nm process.”

Read more about the UCIe subsystem on Alphawave Semi’s blog.


Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Alphawave demos 3-nm UCIe subsystem at 24 Gbps appeared first on EDN.

Cache coherent interconnect IP pre-validated for Armv9 processors

Thu, 03/14/2024 - 11:57

Modern system-on-chip (SoC) designs require multiple interconnects for optimal performance, with cache-coherent and non-coherent interconnects working together. It's therefore imperative that SoCs combine cache-coherent and non-coherent operations efficiently.

While SoC parts like accelerators and peripherals generally don’t require cache coherency, sharing a coherent view of memory and I/O is critical, so the processor has access to the most recent data without having to go off-chip. Arteris claims that its non-coherent FlexWay interconnect IP and Ncore cache coherent network-on-chip (NoC) IP seamlessly work together to offer SoC designers robust architectural flexibility.

The latest version of its cache-coherent NoC IP works with multiple processor IPs, including RISC-V and the next-generation Armv9 Cortex processor. Arteris has pre-validated Armv9 Cortex processor IP for its Ncore cache coherent interconnect IP, and the resulting validation system boots Linux on a multi-cluster Arm design and executes test suites to validate critical cache coherency cases.

It also supports multiple protocols, including CHI-E, with which the latest Armv9 processors are closely associated. Other protocols are CHI-B and ACE coherent, plus ACE-Lite and AXI I/O-coherent interfaces. That allows chip designers to protect their investment in older architectures and evolve in a cost-effective manner.

Ncore can scale across a mix of fully coherent, I/O-coherent, non-coherent, memory and peripheral interfaces using a variety of NoC topologies. Source: Arteris

Next, Ncore cache coherent interconnect IP has achieved ISO 26262 certification from exida, a certification agency specializing in functional safety standards for the automotive industry. Previously, Arteris supported safety features, but designers had to perform their own hardware safety checks as part of their safety process. With this certified Ncore version, the interconnect design is ISO 26262-ready out of the box.

On the software side, Ncore has a very logical user interface flow to accelerate design efficiency. The flow starts at the architectural level with chip specifications and system assembly configuration options. Then, it goes to the automatic mapping process of NoC library elements, followed by optimization and refinement before RTL is generated.

Moreover, compared to the manual approach, Ncore maintains a database of the inputs that SoC architectures require. Once the initial configuration is captured (it can be iterated on later), SoC designers can revisit each segment, making the job of managing SoC specifications straightforward.

Charles Janac, president and CEO of Arteris, says that SoC designers are challenged by the growing complexity resulting from the number of processing elements, multiple protocols, and functional safety requirements of modern electronics. “Our latest release of a production-proven Ncore marks an important milestone toward our ultimate cache coherent interconnect IP vision to connect any processor, using any protocol and topology.”

Ncore supports direct connections for heterogeneous, asymmetric systems and other flexible connectivity options, ensuring adaptability to various applications across automotive, industrial, communications, and enterprise computing markets. Arteris claims that Ncore can save SoC design teams upward of 50 years of engineering effort per project compared to manually generated interconnect solutions.

Related Content



Power delivery for a load that is driven with multiple sources

Wed, 03/13/2024 - 15:26

When a load is driven simultaneously by more than one source, each at its own frequency, the individual load power deliveries from those sources are independent of each other. Whatever power any one of those sources would provide to the load all by itself, that power delivery will not be affected by the presence or absence of the other sources.

Imagine a stack of voltage sources connected in series and feeding into some load resistance, R. It could look something like Figure 1.

Figure 1 A stack of voltage sources connected in series and feeding into some load resistance, R.

Of course, we could have more sources, say four, five, or more, but three is a nice and convenient number. For the sake of discussion, we can meaningfully call the voltage from this stack of three a "triplet". We further say of our triplet that each source delivers its voltage at a different frequency. The frequency of the DC source is, of course, zero.

The instantaneous power delivered to R is the instantaneous voltage at the top of the stack squared and then divided by R. The value of R is not of concern for now, so we will just look at that stack-top voltage which is our triplet.

When we square the triplet expression, we get several components per the following algebra in Figure 2.

Figure 2 Squaring the triplet expression to obtain the instantaneous power delivered to R.
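Written out symbolically (using V0 for the DC source, V1 and V2 for the sine amplitudes, and W1 and W2 for their frequencies, matching the figure's notation as best as it can be reconstructed here), the squared triplet expands as:

```latex
\begin{aligned}
(V_0 + V_1\sin W_1 t + V_2\sin W_2 t)^2
  = {}& V_0^2 + V_1^2\sin^2 W_1 t + V_2^2\sin^2 W_2 t \\
  & + 2V_0 V_1\sin W_1 t + 2V_0 V_2\sin W_2 t \\
  & + 2V_1 V_2\sin W_1 t\,\sin W_2 t
\end{aligned}
```

The first line contains only squared (never negative) terms; the second and third lines contain the oscillating cross terms discussed below.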

As a double check of this algebra, we can choose two deliberately different frequencies, W1 and W2, plot the triplet squared, and separately plot the sum of the derived terms, as shown in Figure 3. We see that the two plots are indeed identical.

Figure 3 Graphical check of squaring the triplet where, by choosing different frequencies (W1 and W2), we can graphically plot the triplet squared and plot the sum of the derived terms. From this, we can visually confirm that they are identical.

Getting back to the algebra, the results of squaring the triplet are shown in Figure 2. The value of the first line is never negative, but the values of the second and third lines swing back and forth between positive and negative.

The energy delivered to R is the integral of the power over time. The integral for the first line is positive which means that R does indeed receive energy from the terms of that first line, but the integrals of the second and third lines each come to zero. As time goes on, the positive swings of the second and third lines giveth while the negative swings of the second and third lines taketh away. Therefore, the integrals of those two lines come to zero which means that those two lines deliver no energy to the load and no energy delivery means no power delivery.

Only the terms of the first line deliver power to R where that power is shown in Figure 4.

Figure 4 The power delivery to R. As shown in the image, only the terms of the first line deliver power (to R).

The upshot of all this is that each voltage source of our triplet delivers as much power to R as it would deliver if it were connected to R all by itself. The power delivered by each source is independent of the presence or absence of each of the other sources.

If we’d had four sources or five sources or more, it wouldn’t matter. As long as their frequencies are not equal, the power deliveries of each source would still be independent of all of the others.

With more sources, the algebra would be more complex, but their independence of each other would remain the case.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).



Non-linear digital filters: Use cases and sample code

Wed, 03/13/2024 - 14:59

Most embedded engineers writing firmware have used some sort of digital filter to clean up data coming from various inputs such as ADCs, sensors with digital outputs, other processors, etc. Many times, the filters used are moving average (boxcar), finite impulse response (FIR), or infinite impulse response (IIR) architectures. These filters are linear in the sense that the output scales linearly with the amplitude of the input: if you double the amplitude of the input stream, the output of the filter will double (ignoring any offset). But there are many non-linear filters (NLFs) that can be very useful in embedded systems, and I would bet that many of you have used a few of them before. An NLF does not necessarily respond in a mathematically linear fashion to its inputs.

Wow the engineering world with your unique design: Design Ideas Submission Guide

In some cases, FIRs and IIRs can struggle with things like impulse noise and burst noise that can cause the output to react in an unacceptable way. Non-linear filters can offer protection to the data stream. Depending on your application they may be used as a stand-alone filter or as a pre-filter before the FIR, IIR, or boxcar filter.

The examples in this article assume one-dimensional streams of signed or unsigned integers (including longs and long longs). Some examples may be applicable to floats, but others are not. Streaming is mentioned because it is assumed the data will be coming continuously from the source, and these filters will process the data and send it out one-for-one, in real time. In other words, we can't just toss bad data; we need to send some value to replace each input. Some examples, though, may allow for oversampling, in which case the filter can decimate the data. For example, a sensor may send data at a rate 10 times faster than needed, and the filter processes the 10 samples before sending out 1 sample to the next stage.

Another assumption for this discussion is that we are designing for small embedded systems that are required to process incoming samples in real time. Small in the sense that we won't have a large amount of memory or a high MIPS rating. For that reason, we will avoid using floats.

So, let’s take a look at some of the non-linear filters and see where they are useful.

Bounds checking filter

This is one you may have used before without considering it a filter. These filters are often referred to as bounds checking, clipping, range checking, limiting, or even sanity checking. We are not referring to pointer checks but to checking of incoming data, or data that has been modified by previous code.

Here is a simple example piece of code:

#define Upper_Limit  1000
#define Lower_Limit -1000

int limit_check(int n)
{
    if (n < Lower_Limit)
        n = Lower_Limit;
    else if (n > Upper_Limit)
        n = Upper_Limit;
    return n;
}

Listing 1

You can see that if the integer n is greater than 1000, then 1000 is returned. If it is less than -1000, then -1000 is returned. If it is between -1000 and 1000, inclusive, the original value of n is returned. This prevents large impulse-noise values from passing through your system, i.e., it filters the data.

When combined with another filter like an FIR, IIR, or temporal filter (described below), the limit value could be scaled based on the running filter value. If an out-of-range sample is detected, based on this moving limit, the bounds checker could return the latest filter output instead of a fixed limit or the suspect sample.
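A sketch of such a moving bounds check (the margin value and the names here are illustrative, not from the article):

```c
/* Reject samples that land outside filterout +/- Margin, substituting
   the running filter's latest output for the suspect sample. */
#define Margin 50

int moving_limit_check(int sample, int filterout)
{
    if (sample > filterout + Margin || sample < filterout - Margin)
        return filterout;   /* outlier: return the filtered value instead */
    return sample;          /* in range: pass the sample through */
}
```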

Some systems may provide some variation of bounds checking as a predefined function call or a macro.

Soft clipping filter

This is related to bounds checking but instead of just limiting a value after a certain level is reached, it slowly starts to back off the output value as the input approaches the maximum or minimum value. This type of soft clipping is often used in audio signal processing applications.

Soft clipping can be accomplished by something like a sigmoid function or a hyperbolic tangent function. The issue here is that these methods require significant processing power and will need fast approximation methods.

Soft clipping typically distorts a good portion of the input to output relationship, so it isn’t appropriate for use in most sensor inputs measuring things like temperatures, circuit voltages, currents, light levels, or other metrological inputs. As such, we will not discuss it further except to say there is lots of information on the web if you search “soft clipping”.

Truncated mean filter

The truncated mean, or trimmed mean, is a method where you take in a set of at least 3 readings, toss the maximum and minimum readings, and average the rest. This is similar to the method you see in some Olympic judging. For embedded projects it is good at removing impulse noise. One method to implement this filter is by sorting, but in most applications on a small processor this may be computationally expensive, so for anything larger than 5 samples I would suggest scanning the list to find the min and max. While running the scan, also calculate the total of all the entries. Lastly, subtract the min and max from the total and divide that value by the number of entries minus 2. Below is an example of such a function executing on an array of input values. At the end of the code there is an optional line to do rounding if needed.

#include <limits.h>

int TruncatedMean(int inputArray[], unsigned int arraySize)
{
    unsigned int i = 0;
    int min = INT_MAX;
    int max = INT_MIN;  // seeding max with 0 would fail on all-negative data
    int total = 0;
    int mean = 0;
    int m = (int)arraySize - 2;

    for (i = 0; i < arraySize; i++) {
        if (inputArray[i] < min)
            min = inputArray[i];
        if (inputArray[i] > max)
            max = inputArray[i];
        total = total + inputArray[i];
    }

    //mean = (total - min - max) / m;
    // The previous line truncates down. To assist in rounding, use the following line
    mean = (total - min - max + m / 2) / m;
    return mean;
}

Listing 2

If you have only 3 values, it may be advantageous, in computation time, to rewrite the C code to remove looping, as seen in this code example for 3 values.

int TruncatedMean_3(int a, int b, int c)
{
    int mean = 0;

    if (((a <= b) && (a >= c)) || ((a <= c) && (a >= b)))
        mean = a;
    else if (((b <= c) && (b >= a)) || ((b <= a) && (b >= c)))
        mean = b;
    else
        mean = c;
    return mean;
}

Listing 3

Note that the truncated mean, using at least 5 samples, can also be implemented to remove more than one maximum and one minimum if desired—which would be good for burst noise. Also note that you can implement this as a sliding function or an oversampling function. A sliding function, like a moving average, slides out the oldest input and inserts the new input and then executes the function again. So, you get one output for every input. Alternatively, an oversampling function takes in an array of values, finds the mean, and then gets a fresh array of new values to process. So, every array of input samples generates only one output and then you’ll need to get a new set of input values before calculating a new mean.
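As a sketch of the sliding variant (the window size and names are illustrative; the truncated-mean body repeats Listing 2's logic on a ring buffer):

```c
#include <limits.h>

#define WINDOW 5

/* Truncated mean over n samples, rounding variant, as in Listing 2 */
static int TruncatedMean(const int a[], int n)
{
    int min = INT_MAX, max = INT_MIN;
    long total = 0;
    for (int i = 0; i < n; i++) {
        if (a[i] < min) min = a[i];
        if (a[i] > max) max = a[i];
        total += a[i];
    }
    return (int)((total - min - max + (n - 2) / 2) / (n - 2));
}

static int window[WINDOW];        /* starts filled with zeros */
static unsigned int head = 0;

/* One output per input: overwrite the oldest sample, recompute the mean */
int sliding_truncated_mean(int newSample)
{
    window[head] = newSample;
    head = (head + 1) % WINDOW;
    return TruncatedMean(window, WINDOW);
}
```

Note that until WINDOW real samples have arrived, the zeros in the buffer bias the output; a production version might pre-fill the window with the first reading.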

Median filtering

A median filter finds the middle value in a set of samples. This may be useful for various types of noise sources. In a large set of samples, the sample array would be sorted and then the middle indexed variable would be read. For example, say we have an array of 7 samples (samples[0 to 6]); we sort them and then the median is samples[3]. Note that sorting could be problematic in a small embedded system due to execution speed, so median filtering should be used judiciously. For 3 samples, the code is the same as the example function "TruncatedMean_3" (Listing 3) above. For larger groups, Listing 4 shows example C code for finding the median. At the bottom of the code, you will see the setting of the median based on the number of samples being odd or even. This is needed because the median for an even number of samples is defined as the average of the middle two values. Depending on your need, you may want to add rounding to this average.

#define numSamples 6
int sample[numSamples] = {5, 4, 3, 3, 1, 0};

int Median(int sample[], int n)
{
    int i = 0;
    int j = 0;
    int temp = 0;
    int median = 0;

    // First sort the array of samples
    for (i = 0; i < n; ++i) {
        for (j = i + 1; j < n; ++j) {
            if (sample[i] > sample[j]) {
                temp = sample[i];
                sample[i] = sample[j];
                sample[j] = temp;
            }
        }
    }

    // If n is odd, take the middle number
    // If n is even, average the middle two
    if ((n & 1) == 0)
        median = (sample[(n / 2) - 1] + sample[n / 2]) / 2;  // Even
    else
        median = sample[n / 2];                              // Odd

    return median;
}

Listing 4

Just as in the truncated mean filter, you can implement this as a sliding function or an oversampling function.

Majority filtering

Majority filters, also referred to as mode filters, extract the value from a set of samples that occurred the most times—majority voting. (This should not be confused with “majority element” which is the value occurring more than the number-of-samples/2.) Listing 5 shows a majority filter for 5 samples.

#define numSamples 5

int Majority(int sample[], int n)
{
    unsigned int count = 0;
    unsigned int oldcount = 0;
    int majoritysample = sample[0];
    int i = 0;
    int j = 0;

    for (i = 0; i < n; i++) {
        count = 0;
        for (j = i; j < n; j++) {
            if (sample[i] == sample[j])
                count++;
        }
        if (count > oldcount) {
            majoritysample = sample[i];
            oldcount = count;
        }
    }
    return majoritysample;
}

Listing 5

The code uses two loops, the first grabbing one element at a time, and the second stepping through the list and counting how many samples match the value indexed by the first loop. The second loop holds on to the highest match count found along the way, and its sample value, until the first loop has stepped through the entire array. If more than one set of sample values has the same count (e.g., {1, 2, 2, 0, 1}: two 2s and two 1s), the one found first will be returned as the majority.

Note that the majority filter may not be applicable to typical embedded data, as the dynamic range (from sensors) of the numbers would normally be 8, 10, 12 bits or greater. If the received samples use a large portion of that dynamic range, the chance that samples from a small set will match is minimal unless the signal being measured is very stable. Due to this dynamic-range issue, a modification of the majority filter may be useful. By doing a right shift on the binary samples, the filter can then match samples close to each other. For example, say we have the numbers (in binary) 00100000, 00100011, 01000011, 00100001. None of these match one another; they are all different. But if we shift them all right by 2 bits, we get 00001000, 00001000, 00010000, 00001000. Now three of them match. We can then average the original values of the matching samples, creating a modified majority value.
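A sketch of this shift-then-match modification (SHIFT = 2 follows the example above; the function name is mine):

```c
/* Majority vote on right-shifted samples, then average the ORIGINAL
   values of the winning group so low bits are not thrown away. */
#define SHIFT 2

int shifted_majority(const int sample[], int n)
{
    int bestCount = 0;
    int bestKey = sample[0] >> SHIFT;

    /* Find the most common shifted value */
    for (int i = 0; i < n; i++) {
        int count = 0;
        for (int j = 0; j < n; j++)
            if ((sample[j] >> SHIFT) == (sample[i] >> SHIFT))
                count++;
        if (count > bestCount) {
            bestCount = count;
            bestKey = sample[i] >> SHIFT;
        }
    }

    /* Average the original values of the matching group */
    long total = 0;
    for (int i = 0; i < n; i++)
        if ((sample[i] >> SHIFT) == bestKey)
            total += sample[i];
    return (int)(total / bestCount);
}
```

Running this on the four values above (decimal 32, 35, 67, 33) picks the group of three that share the shifted key and returns their average, 33.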

Again, as in the truncated mean filter, you can implement this as a sliding function or an oversampling function.

 Temporal filtering

This is a group of filters that react to a signal based more on time than amplitude. This will become clearer in a minute. We will refer to these different temporal filters as mode 1 through mode 4 and we begin with mode 1:

Mode 1 works by comparing the input sample to a starting filtered value ("filterout"); if the sample is greater than the current filtered value, the current filtered value is increased by 1. Similarly, if the sample is less than the current filtered value, the current filtered value is decreased by 1. (The increase/decrease number can also be any reasonable fixed value, e.g., 2, 5, or 10.) The output of this filter is "filterout". You can see that the output will slowly move towards the signal level, so changes are related more to time (number of samples) than to the sample value.

Now, if we get an unwanted impulse, it can only move the output by 1 no matter what the sample's amplitude is. This means burst noise and impulse noise are greatly mitigated. This type of filter is very good for signals that move slowly versus the sample rate. It works very well filtering things like temperature readings by an ADC, especially in an electrically noisy environment. It performed very well on a project I worked on to extract a very slow-moving signal sent on a power line (a very noisy environment, and the signal was about 120 dB below the line voltage). Also, it's very good for creating a dynamic digital reference level such as the DC bias level of an AC signal or a signal controlling a PLL. Listing 6 illustrates the use of the mode 1 temporal filter to smooth the value "filterout".

#define IncDecValue 1

int sample = 0;
int filterout = 512;   // Starting value

// call your "getsample" function here...

if (sample > filterout)
    filterout = filterout + IncDecValue;
else if (sample < filterout)
    filterout = filterout - IncDecValue;

Listing 6

If the sample you are filtering is an int, you may want to do a check to make sure the filtered value doesn’t overflow/underflow and wrap around. If your sample is from a sensor or ADC that is 10 or 12 bits, this is not an issue and no check is needed.
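A sketch of such a guard (the names are mine; the checks only matter when filterout can approach the int limits or when a large increment is used):

```c
#include <limits.h>

#define IncDecValue 1

/* Mode 1 update step with saturation instead of wrap-around */
int filterout_guarded_step(int filterout, int sample)
{
    if (sample > filterout && filterout <= INT_MAX - IncDecValue)
        filterout += IncDecValue;          /* safe to step up */
    else if (sample < filterout && filterout >= INT_MIN + IncDecValue)
        filterout -= IncDecValue;          /* safe to step down */
    return filterout;                      /* saturates at the limits */
}
```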

Mode 2 is the same as Mode 1 but instead of a single value for the increase/decrease number, two or more values are used. One example is using a different increase/decrease value depending on the difference between the sample and the current filtered value (“filterout”). If they are close we use ±1, and if they are far apart we use ±10. This has been successfully used to speed up the startup of a temporal filtered control for a VCO used to match a frequency from a GPS signal.

#define IncDecValueSmall 1
#define IncDecValueBig   10
#define BigDiff          100

int sample = 0;
int filterout = 100;   // Starting value

// call your "getsample" function here...

if (sample > filterout) {
    if ((sample - filterout) > BigDiff)
        filterout = filterout + IncDecValueBig;
    else
        filterout = filterout + IncDecValueSmall;
}
else if (sample < filterout) {
    if ((filterout - sample) > BigDiff)
        filterout = filterout - IncDecValueBig;
    else
        filterout = filterout - IncDecValueSmall;
}

Listing 7

The increment/decrement value could also be a variable that is adjusted by the firmware depending on various internal factors or directly by the user.

Mode 3 is also very similar to Mode 1, but instead of increasing by ±1, if the sample is greater than the current filtered signal, the current filtered signal is increased by a fixed percentage of the difference between the current filtered value and the sample. If the sample is less than the current filtered signal, the current filtered signal is decreased by a percentage. Let's look at an example. Say we start with a current filtered value ("filterout") of 1000 and are using a 10% change value. Then we get a new sample of 1500. This would result in an increase of 10% of 1500 − 1000, or 50, so the current filtered value is now 1050. If the next sample is 500, we would get a new current filtered value of 995 (1050 minus 10% of 1050 − 500).

#define IncPercent 10   // 10%
#define DecPercent 10   // 10%

int sample = 0;
int filterout = 1000;   // Starting value

// call your "getsample" function here...

if (sample > filterout) {
    filterout = filterout + (((sample - filterout) * IncPercent) / 100);
}
else if (sample < filterout) {
    filterout = filterout - (((filterout - sample) * DecPercent) / 100);
}

Listing 8

One thing to watch for is overflow in the multiplications. You may need to use longs when making these calculations. Also note that it may be useful to make "IncPercent" and "DecPercent" variables that can be adjusted via an internal algorithm or by user intervention.

To speed up this code on systems lacking a 1- or 2-cycle divide: instead of scaling IncPercent and DecPercent by 100, scale them by 128 (10% would be ~13). Then the "/100" in the code becomes "/128", which the compiler will optimize to a shift operation.
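A sketch of that shift-friendly variant (the function name is mine; 13/128 is about 10.2%, close enough to 10% for most uses):

```c
/* Mode 3 update step with percentages scaled by 128 so the divide
   compiles to a right shift. Long intermediate avoids overflow. */
#define IncScaled 13   /* ~10% of 128 */
#define DecScaled 13

int mode3_step_shift(int filterout, int sample)
{
    if (sample > filterout)
        filterout += (int)(((long)(sample - filterout) * IncScaled) / 128);
    else if (sample < filterout)
        filterout -= (int)(((long)(filterout - sample) * DecScaled) / 128);
    return filterout;
}
```

With the worked example above, mode3_step_shift(1000, 1500) returns 1050, and feeding 500 into that result returns 995, matching the /100 arithmetic in the text.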

Mode 4 is comparable to Mode 3 except, like Mode 2, there are two or more levels that can come into play depending on the difference between the sample and the current output value (“filterout”). In the code in listing 9, there are two levels.

#define IncPctBig   25   // 25%
#define DecPctBig   25   // 25%
#define IncPctSmall 10   // 10%
#define DecPctSmall 10   // 10%
#define BigDiff     100

int sample = 0;
int filterout = 1000;   // Starting value

// call your "getsample" function here...

if (sample > filterout) {
    if ((sample - filterout) > BigDiff) {
        filterout = filterout + (((sample - filterout) * IncPctBig) / 100);
    }
    else
        filterout = filterout + (((sample - filterout) * IncPctSmall) / 100);
}
else if (sample < filterout) {
    if ((filterout - sample) > BigDiff) {
        filterout = filterout - (((filterout - sample) * DecPctBig) / 100);
    }
    else
        filterout = filterout - (((filterout - sample) * DecPctSmall) / 100);
}

Listing 9

One interesting thought is that temporal filters could also be used to generate statistics on things like impulse and burst noise. They could count the number of occurrences over a period of time and calculate stats such as impulses/sec. This could be done by adding another compare for samples being very much larger, or smaller, than the “filterout” value.
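A sketch of that idea (the threshold and names are illustrative):

```c
/* Count samples that land far outside the temporal filter's current
   output; divide the counter by elapsed seconds for impulses/sec. */
#define NoiseThreshold 200

static unsigned long impulseCount = 0;

void note_impulse(int sample, int filterout)
{
    if (sample > filterout + NoiseThreshold ||
        sample < filterout - NoiseThreshold)
        impulseCount++;
}
```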

Pushbutton filtering

You may not think of this as a filter, but it is a filter for 1-bit symbols. Pushbuttons, switches, and relays have contacts that bounce open and closed for several milliseconds when actuated. If these are not filtered by external hardware (normally an RC filter), you will have to debounce (filter) them in code. There are a multitude of ways to do this, with many discussions and much code on the web, but I think Jack Ganssle's may be the best document: http://www.ganssle.com/debouncing-pt2.htm
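For reference, here is one minimal counter-based debouncer (a sketch of the general idea, not Ganssle's code; the poll rate and count are assumptions):

```c
/* Call at a fixed poll rate (e.g., every 5 ms). The reported state only
   changes after the raw input disagrees with it for DEBOUNCE_COUNT
   consecutive polls, so millisecond-scale bounce is filtered out. */
#define DEBOUNCE_COUNT 4

int debounce(int raw)
{
    static int state = 0;   /* debounced output */
    static int count = 0;

    if (raw == state) {
        count = 0;          /* input agrees with output: reset */
    } else if (++count >= DEBOUNCE_COUNT) {
        state = raw;        /* stable long enough: accept new state */
        count = 0;
    }
    return state;
}
```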

Using NLFs in your own projects

Although this is not a comprehensive list of NLFs, I hope this gives you a flavor of the concept. I'm sure many of you have created unique NLFs for your own projects. Perhaps you would like to share them with others in the comments below.

Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.

 Phoenix Bonicatto is a freelance writer.



Charge pump halves voltage to double current “efficiency”

Tue, 03/12/2024 - 13:50

Capacitor type charge pumps are a well-known, simple, efficient, cost-effective (and therefore popular!) method for inverting and multiplying voltage supply rails. Perhaps less well known, however, is that they also work just as well for dividing voltage (while multiplying current). Figure 1 illustrates a Vout = Vin/2, Iout = Iin*2 example pump built around the venerable xx4053 CMOS triple SPDT switch.

Figure 1 xx4053 based, 100kHz, voltage-halving, current-doubling charge pump.

Here’s how it works.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The R1C1 time constant couples the Vin peak-to-peak square wave found at U1pin14 to U1pin9, creating an Fpump oscillator frequency of (approximately):

Fpump = 1 / (2 * 100k * 68 pF * loge(2)) = 100 kHz

During the Fpump negative half-cycle (U1pin4 = 0), the upper (U1pin14) end of C2 is connected to Vin while the lower (U1pin15) end is connected to Vout, thus charging C2 to:

Vc2 = Vin – Vout

Then, during the following Fpump positive half-cycle (U1pin4 = Vin), the upper end of C2 connects to Vout while the lower end connects to ground, and:

Vc2 = Vout

 This deposits a quantity of charge onto C3 of:

Q+ = C2((Vin – Vout) – Vout) = C2(Vin – 2Vout)

 During the subsequent negative half-cycle, again:

Vc2 = Vin – Vout

Depositing another charge onto C3 of:

Q- = C2 ((Vin – Vout) – Vout) = C2(Vin – 2Vout)

Thus, each full cycle of Fpump deposits a net charge onto C3 of:

Q = Q+ + Q- = 2 * C2(Vin – 2Vout)

 Which, if Iout = 0, forces Q = 0 and therefore:

Vin – 2Vout = 0

Vout = Vin / 2

 However, for the (much more interesting) case of Iout > 0:

Q = Iout / 100 kHz

2 * C2(Vin – 2Vout) = Iout / 100 kHz

Vin – 2Vout = Iout / 100 kHz / 2 / C2

Vout = (Vin – (Iout / 100 kHz / 2 / C2)) / 2

In other words, Vout droops a bit as the output is loaded. This is partly because, for a finite C2, Q is also finite, and partly because the U1a and U1b internal switches have non-zero ON resistances.
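As a numeric sketch of this relationship (C2 = 1 µF is an assumed example value, since the schematic's actual C2 isn't given in the text, and switch ON resistance is ignored):

```c
/* Ideal loaded output of the halving pump:
   Vout = (Vin - Iout/(Fpump * 2 * C2)) / 2
   Units: volts, amps, hertz, farads. */
double vout_ideal(double Vin, double Iout, double Fpump, double C2)
{
    return (Vin - Iout / (Fpump * 2.0 * C2)) / 2.0;
}
```

For Vin = 5 V, Fpump = 100 kHz, and the assumed C2 = 1 µF, an unloaded output sits at exactly Vin/2 = 2.5 V, and a 5-mA load droops it to about 2.49 V; the real circuit droops more because of the switch resistances.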

The combined effect on Vout versus Iout amounts to an effective output impedance of 150 Ω for Vin = 5 V and is plotted in Figure 2, along with the current-multiplication "efficiency". Note that the latter soars past unity because only half of the dollops of C2 charge (the Q+) are drawn from the Vin rail; the Q- are supplied from residual voltage on C2, causing zero additional drain from the rail.

Figure 2 Current-multiplying charge-pump Vout and Iout/Iin current "efficiency" for Vin = 5 V.

So, what is it good for?

Figure 3 suggests one useful application, generating nominally symmetrical +/-Vin/2 bipolar rails from a single positive source with minimal current draw from the source.

Figure 3 Current doubling charge pump plus voltage inverter makes an efficient bipolar rail splitter.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.



Singaporean chiplet specialist plans new fabrication site in Italy

Tue, 03/12/2024 - 13:16

Silicon Box, an advanced-packaging upstart focused on chiplets, is expanding to Italy after setting up a $2 billion packaging facility in Singapore in July 2023. The chiplet specialist has announced plans for another manufacturing facility in northern Italy to serve Europe's existing and planned wafer-fab clusters in France, Germany, and Italy.

While foundries like TSMC and Samsung as well as OSATs such as Amkor and ASE Technology are eyeing opportunities in the chiplet business on the strength of their advanced packaging technologies, what makes Silicon Box prominent is its sole focus on chiplet fabrication and packaging.

For a start, Silicon Box claims to bring effective chiplet-integration capabilities through its Singapore site. It's important to note that while the Singapore-based firm specializes in advanced packaging technologies like other OSATs, it uses panel packaging instead of the standard wafer approach. The panel-level production leads to higher yield and is tailored for chiplet interconnects.

In other words, Silicon Box’s advanced packaging capabilities are not limited to chiplet integration; the company implements advanced interconnection through a proprietary, large-format manufacturing process. Its standardized packaging flow enables the shortest chiplet-to-chiplet interconnections with better thermal and electrical performance.

The new chiplet production facility in Italy is planned as a replica of Silicon Box’s Singapore site.

Silicon Box’s next hop to Italy shows that the upstart, founded by Marvell co-founders in 2021, is confident in the growing demand for chiplets and in its own manufacturing capacity. The firm plans to invest $3.6 billion in the new chiplet manufacturing facility in Northern Italy while creating approximately 1,600 semiconductor jobs.

Moreover, close collaboration with European fabs will boost resilience and cost efficiency for the brand-new chiplet supply chain. A new chiplet packaging facility in Europe also bodes well for an ecosystem that is still in its infancy and needs more focused effort to make chiplet production viable.

Related Content


The post Singaporean chiplet specialist plans new fabrication site in Italy appeared first on EDN.

The TiVo RA2400 Stream 4K: A decent idea, plagued by usage delay

Mon, 03/11/2024 - 15:31

I’ve torn down a lot of streaming multimedia receiver devices over the years, most recently both HD (1080p) and 4K variants of Google’s Chromecast with Google TV. The list of victims also includes a bunch of Rokus in both “box” and “stick” form factors, Amazon’s Fire TV Stick, and an Apple TV (plus a few others with proprietary operating systems, including Google’s own prior-generation Chromecasts). But until today, I’m pretty sure I’ve only taken apart one other “pure” Android TV-based player, that one being the grandfather of them all, Google’s Nexus Player.

What do I mean by “pure”? Consider, for example, that Amazon’s Fire TV devices run (at least for the moment) the Android-derived Fire OS. Google TV, similarly, has an Android TV foundation, on top of which the company has (simplistically speaking) notably revamped the user interface and feature set, innately integrating (for example) Google Home facilities for smart home control purposes, along with making Live TV support front-and-center. But the Nexus Player’s Android TV UI obviously hearkened back to its Android roots; in fact, it originally ran Android 5. And the UI of today’s teardown victim, TiVo’s RA2400 Stream 4K (which, going forward, I’ll refer to as “RA2400” for short) is similarly Android TV-ish in its characteristics.

Why do Android TV-based products like the RA2400 still exist, if Google TV is supposedly a superior successor? Some of the answer, I suspect, has to do with longevity; Android TV has been around for a few months shy of a decade now, whereas the first Google TV-based Chromecast only started shipping in late 2020. And some of it, I also suspect, has to do with higher licensing fees that Google may charge for Google TV versus Android TV, as well as a more restrictive list of licensees. Whatever the reason(s), plenty of Android TV-based devices are still available for sale, which isn’t necessarily a good thing from a consumer standpoint.

Why? Android’s maturity and ubiquity, along with its open-source foundation, make it straightforward to develop apps that run on top of the O/S. This software might unfortunately also include malware and other undesirable code, enabled by unpatched vulnerabilities in out-of-date software stacks (if, say, the manufacturer goes out of business or maybe just decides to redirect its support attention to more lucrative newer products). At minimum, that no-name Android TV box you bought on eBay or elsewhere might be doing bitcoin mining on the side, piggybacking on your network connection and sucking up your electricity in the process. More critically, it might directly act as an attack vector for infecting other devices on your LAN and/or, by opening firewall holes via UPnP or other more malicious means, expose the entire LAN to WAN-based attacks, too.

That’s why, if you’re going to bring an Android TV-based device into your residence, it’s best to go with a “brand name” supplier like, say…well, TiVo, for example. I was admittedly surprised to find out in researching the RA2400 that it’s still available for sale, given that it was introduced in May 2020. Four years is forever in the consumer electronics industry, particularly for a product whose initial reviews called out its sluggish performance. Applications generally get more resource-intensive over time, not less, which would tend to hamper performance ever more as a device ages. But for whatever reason, the RA2400 is still alive and kicking; its advanced-at-the-time 4K resolution support doesn’t hurt.

My unit was a seller-refurbished device sold by VIP Outlet on eBay, which I bought two years (and a few weeks) ago promotion-priced at $21.25 plus tax ($25 minus 15%) solely with a future teardown in mind. That might sound like a good deal, and in fact it is in at least some sense, given that the RA2400 originally was priced at $50. Then again, however, as I wrote these words, new units were selling for $24.99 at both Amazon and Best Buy (in both cases marked down from the usual $39.99, which is what it’s selling for on TiVo’s website right now).

It obviously took a while for the RA2400 to rise to the top of my teardown pile! And in finally cracking open the box a few weeks ago, I found several surprising omissions (hold that thought). The packaging on my refurb, as you can see, was quite spartan.

I’ll save you five more photos’ worth of plain white box panels, instead focusing on the sticker affixed to one side:

Opening the box lid provides our first look at our “patient”:

Underneath, in a bubble wrap baggie, are a male-to-male HDMI-to-mini HDMI cable (which doesn’t seem to come with new units, or to serve a useful purpose for that matter, so I’m guessing this was a VIP Outlet mix-up) and a USB-A to micro-USB power cable (but no wall wart, although it looks from the documentation that one comes with new units, so this was apparently just another “seller refurbished” miss).

And speaking of omissions, can you tell yet what else isn’t in the box that should be? For a clue, take another look at that online documentation, either in HTML or (if you prefer) PDF format. Now take a look at the “stock” photo I showed you earlier. See the remote control there? See it here? No? Exactly. Sigh.

Onward. Freed from its cardboard and clear-plastic constraints, the RA2400 (with dimensions of 77 x 53 x 16 mm) comes into full view, as-usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

On one end is the aforementioned micro-USB power input:

Coming out the other end is the beefy HDMI jack:

Along one side, and admittedly only barely visible in this shot, is a small button which, when held down for a few seconds, enables manual pairing with the remote control (assuming one exists…did I mention that mine was missing its remote control?), and when pressed a bit longer, initiates a factory reset:

(Full disclosure: I’m being a bit harsh about the missing remote control and wall wart, because I never intended to actually use the RA2400, only to take it apart. Frankly, considering all my fancy-pants video gear, the HDMI to mini-HDMI cable I got in exchange was a net sum gain. On the other hand, if I was a normal consumer hoping to use the RA2400, I’d be pretty bummed…)

And along the other side is another interesting advanced feature (for a 2020-era product, at least), a USB-C connection (with USB 2.0-only bandwidth, by the way):

This is not, TiVo’s documentation makes clear, an alternative power input path, nor is it an alternative video output option. It is, instead, a means of hardware-expanding (along with associated software support, dependent in some cases on third-party Android TV drivers and the like) the RA2400 to handle, for example, a wired Ethernet adapter, a game controller, a keyboard or mouse, a storage device, or multiples of these via a USB-C hub intermediary.

Last but not least, here’s a top view:

And a bottom view:

With a closeup of the label revealing, among other things, the FCC ID (2AOVU-IPA1104HDW):

Before diving in, one more thing. The penny in the prior photos obscured, I suspect, just how funky the RA2400’s enclosure is. Check out the unique asymmetry!

Oh well. I was impressed. You all probably just want me to get to the getting-inside.

Peeling off the label from the bottom:

unfortunately didn’t expose any convenient screw heads to view:

but it did draw my attention to something I’d previously overlooked; the thin seam running along the underside periphery:

Betcha know what comes next, yes?

Bingo!

The PCB now also pops right out of the other half of the case:

I’m quite certain you’ve already noticed the Faraday cages on both sides of the PCB. And anyone who’s read one of my teardowns before definitely knows what comes next. Let’s flip the PCB back over to its backside first (as I’ve mentioned before, since these things are designed to dangle from the back of a TV there really is no consistent “top” or “bottom”, but my convention is that “top” is associated with the TiVo logo impression side of the now-removed case, with “bottom” in proximity to the now-removed label side of the now-removed case…phew):

Note how pristine both the cage and PCB still are. Are you proud of my atypical disassembly-force restraint and deft technique?

The shiny IC in the right section, labeled AP6398S, is a SIP module implementing both Bluetooth and Wi-Fi functions, based on Broadcom’s BCM43598. I suspect that at least some of you have already noticed the PCB-embedded antennae in the upper right and lower right-and-left quadrants of the PCB, yes? And in the sorta-center section are, at top, Amlogic’s S905Y2 application processor (can I just say for the record that rarely do I see a system’s “guts” documented so thoroughly in a consumer-intended product page? Here’s even more detail), comprising, among other things, a quad-core 1.8 GHz Arm Cortex-A53 CPU core and an Arm Mali-G31 MP2 GPU core, and below it, a Nanya NT5AD512M16A4 1 GByte DDR4-2666 SDRAM.

Flip the PCB back over, deftly pop the top off its Faraday Cage too:

and we can inventory the remainder of the notable (IMHO, at least) bill of materials:

Along the left side are the USB-C connector and, below it, a 37.4 (MHz, I’m assuming) crystal oscillator. Along the right is the pairing-and-reset switch. And in the middle are a very faintly marked Samsung KLM8G1GETF-B041 8 GByte eMMC flash memory module and, below it, another Nanya NT5AD512M16A4 1 GByte DDR4-2666 SDRAM.

To get a better look at the sides-located components, as well as to gain another perspective on that shiny “box” at the top of both sides of the PCB, out of which the HDMI cable juts, I’ll share some side views now. PCB top-up first:

Now bottom-up:

Those metal blobs on the sides of the “box” are not, I’m pretty confident, solder; those are welding remnants. The reinforcement necessity is understandable when you consider, reiterating what I mentioned earlier, that “these things are designed to dangle from the back of a TV” (not to mention that they’re likely predominantly disconnected from the TV by grabbing the main body and yanking). I was pretty sure there was nothing underneath but solder joints (with the “box” intended to mute high-frequency signal emissions). I even got a chuckle when I checked out the FCC certification report’s internal photo set and noticed they didn’t bother trying to tackle breaking apart the welds, either. However, I still got the tops pried away enough to peek underneath:

See what I told you? Solder. Along with strong adhesive, of course. Fini! Let me know about anything else that caught your eye in the comments.

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post The TiVo RA2400 Stream 4K: A decent idea, plagued by usage delay appeared first on EDN.

Comparison of 3 step-down converters to predict EMC issues

Mon, 03/11/2024 - 06:15

A step-down converter’s switch-node voltage waveform defines its electromagnetic compatibility (EMC) behavior in automotive CISPR 25 Class 5 measurements. The ringing in the switch-node waveform appears directly on the EMC receiver, and a higher ringing amplitude on the switch node often causes EMC issues. Understanding the switch-node waveform enables predicting the converter’s EMC characteristics and optimizing the EMC filter design at an early design stage.

This article compares three automotive step-down converters to provide practical advice on using switch-node waveforms to predict EMC characteristics for automotive CISPR 25 Class 5 measurements. This is helpful to optimize EMC filter design and PCB layout to meet CISPR 25 Class 5 standards.

Switch-node measurements

Switch-node waveforms are used to compare the EMC characteristics among three automotive step-down converters. Figure 1 shows the switch-node measurement on an evaluation board using an active voltage probe.

Figure 1 Use an active voltage probe for the switch-node measurement on the evaluation board. Source: Monolithic Power Systems

The switch-node voltage waveform typically has rise and fall times between 700 ps and 2 ns. This requires a minimum bandwidth of about 1 GHz at the voltage probe tip; the voltage can be measured with an active probe or a passive probe that has the necessary bandwidth.
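The ~1 GHz figure follows from the common rise-time/bandwidth rule of thumb BW ≈ 0.35/tr (for a Gaussian response), plus some margin so the probe and scope’s own rise time doesn’t corrupt the measurement:

```python
# Rule of thumb: a step with 10-90% rise time tr has an equivalent
# signal bandwidth of roughly 0.35 / tr (Gaussian response). A probe
# and scope with ~2x that bandwidth keep the added rise-time error small.
def signal_bandwidth(tr_seconds):
    return 0.35 / tr_seconds

for tr in (700e-12, 2e-9):
    bw = signal_bandwidth(tr)
    print(f"tr = {tr*1e12:.0f} ps -> signal BW ~ {bw/1e6:.0f} MHz")
# 700 ps gives ~500 MHz signal bandwidth, hence the ~1 GHz probe figure
```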

For both variants, the ground connection to the PCB must be as short as possible to ensure that the measured ringing on the switch node does not include the additional ringing from the long probe ground connection.

Figure 2 shows the correct voltage probe tip position for the switch-node measurement on the evaluation board. Connect the GND tip as close as possible to the IC’s PGND pin and connect the probe input tip as close as possible to the IC’s switch-node pin. Solder the active probe tip with a 0.7-pF input capacitance directly to the component pads via removable gold-plated measuring tips.

Figure 2 Position the probe tip correctly for the switch-node measurement on the evaluation board. Source: Monolithic Power Systems

Histogram and time trend

Figure 3 shows a step-down converter’s switch-node voltage (yellow trace), fSW histogram (pink trace), and time trend (orange trace).

Figure 3 The dual frequency spread spectrum of the MPQ4371-AEC1 includes the switch-node voltage, fSW histogram, and time trend. Source: Monolithic Power Systems

The oscilloscope measures the switch-node voltage for each trigger event across a period of 400 µs and calculates the frequency of each switching cycle. Each calculated frequency is accumulated in the histogram. The total duration of this test is about 10 minutes. For the last trigger event, the measured frequencies are represented as time trend fSW vs. time.

The measured frequencies in Figure 3 verify the fSW vs. time relationship from the MPQ4371-AEC1 datasheet. The time trend waveform confirms the specified dual frequency spread spectrum modulation frequencies of 15 kHz and 120 kHz. By verifying proper IC operation, these frequencies provide an overview of the expected fSW values for CISPR 25 Class 5 measurements.
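The histogram accumulation can be sketched as follows; the square-wave modulation shape and ±36 kHz total deviation are assumptions chosen only to reproduce the measured 384 kHz to 456 kHz spread, not the MPQ4371-AEC1’s actual modulation profile:

```python
import math

# Sketch of the scope's histogram/time-trend analysis: synthesize a
# dual-spread-spectrum fSW(t) (15 kHz and 120 kHz square-wave modulation
# around a 420 kHz center -- shape and deviation are assumptions), then
# bin per-cycle frequencies the way the oscilloscope accumulates them.
def fsw(t, f_center=420e3, dev=36e3):
    sq = lambda f: 1.0 if math.sin(2 * math.pi * f * t) >= 0 else -1.0
    return f_center + dev * 0.5 * (sq(15e3) + sq(120e3))

samples = [fsw(i * 1e-7) for i in range(4000)]   # 400 us capture window
histogram = {}
for f in samples:                                # accumulate histogram bins
    histogram[f] = histogram.get(f, 0) + 1
print(min(samples), max(samples))                # 384000.0 456000.0
```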

Voltage waveform

The step-down converter’s switch-node voltage waveform is measured with an active probe. Figure 4 shows the rising and the falling edges of the MPQ4371-AEC1, in which both waveforms are overlaid on the oscilloscope by an alternating rising and falling trigger. The rising edge has a rise time of 922 ps and a step response with a 273 MHz resonance frequency and a 3.2 V peak-to-peak voltage.

Figure 4 The switch-node voltage waveform for MPQ4371-AEC1 has rising and falling edges. Source: Monolithic Power Systems

The MPQ4371-AEC1 step-down converter’s Quiet-FET technology enables fast slewing edges without excessive ringing. Quiet-FET technology does not significantly degrade efficiency the way a snubber or bootstrap resistor (RBST) would, and instead uses a minimum two-step sequential switching action to turn on the internal MOSFETs.

The resonance frequency is determined by the parasitic hot-loop inductances and capacitances. The equivalent hot-loop series inductances (ESL) are defined by the following:

  • ESL of the 100 nF, 0603-sized MLCC (about 800 pH)
  • ESL of the high-side MOSFET (HS-FET) and low-side MOSFET (LS-FET)
  • ESL of the package lead frame
  • ESL of the PCB traces between the MLCC and IC’s VIN and PGND pins (about 700 pH/mm)

The switch-node waveform can also be predicted using a simulation of the PCB hot-loop network.
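A minimal version of that prediction, using the ESL values above where the text gives them; the FET/lead-frame ESL and the effective loop capacitance are illustrative guesses, not MPS data:

```python
import math

# The hot-loop ringing frequency follows f = 1 / (2*pi*sqrt(L*C)).
# MLCC and PCB-trace ESLs come from the text; the FET/lead-frame ESL
# and the effective loop capacitance are illustrative assumptions.
esl = {
    "mlcc_0603":  800e-12,       # 100 nF 0603 MLCC (from the text)
    "pcb_traces": 2 * 700e-12,   # ~2 mm of traces at 700 pH/mm
    "fets_frame": 900e-12,       # HS/LS FETs + lead frame (assumed)
}
L = sum(esl.values())            # ~3.1 nH total hot-loop inductance
C = 110e-12                      # effective loop capacitance (assumed)
f_ring = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"{f_ring/1e6:.0f} MHz")   # lands near the measured 273 MHz
```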

Frequency domain

Figure 5 shows a fast Fourier transformation (FFT) of step-down converter’s switch-node waveform. The average fSW of 420 kHz is distributed between 384 kHz and 456 kHz (green markers) and corresponds to the measured histogram from Figure 3. The switch-node resonance frequency at 273 MHz is distributed between 250 MHz and 300 MHz (red markers) due to dual frequency spread spectrum modulation and corresponds to Figure 4.

Figure 5 A fast Fourier transformation is applied to the MPQ4371-AEC1’s switch-node waveform. Source: Monolithic Power Systems

Radiated emissions (RE) antenna for CISPR 25 Class 5

CISPR 25 Class 5 radiated emissions are measured with vertical monopole, biconical, and log-periodic antennas. Figure 6 shows the emissions from the radiating switching inductance at peak CISPR 25 (blue) and average CISPR 25 (yellow), where the analyzer resolution bandwidth (RBW) = 9 kHz, fSW = 420 kHz, input voltage (VIN) = 13.5 V, output voltage (VOUT) = 3.3 V, and load current (ILOAD) = 2.5 A. Dual frequency spread spectrum (FSS) modulation helps keep RE below the limits.

Figure 6 The vertical monopole antenna measurement of MPQ4371-AEC1 passes CISPR 25 Class 5. Source: Monolithic Power Systems

Figure 7 shows the radiating objects (for example, the harness or radiating traces on the PCB) at peak CISPR 25 (blue) and average CISPR 25 (yellow), where RBW = 120 kHz, fSW = 420 kHz, VIN = 13.5 V, VOUT = 3.3 V, and ILOAD = 2.5 A.

Figure 7 The biconical antenna measurement of MPQ4371-AEC1 passes CISPR 25 Class 5. Source: Monolithic Power Systems

Figure 8 shows the switch-node resonance frequencies between 250 MHz and 300 MHz (corresponding to Figure 4 and Figure 5) at peak CISPR 25 (blue) and average CISPR 25 (yellow), where RBW = 120 kHz, fSW = 420 kHz, VIN = 13.5 V, VOUT = 3.3 V, and ILOAD = 2.5 A. There is no RE that exceeds the 250 MHz to 300 MHz resonance frequency range.

Figure 8 The log periodic antenna measurement of the MPQ4371-AEC1 passes CISPR 25 Class 5. Source: Monolithic Power Systems

Figure 9 shows the 1.2 GHz switch-node resonance frequency within RE at peak CISPR 25 (blue), average CISPR 25 (yellow), and the noise level (gray), where RBW = 120 kHz, fSW = 2.2 MHz, VIN = 13.5 V, VOUT = 3.3 V, and ILOAD = 2.5 A.

Figure 9 The log periodic antenna measurement of the MPQ4323M-AEC1 step-down converter passes CISPR 25 Class 5. Source: Monolithic Power Systems

Switch-node waveform for MPQ4323M-AEC1

The MPQ4323M-AEC1’s integrated, 100 nF, hot-loop MLCCs reduce the internal parasitic inductances, which shifts the resonance frequency to higher values and reduces the resonance amplitude. Figure 10 shows an example of a fast slewing, switching converter combined with low internal parasitic inductances. This improves the switch-node waveform and reduces RE.

Figure 10 A fast-slewing switching converter combined with low parasitic inductances improves the switch-node waveform of the MPQ4323M-AEC1 step-down converter. Source: Monolithic Power Systems

Switch-node example on a 2-layer PCB

Figure 11 shows two different step-down converters soldered on the same 2-layer PCB. The left curve shows the MPQ4326-AEC1 with frequency spread spectrum modulation on a 2-layer PCB, with a switch-node resonance at 450 MHz. The right curve shows a step-down converter in a suboptimal set-up without FSS modulation and a 320 MHz resonance. The two converters are compared on the same PCB and with the same external components.

Figure 11 Two step-down converters are compared in a switch-node example on a 2-layer PCB. Source: Monolithic Power Systems

The step-down converter with the suboptimal set-up exhibits an undesirable resonance on the rising edge (red arrow), meaning there is a timing difference between the HS-FET and LS-FET. This resonance is caused by using a 2-layer PCB instead of a 4-layer PCB. Compared to a 4-layer PCB, a 2-layer PCB layout has higher parasitic inductances within the hot loop, which increases the resonance amplitude and changes the location of the switch-node resonance.

The increased amplitude is observed with both converters. In addition, the 2-layer PCB does not have the important solid ground layer directly under the top layer, resulting in a larger resonance amplitude and stronger RE.
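The scaling behind both observations can be sketched with the ideal LC-resonance relations; the 2× inductance factor for the 2-layer board and the component values are illustrative assumptions:

```python
import math

# For a fixed loop capacitance, ring frequency scales as 1/sqrt(L) and
# the characteristic impedance Z0 = sqrt(L/C) (which sets the ring
# amplitude for a given switched current) scales as sqrt(L). Doubling
# the hot-loop inductance -- an assumed 2x penalty for a 2-layer board
# versus a 4-layer one -- lowers the resonance by sqrt(2) and raises
# the ring amplitude by sqrt(2).
def ring(L, C):
    f = 1 / (2 * math.pi * math.sqrt(L * C))
    z0 = math.sqrt(L / C)
    return f, z0

C = 110e-12                         # assumed loop capacitance
f4, z4 = ring(1.5e-9, C)            # assumed 4-layer loop inductance
f2, z2 = ring(3.0e-9, C)            # doubled on the 2-layer board
print(round(f4 / f2, 2), round(z2 / z4, 2))  # both ~1.41 (sqrt(2))
```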

FFT of step-down converters on a 2-layer PCB

Figure 12 shows the FFT of the switch-node voltage waveforms for the MPQ4326-AEC1 (with FSS modulation) and step-down converter with the suboptimal set-up (without FSS modulation) from Figure 11.

Figure 12 A fast Fourier transformation is applied to the switch-node voltage waveforms for the MPQ4326-AEC1 (with FSS modulation) and step-down converter with a suboptimal set-up (without FSS modulation). Source: Monolithic Power Systems

The MPQ4326-AEC1 uses frequency spread spectrum modulation, while the step-down converter with the suboptimal set-up is set to a constant fSW. Typically, FSS modulation results in lower fundamentals and harmonics. Whether FSS modulation or a constant frequency is more advantageous depends on the requirements of the application. However, the FFT shows the differences between the two methods.

MPQ4326-AEC1’s FFT shows the switch-node resonance at 450 MHz, and the step-down converter with the suboptimal set-up shows the switch-node resonance at 320 MHz. These switch-node resonance frequencies can be found in the CISPR 25 Class 5 measurements.

Understand switch-node waveform

This article analyzed the relationship between the switch-node voltage waveform and the frequency domain, using MPQ4323M-AEC1, MPQ4326-AEC1, and MPQ4371-AEC1 automotive step-down converters as examples. Understanding the switch-node waveform enables predicting PCB behavior for CISPR 25 Class 5 measurements. The measured resonance frequency shows up in RE measurements, enabling improved EMC filter design for suppressing the resonance frequency.

Furthermore, it is possible to assess expected frequency range interferences at an early stage by understanding the switch-node waveform. This helps find a suitable step-down converter according to the application specifications, shorten development times, and reduce costs by simplifying component selection for the EMC filter.

Ralf Ohmberger is a staff applications engineer at Monolithic Power Systems (MPS).

Related Content


The post Comparison of 3 step-down converters to predict EMC issues appeared first on EDN.

Multichannel driver controls automotive LEDs

Fri, 03/08/2024 - 01:12

A PWM linear LED driver, the AL1783Q from Diodes, provides independent control of brightness and color on all three of its channels. Used for automotive interior and exterior lighting, the AL1783Q delivers 250 mA per channel to support higher LED currents in a wider range of lighting applications.

The device allows vehicle occupants to change interior lighting colors to suit their mood. It simultaneously enables animated turn-indicator signals and exterior grille lighting for different road conditions. Three external REF pins are used to set LED current for each channel, while 40-kHz PWM provides independent dimming control.

Since higher voltage rails are often used to power vehicle subsystems, the AL1783Q operates from a 55-V rail, allowing it to accommodate increasing LED chain voltages. Protection functions include undervoltage lockout, overvoltage, and overtemperature, as well as LED open and short-circuit detection.

Qualified to AEC-Q100 requirements, the AL1783Q operates over a temperature range of -40°C to +125°C. It comes in a TSSOP-16EP package that has an exposed cooling pad for improved heat dissipation. The AL1783Q LED driver costs $0.43 each in lots of 2500 units.

AL1783Q product page

Diodes

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Multichannel driver controls automotive LEDs appeared first on EDN.

Octal high-side switches minimize footprint

Fri, 03/08/2024 - 01:12

Two 8-channel high-side switches from ST combine smart features with a typical on-resistance of just 110 mΩ per channel to preserve system efficiency. Housed in tiny 8×6-mm packages, the IPS8200HQ and IPS8200HQ-1 provide output current of 0.7 A and 1.0 A, respectively, on each channel. They can control capacitive, resistive, or inductive loads with one side connected to ground.

Each device operates from 10.5 V to 36 V and includes 3.3-V/5-V compatible logic inputs. For added design flexibility, the switches are controlled via a parallel or 4-wire serial (SPI) interface. Typical applications include programmable logic controllers, distributed I/O, industrial PC peripherals, and CNC machines.

The IPS8200HQ and IPS8200HQ-1 integrate LED drivers to indicate the status of each output channel. An embedded 100-mA DC/DC voltage regulator powers the LED driver, SPI logic, and input circuitry. It can also be used to supply external components, such as optocouplers or digital isolators. In addition, the switches offer multiple device protection features. 
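As a back-of-the-envelope thermal check, conduction loss per channel follows P = I²·RON using the quoted typical figures:

```python
# Conduction loss per channel: P = I^2 * RON, using the quoted typical
# 110 mOhm and the IPS8200HQ-1's 1.0 A per-channel rating, with all
# eight channels fully loaded as the worst case.
r_on = 0.110             # ohms per channel (typical)
i_ch = 1.0               # amps per channel (IPS8200HQ-1 rating)
p_ch = i_ch ** 2 * r_on  # watts per channel
p_total = 8 * p_ch       # all eight channels on
print(p_ch, p_total)     # 0.11 0.88
```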

The IPS8200HQ and IPS8200HQ-1 switches are in production now, with prices starting at $5.11 each in lots of 1000 units.

IPS8200HQ product page

STMicroelectronics



The post Octal high-side switches minimize footprint appeared first on EDN.

Infineon hones SiC MOSFET trench technology

Fri, 03/08/2024 - 01:12

Infineon’s 650-V and 1200-V CoolSiC G2 MOSFETs improve stored-energy and stored-charge figures of merit by up to 20% compared to the previous generation. This second generation of CoolSiC trench MOSFETs continues to harness the performance attributes of silicon carbide, facilitating reduced energy loss and higher efficiency during power conversion.

CoolSiC G2 includes improvements in key figures-of-merit for both hard-switching operation and soft-switching topologies. The fast switching capability of these devices is increased by more than 30%, and thermal capability is now 12% better than the previous devices.

The large portfolio of CoolSiC G2 MOSFETs is suitable for all common combinations of AC/DC, DC/DC, and DC/AC stages. These low on-resistance SiC MOSFETs can be used in photovoltaic inverters, energy storage systems, EV charging, power supplies, and motor drives.

Datasheets and purchase information for CoolSic G2 MOSFETs can be accessed via the product page link below.

CoolSic G2 product page

Infineon Technologies 



The post Infineon hones SiC MOSFET trench technology appeared first on EDN.

Low-voltage analog switches ease system design

Fri, 03/08/2024 - 01:12

The digital control pins of Nexperia’s NMUX130x series of 1.5-V to 5.5-V analog switches are compatible with 1.8-V logic thresholds across the entire supply range. Since the control pins operate independently of the VCC range, no additional components are required for voltage translation. The series includes AEC-Q100 qualified variants for automotive use, as well as general-purpose versions to address consumer and industrial applications.

The NMUX1308 is an 8-channel multiplexer/demultiplexer, whereas the NMUX1309 offers a dual 4-channel multiplexer/demultiplexer. All analog signal pins are bidirectional. Integrated injection-current control limits output voltage shifts on the active channel to under 5 mV when an overvoltage event happens on disabled signal channels.

IOFF protection circuitry on digital control pins and analog switch pins enhances overall system safety. Standard devices operate over a temperature range of -40°C to +85°C. Automotive qualified parts operate over a temperature range of -40°C to +125°C. Packaging options for the switches include both leaded and leadless options.

NMUX1308 product page

NMUX1309 product page  

Nexperia



The post Low-voltage analog switches ease system design appeared first on EDN.

System emulates hundreds of AI accelerators

Fri, 03/08/2024 - 01:11

Keysight’s AI Data Center Test Platform aims to fast-track the design and deployment of artificial intelligence network infrastructure. According to the company, the system speeds validation and optimization of AI network fabric and improves benchmarking of new AI infrastructures with unprecedented scale and efficiency.

The AI test platform is an 800/400GE solution with lossless fabric validation. Keysight claims it is faster to deploy and offers deeper insights than GPU-based systems, emulating high-scale AI workloads with measurable fidelity.

To simplify benchmarking and validation, the platform uses prepackaged methodologies delivered as applications. These applications have been built through partnerships with key AI operators and AI infrastructure vendors.

The platform also offers a choice of test engines. Users can choose between AI workload emulation on Keysight hardware load appliances and software engines or real AI accelerators to compare benchmarking results.

For more information about the AI Data Center Test Platform, obtain a quote, or request a demo, click here.

Keysight Technologies 



The post System emulates hundreds of AI accelerators appeared first on EDN.

Supersized log-scale audio meter

Thu, 03/07/2024 - 17:07

At the end of the DI for a Simple log-scale audio meter, I promised to show how to upgrade it to work better. With these fixes, it now has near-digital performance, with faster response and smoother operation. Even this supersized version comes in two flavours, one comparatively simple, the other, maxed out. It can now out-perform the standard peak program meter (PPM) specs (for which this is a good reference), and has a span of over 60 dB with easy setting of the desired minimum and maximum levels.

Wow the engineering world with your unique design: Design Ideas Submission Guide

While the goal for the original version was to produce something simple and functional, the aim of this DI is to see how closely we can match the performance of a few lines of DSP code, no matter how much hardware it may take. The original used just one dual op-amp; this approach inflates that to two quad packs. Over the top: of course. Instructive fun: definitely, at least for us analogeeks.

The underlying principle is the same as before—force current through a diode, measure the resulting voltage, which is proportional to the logarithm of the input, and capture the peak value—but the implementation is different. Figure 1 shows the basic circuit.

Figure 1 We take the log of the input signal; its peak level is captured on C2, which is discharged slowly and linearly; and temperature- and level-corrections are applied in the current source which drives the meter.

The audio input to be measured is now applied through R1, a 10k fixed resistor rather than a thermistor. The thermistor gave compensation for the diodes’ tempco by scaling the (linear) input; with the fixed resistor, we’ll apply an offset to the (logged) signal later in the circuit to achieve the same result. A1’s output is a logarithmically-squashed version of the input. For now, we only need its positive peaks.
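The logging action rests on the diode equation: the forward voltage across D1/D2 grows with the logarithm of the current forced through R1. A minimal numeric sketch (the thermal voltage and saturation current below are textbook assumptions, not values from this design) shows why equal dB steps at the input become equal voltage steps at A1's output:

```python
import math

V_T = 0.026   # thermal voltage at ~300 K, volts (ideality factor n = 1 assumed)
I_S = 1e-14   # assumed diode saturation current, amps

def diode_voltage(i):
    """Ideal-diode forward voltage: V = V_T * ln(I / I_S)."""
    return V_T * math.log(i / I_S)

# Currents 20 dB (x10) apart, as forced through the diode via R1
v_low = diode_voltage(1e-6)
v_mid = diode_voltage(1e-5)
v_high = diode_voltage(1e-4)

# Every 20 dB step in input adds the same fixed increment, V_T * ln(10)
step_1 = v_mid - v_low
step_2 = v_high - v_mid
print(round(step_1 * 1000, 1), round(step_2 * 1000, 1))  # ~59.9 mV each
```

That fixed ~60 mV-per-decade increment is what lets a linear meter scale read directly in dB.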

A2 and Q1 form a simple peak detector. Whenever A1.OUT is greater than the voltage on C2, A2/Q1 dumps current into C2 until the voltages match. Using a transistor rather than a diode greatly improves the speed; as drawn, with R2 = 22R, it will capture a single half-cycle at 20 kHz, as shown in Figure 2 which is way faster than the PPM spec calls for. (For a slower, more realistic response, increase R2. 1k5 gives a ~5 ms response time to within 1 dB of the final reading.) This may seem to be several op-amps short of a “proper” peak detector, but it does the job in hand: it’s been Muntzed. (Muntz? Who he? This will explain.) Taking A2.IN- directly from C2, which might seem more usual, leads to overshoot or slows the response, depending on the value of the series resistor.

Figure 2 The attack or integration time is very fast; the decay or return time, much slower, and linear.

Now that we have charged C2 fast, we need to discharge it slowly. A3 buffers its voltage, with D3/R4 bootstrapping R3 to give a linear fall in voltage equivalent to 20 dB in 1.7 s, which, more by happy accident than by design, is exactly what we want.

Now we pass the signal through D4, whose tempco of about -2 mV/°C compensates for that of D1/2. It also drops the level by its VF of about 600 mV, which needs restoring. D5 is shown as a generic 1.25 V shunt stabiliser, and its exact type or value is not critical. (I used an LM385 which was to hand; with a clean, stable negative supply rail, it can be designed out.) It provides an accurate source for offsetting not only D4’s VF, but also the signal as a whole, to set the meter needle’s zero point. R8 allows adjustment of this from about -62 dBu (R8 = 10k) to +1 dBu (R8 = zero).

A4 drives the meter movement, buffering the voltage from D4, the offset-voltage compensation being applied through R9. A4 drives current through the meter into R11, the resulting voltage across that being fed back through R10 to close the feedback loop. The meter has D6 in series with it to prevent underswings, and D5 catches negative swings on A4. (Shame we can’t do the same for A2.)

Calibration is simple. Apply the minimum input level at the input, or apply a DC voltage corresponding to the minimum negative peak value to the signal end of R1, and adjust R8 for zero indication on the meter. Now apply the maximum level—I chose +10 dBu—and set R11 for full-scale deflection. R8 must be set first, then R11.

Temperature stability is good. According to LTspice, the tempco is zero at around +1 dBu input and reasonable at other levels, giving a reading correct to within 1 dB down to -50 dB or so over 15 to 35°C. Frustratingly, I could only get better compensation by adding extra resistors and a thermistor in a network around R10, the values differing according to the desired span: too many interactions. An extra stage could have fixed this, but . . . Figure 3 shows the response of the meter, both simulated and live.

Figure 3 Simulated and measured responses when set up for a 50 dB span with a +10 dBu maximum reading, showing the effects of temperature and op-amp offset.

We now have a high-performance meter, with near-digital accuracy and even precision. But it’s still only half-wave sensing, and has a couple of residual bugs. For full-wave operation, we can add inverter A5, etc., to the output of A1, along with a second peak-detection stage, A6 and Q2, effectively paralleled with A2 and Q1, to add in the contribution from positive-going inputs: see Figure 4. If A1 and A5 have zero offset voltage or if a few trimmer-derived millivolts are applied to A2.IN+ and A5.IN+, C3 can be omitted. The input offsets inherent in real-world (and cheap) op-amps limit the span, as they lead to inaccuracies at low levels, where the signal to be measured is comparable with them.

Another way of adding bipolar detection would have been to use a full-wave rectifier at the input, but the extra op-amp offsets made this approach too inaccurate without messy trimming.

Figure 4 Extra components can be added for full-wave detection.

This circuit responds faster than a meter movement can follow. C2 may be charged almost instantaneously by a transient, but its voltage will decay by an indicated 11.8 dB/second (or 20 dB in 1.7 s). Thus, if the meter takes 85 ms to respond, it will under-read that transient by 1 dB. Figure 5 shows how to cure this.

Figure 5 Final additions: a “power-on reset”, and a monostable to give ~100 ms hold time after a peak to allow the meter movement to catch up.

A7 and A8 form a monostable, which is triggered by a sharp increase in C2’s voltage and generates a positive pulse at A7.OUT. Connecting this to R4, which no longer goes to Vs-, via a diode cures the problem: while A7.OUT is low, C2 will discharge in the normal way, but while it is high, C2’s discharge path is effectively open-circuited. As shown, and with +/-6 V rails, this hold time is ~100 ms. Adjust C5 or R16 to vary this. The result can be seen in Figure 2.
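The decay and under-read figures quoted above are easy to check (pure arithmetic on the numbers in the text):

```python
# C2's bootstrapped discharge is linear: 20 dB in 1.7 s
decay_rate = 20 / 1.7            # dB per second

# A meter movement needing 85 ms to reach the peak arrives
# after C2 has already drooped by:
meter_lag = 0.085                # seconds
under_read = decay_rate * meter_lag
print(round(decay_rate, 1), round(under_read, 1))  # 11.8 dB/s, 1.0 dB
```

Hence the ~100 ms hold time: it freezes C2 for longer than the movement's settling time, so the needle catches up before the linear decay resumes.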

A final touch is a power-on reset, also shown in Figure 5. (Digital circuits usually have them, so why should we be left out?) A sharp rise of the positive rail turns on Q3—which may be almost any n-MOSFET—for a few hundred milliseconds, clamping C2 to ground while the circuitry stabilises. Without this, C2 may charge to a high level at power-on, taking many seconds to recover.

Although a 100 µA meter movement is shown, A4 will comfortably drive several milliamps. Select or adjust R11 to suit.

While you may not want to build a complete meter like this, the techniques and ideas used here may well come in handy for other projects. But if you do, be sure to use an ebonite-cased movement, complete with polished brass inlays, and with a pointer based on a Victorian town-hall clock’s minute hand. Electro-punk lives!

Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.

Related Content


The post Supersized log-scale audio meter appeared first on EDN.

Sorting out USB-C power supplies: Specification deceptions and confusing implementations

Wed, 03/06/2024 - 17:07

Upfront in the most recent edition of the “Holiday Shopping Guide for Engineers” series, published in late November 2023, was my recommendation to pick up a recently-introduced Raspberry Pi 5. But here we are, two months later as I write these words, and the Raspberry Pi 5 is still essentially sold out (echoing, ironically, my commentary introducing that shopping guide section, wherein I documented the longstanding supply constraints of its Raspberry Pi 4 precursor). I know. In my defense, however weak, I’ll note that I did write those words 1.5 months earlier, in mid-October (that excuse didn’t work, did it?). That said, the Raspberry Pi Foundation swears that production will ramp dramatically very soon, with supply improving shortly thereafter. Will it? I don’t know.

I bet at least some of you think that I get “special treatment” with the tech companies in constrained-supply situations like these, don’t you? Ha! Just two weeks ago, I finally gave up waiting on retailer supply and purchased a brand-new 8 GB Raspberry Pi 5 board plus an official case from a guy on eBay. He said he’d accidentally bought two of each and didn’t need the spare combo. Whatever. I didn’t get reseller-marked-up too badly, compared to most of the ridiculous pricing I’m seeing on eBay and elsewhere right now. The 8 GB board MSRP is $80, while that of the case is $10. I paid $123.39 plus tax for the combo, which probably left him with a little (but only a little) profit after covering his hardware costs plus the tax and shipping (or gas) he paid.

Don’t get me started on the Active Cooler shown in the first photo, which, if I weren’t such a trusting fellow, I might think doesn’t actually exist. Regardless, I still needed a power supply. A 5 V/3 A supply with a USB-C output such as the Raspberry Pi 15W USB-C Power Supply (standard “kit” for the Raspberry Pi 4, for example) might also work for the Raspberry Pi 5, especially if you only boot off an SD card and don’t have a lot of hooked-up, power-sucking peripherals:

That said, the Raspberry Pi 5’s bootup code will still grumble at you via displayed messages indicating that “current draw to peripherals will be restricted to 600mA.” And if you want to boot off a USB flash stick instead, you’ll need to tweak the config.txt prose first. Don’t even think about trying to boot off the M.2 NVMe SSD HAT (speaking of suspect vaporware) with only a 15 W PSU. And in general, you and I both know that the very first things I’ll likely do when I fire up my board are to run lengthy benchmarks on it, constrain its ventilation flow and see when clock throttling kicks in, try overclocking it, and otherwise abuse it. So yeah…27 W (or more).

The Raspberry Pi 27 W USB-C Power Supply shown above, in its white color option (black is also available) and UK plug option (among several others also available), in all cases matching the variants available with its 15 W sibling, was one obvious candidate. But…I know this is going to surprise you…it’s also near-impossible to track down right now. No problem, I thought. I have a bunch of 30 W USB-C wall warts lying around; I’ll just use one of them. Which, more than 500 words in, is where today’s story really begins.

Problem #1 centers on the term “wall wart”. More accurately, as the Wirecutter points out, I should probably be calling them “chargers” because fundamentally that’s all they are: power sources for recharging the batteries integrated within various otherwise-untethered devices (laptops, smartphones, tablets, smartwatches, etc.). Can you not only recharge a widget’s integrated battery but also simultaneously power that widget from the same charger? Sure, if the output power is high enough to handle this simultaneous-energy multitasking.

But trying to run a non-battery-powered device from a charger can be a recipe for disaster, specifically when that charger’s output power is close to what the device demands (such as my suggested 30 W charger for a Raspberry Pi 5 that wants to suck 27 W). Why? Chargers aren’t exactly known for being predictable in output as the power demands of whatever’s on the other end of the USB-C (which I’m using as an example here, although the concept’s equally relevant to USB-A and other standards) cable increase. As you near supposed “30 W”, for example, the output voltage might sag or, at minimum, exhibit notable ripple. The output current might also droop. Not a huge deal if all you’re doing is recharging a battery; it’ll just take a little longer than it might otherwise. But try to directly power a Raspberry Pi 5 with one? Iceberg dead ahead!

About that “30 W” (Problem #2)…if the wall wart has only one output, you can safely surmise that you’ll get a reasonable facsimile of that power metric out of it. But what if there are two outputs? Or more? And what if you only tap into one of the outputs? Will you get the full spec’d power, or not? The answer is “it depends”, and unfortunately the vendors don’t make it easy to get more precise than that. Here’s an example: remember the 30 W single-port USB-C GaN charger that I dissected around a year ago? Well, VOLTME also makes a two-output 35 W model:

Kudos to the company, as this graphic shows:

When either output is used standalone, it delivers the full 35 W. Use both outputs at the same time, on the other hand, and each is capable of 18 W max. Intuitive, yes? Unfortunately, as far as I can tell, VOLTME’s the exception here, not the norm. Take, for example, the two-output 70 W Spigen GaN charger that I take with me on trips:

It’s smaller and lighter than the single-output conventional-circuitry charger that came with my MacBook Pro. It’s also got enough “umph” (and outputs) to juice up both my laptop and my iPad Pro. Plus, its AC prongs are collapsible; love ‘em when jamming the adapter in my bag. All good so far. But one of the outputs is only 60 W max when used standalone and only 50 W max when used in tandem with the other (20 W max). The more powerful output is the bottom of the two in the above photo. And it’s not marked as such on the front panel for differentiation purposes. Inevitably, in the absence of visual cues to the contrary, I end up plugging my laptop into the upper, weaker output port instead.

Problem #3, particularly for 5 V devices on the other end of the cable, involves inconsistent output power at various output voltages. Let’s look back at that 30 W VOLTME teardown again:

I’ve written (more accurately, I suppose, ranted) before about USB-PD (Power Delivery), which supports upfront negotiation between the “source” and “sink” on their respective voltage and current capabilities-and-requirements, leading to the potential for higher output power. Programmable power supply (PPS), an enhancement to USB PD 3.0, supports periodic renegotiation as, for example, a battery nears full charge. Quoting from a Belkin white paper on the topic:

Programmable Power Supply (PPS) is a standard that refers to the advanced charging technology for USB-C devices. It can modify in real time the voltage and current by feeding maximum power based on a device’s charging status. The USB Implementers Forum (USB-IF), a nonprofit group that supports the marketing and promotion of the Universal Serial Bus (USB), added PPS Fast Charging to the USB PD 3.0 standard in 2017. This allows data to be exchanged every 10 seconds, making a dynamic adjustment to the output voltage and current based on the condition of the receiving device’s specifications. PPS’ main advantage over other standards is its capability to lower conversion loss during charging. This means that less heat is generated, which lengthens the device battery’s lifespan.

I mention this because the above photo indicates that this charger supports PPS. But let’s backtrack and focus on its supported USB-PD options. It’s a 30 W charger, right? Well:

  • 20 V x 1.5 A = 30 W
  • 15 V x 2 A = 30 W
  • 12 V x 2.5 A = 30 W

The next one isn’t exactly 30 W, but I’d argue that close still counts not only in horseshoes and hand grenades but also with inexpensive-but-still-impressive chargers:

  • 9 V x 3 A = 27 W

But what’s the deal with that last one?

  • 5 V x 3 A = 15 W

Hmmm…mebbe just a quirk of this particular charger? How about this big bad boy from Anker?

Single output. 100 W. Surely, it’ll pump out more than 3 A at 5 V, right? Nope:

  • 5 V x 3 A = 15 W
  • 9 V x 3 A = 27 W
  • 12 V x 3 A = 36 W
  • 15 V x 3 A = 45 W
  • 20 V x 5 A = 100 W
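Multiplying out the label's profiles shows both where the headline number comes from and where the 5 V ceiling bites (just arithmetic on the list above):

```python
# (voltage, current) PD profiles printed on the Anker charger's label
profiles = [(5, 3), (9, 3), (12, 3), (15, 3), (20, 5)]

for volts, amps in profiles:
    print(f"{volts} V x {amps} A = {volts * amps} W")

# Only the 20 V contract reaches the headline figure; a 5 V-only
# load is capped at 15 W regardless of the "100 W" on the box.
headline = max(v * a for v, a in profiles)
cap_5v = max(v * a for v, a in profiles if v == 5)
print(headline, cap_5v)  # 100 15
```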

And just determining this information necessitated tedious searching for a user manual online at a third-party site. I couldn’t even find mention of the product (via either its 317 product code or A2672 model number) on the manufacturer’s own website! And at this point, I’ll cut to the chase: they’re pretty much all like this.

That a charger will only deliver its full 100 W to a device that indicates it can handle 20 V involves no shortage of smoke and mirrors in and of itself. But I’m actually willing to give the charger suppliers at least something of a “pass” here. Consumers value not only output power but also size, weight, and the all-important price tag, among other things. These factors likely constrain per-port (if not per-device) output current to 5 A or so. If I’m a portable computer manufacturer and I need 100 W of input power to support not only AC-connected operation but also in-parallel battery recharge at a reasonable rate, I’m going to make darn sure my device can handle a 20 V input!

But what about this seeming 3 A limitation for the 5 V output option? It’s not universal, obviously, since the Raspberry Pi 27 W USB-C power supply supports the following options:

  • 5.1 V x 5 A = 25.5 W
  • 9 V x 3 A = 27 W
  • 12 V x 2.25 A = 27 W
  • 15 V x 1.8 A = 27 W

In contrast, BTW, the official Raspberry Pi 15 W USB-C power supply only does this:

  • 5.1 V x 3.0 A = 15.3 W

My guess as to the root cause of this 5 V@3 A preponderance comes from a clue in a post on the Electrical Engineering Stack Exchange site that I stumbled across while researching this writeup:

The question is about USB Type-C connectivity.

The Type-C connectivity provides two methods of determining source capability.

The primary method is the value of pull-up on HOST side on CC pins. Type-C specifications define three levels of capability: 500/900 mA (56k pull-up to 5V), 1.5 A (22k pull-up), and 3A (10k pull-up). The connecting device pulls down this with 5.1k to ground, and the resulting voltage level tells the device how much current it can take over the particular connection. When the host sees the pull-down, it will turn on “+5Vsafe” VBUS. This is per Type-C protocol.

The secondary method is provided by nearly independent Power Delivery specification. If the consumer implements PD, it still need to follow Type-C specifications for CC pull-up-down protocol, and will receive “+5Vsafe” VBUS.

Only then the consumer will send serial PD-defined messages over CC pin to discover source capabilities. If provider responds, then negotiations for power contract will proceed.

If the consumer is not PD-agnostic, no messages will be generated and no responses will be returned, and no contract will be negotiated. The link power will stay at the default “Safe+5VBUS” power schema, per DC levels on CC pins.
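The CC-pin advertisement described in that answer is just a resistive divider: the source's pull-up Rp working against the sink's 5.1k pull-down. A sketch of the resulting CC voltages (a 5 V pull-up rail is assumed here; the spec also permits other rails and current-source pull-ups):

```python
R_D = 5.1e3      # sink's CC pull-down, ohms, per the Type-C spec
V_PULLUP = 5.0   # assumed pull-up rail voltage

# Source pull-up value -> advertised current capability, per the quote
pull_ups = [(56e3, "default 500/900 mA"), (22e3, "1.5 A"), (10e3, "3.0 A")]

cc_levels = {}
for r_p, rating in pull_ups:
    v_cc = V_PULLUP * R_D / (r_p + R_D)   # simple resistive divider
    cc_levels[rating] = v_cc
    print(f"Rp = {r_p / 1e3:.0f}k -> CC = {v_cc:.2f} V ({rating})")
```

The sink simply measures the CC voltage (roughly 0.4 V, 0.9 V, or 1.7 V here) to learn how much current it may draw before any PD messaging takes place.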

Here’s the irony…my Raspberry Pi 4 board that I mentioned earlier? It’s the rare, early “Model A” variant, which contained an insufficient number and types of resistors to work correctly with some USB-C cables. But that’s not what’s going on here. As the above explanation elucidates, USB-C chargers must (ideally) at minimum support 5 V@3 A for broadest device compatibility. What I’m guessing mostly happens beyond this point is that charger manufacturers focus their development attention on other voltage/current combinations enabled by the secondary compatibility negotiation, leaving the 5 V circuitry implementation well enough alone as-is.

Agree or disagree, readers? Anything more to add here? I look forward to your thoughts in the comments! Meanwhile, I have a Raspberry Pi 27 W USB-C power supply on order from an overseas supplier…and I wait…

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Sorting out USB-C power supplies: Specification deceptions and confusing implementations appeared first on EDN.

Cadence taps into structural analysis by acquiring BETA CAE

Wed, 03/06/2024 - 03:53

Soon after Synopsys snapped up multiphysics simulation toolmaker Ansys, Cadence has responded by announcing the acquisition of engineering simulation supplier BETA CAE Systems International AG for approximately $1.24 billion. BETA CAE, which provides simulation software solutions for automotive and other industries like aerospace and industrial, is based in Lucerne, Switzerland.

BETA CAE’s solutions encompass the entire simulation and analysis flow for multiphysics system simulations, spanning mechanical/structural, computational fluid dynamics (CFD), and electromagnetics (EM). Take, for instance, ANSA, a multidisciplinary computer-aided engineering (CAE) pre-processor that facilitates functionality for full-model build-up in an integrated environment.

Cadence, which entered the multiphysics space several years ago, apparently wants to expand its multiphysics system analysis portfolio and enter structural analysis, the largest system analysis segment. The EDA firm aims to combine its computational software expertise with BETA CAE’s technology to tap into the structural analysis segment.

That’s especially critical for automotive, where the convergence of electrical and mechanical designs is further driven by an increasing shift toward electric vehicles (EVs). BETA CAE has a strong presence in the automotive and aerospace markets, and its customers include Honda Motor, General Motors, Stellantis, Renault, Volvo, and Lockheed Martin.

Multi-domain engineering simulation solutions recently came into the limelight after Cadence’s archrival Synopsys acquired Ansys. Their critical importance amid the mechanical and electrical hyperconvergence is once more affirmed by Cadence’s decision to buy BETA CAE. The acquisition, subject to regulatory approval, is expected to close in the second quarter of 2024.

Related Content


The post Cadence taps into structural analysis by acquiring BETA CAE appeared first on EDN.

No floating nodes

Tue, 03/05/2024 - 16:50

The schematic below was screen-captured from a LinkedIn group. I heard alarm bells go off in my head when I saw it (Figure 1).

Figure 1 A suggested application note diagram found in a LinkedIn group with a floating node between C4 and C5 that could lead to voltage breakdown.

Capacitors C4 and C5 are placed in series with each other so that their common node has no DC path to anywhere. When I worked on some spacecraft projects, this was absolutely a forbidden thing to do because any floating node like this could drift to an indeterminately high voltage and lead to voltage breakdown.

Even in an earthly milieu, this can be a problem. Imagine something being used or merely being transported or shipped in a thunderstorm environment. Dr. Frankenstein’s lightning bolts could do some real harm.

I have no idea why in the above schematic the series pair of C4 and C5 wasn’t simply made a single 0.5 pF capacitance.

The basic badness of letting something float has been looked at before in “Design precaution: Leave nothing floating”.

It looks like someone didn’t get the message.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related Content


The post No floating nodes appeared first on EDN.

The promise of OTS-only memories for next-gen compute

Tue, 03/05/2024 - 16:35

For several decades, the semiconductor industry has been looking for alternative memory technologies to fill the gap between dynamic random-access memory (DRAM), the compute system’s main memory, and NAND flash—the system’s storage medium—in traditional high-performance computing system architectures.

Such an alternative memory—historically referred to as storage class memory—should outperform DRAM in terms of density and cost and, at the same time, be accessible much faster than NAND flash. The demand for these memories has recently been fueled by a surge in data-intensive applications like generative AI, requiring vast amounts of data to be accessed quickly.

The 1-PCM/1-OTS device: An intermediate solution

Around 2015, the answer came from a new type of non-volatile memory technology, called 3D XPoint, with phase-change memory (PCM) cells arranged at the ‘cross points’ of word and bit lines. PCM memory cells are made of chalcogenide ‘phase-change’ materials, such as germanium antimony telluride (GeSbTe), sandwiched between two electrodes. The material can quickly and reversibly switch between a high-conductive crystalline phase and a low-conductive amorphous phase, and this resistance contrast is used to store information.

Each PCM memory cell is put in series with a selector device, which is needed to address/select the memory cell in the array for program and read operations and to avoid interactions with adjacent cells. While previous versions of PCM used a transistor as a selector device, 3D XPoint memory makers took a different approach: they used so-called ovonic threshold switching (OTS) devices, made of the same class of material—chalcogenides—as the PCM bit cell itself.

The technology became available as a commercial product under the brand name Optane from 2017 onwards. While the first generation was introduced at the NAND side of the DRAM-NAND gap, a later generation was pushed toward the DRAM side. This move was facilitated by the simultaneous introduction of the double data rate (DDR) memory interface, providing a much-needed increase in the speed and bandwidth at which data could be transferred between the PCM memory and the memory controller.

Despite the performance improvement, the technology struggled to deliver the required speed, power, and reliability and to retain its place in the memory market. The power issue mainly arises from the high current needed to switch the PCM bit cell. But there were also constraints related to size and cost. One of the major bottlenecks came from the device architecture itself—the ‘serial’ combination of the bit cell and the OTS selector device.

On first consideration, the 1-PCM/1-OTS outperforms DRAM in terms of cost and area, fostered by the ability to stack the memory array on top of the peripheral circuit. However, these benefits would fade out when one would further increase the density by scaling the bit cell and stacking multiple cross-point layers.

The presence of the additional selector device in series with the PCM bit cell would lead to high-aspect-ratio structures and induce expensive lithography and patterning steps in each of the stacked 2D planar layers. Not to mention the increase in complexity when aiming for true 3D devices, where PCM and OTS materials are mounted on a vertical ‘wall’ by conformal deposition—in a 3D-NAND-like fashion. In 2022, the product was withdrawn from the market.

The OTS selector: its role and operation in a cross-point array

When resistive types of memories such as PCM are arranged in a cross-point array, reading, and writing of the memory cells ideally takes place only on the selected cell, leaving the rest of the cells unaffected. However, in reality, sneak currents run through the unselected cells during memory operation, degrading selectivity and leading to incorrect information retrieval.

Selector devices—usually transistors or diodes—are, therefore, connected serially with each resistive memory element. Their role is to address (or select) the memory bit cell for programming/reading and suppress unwanted sneak currents.

Figure 1 Illustration of the role of a selector device (S) in a cross-point architecture is shown along with resistive memory elements (R). On top, sneak currents run through the unselected cells without a selector, while on bottom, a selector device serially connected to a resistive memory element prevents the occurrence of unwanted sneak currents. Source: imec

Ovonic threshold switching (OTS) devices can be a good alternative to transistor-based selectors. OTS devices are named after Stanford Ovshinsky, who discovered reversible electrical switching phenomena in various amorphous chalcogenide materials in the late 1960s. About 50 years later, interest in these materials led to the development of the OTS selector, an OTS material sandwiched between two metal electrodes.

When the applied voltage exceeds a specific threshold voltage (Vth), the OTS material experiences a fast drop in resistivity, enabling a high current to flow. This current (Ion) is used to program and read the serially connected memory cell. The other devices in the array are biased in such a way that the voltage is only half of the threshold voltage. At this voltage, the (leakage) current (or Ioff) is extremely low (due to the OTS behavior), and this prevents the undesired programming of adjacent cells.

Figure 2 In a typical I-V characteristic of an OTS selector device, at half the threshold voltage, the Ioff current is sufficiently low to prevent interaction with adjacent cells. Source: imec

OTS selectors have several advantages compared to transistor-based solutions. Unlike transistors, which are three-terminal devices, OTS devices are two-terminal devices. This considerably saves area and enables higher densities. The fabrication of an OTS device is also less expensive. Moreover, OTS materials exhibit a high non-linearity—enabled by the low off current at half the threshold voltage—leading to high selectivity.

In addition, they have a large drive current (Ion), can operate at high speed, and have a sufficiently high endurance. And they enable a 3D-compatible solution by stacking 2D planar arrays or enabling true 3D solutions.
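The selectivity enabled by that non-linearity can be illustrated with a toy exponential I-V model (all numbers below are illustrative assumptions, not imec's measured data): because sub-threshold current grows exponentially with voltage, biasing unselected cells at half the threshold voltage leaves their leakage orders of magnitude below the selected cell's on-current.

```python
import math

V_TH = 1.6     # assumed threshold voltage, volts
I_0 = 1e-12    # assumed leakage prefactor, amps
SLOPE = 0.1    # assumed exponential slope, volts per e-fold
I_ON = 1e-4    # assumed on-current after threshold switching, amps

def leakage(v):
    """Toy exponential sub-threshold current below Vth."""
    return I_0 * math.exp(v / SLOPE)

i_half = leakage(V_TH / 2)       # unselected cell biased at Vth/2
selectivity = I_ON / i_half      # on/off ratio seen by the array
print(f"leakage at Vth/2 ~ {i_half:.1e} A, selectivity ~ {selectivity:.0e}")
```

With these toy numbers the half-selected leakage sits in the nanoamp range, more than four decades below the on-current, which is the property that keeps sneak paths from corrupting a read.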

The performance and scalability of OTS selectors have improved much over the years, thanks to the past efforts to enable successive generations of the 1-PCM/1-OTS-based Optane memory. In 2015, imec began investigating and developing improved versions of the OTS selector: engineering the material stack for enhanced performance and (thermal) stability, developing new process flows, exploring 3D integration routes, and examining the underlying physical mechanism.

Turning point: the observation of a memory effect in OTS devices

While trying to identify the switching mechanism in OTS selectors, researchers at imec observed an interesting phenomenon. When applying a voltage pulse of a certain polarity—so, either a positive or a negative voltage pulse—they observed that the threshold voltage of the OTS device changed noticeably if the previous pulse had the opposite polarity.

In other words, the threshold voltage seemed to ‘remember’ the polarity of the previous pulse, even after several hours. This discovery opened doors to the development of ‘OTS-only memories’ that exploit this polarity-induced shift in threshold voltage to store and read information. The beauty of the concept? This single element can act as a memory and a selector in cross-point architectures.

Figure 3 In the graph showing the polarity-induced shift in OTS devices, if the read pulse has a different polarity compared to the write pulse, a larger threshold voltage is observed compared to a write-read sequence with the same polarity. Source: imec

This new memory technology can potentially overcome some of the limitations of 1-PCM/1-OTS memories. Having only one material system for selection and memory makes these devices much easier to fabricate and integrate, benefiting cost and density, especially in 3D configurations. In addition, the current needed to write the device promises to be much lower than the current needed for switching PCM cells, resulting in a more energy-efficient memory technology.

Figure 4 The material system of the OTS-only memory (right) is much simpler than the material system needed to fabricate 1S1R cells (left). Source: imec

Imec was the first to publicly report this memory effect in SiGeAsTe-based OTS devices in 2021. After more extensive work, an alternative, Se-based material system led to a practically usable memory window of 1 V, defined by the shift in threshold voltage.

Meanwhile, other research groups have begun reporting similar observations, using a variety of names to describe the memory: OTS-only memory, self-selecting memory, self-rectifying memory, or selector-less memory. This also led to an increased number of contributions at the recent 2023 IEDM conference, illustrating the growing interest of the semiconductor community in this promising OTS-only memory technology.

Making OTS-only memory technology suitable for CXL memories

A few years ago, the introduction of memory technologies toward the DRAM side of the DRAM-NAND gap was further supported by introducing the compute express link (CXL) interconnect. This open industry standard interconnect offers low-latency and high-bandwidth connections between the memory and the processor in high-performance computing applications. It also resulted in a new name for the class of memories in the DRAM-NAND gap: CXL memories.

While the OTS device had been optimized for selector applications, new requirements were imposed on the technology to make it suitable as a CXL memory. The challenge is to find the optimal tradeoff between endurance, retention, and power consumption. For CXL-type applications, power consumption (mainly determined by the current needed to switch the memory element) and endurance (targeting at least 10^12 write/read cycles before failure) are the most critical parameters, while some compromise is allowed on retention.

The retention time determines how long the memory can remain in a well-defined state without being refreshed. For CXL-type applications, a retention of a few hours or days is sufficient. This means the stored information must be refreshed periodically but less frequently than in ‘leaky’ DRAM devices.

Imec’s OTS-only memory devices are made of a SiGeAsSe OTS material system sandwiched between carbon-based bottom and top electrodes. The devices, manufactured on a 300-mm wafer, are scalable and easy to fabricate and integrate. They exhibit an endurance of >10^8 cycles, fast read/write operation ensuring low latency (read and write pulses are as short as 10 ns), and an ultra-low write current of <15 µA (i.e., <0.6 MA/cm²).

The latter corresponds to a ~10x energy reduction compared to a typical PCM device. With a half-bias non-linearity NL½ of ~10^4, good selectivity is provided, even when operated in memory mode. The polarity-induced voltage shift persists over time, allowing a reasonable retention time (>1 month at room temperature). The memory can operate at positive and negative read polarity, showing memory windows of around 1 V and 0.5 V, respectively.
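
That ~10x reduction can be sanity-checked with a back-of-the-envelope pulse-energy estimate. In the sketch below, only the 15 µA OTS write current and the 10 ns pulse width come from the text; the ~3 V amplitude and the ~150 µA PCM reset current are assumed, illustrative values.

```python
# Back-of-the-envelope write-energy comparison, E = V * I * t for a
# rectangular pulse. Only the 15 uA OTS write current and the 10 ns pulse
# width come from the article; the 3 V amplitude and the 150 uA PCM reset
# current are assumed, illustrative values.
V_WRITE = 3.0     # V, assumed write amplitude
T_PULSE = 10e-9   # s, write pulse width

def write_energy(i_write):
    return V_WRITE * i_write * T_PULSE

e_pcm = write_energy(150e-6)   # assumed typical PCM reset current
e_ots = write_energy(15e-6)    # OTS-only write current
print(f"PCM: ~{e_pcm * 1e15:.0f} fJ, OTS-only: ~{e_ots * 1e15:.0f} fJ, "
      f"ratio: ~{e_pcm / e_ots:.0f}x")
```

With these assumed numbers, the energy ratio tracks the current ratio directly, since voltage and pulse width are held equal.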

Figure 5 TEM image of the fabricated SiGeAsSe device is shown along with C-based electrodes. Source: T-ED

Figure 6 Demonstration of switching at ultra-low write current with sufficiently large memory window is shown on left and memory window for both read polarities as a function of write current on right. Source: T-ED

Material research a route to 3D integration

The above results highlight the potential of OTS-only memories for CXL applications. So, imec has identified critical directions for further research to advance the devices toward industrial uptake.

Material research is needed for several reasons. First, current OTS material systems contain elements such as As and Se that are toxic and not environmentally friendly. Finding alternative eco-friendly material systems that perform as well as, or even better than, current OTS materials is therefore a priority.

Second, material and device design optimizations are needed to improve reliability: enhancing the endurance to >10^12 cycles and lowering the cell-to-cell variability. In addition, the threshold voltage is observed to drift over time, contributing to cycle-to-cycle variability and impacting the retention time.

Reliability improvement goes hand in hand with a fundamental understanding of the physical mechanism that determines the polarity effect in OTS-only memories. So far, this mechanism is not completely clear. Learning what causes the threshold voltage shift is crucial to explain and predict the observed failures and identify the fundamental tradeoffs that limit device performance.

Figure 7 Cartoon of an OTS-only memory is shown in a true 3D architecture. Source: imec

Finally, imec is exploring routes toward true 3D integration, which will be needed to boost the density of the memory bit cells for next-gen compute system architectures.

Daniele Garbin is an R&D Engineer with research interests in OTS and various emerging memory device technologies.

Gouri Sankar Kar is VP of memory and program director of exploratory logic at imec.

Related Content


The post The promise of OTS-only memories for next-gen compute appeared first on EDN.

Efficient voltage doubler is made from generic CMOS inverters

Mon, 03/04/2024 - 17:15

When a design needs auxiliary voltage rails and the associated current loads are modest, capacitor pump voltage multipliers are often the simplest, cheapest, and most efficient way to make them.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The simplest of these is the diode pump voltage doubler. It consists of just two diodes and two capacitors but has the inherent disadvantages of needing a separately sourced square wave for drive and of producing an output voltage that’s at least two diode drops less than twice the supply rail. Active switching (typically with CMOS FETs) is required to avoid this inefficiency and accurately double the supply.
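
To see the size of that penalty, here is a minimal sketch with an assumed 0.6 V diode drop (illustrative; the actual drop depends on diode type and load current):

```python
# The passive diode pump loses a diode drop on each of its two rectifying
# diodes, so its output falls short of 2 * Vs. The 0.6 V drop is an assumed,
# illustrative value for a silicon diode.
V_SUPPLY = 5.0   # V
V_DIODE = 0.6    # V, assumed forward drop per diode

v_ideal = 2 * V_SUPPLY                     # active (CMOS-switched) doubler
v_diode_pump = 2 * (V_SUPPLY - V_DIODE)    # diode pump output, approximately

print(f"ideal doubler: {v_ideal:.1f} V, diode pump: ~{v_diode_pump:.1f} V")
```

More than a volt lost on a 5 V rail is exactly the inefficiency the active-switched approach below avoids.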

CMOS voltage doubler chips are available off the shelf. An example is the Maxim MAX1682. It serves well in applications where the current load isn't too heavy, but it (and similar devices) isn't particularly cheap. The MAX1682 costs nearly $4 in singles, creating the temptation to see if we can do better, considering that generic CMOS logic chips (like the 74AC14 hex Schmitt-trigger inverter) can be had in singles for 50 cents.

A plan to do so begins with Figure 1, showing a simplified sketch of a CMOS logic inverter.

Figure 1 Simplified schematic of typical basic CMOS gate I/O circuitry showing clamping diodes and complementary FET switch pair.

Notice the input and output clamping diodes. These are put there by the fabricator to protect the chip from ESD damage, but a diode is a diode and can therefore perform other useful functions, too. Similarly, the P-channel FET was intended to connect the V+ rail to the output pin when outputting a logic ONE, and the N-channel to connect the pin to V- for a ZERO. But CMOS FETs will willingly conduct current in either direction. Thus, current running from pin to rail works equally well as from rail to pin.

Figure 2 shows how these basic facts relate to charge pumping and voltage multiplication.

Figure 2 Simplified voltage doubler, showing driver device (U1), commutation device (U2), and coupling (Cc), pump (Cp), and filter (Cf) capacitors.

Imagine two inverters interconnected as shown in Figure 2 with a square-wave control signal coupled directly to U1’s input and through DC blocking cap Cc to U2 with U2’s input clamps providing DC restoration.

Consider the ONE half cycle of the square-wave. Both U1 and U2 N-channel FETs will turn on, connecting the U2 end of Cp to V+ and the U1 end to ground, charging Cp to V+. Note the reversed polarity of current flow from U2’s output pin due to Cp driving the pin negative.

Now consider what happens when the control signal reverses to ZERO.

The P FETs will turn ON while the N FETs turn OFF. This forces the charge previously accepted by Cp to be dumped to Cf through U2's output and V+ pin, thus completing a charge-pumping cycle that delivers a quantum of positive charge to Cf. Note that reversed current flow through U2 occurs again. The cycle repeats with the next alternation of the control signal, and so on.

During startup, until sufficient voltage accumulates on Cf for normal operation of U2’s internal circuitry and FET gate drive, U2 clamp diodes serve to rectify the Cp drive signal and begin the charging of Cf until the FETs can take over.

So much for theory. Translation of Figure 2 into a complete voltage doubler is shown in Figure 3.

Figure 3 Complete voltage doubler: 100 kHz pump clock set by R1C1, Schmitt-trigger driver (U1), and commutator (U2)

A 100 kHz pump clock is output on pin 2 of 74AC14 Schmitt trigger U1. This signal is routed to the five remaining gates of U1 and (via coupling cap C2) the six gates of U2. Positive charge transfer occurs through C3 into U2 and from there accumulates on filter cap C5.

Even though Schmitt hysteresis isn't really needed for U2, another AC14 was chosen for it in pursuit of matched switching delay times, thus improving the efficiency-promoting synchronicity of charge transfer. Some performance specs (V+ = 5 V) are:

  • Impedance of 10 V output: 8.5 Ω
  • Maximum continuous load: 50 mA
  • Efficiency at 50 mA load: 92%
  • Efficiency at 25 mA load: 95%
  • Unloaded power consumption: 440 µW
  • Startup time < 1 millisecond
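
A simple sanity-check model helps show where such efficiency numbers come from: an ideal doubler draws twice the load current from the supply, so the losses come mainly from the measured output impedance and the idle power. This sketch slightly overestimates the quoted efficiencies because switching losses are ignored:

```python
# Rough efficiency model for the doubler: an ideal charge pump draws twice
# the load current from the supply; the measured 8.5-ohm output impedance
# and the 440 uW idle draw account for most of the loss. Switching losses
# (not modeled here) explain the last percent or two versus the measured
# figures quoted in the article.
V_IN = 5.0        # V, supply
R_OUT = 8.5       # ohms, measured output impedance (from the article)
P_IDLE = 440e-6   # W, unloaded power consumption (from the article)

def efficiency(i_load):
    v_out = 2 * V_IN - R_OUT * i_load   # loaded output voltage
    p_out = v_out * i_load
    p_in = 2 * V_IN * i_load + P_IDLE   # supply current ~ 2x load current
    return p_out / p_in

for i in (0.025, 0.050):
    print(f"{i * 1000:.0f} mA load: ~{efficiency(i) * 100:.0f}%")
```

Note how the model correctly predicts efficiency falling as load current rises, since the I²R drop across the output impedance grows with load.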

So, what happens if merely doubling V+ isn’t enough? As Figure 4 illustrates, this design can be easily cascaded to make an efficient voltage tripler. Extension to even higher multiples is also possible.

Figure 4 Adding four inexpensive parts suffices to triple the supply voltage.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content


The post Efficient voltage doubler is made from generic CMOS inverters appeared first on EDN.

Parsing PWM (DAC) performance: Part 3—PWM Analog Filters

Fri, 03/01/2024 - 18:27

Editor’s Note: This is a four-part series of DIs proposing improvements in the performance of a “traditional” PWM—one whose output is a duty cycle-variable rectangular pulse which requires filtering by a low-pass analog filter to produce a DAC. The first part suggests mitigations and eliminations of common PWM error types. The second discloses circuits driven from various Vsupply voltages to power rail-rail op amps and enable their output swings to include ground and Vsupply. This third part pursues the optimization of post-PWM analog filters.

 Part 1 can be found here.

 Part 2 can be found here.

Recently, there has been a spate of design ideas (DIs) published (see Related Content) which deal with microprocessor-generated pulse width modulators driving low-pass filters to produce DACs. Approaches have been introduced which address ripple attenuation, settling time minimization, limitations in accuracy, and enable outputs to reach and include ground and supply rails. This is the third in a series of DIs proposing improvements in overall PWM-based DAC performance. Each of the series’ recommendations are implementable independently of the others. This DI addresses low pass analog filters.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The PWM output

Spectrally, the PWM output consists of a desirable DC (average) portion and the remainder—undesirable AC signals. With a period of T, these signals consist of energy at frequencies n/T, where n = 1, 2, 3, etc., that is, harmonics of 1/T. If the PWM switches between 0 and 1, for every harmonic n there exists a duty cycle corresponding to a peak signal level of (2/π)/n. This shows the futility of an attenuation scheme which focuses on a notch or band reject type of filter—there will always be a significant amount of energy that is not attenuated by such. The highest amplitude harmonic is the first, n = 1. At the very least, this harmonic must be attenuated to an acceptable level, α. Any low pass filter that accomplishes this will apply even more attenuation to the remaining harmonics which are already lower in level than the first. In summary, the search for the best filter will focus on what are called all-pole low pass filters, which is another way of saying low pass filters which lack notch and band-reject features.
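
The (2/π)/n levels come from the Fourier series of a 0/1 rectangular wave: the n-th harmonic has amplitude (2/(n·π))·|sin(n·π·D)| at duty cycle D, which peaks at (2/π)/n over all duty cycles. A short sketch cross-checks the formula numerically:

```python
from math import cos, pi, sin, sqrt

# n-th harmonic amplitude of a 0/1 PWM wave at duty cycle D, from its
# Fourier series; the worst case over all duty cycles is (2/pi)/n.
def harmonic_amplitude(n, duty):
    return (2.0 / (n * pi)) * abs(sin(n * pi * duty))

# Numerical cross-check: project one PWM period onto the n-th harmonic.
def harmonic_numeric(n, duty, samples=100_000):
    a = b = 0.0
    for k in range(samples):
        t = (k + 0.5) / samples                  # midpoint sampling of one period
        x = 1.0 if t < duty else 0.0             # the 0/1 PWM waveform
        a += x * cos(2 * pi * n * t)
        b += x * sin(2 * pi * n * t)
    return 2.0 * sqrt(a * a + b * b) / samples   # harmonic amplitude

assert abs(harmonic_numeric(1, 0.5) - harmonic_amplitude(1, 0.5)) < 1e-4
print(f"worst-case first-harmonic level: {2 / pi:.4f} of full scale")
```

Because sin(n·π·D) hits its maximum at some duty cycle for every n, no notch filter can dodge all of the harmonic energy, which is the point made above.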

The skinny on low pass all-pole filters

Analog filters can be defined as a ratio of two polynomials in the complex (real plus imaginary) variable s:

H(s) = G·(s – z1)(s – z2)···(s – zI) / [(s – p1)(s – p2)···(s – pK)]

where I ≤ K and G is a constant. The terms zi and pi are referred to respectively as the zeroes and the poles of the filter. K is the order (first, second, etc.) of the filter as well as the number of its poles. All-pole filters of unity gain at DC can be specified simply as:

H(s) = (–p1)(–p2)···(–pK) / [(s – p1)(s – p2)···(s – pK)]

Filter types include Butterworth, Bessel, Chebyshev, and others. These make different trade-offs between the aggressiveness of attenuation with increasing stop-band frequency and the rapidity of settling in response to a time domain impulse, step, or other disturbance. Improving one of these generally degrades the other. Tables of poles for various orders and types of these filters can be found in the reference [1]. Values given are for filters which at 1 radian per second (1/(2π) Hz) exhibit 3 dB of attenuation with respect to the level at DC. This point is considered to be the transition between the low frequency pass and high frequency stop bands. Multiplying all poles by a frequency scaling factor (FSF) will cause the filter to attenuate 3 dB at FSF/(2π) Hz. The frequency response of a filter can be calculated by substituting j·2π·f for s in H(s) and taking the magnitude of the resulting complex value. Here, j = √-1 and f is the frequency in Hz.
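
The substitution of j·2π·f for s is easy to automate. Below is a small sketch (not from the article) that evaluates the magnitude response of a unity-DC-gain all-pole filter directly from its pole list, with poles in rad/s and complex poles supplied as conjugate pairs:

```python
from math import pi, sqrt

# Magnitude response |H(j*2*pi*f)| of an all-pole filter with unity DC gain:
# H(s) = product over poles p_i of (-p_i) / (s - p_i).
# Poles are in rad/s; complex poles must be supplied as conjugate pairs.
def allpole_magnitude(poles, f_hz):
    s = 2j * pi * f_hz
    h = 1.0 + 0j
    for p in poles:
        h *= -p / (s - p)
    return abs(h)

# Example: a single real pole at -2*pi*1000 rad/s gives the familiar
# first-order response, 3 dB down (1/sqrt(2) ~ 0.7071) at 1 kHz.
wc = 2 * pi * 1000
print(f"|H| at 1 kHz: {allpole_magnitude([-wc], 1000):.4f}")
assert abs(allpole_magnitude([-wc], 1000) - 1 / sqrt(2)) < 1e-9
```

Scaling every pole by an FSF simply rescales the frequency axis of this response, which is how the filters below are tuned to attenuate by α at F0.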

The time domain response of a filter to a change in PWM duty cycle reveals how quickly it will settle to the new duty cycle average. For a filter of unity gain at DC, this involves subtracting from 1 the inverse Laplace transform of H(s)/s. A discussion of Laplace transforms, their inverses, and practical uses is beyond the scope of this DI. These inverse transforms can, however, be readily determined by using a web-based tool [2].

Requirements of an optimal filter

A filter must attenuate the maximum value over all duty cycles (2/π) of the PWM first harmonic by a factor of α. A b-bit PWM has a resolution of Full-Scale·2^-b. So, for the first harmonic peak to be no greater than ½ LSB, α should be set to (π/2)·2^-(b+1). Asking for more attenuation would slow the filter response to a step change in duty cycle. From the time domain perspective, the time ts should be minimized for the filter to settle to ±α·Full Scale in response to a duty cycle change from Full Scale to zero.
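
As a quick sketch of this sizing rule:

```python
from math import pi

# Target attenuation: bring the worst-case first harmonic (2/pi of full
# scale) down to no more than 1/2 LSB of a b-bit PWM.
def alpha(bits):
    return (pi / 2) * 2.0 ** -(bits + 1)

for b in (8, 12, 16):
    print(f"{b:2d}-bit PWM: alpha = {alpha(b):.3e}")
```

Each extra bit of PWM resolution halves the allowed ripple and therefore halves α, which is why higher-resolution PWMs demand more aggressive (and slower-settling) filters.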

Towards an optimal filter

Consider a 12-bit PWM clocked from a 20 MHz source. The frequency of its first harmonic is F0 = 4883 Hz, and its α is 1.917·10^-4. 3rd, 5th, and 7th order filters of types Bessel, Linear Phase .05° and .5° Equiripple error, Gaussian 6 dB and 12 dB, Butterworth, and .01 dB Chebyshev are considered. These are roughly in order of increasingly aggressive attenuation with frequency coupled with increasing settling times. Appropriate FSFs are needed to multiply the poles (listed in reference [1]) of each filter to achieve attenuation α at F0 Hz. Excel’s Solver [3] was used to find these factors. The scaled values were divided by 2π to convert them to Hertz and applied to LTspice’s [4] 2ndOrderLowpass filter objects in its Special Functions folder to assemble complete filters. The graph in Figure 1 shows the frequency responses of 24 scaled filters. These include 3rd, 5th, and 7th order versions of the filter types listed above. These filters were named after the mathematicians who developed the math describing them (I have for some reason failed to find any information about Mr. or Ms. Equiripple). Additionally, there are the same three orders of one more filter type that was developed by the author and will be described later. Although the author makes no claims of being a mathematician, for want of an alternative, these have been named Paul filters. (An appalling choice, I’m sure you’ll agree.)

Figure 1 The frequency response of 24 scaled filters, including 3rd, 5th, and 7th order versions of the 7 filter types listed above (Bessel, Linear Phase, Equiripple, Gaussian, Butterworth, Chebyshev, and the Paul filter developed by the author), where the value of α is depicted by the horizontal red line.

In Figure 1, the value of α is depicted by the horizontal line. It and all the filter responses intersect at a frequency of F0 (the PWM’s first harmonic) satisfying the frequency response attenuation requirement. Figure 2 is the Bessel filter portion of the LTspice file which generates the above graph. The irregular pentagons are LTspice’s 2ndOrderLowPass objects. The resistors and capacitors implement first order sections. H = 1 is the filter’s gain at DC.

Figure 2 The Bessel filter portion of the LTspice file which generates the response in Figure 1, U1-U6 are LTspice’s 2ndOrderLowPass objects, resistors and capacitors implement first order sections, and H = 1 is the filter’s gain at DC.

By changing the “.ac dec 100 100 10000” command in the file to “.tran 0 .01 0”, replacing the “SINE (0 1) AC 1” voltage source with a pulsed source “PULSE(1 0 0 1u 1u .0099 .01)” and running the simulation, the response of these filters to a duty cycle step from 1 V to 0 V is obtained as shown in Figure 3.

Figure 3 The filters’ responses to a duty-cycle step from 1 V to 0 V, obtained by replacing the AC voltage source with a pulsed source.

Oh, what a lovely mess! The vertical scale is the common log of the absolute value of the response—absolute value because the response oscillates around zero, and log because of the large dynamic range between 1 and α, the latter of which is again shown as a horizontal line.

Which filter’s absolute response settles (reaches and remains less than α) in the shortest period of time? To find the answer to that question, use is made of LTspice’s “Export data as text” feature under the “File” option made available by right-clicking inside the plot. This data is then imported into Excel. Each filter’s data is parsed backwards in time starting from 10 ms. The first instants when the responses exceed α are recorded. These are the times that the filters require to settle to α. (As can be seen, there were some that require more than 10 ms to do so.) For each filter order, it was determined which type had the shortest settling time. Table 1 shows the settling times to ½ LSB for 8-bit through 16-bit PWMs of 3rd, 5th, and 7th orders of filters of various types.

Table 1 Settling times to ½ LSB for 8-bit through 16-bit PWMs of 3rd, 5th, and 7th orders for various types of filters. The fastest settling times are shown in bold red while those that failed to settle within 10 ms are grey and listed as “> 10 ms”.

The entry in each table row with the fastest settling time is shown in bold red. Those which failed to settle within 10 ms are listed as > 10 ms and are greyed-out. In general, the 7th orders settled faster than the 5th orders, which were noticeably faster than the 3rds. Also, those with lower-Q sections settled faster than the higher-Q alternatives (again, see the tables in reference [1]). The Chebyshev filters with ripples greater than .01 dB (not depicted), for instance, had higher Q’s than all the ones listed above and had hopelessly long settling times.

As a group, the Paul filters settled the fastest, but that does not preclude the selection of another filter in an instance when it settles faster. Still, it’s worth discussing how the Pauls were developed. Starting with the 3rd, 5th, and 7th order frequency-scaled Bessel poles, the Excel Solver evaluated the inverse Laplace transforms of the filters’ functions H(s). It was instructed to vary the pole values while minimizing the maximum value of the filter response after a given time ts. This was made subject to the constraint that the amplitude response |H(2πj·F0)| be α, where F0 = 20 MHz / 2^12 and α = (π/2)·2^-(12+1). If the maximum response exceeded α for a given ts, ts was increased. Otherwise ts was reduced. Several runs of Solver led to the final set of filter poles. It is interesting that even though the optimization was run for a 12-bit PWM only, settling times at other bit lengths between 8 and 16 are still rather good and in most cases superior to those of the other well-known filters. The Paul filter poles and Qs are listed in Table 2.

Table 2 The poles and Qs for 3rd, 5th, and 7th order Paul filter.

Table 3 includes FSFs for the poles of the well-known filters. The unscaled poles are given in the tables of reference [1]. The scaled poles are characteristic of filters which also attenuate a frequency of F0 by a factor of α.

Table 3 The FSFs for the poles of the well-known filters in the tables of reference [1] for the values of α and F0.

 Implementing a filter

A starting point for the implementation of a filter whose poles are taken from a reference table is to apply to those poles an appropriate FSF.  These factors are given for well-known filters in Table 3 for an attenuation, α, at a frequency of F0 Hz. In Table 2, the Paul filter poles have already been scaled as such. For any of these filters, to change the α from a frequency F0 to F1 Hz, the poles should be multiplied by an FSF of F1/F0.

In settling quickly to the small value of α, some of the biggest errors in filter performance are due to component tolerances. To limit these errors, resistors should be metal film, 1% at worst with 0.1% preferred.  Capacitors should be NPO or C0G for temperature and DC voltage stability, 2% at worst and 1% preferred. Smaller value resistors result in a quieter design and lead to smaller offset voltages due to op amp input bias and offset currents. However, these also require larger-valued, bigger, and more expensive capacitors. Keep these restrictions in mind when proceeding with the following steps.

For a first order section with pole ω:

  1. Start by guessing values of R and C such that RC = 1/ω.
  2. Choose a standard value NPO or COG capacitor close to that value of C.
  3. Calculate R’ = 1/(ω·C) where C is that standard value capacitor.
  4. Choose for R the next smaller standard value of R’ and make up the difference with another smaller resistor in series. Although this will not compensate for the components’ 1% and 2% tolerances, it will yield a result which is optimal on average.
  5. Connect one terminal of R to the PWM output and the other to the capacitor C (ground its other side) and to the input of a unity gain op amp. If gain is required in the aggregate filter, it is this op amp which should supply it rather than one which implements a second order section; unlike second order sections, gain in this op amp has no effect on the R-C section’s AC characteristics because there is no feedback to the passive components. The output of this op amp should drive the cascade of remaining second order sections (Figure 4).

Figure 4 Recommended configuration where one terminal of R is connected to the PWM output, and the other is connected to the capacitor C (ground its other side) and to the input of a unity gain op amp.
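
Steps 1 through 4 above can be sketched in a few lines. The E12 capacitor series and the 10 kΩ starting guess are illustrative assumptions, and the series-resistor trim of step 4 is omitted:

```python
from math import floor, log10, pi

# Sketch of steps 1-4: pick a standard E12 capacitor near C = 1/(omega * R),
# then compute the exact resistor R' = 1/(omega * C_std) that capacitor needs.
# The E12 series and the 10 kohm starting guess are illustrative assumptions.
E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]

def nearest_e12(value):
    decade = 10.0 ** floor(log10(value))
    candidates = [m * decade for m in E12] + [10.0 * decade]
    return min(candidates, key=lambda v: abs(v - value))

def first_order_rc(omega, r_guess=10e3):
    c_guess = 1.0 / (omega * r_guess)   # step 1: RC = 1/omega
    c_std = nearest_e12(c_guess)        # step 2: nearest standard NPO/C0G value
    r_exact = 1.0 / (omega * c_std)     # step 3: exact R' for that capacitor
    return r_exact, c_std

r, c = first_order_rc(2 * pi * 1000)    # example: 1 kHz pole
print(f"C = {c * 1e9:.0f} nF, R = {r:.0f} ohms")
```

Finer-grained series (E24, E96) could be substituted in the same way; the point is that the exact resistor is computed from the chosen standard capacitor, not the other way around.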

For second order sections with pole ω and quality factor Q, error sources are again component values. Errors can be exacerbated by the choice of a filter topology. A second order Sallen Key [5] section with the least sensitivity employs an op amp configured for unity gain as shown in Figure 5.

Figure 5 A second order Sallen Key section with the least sensitivity employs an op amp configured for unity gain.

To select component values:

  1. Start by choosing values of R and C such that RC = 1/ω.
  2. Choose standard values of C1 and C2 similar to C such that C1 / C2 is as close to 4Q² as possible, but no smaller than 4Q² (otherwise D in the next step becomes imaginary). Creating a table of all possible capacitor ratios is helpful in selecting the optimal ratio.
  3. Calculate D = (1 – 4Q²·C2/C1)^0.5 and W = 2·Q·C2·ω
  4. For R1a, select a standard resistor value slightly less than (1 + D)/W and add R1b in series to make up the difference.
  5. For R2a, select a standard resistor value slightly less than (1 – D)/W and add R2b in series to make up the difference.
  6. If there is more than one second order section, the sections should be connected in order of decreasing values of Q to minimize noise.
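
Steps 3 through 5 reduce to a little arithmetic. In this sketch the 1 kHz Butterworth section (Q = 1/√2) and the 22 nF/10 nF capacitor pair are hypothetical example values, and the Ra + Rb series splitting is omitted:

```python
from math import pi, sqrt

# Steps 3-5 for a unity-gain Sallen-Key section: given the section's pole
# frequency omega (rad/s), quality factor Q, and chosen capacitors C1, C2
# (with C1/C2 >= 4*Q^2), compute the exact R1 and R2. Splitting each into
# the Ra + Rb standard-value series pair is left out of this sketch.
def sallen_key_resistors(omega, q, c1, c2):
    d = sqrt(1 - 4 * q * q * c2 / c1)
    w = 2 * q * c2 * omega
    return (1 + d) / w, (1 - d) / w    # R1, R2

# Hypothetical example: 1 kHz Butterworth section (Q = 1/sqrt(2)),
# C1 = 22 nF, C2 = 10 nF (so C1/C2 = 2.2 >= 4*Q^2 = 2).
omega = 2 * pi * 1000
r1, r2 = sallen_key_resistors(omega, 1 / sqrt(2), 22e-9, 10e-9)
# Sanity check: the resistor pair reproduces the section's pole frequency.
assert abs(1 / sqrt(r1 * r2 * 22e-9 * 10e-9) - omega) < 1e-6
print(f"R1 = {r1:.0f} ohms, R2 = {r2:.0f} ohms")
```

Note that choosing C1/C2 exactly equal to 4Q² makes D = 0 and R1 = R2, a common low-sensitivity choice.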

A PWM filter example

Consider a 5th order Paul filter with an attenuation of α at a frequency F1 = F0/2. Each of the ω values in the Paul filter table would be multiplied by an FSF of F1/F0 = ½, but the Q’s would be unchanged. The following schematic shown in Figure 6 satisfies these constraints.

Figure 6 A 5th order Paul filter scaled to operate at F0/2 Hertz.

 Designing PWM analog filters

A set of tables listing settling times to within ½ LSB of 8 through 16-bit PWMs of period 204.8 µs (1/4883 = 1/F0 Hz) has been generated for 3rd, 5th, and 7th order versions of eight different filter types. These filters attenuate the peak value of steady state PWM-induced ripple to ½ LSB. From these listings, the filter with the fastest settling time is readily selected. These filters can be adapted to a new PWM period by multiplying their poles by a scaling factor equal to the ratio of the old to new periods. New settling times are obtained by dividing the ones in the tables by that same ratio.

Pole scaling factors for the operation of well-known filters at F0 are supplied in a separate table. The poles of these filters are available in reference [1] and should be multiplied by the relevant factor to accomplish this. A new “Paul” filter (already scaled for F0 operation) has been developed which in most cases has faster settling times than the well-known ones while providing the necessary PWM ripple attenuation. As with the others, it too can be scaled for operation at different frequencies.

It should be noted that component tolerances will lead to filters with attenuations and settling times which differ somewhat from the calculations presented. Still, it makes sense to employ filters with the smallest calculated settling time values.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

Related Content

 References

  1. http://www.analog.com/media/en/training-seminars/design-handbooks/basic-linear-design/chapter8.pdf (specifically Figures 8.26 through 8.36. This reference does a great job of describing the differences between the filter response types and filter realization in general.)
  2. https://www.wolframalpha.com/input?i=inverse+Laplace+transform+p*b%5E2%2F%28%28s%5E2%2Bb%5E2%29*%28s%2Bp%29%29
  3. https://support.microsoft.com/en-us/office/define-and-solve-a-problem-by-using-solver-5d1a388f-079d-43ac-a7eb-f63e45925040
  4. https://www.analog.com/en/design-center/design-tools-and-calculators/ltspice-simulator.html
  5. https://www.ti.com/lit/an/sloa024b/sloa024b.pdf

The post Parsing PWM (DAC) performance: Part 3—PWM Analog Filters appeared first on EDN.
