EDN Network

Subscribe to the EDN Network feed
Voice of the Engineer
Address: https://www.edn.com/
Updated: 2 hours 32 min ago

Silicon carbide (SiC) counterviews at APEC 2024

Thu, 03/21/2024 - 11:06

At this year’s APEC in Long Beach, California, Wolfspeed CEO Gregg Lowe’s speech was a major highlight of the conference program. Lowe, the chief of the only vertically integrated silicon carbide (SiC) company and cheerleader of this power electronics technology, didn’t disappoint.

In his plenary presentation, “The Drive for Silicon Carbide – A Look Back and the Road Ahead – APEC 2024,” he called SiC a market hitting a major inflection point. “It’s a story of four decades of American ingenuity at work, and it’s safe to say that the transition from silicon to SiC is unstoppable.”

Figure 1 Lowe: The future of this amazing technology is only beginning to dawn on the world at large, and within the next decade or so, we will look around and wonder how we lived, traveled, and worked without it. Source: APEC

Lowe told the APEC 2024 attendees that the demand for SiC is exploding, and so is the number of applications using this wide bandgap (WBG) technology. “Technology transitions like this create moments and memories that last a lifetime, and that’s where we are with SiC right now.”

Interestingly, just before Lowe’s presentation, Balu Balakrishnan, chairman and CEO of Power Integrations, raised questions about the viability of SiC technology during his presentation titled “Innovating for Sustainability and Profitability”.

Balakrishnan’s counterviews

While telling the Power Integrations’ gallium nitride (GaN) story, Balakrishnan narrated how his company started heavily investing in SiC 15 years ago and spent $65 million to develop this WBG technology. “One day, sitting in my office, while doing the math, I realized this isn’t going to work for us because of the amount of energy it takes to manufacture SiC and that the cost of SiC is so much more than silicon,” he said.

“This technology will never be as cost-effective as silicon despite its better performance because it’s such a high-temperature material, which takes a humongous amount of energy,” Balakrishnan added. “It requires expensive equipment because you manufacture SiC at very high temperatures.”

The next day, Power Integrations cancelled its SiC program and wrote off $65 million. “We decided to discontinue not because of technology, but because we believe it’s not sustainable and it’s not going to be cost-effective,” he said. “That day, we switched over to GaN and doubled down on it because it’s low-temperature, operates at temperatures similar to silicon, and mostly uses the same equipment as silicon.”

Figure 2 Balakrishnan: GaN will eventually be less expensive than silicon for high-voltage switches. Source: APEC

So, why does Power Integrations still have SiC product offerings? Balakrishnan acknowledged that SiC can go to higher voltages and power levels and is a more mature technology than GaN because it started earlier.

“There are certain applications where SiC is very attractive today, but I’ll dare to say that GaN will get there sometime in the future,” he added. “Fundamentally, there isn’t anything wrong with taking GaN to higher voltages and power levels.” He mentioned a 1,200 V GaN device Power Integrations recently announced and claimed that his company plans to announce another GaN device with an even higher voltage very soon.

Balakrishnan recognized that there are problems to be solved. “But these challenges require R&D efforts rather than a technology breakthrough,” he said. “We believe that GaN will get to the point where it’ll be very competitive with SiC while being far less expensive to build.”

Lowe’s defense

In his speech, Lowe also recognized the SiC-related cost and manufacturability issues, calling them near-term turbulence. However, he was optimistic that undersupply vs demand issues encompassing crystal boules, substrate capability, wafering, and epi will be resolved by the end of this decade.

“We will continue to realize better economic value with SiC by moving from 150-mm to 200-mm wafers, which increases the area by 1.7x and decreases the cost by about 40%,” he said. His hopes for resolving cost and manufacturability issues also seemed to lie in a huge investment in SiC technology and the automotive industry as a major catalyst.

For a reality check on these counterviews about the viability of SiC, a company dealing in both SiC and GaN could offer a balanced perspective. That made Navitas’ booth at APEC 2024 a natural stop, where the company’s VP of corporate marketing, Stephen Oliver, explained the evolution of SiC wafer costs.

He said a 6-inch SiC wafer from Cree cost nearly $3,000 in 2018. Fast forward to 2024, and a 7-inch wafer from Wolfspeed (renamed from Cree) costs about $850. Moving forward, Oliver envisions that the cost could come down to $400 by 2028 while being built on 12-inch to 15-inch SiC wafers.

Navitas, a pioneer in the GaN space, acquired startup GeneSiC in 2022 to cater to both WBG technologies. At the show, in addition to Gen-4 GaNSense Half-Bridge ICs and GaNSafe, which incorporates circuit protection functionality, Navitas also displayed Gen-3 Fast SiC power FETs.

In the final analysis, Oliver’s viewpoint about SiC tilted toward Lowe’s pragmatism in SiC’s shift from 150-mm to 200-mm wafers. Recent technology history is a testament to how economies of scale have managed cost and manufacturability issues, and that’s what the SiC camp is counting on.

A huge investment in SiC device innovation and the backing of the automotive industry should also be helpful along the way.

Related Content


The post Silicon carbide (SiC) counterviews at APEC 2024 appeared first on EDN.

A self-testing GPIO

Wed, 03/20/2024 - 15:49

General purpose input-output (GPIO) pins are the simplest peripherals.

The link to an object under control (OUC) may become unreliable for many reasons: loss of contact, a short circuit, temperature stress, or vapor condensate on the components. Sometimes a better link can be established with a popular bridge chip simply by exploiting the possibilities the chip itself provides.

Wow the engineering world with your unique design: Design Ideas Submission Guide

A bridge such as NXP’s SC18IM700 usually provides a number of GPIOs, which are handy for implementing a test. These GPIOs retain all their functionality and can be used as usual after the test.

To make the test possible, the chip must have more than one GPIO. This way, they can be paired, allowing the members of each pair to poll each other.

Since GPIO activity during the test may disturb the regular functions of the OUC, one of the GPIO pins can be used to temporarily disable these functions. When the object responds slowly (is inertial), this precaution can often be omitted.

Figure 1 shows how the idea can be implemented in the case of the SC18IM700 UART-I2C bridge.

Figure 1: Self-testing GPIO using the SC18IM700 UART-I2C bridge.

The values of resistors R1…R4 must be large enough not to cause an unacceptably large current; on the other hand, they should still provide sufficient voltage for a logic “1” on the input. The values shown in Figure 1 are good for most applications but may need to be adjusted.

Some difficulties may arise with the quasi-bidirectional output configuration, since in this configuration the pin is only weakly driven when the port outputs a logic HIGH. The problem may occur when the resistance of the corresponding OUC input is too low.

If the UART data rate is too high to properly charge the OUC-related capacitance during the test, the rate can be decreased or the corresponding resistor values can be reduced.
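
As a rough feel for the numbers (illustrative values only; the actual R1…R4 and capacitance values depend on Figure 1 and the OUC wiring): charging a node capacitance C through a resistor R to roughly two-thirds of VDD takes about t = R·C·ln(3) ≈ 1.1·R·C. With, say, R = 100 kΩ and C = 100 pF, that is about 11 µs, so bit times much shorter than this would sample the pin before it has settled.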

A sketch of the Python subroutine follows:

PortConf1 = 0x02
PortConf2 = 0x03

def selfTest():
    data = 0b10011001
    bridge.writeRegister(PortConf1, data)   # PortConfig1
    data = 0b10100101
    bridge.writeRegister(PortConf2, data)   # PortConfig2
    # --- write 1
    cc = 0b11001100
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()                  # 0b11111111
    if aa != 0b11111111:
        return False                        # check
    # --- write 0
    cc = 0b00000000
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b00000000:
        return False                        # check
    # partners swap
    data = 0b01100110
    bridge.writeRegister(PortConf1, data)   # PortConfig1
    data = 0b01011010
    bridge.writeRegister(PortConf2, data)   # PortConfig2
    # --- write 1
    cc = 0b00110011
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b11111111:
        return False                        # check
    # --- write 0
    cc = 0b00000000
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b00000000:
        return False                        # check
    # check quasi-bidirectional mode
    data = 0b01000100
    bridge.writeRegister(PortConf1, data)   # PortConfig1
    data = 0b01010000
    bridge.writeRegister(PortConf2, data)   # PortConfig2
    # --- write 1
    cc = 0b00110011
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b11111111:
        return False                        # check
    # --- write 0
    cc = 0b00000000
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b00000000:
        return False                        # check
    return True

Peter Demchenko studied math at the University of Vilnius and has worked in software development.

Related Content


The post A self-testing GPIO appeared first on EDN.

15-bit voltage-to-time ADC for “Proper Function” anemometer linearization

Tue, 03/19/2024 - 15:55

A while back, I published a simple design idea for a thermal airspeed sensor based on a self-heated Darlington transistor pair. The resulting sensor is simple, sensitive, and solid-state, but suffers from a radically nonlinear airspeed response, as shown in Figure 1.

Figure 1 The Vout versus airspeed response of the thermal sensor is very nonlinear.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Veteran design idea contributor Jordan Dimitrov has provided an elegant computational numerical solution for the problem that makes the final result nearly perfectly linear. He details it in Proper function linearizes a hot transistor anemometer with less than 0.2 % error.

However, a consequence of performing linearization in the digital domain after analog-to-digital conversion is a significant increase in required ADC resolution, e.g., from 11 bits to 15. Here’s why…

Acquisition of a linear 0 to 2000 fpm airspeed signal resolved to 1 fpm would require an ADC resolution of 1 in 2000 = 11 bits. But inspection of Figure 1’s curve reveals that, while the full-scale span of the airspeed signal is 5 V, the signal change associated with an airspeed increment from 1999 fpm to 2000 fpm is only 0.2 mV. Thus, keeping the former on scale while resolving the latter needs a minimum ADC resolution of:

1 in (5 / 0.0002) = 1 in 25,000 = 14.6 bits

15-bit (and higher resolution) ADCs are neither rare nor especially expensive, but they’re not usually integrated peripherals inside microcontrollers as mentioned in Mr. Dimitrov’s article. So, it seems plausible that a significant cost might be associated with provision of an ADC with resolution adequate for his design. I wondered about what alternatives might exist.

Here’s a design for a simple and cheap high-resolution ADC built around an old, inexpensive, and widely available friend: the 555 analog timer chip.

See Figure 2 for the schematic.

Figure 2 High resolution voltage-to-time ADC suitable for self-heated transistor anemometer linearization. An asterisk denotes precision components (1% tolerance).

Signal acquisition begins with the R2, R3, U1 summation network combining the 0 to 5 V input signal with U1’s 2.5 V precision reference to form:

V1 = (Vin + 2.5 V)/2 = 1.25 V to 3.75 V = (1 to 3) * 1.25 V

 V1 accumulates on C1 between conversion cycles with a time constant of:

(R2R3/(R2 + R3) + R1)C1 = 1.1M * 0.039 µF = 42.9 ms

Thus, for 16-bit accuracy, a minimum settling time is required of:

42.9 ms LOGe(2^16) = 480 ms

The actual conversion cycle can then be started by inputting a CONVERT command pulse (>2.5 V amplitude and >1 µs duration) to the 555 Vth (threshold) pin 6 as illustrated in Figure 3.

 Figure 3 ADC cycle begins with a CONVERT Vth pulse that generates an OUT pulse of duration Tout = LOGe(V1 / 1.25 V)R1C1.

The OUT pulse (low true) begins with the rising edge of CONVERT and is coincident with the 555 Dch (discharge) pin 7 being driven to zero volts, beginning the discharge of C1 from V1 to the 555 trigger voltage (Vtrg = Vc/2 = 1.25 V) on pin 7. The duration of C1 discharge and Tout, accumulated digitally (a counter of 16 bits and 1 µs resolution is adequate) by a suitable microcontroller, are given by:

Tout = LOGe(V1 / 1.25 V)R1C1 = LOGe(V1 / 1.25 V) 1M * 0.039 µF

= LOGe((Vin + 2.5 V) / 2.5 V) 39 ms

= LOGe(1) 39 ms = 0 for Vin = 0

= LOGe(3) 39 ms = 42.85 ms for Vin = 5 V

At the end of Tout, Dch is released so the recharge of C1 can commence, and the conversion result:

(N = 1 MHz * Tout)

is available for linearization computation. The math to decode and recover Vin is given by:

Vin = 2.5 V (EXP(N / 39000) – 1)
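
As a quick sanity check of that decode step, here’s a minimal C sketch (the function name and the use of floating-point math are illustrative assumptions, not part of the original design; it assumes 1-µs counts and R1C1 = 39 ms as above). N = 0 maps back to 0 V and N = 42,850 to roughly 5 V.

#include <math.h>

/* Convert the 1-MHz counter reading N (accumulated during Tout) back to Vin:
   Vin = 2.5 V * (exp(N / 39000) - 1). */
double decode_vin(unsigned long n_counts)
{
    return 2.5 * (exp((double)n_counts / 39000.0) - 1.0);
}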

A final word. You may be wondering about something. Earlier I said a resolution of 1 part in 25000 = 14.6 bits would be needed to quantify the Vin delta between 1999 and 2000 fpm. So, what’s all this 42850 = 15.4 bits stuff?

The 42850 thing arises from the fact that the instantaneous slope (rate of change = dV/dT) of the C1 discharge curve is proportional to the voltage across, and therefore the current through, R1. For a full-scale input of Vin = 5 V, this parameter changes by a factor of 3, from V1 = 3.75 V and 3.75 µA at the beginning of the conversion cycle to only 1.25 V and 1.25 µA at the end. The higher dV/dT early in the discharge causes a proportional reduction in resolution there. Consequently, to achieve the desired 25,000:1 resolution at Vin = 5 V, a higher average resolution is needed.

The necessary resolution factor bump is square root (3) = 1.732… of which 42850 / 25000 = 1.714 is a rough and ready, but adequate, approximation.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content


The post 15-bit voltage-to-time ADC for “Proper Function” anemometer linearization appeared first on EDN.

Harnessing the promise of ultracapacitors for next-gen EVs

Tue, 03/19/2024 - 13:40

Electronics design engineers bear the responsibility of overcoming the world’s concerns with electric vehicle (EV) power sources. Lithium-ion batteries are heavy, put pressure on natural resources, and are sometimes slow to charge. The logical next step in EV development is using ultracapacitors as a complementary power source when batteries alone cannot meet demand, allowing electrification to scale and quieting the detractors of modern charging electronics.

Looking to ultracapacitors may remove some of the market uncertainties surrounding other EV power sources. Electrostatic storage provides higher capacitance than the chemical storage of conventional EV batteries. Additionally, the designs remove several rare metals from the composition, making specific materials less challenging to acquire.

It may not match the energy density of chemical storage, but its disruptively long life cycle and lightning-fast charging could make EV ownership more attractive. Whereas repeated charging produces notable degradation in other batteries, ultracapacitors can endure over 1 million charge and discharge cycles before noticeable damage.

Car manufacturers can install ultracapacitors alongside batteries for supplementary power. The energy boost is ideal for large-capacity fleet vehicles driving long ranges. Sometimes, these vehicles need instantaneous bursts of power for climbing steep inclines rather than waiting on a chemical reaction. The two technologies working in conjunction reduce strain on both, extending their life cycles.

Ultracapacitor commercialization

Ultracapacitor designers have seen interest pick up in the last several years. In 2020, an Estonian manufacturer received $161 million in new contracts for individual and public transportation needs. This signals that electronics design engineers must create robust, accessible ultracapacitors to meet increasing demand and combat the climate crisis.

Lithium-ion batteries have an advantage over all other EV power sources because of their density, even if their heft and life span negatively affect their reputation. They are still the go-to device for auto manufacturers. Engineers must consider these design aspects for future ultracapacitor blueprints:

  • Materials with higher surface area and greater capacitance
  • Electrolytes with higher conductivity using additives
  • Thermal management for improved temperature regulation and reduced runaway
  • Seamless compatibility when integrated with other batteries
  • Porous electrode designs for increased performance

Additionally, these specs inform engineers how to size the ultracapacitor for driving applications. Everything from maximum voltage potential to discharge duration affects sizing, which must be communicated to OEMs so they can integrate it into manufacturing.
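
As a rough illustration of how those specs drive sizing (a simplified first-pass sketch with hypothetical names and numbers, not a design rule): if the bank must deliver energy E while its voltage droops from Vmax to a minimum usable Vmin, the required capacitance follows from E = ½C(Vmax² − Vmin²).

/* First-pass estimate (illustrative only) of the bank capacitance in farads
   needed to deliver energy_j joules while drooping from v_max to v_min volts;
   ignores ESR, converter efficiency, aging, and derating. */
double ultracap_capacitance(double energy_j, double v_max, double v_min)
{
    return 2.0 * energy_j / (v_max * v_max - v_min * v_min);
}

For example, delivering a 100 kJ boost while drooping from 48 V to 24 V would call for roughly 116 F before derating.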

What’s next for design engineers

Electronics design engineers must collaborate with renewable energy experts to make the transition to market-friendly ultracapacitors a reality. Engineers must validate a design’s electromagnetic compatibility and signal integrity. These efforts only matter if power providers are consistent and reliable enough to support charging infrastructure.

Grid stability with high frequency and voltage is the foundation for success, so communicating ultracapacitor design needs to the renewable sector is critical. Similarly, while one of the selling points for ultracapacitors is their charging time, there are few options for fueling these vehicles. Stations must be equipped with local battery packs instead of being directly connected to the grid to prevent overloads and shutdowns.

The final frontier electronics design engineers could explore is a vehicle capable of running solely on an ultracapacitor. For now, using ultracapacitors alongside batteries is the next step. Research and development should explore their potential as a sole power source, though that does not appear feasible in 2024’s developmental landscape.

Electrification needs lithium-ion to become commercially viable, and it is the most cost-effective option right now for consumers. However, the circuit designs and engineers’ prototypes for ultracapacitors show a bright future for the EV industry. This power source, alongside other battery options, will lead to more comprehensive compliance considerations, intersector collaboration, and cost optimization for the EV market.

Ellie Gabel is a freelance writer as well as an associate editor at Revolutionized.

Related Content


The post Harnessing the promise of ultracapacitors for next-gen EVs appeared first on EDN.

Single button load switches on the chip 222

Mon, 03/18/2024 - 16:39

The 222-microcircuit project described earlier in [1, 2] is an analog of the 555 microcircuit. Its main purpose is the generation of rectangular pulses with an adjustable fill factor (duty cycle) and independent frequency control. Such a chip is not produced industrially, although it is not difficult to assemble a prototype using two comparators and five resistors, Figure 1. The pin configuration and functions of the chip and its typical application circuit are also shown. Several devices that can be built around the 222 chip are given in [1, 2].

Figure 1 The internal structure of the proposed 222 chip, its pin configuration and functions, and a typical application circuit.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Simple switching devices can be created on the basis of the 222 chip; they are turned on by briefly pressing a start button and turned off by pressing the same button for a longer time. Figure 2 shows a diagram of such a device.

Figure 2 Switching device controlled by one button.

In the initial state, a fixed voltage from the resistive divider R1, R2 is applied to the Cx input of the chip (pin 2). The voltage at the control input ADJ (pin 5) is zero, and the voltage at the PWM output (pin 4) is also zero. Transistor Q1 is off, and the load Rload is de-energized. Capacitor C1 is charged through the contacts of button S1 to the device’s supply voltage. When the S1 button is briefly pressed, the voltage from the charged capacitor C1 is applied to the ADJ input (pin 5). The voltage at the PWM output (pin 4) rises to the supply voltage and, through resistor R3, feeds back into the ADJ input (pin 5). This latches the chip, so a constant high level appears at its output and remains. Transistor Q1 then turns on, connecting the load to the power source.

In order to turn off the load, it is necessary to press the S1 button again, holding it down for a longer time. Capacitor C1 will discharge through resistors R5 and R6 to a voltage below the switching threshold of the 222 chip, and the device will return to its original state.

The second version of the device, Figure 3, works on a different principle. When the S1 button is pressed, the U1 (222) chip changes state and the load is connected to the power source. You can return the device to its original state by briefly pressing the S2 button. Formally, this is a two-button device that plays the role of a thyristor.

Figure 3 A pseudo-thyristor device on a 222 chip.

Figure 4 shows a combined load-control scheme. You can turn the load on and off by pressing the S1 button for a short or long time. The load can also be turned off by a brief interruption of the supply voltage when the S2 button is pressed.

Figure 4 A combined device for push-button switching of the load on and off using the 222 chip.

Michael A. Shustov is a doctor of technical sciences, candidate of chemical sciences and the author of over 750 printed works in the field of electronics, chemistry, physics, geology, medicine, and history.

Related Content

References

  1. Shustov M.A. “Chip 222 – alternative 555. PWM generator with independent frequency control”, International Journal of Circuits and Electronics, 2021, V. 6, P. 23–31. Pub. Date: 06 September 2021. https://www.iaras.org/iaras/home/computer-science-communications/caijce/chip-222-alternative-555-pwm-generator-with-independent-frequency-control
  2. Shustov M.A. “Adjustable threshold devices on a chip 222”, Radioamateur (BY), 2023, No. 6, pp. 20–21.

The post Single button load switches on the chip 222 appeared first on EDN.

After TSMC fab in Japan, advanced packaging facility is next

Mon, 03/18/2024 - 13:15

Japan’s efforts to reboot its chip industry are likely to get another boost: an advanced packaging facility set up by TSMC. That seems a logical expansion to TSMC’s $7 billion front-end chip manufacturing fab built in Kumamoto on Japan’s southern island Kyushu.

In other words, a back-end packaging facility will follow the front-end fab to complement the chip manufacturing ecosystem in Japan amid considerations to diversify semiconductor supply chains beyond Taiwan due to geopolitical tensions. Trade media has been abuzz about TSMC setting up an advanced packaging plant and a new Reuters report supports this premise.

Especially since TSMC already set up an advanced packaging R&D center in Ibaraki prefecture, northeast of Tokyo, in 2021. The demand for advanced semiconductor packaging has surged due to high-end chips serving artificial intelligence (AI) and high-performance computing (HPC) applications. The rise of chiplets has also brought advanced packaging technologies into the limelight.

The above factors are prompting TSMC, the world’s largest semiconductor foundry, to plan additional packaging capacity; in fact, it’s already working to set up a new packaging facility in Chiayi, southern Taiwan. However, as TrendForce analyst Joanne Chiao notes, TSMC’s advanced packaging facility in Japan will likely be limited in scale. That’s mainly because most of TSMC’s packaging customers are based in the United States.

Figure 1 TSMC’s advanced packaging technology encompasses front-end 3D stacking techniques such as chip-on-wafer (CoW) and wafer-on-wafer (WoW) as well as back-end packaging technologies like integrated fan-out (InFO) and chip-on-wafer-on-substrate (CoWoS). Source: TSMC

With this new plant, TSMC’s CoWoS packaging technology will be transferred to Japan. It’s a 2.5D wafer-level packaging technology developed by TSMC that allows multiple dies to be integrated on a single substrate, providing higher performance and integration density than traditional packaging technologies. Currently, TSMC’s CoWoS packaging capacity is entirely based in Taiwan.

Figure 2 In CoWoS, multiple silicon dies are integrated on a passive silicon interposer, which acts as a communication layer for the active die on top. Source: TSMC

On TSMC’s part, the packaging facility in Japan will have closer access to the country’s leading semiconductor materials and equipment suppliers and a solid customer base. TSMC will also enjoy the generous subsidies from the Japanese government, which aims to revitalize the local semiconductor industry after losing ground to South Korea and Taiwan.

Finally, as the Reuters report notes, no decision on the scale and timeline of building the advanced packaging facility has been made yet. TSMC also declined to comment on this story. Still, with the construction of the TSMC fab in Kumamoto, industry observers firmly believe that the Taiwanese mega-fab operator will inevitably set up an advanced packaging facility in Japan.

Related Content


The post After TSMC fab in Japan, advanced packaging facility is next appeared first on EDN.

AI boom and the politics of HBM memory chips

Fri, 03/15/2024 - 12:13

The high-bandwidth memory (HBM) landscape, steadily growing in importance for its critical pairing with artificial intelligence (AI) processors, is ready to move to its next manifestation, HBM3e, increasing data transfer rate and peak memory bandwidth by 44%. Here, SK hynix, which launched the first HBM chip in 2013, is also the first to offer HBM3e validation for Nvidia’s H200 AI hardware.

HBM is a high-performance memory that stacks chips on top of one another and connects them with through-silicon vias (TSVs) for faster and more energy-efficient data processing. The demand for HBM memory chips has boomed with the growing popularity of generative AI. However, it’s currently facing a supply bottleneck caused by both packaging constraints and the inherently long production cycle of HBM.

Figure 1 SK hynix aims to maintain its lead by releasing an HBM3e device with 16 layers of DRAM and a single-stack speed of up to 1,280 GB/s.

According to TrendForce, 2024 will mark the transition from HBM3 to HBM3e, and SK hynix is leading the pack with HBM3e validation in the first quarter of this year. It’s worth mentioning that SK hynix is currently the primary supplier of HBM3 memory chips for Nvidia’s H100 AI solutions.

Samsung, now fighting back to make up for lost ground, has received certification for AMD’s MI300 series AI accelerators. That’s a significant breakthrough for the Suwon, South Korea-based memory supplier, as AMD’s AI accelerators are expected to scale up later this year.

Micron, which largely missed the HBM opportunity, is also catching up by launching the next iteration, HBM3e, for Nvidia’s H200 GPUs by the end of the first quarter in 2024. Nvidia’s H200 GPUs will start shipping in the second quarter of 2024.

Figure 2 The 8H HBM3e memory offering 24 GB will be part of Nvidia’s H200 Tensor Core GPUs, which will begin shipping in the second quarter of 2024. Source: Micron

It’s important to note that when it comes to HBM technology, SK hynix has remained ahead of its two mega competitors—Micron and Samsung—since 2013, when SK hynix introduced HBM memory in partnership with AMD. It took Samsung two years to challenge its South Korean neighbor when it developed the HBM2 device in late 2015.

But the rivalry between SK hynix and Samsung is more than merely a first-mover advantage. While Samsung chose the conventional non-conductive film (NCF) technology for producing HBM chips, SK hynix switched to the mass reflow molded underfill (MR-MUF) method to address NCF limitations. According to a Reuters report, while SK hynix has secured about 60-70% yield rates for its HBM3 production, Samsung’s HBM3 production yields stand at nearly 10-20%.

The MUF process involves injecting and then hardening liquid material between layers of silicon, which in turn improves heat dissipation and production yields. Here, SK hynix teamed up with the Japanese materials engineering firm Namics while sourcing MUF materials from Nagase. SK hynix adopted the mass reflow molded underfill technique ahead of others and subsequently became the first vendor to supply HBM3 chips to Nvidia.

Recent trade media reports suggest Samsung is in contact with MUF material suppliers, though the memory supplier has vowed to stick to its NCF technology for the upcoming HBM3e chips. However, industry observers point out that Samsung’s MUF technology will not be ready until 2025 anyway. So, it’s likely that Samsung will use both NCF and MUF techniques to manufacture the latest HBM3 chips.

Both Micron and Samsung are making strides to narrow the gap with SK hynix as the industry moves from HBM3 to HBM3e memory chips. Samsung, for instance, has announced that it has developed an HBM3e device with 12 layers of DRAM chips, and it boasts the industry’s largest capacity of 36 GB.

Figure 3 The HBM3E 12H delivers a bandwidth of up to 1,280 GB/s and a storage capacity of 36 GB. Source: Samsung

Likewise, Idaho-based Micron claims to have started volume production of its 8-layer HBM3e device offering 24-GB capacity. As mentioned earlier, it’ll be part of Nvidia’s H200 Tensor Core units shipping in the second quarter. Still, SK hynix seems to be ahead of the pack when it comes to the most sought-after AI memory: HBM.

It made all the right moves at the right time and won Nvidia as a customer in late 2019 for pairing HBM chips with AI accelerators. No wonder engineers at SK hynix now jokingly call HBM “Hynix’s Best Memory”.

Related Content


The post AI boom and the politics of HBM memory chips appeared first on EDN.

Scalable MCUs tout MPU-like performance

Thu, 03/14/2024 - 20:46

Outfitted with an Arm Cortex-M7 core running at up to 600 MHz, ST’s STM32H7R/S MCUs provide the performance, scalability, and security of a microprocessor. They embed 64 kbytes of bootflash and 620 kbytes of SRAM on-chip to speed execution, while fast external memory interfaces support data transfer rates up to 200 MHz.

The STM32H7R and STM32H7S microcontrollers come with powerful security features. They include protection against physical attacks, memory protection, code isolation to protect the application at runtime, and platform authentication. Additionally, STM32H7S devices provide immutable root of trust, debug authentication, and hardware cryptographic accelerators. With these features, the MCUs target security certifications up to SESIP3 and PSA Level 3.

The lines are further divided into general-purpose MCUs (STM32H7R3/S3) and those with enhanced graphics-handling capabilities (STM32H7R7/S7). With their dedicated NeoChrom GPU, these MCUs deliver rich colors, animation, and 3D-like effects. Developers can share software between the two lines for efficient use of project resources and to achieve faster time-to-market for new products.

STM32H7R/S MCUs are scheduled to enter volume production starting in April 2024. Sample requests and pricing information are available from local ST sales offices. For more information about the STM32H7R/S general-purpose and graphics lines of MCUs, click here.

STMicroelectronics

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Scalable MCUs tout MPU-like performance appeared first on EDN.

Rad-tolerant LNA spans 2 GHz to 5 GHz

Thu, 03/14/2024 - 20:45

An off-the-shelf S-Band low-noise amplifier, the TDLNA2050SEP from Teledyne, tolerates up to 100 krads of total ionizing dose (TID) radiation. This makes the part suitable for use in high-reliability satellite communication systems and phase-array radar.

According to the manufacturer, the MMIC amplifier delivers a gain of 17.5 dB from 2 GHz to 5 GHz, while maintaining a noise figure of less than 0.4 dB and an output power (P1dB) of 19.5 dBm. The device should be biased at a VDD of +5.0 V and IDDQ of 60 mA.

The TDLNA2050SEP low-noise amplifier is built on a 90-nm enhancement-mode pHEMT process and is qualified per MIL-PRF-38534 Class K (space) or Class H (military). It comes in a 2×2×0.75-mm, 8-pin plastic DFN package.

Devices are available from Teledyne e2v HiRel or an authorized distributor.

TDLNA2050SEP product page

Teledyne e2v HiRel Electronics

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Rad-tolerant LNA spans 2 GHz to 5 GHz appeared first on EDN.

Retimers boost PCIe 6.x connectivity

Thu, 03/14/2024 - 20:45

Astera Labs has expanded its Aries PCIe/CXL Smart DSP retimer portfolio with devices that ensure robust PCIe 6.x and CXL 3.x connectivity. Doubling bandwidth to 64 GT/s per lane with automatic link equalization, Aries 6 retimers enable critical connectivity for AI server platforms and cloud infrastructure.

The protocol-aware, low-latency retimers integrate seamlessly between a root complex and endpoints, extending the reach threefold. They maintain signal integrity by compensating channel loss up to 36 dB at 64 GT/s with PAM4 signaling. Aries 6 also boasts low power at 11 W typical for a PCIe 6.x 16-lane configuration.

Aries 6 retimers are available in 16-lane and 8-lane variants to support PCIe 6.x and PCIe 5.x applications. They also come in multiple form factors, including silicon chips, Smart Cable modules, and boards. Seamless upgrading from second-generation Aries 5 retimers to third-generation Aries 6 is facilitated through adherence to industry-standard footprints.

Astera will demonstrate the Aries 6 retimers at this month’s NVIDIA GTC 2024 AI conference and expo.

Aries 6 product page

Astera Labs 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Retimers boost PCIe 6.x connectivity appeared first on EDN.

Edge AI/ML models team with Arm Keil MDK

Thu, 03/14/2024 - 20:44

Embedded developers can deploy AI and ML models developed on Edge Impulse’s platform directly in Arm’s Keil microcontroller development kit (MDK). The partnership between the two companies makes it easier for engineers to collaborate with other cross-disciplinary teams to build edge AI products and bring them to market.

Keil MDK is a widely deployed software development suite used to create, build, and debug embedded applications for Arm-based microcontrollers. The Edge Impulse integration brings the company’s edge AI tools directly to the Keil ecosystem via the Common Microcontroller Software Interface Standard (CMSIS). Models developed in Edge Impulse Studio can be deployed as an Open-CMSIS Pack and imported into any Arm Keil MDK project.

Developers can improve the performance of their applications by combining Edge Impulse’s Edge Optimized Neural (EON) compiler with Arm’s latest compiler. According to Edge Impulse, the EON compiler runs models with up to 70% less RAM usage and up to 40% less flash usage. This is in addition to the savings achieved with the Arm compiler.

To get started with the Edge Impulse CMSIS Pack for the Arm Keil MDK, click here. Read more about the Arm Keil integration on the Edge Impulse blog.

Edge Impulse

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Edge AI/ML models team with Arm Keil MDK appeared first on EDN.

Alphawave demos 3-nm UCIe subsystem at 24 Gbps

Thu, 03/14/2024 - 20:44

Alphawave Semi announced the successful bring-up of its first chiplet-connectivity silicon platform on TSMC’s advanced 3-nm process. The silicon-proven Universal Chiplet Interconnect Express (UCIe) subsystem, capable of operating at 24 Gbps per lane, was demonstrated at the recent Chiplet Summit in Santa Clara, CA.

Combining PHY IP and interface controller IP, the UCIe 1.1-compliant subsystem delivers high bandwidth density at very low power and with low latency. Its configurable die-to-die (D2D) controller supports streaming, PCIe/CXL, AXI-4, AXI-S, CXS, and CHI protocols. In addition, the PHY can be configured for TSMC’s chip-on-wafer-on-substrate (CoWoS) and integrated fanout (InFO) packaging technologies. Built-in bit error rate (BER) health monitoring ensures reliable operation.

“Achieving 3nm silicon-proven status for our 24-Gbps UCIe subsystem is a key milestone for Alphawave Semi, as it is an essential piece of our chiplet connectivity platform tailored for hyperscaler and data-infrastructure applications,” said Letizia Giuliano, VP IP Product Marketing at Alphawave Semi. “We are thankful to our TSMC team for their outstanding support, and we look forward to accelerating our mutual customers’ high-performance chiplet-based designs on TSMC’s leading-edge 3nm process.”

Read more about the UCIe subsystem on Alphawave Semi’s blog.

Alphawave Semi 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Alphawave demos 3-nm UCIe subsystem at 24 Gbps appeared first on EDN.

Cache coherent interconnect IP pre-validated for Armv9 processors

Thu, 03/14/2024 - 11:57

Modern system-on-chip (SoC) designs require multiple interconnects for optimal performance, and here, cache coherent and non-coherent interconnects work together. In fact, it’s imperative that SoCs have an efficient combination of cache-coherent and non-coherent operations.

While SoC parts like accelerators and peripherals generally don’t require cache coherency, sharing a coherent view of memory and I/O is critical, so the processor has access to the most recent data without having to go off-chip. Arteris claims that its non-coherent FlexWay interconnect IP and Ncore cache coherent network-on-chip (NoC) IP seamlessly work together to offer SoC designers robust architectural flexibility.

The latest version of its cache-coherent NoC IP works with multiple processor IPs, including RISC-V and the next-generation Armv9 Cortex processor. Arteris has pre-validated Armv9 Cortex processor IP for its Ncore cache coherent interconnect IP, and the resulting validation system boots Linux on a multi-cluster Arm design and executes test suites to validate critical cache coherency cases.

It also supports multiple protocols, including CHI-E, with which the latest Armv9 processors are closely associated. Other protocols are CHI-B and ACE coherent, plus ACE-Lite and AXI* IO coherent interfaces. That allows chip designers to secure their investment in older architectures and evolve in a cost-effective manner.

Ncore can scale across a mix of fully coherent, I/O-coherent, non-coherent, memory and peripheral interfaces using a variety of NoC topologies. Source: Arteris

Next, Ncore cache coherent interconnect IP has achieved ISO 26262 certification from exida, a certification agency specializing in functional safety standards for the automotive industry. Previously, Arteris supported functional safety, but designers had to perform their own hardware checks as part of the safety process. This Ncore version, however, is certified, meaning the interconnect design is ready out of the box with ISO 26262 certification.

On the software side, Ncore has a very logical user interface flow to accelerate design efficiency. The flow starts at the architectural level with chip specifications and system assembly configuration options. Then, it goes to the automatic mapping process of NoC library elements, followed by optimization and refinement before RTL is generated.

Moreover, compared to the manual approach, Ncore maintains a database of the inputs that SoC architects require. So, once the initial configuration is captured, which can be iterated, SoC designers can revisit each segment, making the job of managing SoC specifications a straightforward task.

Charles Janac, president and CEO of Arteris, says that SoC designers are challenged by the growing complexity resulting from the number of processing elements, multiple protocols, and functional safety requirements of modern electronics. “Our latest release of a production-proven Ncore marks an important milestone toward our ultimate cache coherent interconnect IP vision to connect any processor, using any protocol and topology.”

Ncore supports direct connections for heterogeneous, asymmetric systems and other flexible connectivity options, ensuring adaptability to various applications across automotive, industrial, communications, and enterprise computing markets. Arteris claims that Ncore can save SoC design teams upward of 50 years of engineering effort per project compared to manually generated interconnect solutions.

Related Content


The post Cache coherent interconnect IP pre-validated for Armv9 processors appeared first on EDN.

Power delivery for a load that is driven with multiple sources

Wed, 03/13/2024 - 15:26

When a load is driven simultaneously by more than one source, with each source operating at its own frequency, the individual load power deliveries from those sources are independent of each other. Whatever power any one of those sources would provide to the load all by itself, that power delivery will not be affected by the presence or absence of the other sources.

Imagine a stack of voltage sources connected in series and feeding into some load resistance, R. It could look something like Figure 1.

Figure 1 A stack of voltage sources connected in series and feeding into some load resistance, R.

Of course, we could have more sources, say four, five or more, but three is a nice and convenient number. For the sake of discussion, we can meaningfully call the voltage from this stack of three a “triplet”. We further say of our triplet that each source is delivering its voltage at a different frequency. The frequency of the DC source is, of course, zero.

The instantaneous power delivered to R is the instantaneous voltage at the top of the stack squared and then divided by R. The value of R is not of concern for now, so we will just look at that stack-top voltage which is our triplet.

When we square the triplet expression, we get several components per the following algebra in Figure 2.

Figure 2 Squaring the triplet expression to obtain the instantaneous power delivered to R.

Just as a double check of this algebra to demonstrate equality, by choosing deliberately different frequencies W1 and W2, we can graphically plot the triplet squared and then plot the sum of the derived terms as shown in Figure 3. We see that they are indeed identical.

Figure 3 Graphical check of squaring the triplet where, by choosing different frequencies (W1 and W2), we can graphically plot the triplet squared and plot the sum of the derived terms. From this, we can visually confirm that they are identical.

Getting back to the algebra, the results of squaring the triplet are shown above. The value of the first line is never negative, only positive, but the values of the second and third lines swing back and forth from positive to negative, to positive to negative, and so on and so on.

The energy delivered to R is the integral of the power over time. The integral for the first line is positive which means that R does indeed receive energy from the terms of that first line, but the integrals of the second and third lines each come to zero. As time goes on, the positive swings of the second and third lines giveth while the negative swings of the second and third lines taketh away. Therefore, the integrals of those two lines come to zero which means that those two lines deliver no energy to the load and no energy delivery means no power delivery.

Only the terms of the first line deliver power to R where that power is shown in Figure 4.

Figure 4 The power delivery to R. As shown in the image, only the terms of the first line deliver power (to R).

The upshot of all this is that each voltage source of our triplet delivers as much power to R as it would deliver if it were connected to R all by itself. The power delivered by each source is independent of the presence or absence of each of the other sources.

If we’d had four sources or five sources or more, it wouldn’t matter. As long as their frequencies are not equal, the power deliveries of each source would still be independent of all of the others.

With more sources, the algebra would be more complex, but their independence of each other would remain the case.
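
For the skeptical, here’s a quick numerical check of that claim (an illustrative sketch; the load value, amplitudes, and frequencies are arbitrary choices, not from the article): integrate v²/R for a DC source plus two sinusoids over a whole number of cycles and compare the result with the sum of the powers each source would deliver on its own.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979;
    const double R = 50.0, Vdc = 2.0, A1 = 3.0, A2 = 1.5;       /* arbitrary values */
    const double w1 = 2.0 * PI * 100.0, w2 = 2.0 * PI * 370.0;  /* rad/s */
    const double T = 1.0, dt = 1e-6;                             /* 1-s window */
    double energy = 0.0;

    for (double t = 0.0; t < T; t += dt) {
        double v = Vdc + A1 * sin(w1 * t) + A2 * sin(w2 * t);
        energy += v * v / R * dt;            /* integrate instantaneous power */
    }

    /* Independent contributions: Vdc^2/R plus A^2/(2R) for each sinusoid */
    double p_independent = (Vdc * Vdc + A1 * A1 / 2.0 + A2 * A2 / 2.0) / R;

    printf("integrated average power = %.4f W\n", energy / T);
    printf("sum of individual powers = %.4f W\n", p_independent);
    return 0;
}

Both numbers come out at about 0.19 W, confirming that the cross terms average to zero.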

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related Content


The post Power delivery for a load that is driven with multiple sources appeared first on EDN.

Non-linear digital filters: Use cases and sample code

Wed, 03/13/2024 - 14:59

Most embedded engineers writing firmware have used some sort of digital filters to clean up data coming from various inputs such as ADCs, sensors with digital outputs, other processors, etc. Many times, the filters used are moving average (boxcar), finite impulse response (FIR), or infinite impulse response (IIR) filter architectures. These filters are linear in the sense that the outputs scale linearly with the amplitude of the input. That is, if you double the amplitude of the input stream, the output of the filter will double (ignoring any offset). But there are many non-linear filters (NLFs) that can be very useful in embedded systems, and I would bet that many of you have used a few of them before. An NLF does not necessarily respond in a mathematically linear fashion to its inputs.

Wow the engineering world with your unique design: Design Ideas Submission Guide

In some cases, FIRs and IIRs can struggle with things like impulse noise and burst noise that can cause the output to react in an unacceptable way. Non-linear filters can offer protection to the data stream. Depending on your application they may be used as a stand-alone filter or as a pre-filter before the FIR, IIR, or boxcar filter.

The examples in this article assume one-dimensional streams of signed or unsigned integers (including longs and long longs). Some examples may be applicable to floats, but others are not. Streaming is mentioned because it is assumed the data will be coming continuously from the source, and these filters will process the data and send it out one-for-one, in real time. In other words, we can’t just toss bad data; we need to send some value to replace the input. Some examples, though, may allow for oversampling, in which case the filter can then decimate the data. For example, a sensor may send data at a rate 10 times faster than needed; the filter then processes the 10 samples before sending out 1 sample to the next stage.

Another assumption for this discussion is that we are designing for small embedded systems that are required to process incoming samples in real time. Small in the sense that we won’t have a large amount of memory or a high MIPS rating. For that reason, we will avoid using floats.

So, let’s take a look at some of the non-linear filters and see where they are useful.

Bounds checking filter

This is one you may have used before but may not have considered it a filter. These filters are also often referred to as bounds checking, clipping, range checking, limiting, or even sanity checking. We are not referring to pointer checks but to data checking of incoming data or data that has been modified by previous code.

Here is a simple example piece of code:

#define Upper_Limit 1000
#define Lower_Limit -1000

int limit_check(int n)
{
    if (n < Lower_Limit)
        n = Lower_Limit;
    else if (n > Upper_Limit)
        n = Upper_Limit;
    return n;
}

Listing 1

You can see that if the integer n is greater than 1000, then 1000 is returned. If it is less than -1000, then -1000 is returned. If it is between -1000 and 1000, inclusive, the original value of n is returned. This keeps large impulse-noise values from passing through your system, i.e., it filters the data.

When combined with another filter like a FIR, IIR, or temporal filter (described below), the limit value could be scaled based on the running filter value. If an out of range sample is detected, based on this moving limit, the bounds checker could return the latest filter output instead of a fixed limit or the suspect sample.
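
A minimal sketch of that idea (the window width, names, and substitution policy are illustrative assumptions, not from a specific library):

#define Window 50   /* hypothetical half-width of the allowed band around the running filter */

int adaptive_limit_check(int n, int filterout)
{
    if (n > filterout + Window || n < filterout - Window)
        return filterout;   /* out of range: substitute the latest filter output */
    return n;               /* in range: pass the sample through */
}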

Some systems may provide some variation of bounds checking as a predefined function call or a macro.

Soft clipping filter

This is related to bounds checking but instead of just limiting a value after a certain level is reached, it slowly starts to back off the output value as the input approaches the maximum or minimum value. This type of soft clipping is often used in audio signal processing applications.

Soft clipping can be accomplished by something like a sigmoid function or a hyperbolic tangent function. The issue here is that these methods require significant processing power and will need fast approximation methods.

Soft clipping typically distorts a good portion of the input to output relationship, so it isn’t appropriate for use in most sensor inputs measuring things like temperatures, circuit voltages, currents, light levels, or other metrological inputs. As such, we will not discuss it further except to say there is lots of information on the web if you search “soft clipping”.

Truncated mean filter

The truncated mean, or trimmed mean, is a method where you take in a set of at least 3 readings, toss the maximum and minimum readings, and average the rest. This is similar to the method you see in some Olympic judging. For embedded projects it is good at removing impulse noise. One method to implement this filter is by sorting, but in most applications on a small processor this may be computationally expensive, so for anything larger than 5 samples I would suggest scanning the list to find the min and max. While running the scan, also calculate the total of all the entries. Lastly, subtract the min and max from the total and divide that value by the number of entries minus 2. Below is an example of such a function executing on an array of input values. At the end of the code there is an optional line to do rounding if needed.

#include <limits.h>

int TruncatedMean(int inputArray[], unsigned int arraySize)
{
    unsigned int i = 0;
    int min = INT_MAX;
    int max = INT_MIN;   // was 0; INT_MIN also handles all-negative inputs
    int total = 0;
    int mean = 0;
    int n = (int)arraySize - 2;   // number of samples remaining after trimming

    for (i = 0; i < arraySize; i++)
    {
        if (inputArray[i] < min)
            min = inputArray[i];
        if (inputArray[i] > max)
            max = inputArray[i];
        total = total + inputArray[i];
    }

    //mean = (total - min - max) / n;
    // The previous truncates down. To assist in rounding use the following line
    mean = (total - min - max + (n / 2)) / n;
    return mean;
}

Listing 2

If you have only 3 values, it may be advantageous, in computation time, to rewrite the C code to remove looping, as seen in this code example for 3 values.

int TruncatedMean_3(int a, int b, int c)
{
    int mean = 0;

    if (((a <= b) && (a >= c)) || ((a <= c) && (a >= b)))
        mean = a;
    else if (((b <= c) && (b >= a)) || ((b <= a) && (b >= c)))
        mean = b;
    else
        mean = c;
    return mean;
}

Listing 3

Note that the truncated mean, using at least 5 samples, can also be implemented to remove more than one maximum and one minimum if desired—which would be good for burst noise. Also note that you can implement this as a sliding function or an oversampling function. A sliding function, like a moving average, slides out the oldest input and inserts the new input and then executes the function again. So, you get one output for every input. Alternatively, an oversampling function takes in an array of values, finds the mean, and then gets a fresh array of new values to process. So, every array of input samples generates only one output and then you’ll need to get a new set of input values before calculating a new mean.
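
Here is a sketch of the sliding variant (it reuses the TruncatedMean() function from Listing 2; the window size and wrapper name are illustrative assumptions):

#define WindowSize 5

int TruncatedMean(int inputArray[], unsigned int arraySize);   // from Listing 2

int SlidingTruncatedMean(int newSample)
{
    static int window[WindowSize];      // zero-filled until the first WindowSize samples arrive
    static unsigned int idx = 0;

    window[idx] = newSample;            // overwrite the oldest sample with the newest
    idx = (idx + 1) % WindowSize;
    return TruncatedMean(window, WindowSize);   // one output per input
}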

Median filtering

A median filter finds the middle value in a set of samples. This may be useful for various types of noise sources. In a large set of samples, the sample array would be sorted and then the middle indexed variable would be read. For example, say we have an array of 7 samples (samples[0 to 6])—we sort them and then the median is samples[3]. Note that sorting could be problematic in a small embedded system due to execution speed, so median filtering should be used judiciously. For 3 samples, the code is the same as the code example function “TruncatedMean_3” (Listing 3) above. For larger groups, Listing 4 shows an example piece of C code for finding the median. At the bottom of the code, you will see the setting of median based on the number of samples being odd or even. This is needed because the median for an even number of samples is defined as the average of the middle two values. Depending on your need you may want to add rounding to this average.

#define numSamples 6

int sample[numSamples] = {5, 4, 3, 3, 1, 0};

int Median(int sample[], int n)
{
    int i = 0;
    int j = 0;
    int temp = 0;
    int median = 0;

    // First sort the array of samples
    for (i = 0; i < n; ++i) {
        for (j = i + 1; j < n; ++j) {
            if (sample[i] > sample[j]) {
                temp = sample[i];
                sample[i] = sample[j];
                sample[j] = temp;
            }
        }
    }

    // If numSamples is odd, take the middle number
    // If numSamples is even, average the middle two
    if ((n & 1) == 0)
        median = (sample[(n / 2) - 1] + sample[n / 2]) / 2;   // Even
    else
        median = sample[n / 2];                               // Odd
    return median;
}

Listing 4

Just as in the truncated mean filter, you can implement this as a sliding function or an oversampling function.

Majority filtering

Majority filters, also referred to as mode filters, extract the value from a set of samples that occurred the most times—majority voting. (This should not be confused with “majority element” which is the value occurring more than the number-of-samples/2.) Listing 5 shows a majority filter for 5 samples.

#define numSamples 5

int Majority(int sample[], int n)
{
    unsigned int count = 0;
    unsigned int oldcount = 0;
    int majoritysample = sample[0];
    int i = 0;
    int j = 0;

    for (i = 0; i < n; i++) {
        count = 0;
        for (j = i; j < n; j++) {
            if (sample[i] == sample[j])
                count++;
        }
        if (count > oldcount) {
            majoritysample = sample[i];
            oldcount = count;
        }
    }
    return majoritysample;
}

 Listing 5

The code uses two loops, the first grabbing one element at a time, and the second loop then stepping through the list and counting how many samples match the value indexed by the first loop. This second loop holds on to the highest match count, found along the way, and its sample value until the first loop steps through the entire array. If more than one set of sample values has the same count (e.g., {1, 2, 2, 0, 1}, two 2s and two 1s), the one found first will be returned as the majority.

Note that the majority filter may not be applicable to typical embedded data, as the dynamic range of the numbers (from sensors) is normally 8, 10, 12 bits or greater. If the received samples use a large portion of that dynamic range, the chance of samples in a small set matching is minimal unless the signal being measured is very stable. Due to this dynamic range issue, a modification of the majority filter may be useful: by right-shifting the sample values, the filter can match samples that are merely close to each other. For example, say we have the numbers (in binary) 00100000, 00100011, 01000011, 00100001. None of these match one another; they are all different. But if we shift them all right by 2 bits, we get 00001000, 00001000, 00010000, 00001000. Now three of them match. We can then average the original values of the matching samples, creating a modified majority value.
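
A minimal sketch of this shift-and-match modification follows; the function name, the shift amount, and the long accumulator are illustrative choices rather than code from the article:

#define SHIFT 2   // match samples that agree once the low bits are dropped

int ShiftedMajority(int sample[], int n)
{
    int i = 0;
    int j = 0;
    int count = 0;
    int bestCount = 0;
    int bestIndex = 0;
    long sum = 0;

    // Find the right-shifted value that occurs most often
    for (i = 0; i < n; i++) {
        count = 0;
        for (j = 0; j < n; j++) {
            if ((sample[i] >> SHIFT) == (sample[j] >> SHIFT))
                count++;
        }
        if (count > bestCount) {
            bestCount = count;
            bestIndex = i;
        }
    }

    // Average the original (unshifted) values of the matching group
    for (j = 0; j < n; j++) {
        if ((sample[j] >> SHIFT) == (sample[bestIndex] >> SHIFT))
            sum += sample[j];
    }
    return (int)(sum / bestCount);
}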

Again, as in the truncated mean filter, you can implement this as a sliding function or an oversampling function.

 Temporal filtering

This is a group of filters that react to a signal based more on time than amplitude. This will become clearer in a minute. We will refer to these different temporal filters as mode 1 through mode 4 and we begin with mode 1:

Mode 1 works by comparing the input sample to a starting filtered value (“filterout”). If the sample is greater than the current filtered value, the current filtered value is increased by 1; if the sample is less than the current filtered value, it is decreased by 1. (The increase/decrease amount can also be any reasonable fixed value, e.g., 2, 5, 10, and so on.) The output of this filter is “filterout”. You can see that the output will slowly move toward the signal level, so changes are related more to time (number of samples) than to the sample value.

Now, if we get an unwanted impulse, it can only move the output by 1, no matter what the sample’s amplitude is. This means burst noise and impulse noise are greatly mitigated. This type of filter is very good for signals that move slowly relative to the sample rate. It works very well for filtering things like temperature readings from an ADC, especially in an electrically noisy environment. It performed very well on a project I worked on to extract a very slowly moving signal sent on a power line (a very noisy environment, with the signal about 120 dB below the line voltage). It is also very good for creating a dynamic digital reference level, such as the dc bias level of an ac signal or a signal controlling a PLL. Listing 6 illustrates the use of the mode 1 temporal filter to smooth the value “filterout”.

#define IncDecValue 1

int sample = 0;
int filterout = 512;   // Starting value

// call your "getsample" function here...

if (sample > filterout)
    filterout = filterout + IncDecValue;
else if (sample < filterout)
    filterout = filterout - IncDecValue;

Listing 6

If the sample you are filtering is an int, you may want to do a check to make sure the filtered value doesn’t overflow/underflow and wrap around. If your sample is from a sensor or ADC that is 10 or 12 bits, this is not an issue and no check is needed.
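
If you do need the guard, a minimal sketch (reusing IncDecValue from Listing 6, with illustrative 16-bit limits) could look like this:

#define FILTER_MAX  32767    // illustrative limits; pick values for your int size
#define FILTER_MIN -32768

if ((sample > filterout) && (filterout <= FILTER_MAX - IncDecValue))
    filterout = filterout + IncDecValue;     // step up only if it cannot wrap
else if ((sample < filterout) && (filterout >= FILTER_MIN + IncDecValue))
    filterout = filterout - IncDecValue;     // step down only if it cannot wrap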

Mode 2 is the same as Mode 1, but instead of a single value for the increase/decrease amount, two or more values are used. One example is using a different increase/decrease value depending on the difference between the sample and the current filtered value (“filterout”): if they are close we use ±1, and if they are far apart we use ±10. This has been used successfully to speed up the startup of a temporally filtered control for a VCO used to match a frequency from a GPS signal.

#define IncDecValueSmall 1
#define IncDecValueBig   10
#define BigDiff          100

int sample = 0;
int filterout = 100;   // Starting value

// call your "getsample" function here...

if (sample > filterout) {
    if ((sample - filterout) > BigDiff)
        filterout = filterout + IncDecValueBig;
    else
        filterout = filterout + IncDecValueSmall;
}
else if (sample < filterout) {
    if ((filterout - sample) > BigDiff)
        filterout = filterout - IncDecValueBig;
    else
        filterout = filterout - IncDecValueSmall;
}

Listing 7

The increment/decrement value could also be a variable that is adjusted by the firmware depending on various internal factors or directly by the user.

Mode 3 is also very similar to Mode 1, but instead of changing by ±1, if the sample is greater than the current filtered value, the current filtered value is increased by a fixed percentage of the difference between the current filtered value and the sample. If the sample is less than the current filtered value, the current filtered value is decreased by a percentage of the difference. Let’s look at an example. Say we start with a current filtered value (“filterout”) of 1000 and are using a 10% change value. Then we get a new sample of 1500. This results in an increase of 10% of (1500 − 1000), or 50, so the current filtered value is now 1050. If the next sample is 500 and we use −10%, we get a new current filtered value of 995 (1050 minus 10% of (1050 − 500)).

#define IncPercent 10   // 10%
#define DecPercent 10   // 10%

int sample = 0;
int filterout = 1000;   // Starting value

// call your "getsample" function here...

if (sample > filterout) {
    filterout = filterout + (((sample - filterout) * IncPercent) / 100);
}
else if (sample < filterout) {
    filterout = filterout - (((filterout - sample) * DecPercent) / 100);
}

Listing 8

One thing to watch for is overflow in the multiplications; you may need to use longs when making these calculations. Also note that it may be useful to make “IncPercent” and “DecPercent” variables that can be adjusted by an internal algorithm or by user intervention.

To speed up this code on systems lacking a 1- or 2-cycle divide, instead of scaling IncPercent and DecPercent by 100, scale them by 128 (10% would be ~13). Then the “/100” in the code becomes “/128”, which the compiler can optimize to a shift operation.
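
As a sketch, the Mode 3 update from Listing 8 rewritten with that power-of-two scaling might look like this (the macro names are illustrative; 13/128 is roughly 10%):

#define IncScaled 13   // ~10%, expressed in 1/128ths
#define DecScaled 13   // ~10%, expressed in 1/128ths

if (sample > filterout)
    filterout = filterout + (((sample - filterout) * IncScaled) / 128);
else if (sample < filterout)
    filterout = filterout - (((filterout - sample) * DecScaled) / 128);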

Mode 4 is comparable to Mode 3 except, like Mode 2, there are two or more levels that can come into play depending on the difference between the sample and the current output value (“filterout”). In the code in listing 9, there are two levels.

#define IncPctBig   25   // 25%
#define DecPctBig   25   // 25%
#define IncPctSmall 10   // 10%
#define DecPctSmall 10   // 10%
#define BigDiff     100  // small/big threshold (value carried over from Listing 7)

int sample = 0;
int filterout = 1000;   // Starting value

// call your "getsample" function here...

if (sample > filterout) {
    if ((sample - filterout) > BigDiff) {
        filterout = filterout + (((sample - filterout) * IncPctBig) / 100);
    }
    else
        filterout = filterout + (((sample - filterout) * IncPctSmall) / 100);
}
else if (sample < filterout) {
    if ((filterout - sample) > BigDiff) {
        filterout = filterout - (((filterout - sample) * DecPctBig) / 100);
    }
    else
        filterout = filterout - (((filterout - sample) * DecPctSmall) / 100);
}

Listing 9

One interesting thought is that temporal filters could also be used to generate statistics on things like impulse and burst noise. They could count the number of occurrences over a period of time and calculate stats such as impulses/sec. This could be done by adding another compare for samples being very much larger, or smaller, than the “filterout” value.
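
A minimal sketch of that idea, bolted onto the Mode 1 filter of Listing 6, might look like the following; the threshold value and counter name are illustrative, and the count would be read and cleared periodically to produce an impulses/sec figure:

#define ImpulseThreshold 200   // "very much larger/smaller" than filterout

static unsigned long impulseCount = 0;

if ((sample > (filterout + ImpulseThreshold)) ||
    (sample < (filterout - ImpulseThreshold)))
    impulseCount++;   // read and clear this once per second to get impulses/sec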

Pushbutton filtering

You may not think of this as a filter, but it is a filter for 1-bit symbols. Pushbuttons, switches, and relays have contacts that bounce open and closed for several milliseconds when actuated. If these are not filtered by external hardware (normally an RC filter), you will have to debounce (filter) them in code. There are a multitude of ways to do this, with many discussions and much code on the web, but I think Jack Ganssle has the best write-up at http://www.ganssle.com/debouncing-pt2.htm.
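
For completeness, here is one common counter-based debounce sketch (my own illustration, not code from the referenced article); it reports a new state only after the raw input has been stable for several consecutive polls, with the poll rate and count chosen to span the expected bounce time:

#define DEBOUNCE_COUNT 8          // e.g., 8 reads at a 2 ms poll rate = 16 ms

int DebouncedRead(int rawInput)   // call at a fixed poll interval
{
    static int stableState = 0;
    static int counter = 0;

    if (rawInput == stableState) {
        counter = 0;                        // input agrees, reset the counter
    } else if (++counter >= DEBOUNCE_COUNT) {
        stableState = rawInput;             // stable long enough, accept it
        counter = 0;
    }
    return stableState;
}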

Using NLFs in your own projects

Although this is not a comprehensive list of NLFs, I hope it gives you a flavor of the concept. I’m sure many of you have created unique NLFs for your own projects; perhaps you would like to share them with others in the comments below.

Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.

 Phoenix Bonicatto is a freelance writer.

Related Content


The post Non-linear digital filters: Use cases and sample code appeared first on EDN.

Charge pump halves voltage to double current “efficiency”

Tue, 03/12/2024 - 13:50

Capacitor type charge pumps are a well-known, simple, efficient, cost-effective (and therefore popular!) method for inverting and multiplying voltage supply rails. Perhaps less well known, however, is that they also work just as well for dividing voltage (while multiplying current). Figure 1 illustrates a Vout = Vin/2, Iout = Iin*2 example pump built around the venerable xx4053 CMOS triple SPDT switch.

Figure 1 xx4053 based, 100kHz, voltage-halving, current-doubling charge pump.

Here’s how it works.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The R1C1 time constant couples the Vin/2 ppv square wave found at U1pin14 to U1pin9, creating an Fpump oscillator frequency of (approximately):

Fpump = 1 / (2 * 100 kΩ * 68 pF * ln(2)) ≈ 100 kHz

During the Fpump negative half-cycle (U1pin4 = 0), the upper (U1pin14) end of C2 is connected to Vin while the lower end (U1pin15) end is connected to Vout, thus charging C2 to:

Vc2 = Vin – Vout

Then, during the following Fpump positive half-cycle (U1pin4 = Vin), the upper end of C2 connects to Vout while the lower end connects to ground, and:

Vc2 = Vout

 This deposits a quantity of charge onto C3 of:

Q+ = C2((Vin – Vout) – Vout) = C2(Vin – 2Vout)

 During the subsequent negative half-cycle, again:

Vc2 = Vin – Vout

Depositing another charge onto C3 of:

Q- = C2 ((Vin – Vout) – Vout) = C2(Vin – 2Vout)

Thus, each full cycle of Fpump deposits a net charge onto C3 of:

Q = Q+ + Q- = 2 * C2(Vin – 2Vout)

 Which, if Iout = 0, forces Q = 0 and therefore:

Vin – 2Vout = 0

Vout = Vin / 2

 However, for the (much more interesting) case of Iout > 0:

Q = Iout / 100 kHz

2 * C2(Vin – 2Vout) = Iout / 100 kHz

Vin – 2Vout = Iout / 100 kHz / 2 / C2

Vout = (Vin – (Iout / 100 kHz / 2 / C2)) / 2

In other words, Vout droops a bit as the output is loaded. This is partly because, for a finite C2, Q is also finite, and partly because the U1a and U1b internal switches have non-zero ON resistance.

The combined effect on Vout versus Iout amounts to an effective impedance of 150 Ω for Vin = 5 V and is plotted in Figure 2, along with the current multiplication “efficiency”. Note that the latter soars past unity because only half of the dollops of charge delivered by C2 are drawn from the Vin rail, while the other half are supplied from the residual voltage on C2, causing zero additional drain on the rail.
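
To get a feel for the droop equation, here is a quick numeric check. The flying-capacitor value is an assumption for illustration only (the schematic value isn’t quoted in the text), and switch ON resistance is ignored:

#include <stdio.h>

int main(void)
{
    const double Vin   = 5.0;      // volts
    const double Fpump = 100e3;    // pump frequency, Hz
    const double C2    = 0.1e-6;   // assumed flying capacitor value, farads
    int mA;

    for (mA = 0; mA <= 20; mA += 5) {
        double Iout = mA / 1000.0;
        // Vout = (Vin - Iout/(2*Fpump*C2)) / 2, ignoring switch ON resistance
        double Vout = (Vin - Iout / (2.0 * Fpump * C2)) / 2.0;
        printf("Iout = %2d mA  ->  Vout = %5.3f V\n", mA, Vout);
    }
    return 0;
}

With this assumed C2, the pump term alone contributes 1/(4 × Fpump × C2) = 25 Ω of output impedance, so most of the measured 150 Ω would then be coming from the xx4053 switch ON resistances.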

Figure 2 Current-multiplying charge pump Vout and Iout/Iin current “efficiency” for Vin = 5 V.

So, what is it good for?

Figure 3 suggests one useful application, generating nominally symmetrical +/-Vin/2 bipolar rails from a single positive source with minimal current draw from the source.

Figure 3 Current doubling charge pump plus voltage inverter makes an efficient bipolar rail splitter.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content


The post Charge pump halves voltage to double current “efficiency” appeared first on EDN.

Singaporean chiplet specialist plans new fabrication site in Italy

Tue, 03/12/2024 - 13:16

Silicon Box, an advanced packaging upstart focused on chiplets, is expanding to Italy after setting up a $2 billion packaging facility in Singapore in July 2023. The chiplet specialist has announced plans to set up another manufacturing facility in Northern Italy to serve Europe’s existing and planned wafer fabrication clusters in France, Germany, and Italy.

While foundries like TSMC and Samsung Foundry, as well as OSATs such as Amkor and ASE Technology, are eyeing opportunities in the chiplets business on the strength of their advanced packaging expertise, what makes Silicon Box stand out is its sole focus on chiplet fabrication and packaging.

For a start, Silicon Box claims to bring effective chiplet integration capabilities through its Singapore site. It’s important to note that while the Singapore-based firm specializes in advanced packaging technologies like other OSATs, it uses panel-level packaging instead of the standard wafer approach. Panel-level production leads to higher yield and is tailored for chiplet interconnects.

In other words, Silicon Box’s advanced packaging capabilities are not limited to chiplet integration; the company employs advanced interconnection through proprietary, large-format manufacturing. So, its standardized packaging process facilitates the shortest chiplet-to-chiplet interconnection with better thermal and electrical performance.

The new chiplet production facility in Italy plans to replicate Silicon Box’s foundry in Singapore.

Silicon Box’s next hop to Italy shows that the upstart founded by Marvell co-founders in 2021 is confident in the growing demand for chiplets and their manufacturing capacity. The firm plans to invest $3.6 billion in this new chiplet manufacturing facility in Northern Italy while creating approximately 1,600 semiconductor jobs.

Moreover, a close collaboration with European fabs will boost resilience and cost efficiency for the brand-new chiplet supply chain. A new chiplet packaging facility in Europe also bodes well for the ecosystem that is still in its infancy and needs more focused efforts to make chiplet production viable.

Related Content


The post Singaporean chiplet specialist plans new fabrication site in Italy appeared first on EDN.

The TiVo RA2400 Stream 4K: A decent idea, plagued by usage delay

Mon, 03/11/2024 - 15:31

I’ve torn down a lot of streaming multimedia receiver devices over the years, most recently both HD (1080p) and 4K variants of Google’s Chromecast with Google TV. The list of victims also includes a bunch of Rokus in both “box” and “stick” form factors, Amazon’s Fire TV Stick, and an Apple TV (plus a few others with proprietary operating systems, including Google’s own prior-generation Chromecasts). But until today, I’m pretty sure I’ve only taken apart one other “pure” Android TV-based player, that one being the grandfather of them all, Google’s Nexus Player.

What do I mean by “pure”? Consider, for example, that Amazon’s Fire TV devices run (at least for the moment) the Android-derived Fire OS. Google TV, similarly, has an Android TV foundation, on top of which the company has (simplistically speaking) notably revamped the user interface and feature set, innately integrating (for example) Google Home facilities for smart home control purposes, along with making Live TV support front-and-center. But the Nexus Player’s Android TV UI obviously hearkened back to its Android roots; in fact, it originally ran Android 5. And the UI of today’s teardown victim, TiVo’s RA2400 Stream 4K (which, going forward I’ll refer to as “RA2400” for short)  is similarly Android TV-ish in its characteristics.

Why do Android TV-based products like the RA2400 still exist, if Google TV is supposedly a superior successor? Some of the answer, I suspect, has to do with longevity; Android TV has been around for a few months shy of a decade now, whereas the first Google TV-based Chromecast only started shipping in late 2020. And some of it, I also suspect, has to do with higher licensing fees that Google may charge for Google TV versus Android TV, as well as a more restrictive list of licensees. Whatever the reason(s), plenty of Android TV-based devices are still available for sale, which isn’t necessarily a good thing from a consumer standpoint.

Why? Android’s maturity and ubiquity, along with its open-source foundation, make it straightforward to develop apps that run on top of the O/S. This software might unfortunately also include malware and other undesirable code, enabled by unpatched vulnerabilities in out-of-date software stacks (if, say, the manufacturer goes out of business or maybe just decides to redirect its support attention to more lucrative newer products). At minimum, that no-name Android TV box you bought on eBay or elsewhere might be doing bitcoin mining on the side, piggybacking on your network connection and sucking up your electricity in the process. More critically, it might directly act as an attack vector for infecting other devices on your LAN and/or, by opening firewall holes via UPnP or other more malicious means, expose the entire LAN to WAN-based attacks, too.

That’s why, if you’re going to bring an Android TV-based device into your residence, it’s best to go with a “brand name” supplier like, say…well, TiVo, for example. I was admittedly surprised to find out in researching the RA2400 that it’s still available for sale, given that it was introduced in May 2020. Four years is forever in the consumer electronics industry, particularly for a product whose initial reviews called out its sluggish performance. Applications generally get more resource-intensive over time, not less, which would tend to increasingly hamper performance over time. But for whatever reason, the RA2400 is still alive and kicking; its advanced-at-the-time 4K resolution support doesn’t hurt.

My unit was a seller-refurbished device sold by VIP Outlet on eBay, which I bought two years (and a few weeks) ago promotion-priced at $21.25 plus tax ($25 minus 15%) solely with a future teardown in mind. That might sound like a good deal, and in fact it is in at least some sense, given that the RA2400 originally was priced at $50. Then again, however, as I wrote these words, new units were selling for $24.99 at both Amazon and Best Buy (in both cases marked down from the usual $39.99, which is what it’s selling for on TiVo’s website right now).

It obviously took a while for the RA2400 to rise to the top of my teardown pile! And in finally cracking open the box a few weeks ago, I found several surprising omissions (hold that thought). The packaging on my refurb, as you can see, was quite spartan.

I’ll save you five more photos’ worth of plain white box panels, instead focusing on the sticker affixed to one side:

Opening the box lid provides our first look at our “patient”:

Underneath, in a bubble wrap baggie, are a male-to-male HDMI-to-mini HDMI cable (which doesn’t seem to come with new units, or to serve a useful purpose for that matter, so I’m guessing this was a VIP Outlet mix-up) and a USB-A to micro-USB power cable (but no wall wart, although it looks from the documentation that one comes with new units, so this was apparently just another “seller refurbished” miss).

And speaking of omissions, can you tell yet what else isn’t in the box that should be? For a clue, take another look at that online documentation, either in HTML or (if you prefer) PDF format. Now take a look at the “stock” photo I showed you earlier. See the remote control there? See it here? No? Exactly. Sigh.

Onward. Freed from its cardboard and clear-plastic constraints, the RA2400 (with dimensions of 77 x 53 x 16 mm) comes into full view, as-usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

On one end is the aforementioned micro-USB power input:

Coming out the other end is the beefy HDMI jack:

Along one side, and admittedly only barely visible in this shot, is a small button which, when held down for a few seconds, enables manual pairing with the remote control (assuming one exists…did I mention that mine was missing its remote control?), and when pressed a bit longer, initiates a factory reset:

(Full disclosure: I’m being a bit harsh about the missing remote control and wall wart, because I never intended to actually use the RA2400, only to take it apart. Frankly, considering all my fancy-pants video gear, the HDMI to mini-HDMI cable I got in exchange was a net sum gain. On the other hand, if I was a normal consumer hoping to use the RA2400, I’d be pretty bummed…)

And along the other side is another interesting advanced feature (for a 2020-era product, at least), a USB-C connection (with USB 2.0-only bandwidth, by the way):

This is not, TiVo’s documentation makes clear, an alternative power input path, nor is it an alternative video output option. It is, instead, a means of hardware-expanding (along with associated software support, dependent in some cases on third-party Android TV drivers and the like) the RA2400 to handle, for example, a wired Ethernet adapter, a game controller, a keyboard or mouse, a storage device, or multiples of these via a USB-C hub intermediary.

Last but not least, here’s a top view:

And a bottom view:

With a closeup of the label revealing, among other things the FCC ID (2AOVU-IPA1104HDW):

Before diving in, one more thing. The penny in the prior photos obscured, I suspect, just how funky the RA2400’s enclosure is. Check out the unique asymmetry!

Oh well. I was impressed. You all probably just want me to get to the getting-inside.

Peeling off the label from the bottom:

unfortunately didn’t expose any convenient screw heads to view:

but it did draw my attention to something I’d previously overlooked; the thin seam running along the underside periphery:

Betcha know what comes next, yes?

Bingo!

The PCB now also pops right out of the other half of the case:

I’m quite certain you’ve already noticed the Faraday cages on both sides of the PCB. And anyone who’s read one of my teardowns before definitely knows what comes next. Let’s flip the PCB back over to its backside first (as I’ve mentioned before, since these things are designed to dangle from the back of a TV there really is no consistent “top” or “bottom”, but my convention is that “top” is associated with the TiVo logo impression side of the now-removed case, with “bottom” in proximity to the now-removed label side of the now-removed case…phew):

Note how pristine both the cage and PCB still are. Are you proud of my atypical disassembly-force restraint and deft technique?

The shiny IC in the right section, labeled AP6398S, is a SIP module implementing both Bluetooth and Wi-Fi functions, based on Broadcom’s BCM43598. I suspect that at least some of you have already noticed the PCB-embedded antennae in the upper right and lower right-and-left quadrants of the PCB, yes? And in the sorta-center section are, at top, Amlogic’s S905Y2 application processor (can I just say for the record that rarely do I see a system’s “guts” documented so thoroughly in a consumer-intended product page? Here’s even more detail), comprised of, among other things, a quad-core 1.8 GHz Arm Cortex-A53 CPU core and an Arm Mali-G31 MP2 GPU core, and below it, a Nanya NT5AD512M16A4 1 GByte DDR4-2666 SDRAM.

Flip the PCB back over, deftly pop the top off its Faraday Cage too:

and we can inventory the remainder of the notable (IMHO, at least) bill of materials:

Along the left side are the USB-C connector and, below it, a 37.4 (MHz, I’m assuming) crystal oscillator. Along the right is the pairing-and-reset switch. And in the middle are a very faintly marked Samsung KLM8G1GETF-B041 8GByte eMMC flash memory module and, below it, another Nanya NT5AD512M16A4 1 GByte DDR4-2666 SDRAM.

To get a better look at the sides-located components, as well as to gain another perspective on that shiny “box” at the top of both sides of the PCB, out of which the HDMI cable juts, I’ll share some side views now. PCB top-up first:

Now bottom-up:

Those metal blobs on the sides of the “box” are not, I’m pretty confident, solder; those are welding remnants. The reinforcement necessity is understandable when you consider, reiterating what I mentioned earlier, that “these things are designed to dangle from the back of a TV” (not to mention that they’re likely predominantly disconnected from the TV by grabbing the main body and yanking). I was pretty sure there was nothing underneath but solder joints (with the “box” intended to mute high-frequency signal emissions). I even got a chuckle when I checked out the FCC certification report’s internal photo set and noticed they didn’t bother trying to tackle breaking apart the welds, either. However, I still got the tops pried away enough to peek underneath:

See what I told you? Solder. Along with strong adhesive, of course. Fini! Let me know about anything else that caught your eye in the comments.

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post The TiVo RA2400 Stream 4K: A decent idea, plagued by usage delay appeared first on EDN.

Comparison of 3 step-down converters to predict EMC issues

Mon, 03/11/2024 - 06:15

A step-down converter’s switch-node voltage waveform defines its electromagnetic compatibility (EMC) behavior in automotive CISPR 25 Class 5 measurements. The ringing frequency in the switch-node waveform is an important signal at the EMC receiver, and a higher ringing amplitude on the switch node often causes EMC issues. Understanding the switch-node waveform enables predicting the converter’s EMC characteristics as well as optimizing the EMC filter design at an early design stage.

This article compares three automotive step-down converters to provide practical advice on using switch-node waveforms to predict EMC characteristics for automotive CISPR 25 Class 5 measurements. This is helpful to optimize EMC filter design and PCB layout to meet CISPR 25 Class 5 standards.

Switch-node measurements

Switch-node waveforms are used to compare the EMC characteristics among three automotive step-down converters. Figure 1 shows the switch-node measurement on an evaluation board using an active voltage probe.

Figure 1 Use an active voltage probe for the switch-node measurement on the evaluation board. Source: Monolithic Power Systems

The switch-node voltage waveform typically has a rise time and fall time between 700 ps and 2 ns. This requires a minimum bandwidth of about 1 GHz at the voltage probe tip; the voltage can be measured with an active probe or a passive probe that has the necessary bandwidth.

For both variants, the ground connection to the PCB must be as short as possible to ensure that the measured ringing on the switch node does not include the additional ringing from the long probe ground connection.

Figure 2 shows the correct voltage probe tip position for the switch-node measurement on the evaluation board. Connect the GND tip as close as possible to the IC’s PGND pin and connect the probe input tip as close as possible to the IC’s switch-node pin. Solder the active probe tip with a 0.7-pF input capacitance directly to the component pads via removable gold-plated measuring tips.

Figure 2 Position the probe tip correctly for the switch-node measurement on the evaluation board. Source: Monolithic Power Systems

Histogram and time trend

Figure 3 shows a step-down converter’s switch-node voltage (yellow trace), fSW histogram (pink trace), and time trend (orange trace).

Figure 3 The dual frequency spread spectrum of the MPQ4371-AEC1 includes the switch-node voltage, fSW histogram, and time trend. Source: Monolithic Power Systems

The oscilloscope measures the switch-node voltage for each trigger event across a period of 400 µs and calculates the frequency of each switching cycle. Each calculated frequency is accumulated in the histogram. The total duration of this test is about 10 minutes. For the last trigger event, the measured frequencies are represented as time trend fSW vs. time.

The measured frequencies in Figure 3 verify the fSW vs. time relationship from the MPQ4371-AEC1 datasheet. The time trend waveform confirms the specified dual frequency spread spectrum modulation frequencies of 15 kHz and 120 kHz. By verifying proper IC operation, these frequencies provide an overview of the expected fSW values for CISPR 25 Class 5 measurements.

Voltage waveform

The step-down converter’s switch-node voltage waveform is measured with an active probe. Figure 4 shows the rising and falling edges of the MPQ4371-AEC1, in which both waveforms are overlaid on the oscilloscope by an alternating rising and falling trigger. The rising edge has a rise time of 922 ps and a step response with a 273 MHz resonance frequency and a 3.2 V peak-to-peak voltage.

Figure 4 The switch-node voltage waveform for MPQ4371-AEC1 has rising and falling edges. Source: Monolithic Power Systems

The MPQ4371-AEC1 step-down converter’s Quiet-FET technology enables combining fast slewing edges without excessive ringing. Quiet-FET technology does not significantly degrade efficiency like a snubber or bootstrap resistor (RBST), and instead uses a minimum two-step sequential switching action to turn on the internal MOSFETs.

The resonance frequency is determined by the parasitic hot-loop inductances and capacitances. The equivalent hot-loop series inductances (ESL) are defined by the following:

  • ESL of the 100 nF, 0603-sized MLCC (about 800 pH)
  • ESL of the high-side MOSFET (HS-FET) and low-side MOSFET (LS-FET)
  • ESL of the package lead frame
  • ESL of the PCB traces between the MLCC and IC’s VIN and PGND pins (about 700 pH/mm)

The switch-node waveform can also be predicted using a simulation of the PCB hot-loop network.
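
For a rough feel of how those parasitics set the ringing, the resonance follows f = 1/(2π√(LC)). The loop inductance and effective node capacitance below are assumed values chosen only to land near the measured 273 MHz; they are not figures from the article:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI     = 3.14159265358979;
    const double L_loop = 2.0e-9;    // assumed total hot-loop inductance, H
    const double C_node = 170e-12;   // assumed effective switch-node capacitance, F

    // f = 1 / (2*pi*sqrt(L*C)), the series-resonant ringing frequency
    double f_ring = 1.0 / (2.0 * PI * sqrt(L_loop * C_node));
    printf("Estimated ringing frequency: %.0f MHz\n", f_ring / 1e6);
    return 0;
}

Lowering the loop inductance (integrated hot-loop MLCCs, tighter layout) pushes this frequency up and shrinks the ringing amplitude, which is exactly the behavior described for the converters below.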

Frequency domain

Figure 5 shows a fast Fourier transformation (FFT) of step-down converter’s switch-node waveform. The average fSW of 420 kHz is distributed between 384 kHz and 456 kHz (green markers) and corresponds to the measured histogram from Figure 3. The switch-node resonance frequency at 273 MHz is distributed between 250 MHz and 300 MHz (red markers) due to dual frequency spread spectrum modulation and corresponds to Figure 4.

Figure 5 A fast Fourier transformation is applied to the MPQ4371-AEC1’s switch-node waveform. Source: Monolithic Power Systems

Radiated emissions (RE) antenna for CISPR 25 Class 5

The vertical monopole, biconical, and log-periodic antenna measurements required for CISPR 25 Class 5 can now be analyzed. Figure 6 shows the radiating switching inductance at peak CISPR 25 (blue) and average CISPR 25 (yellow), where the analyzer resolution bandwidth (RBW) = 9 kHz, fSW = 420 kHz, input voltage (VIN) = 13.5 V, output voltage (VOUT) = 3.3 V, and load current (ILOAD) = 2.5 A. The dual FSS modulation helps keep RE below the limits.

Figure 6 The vertical monopole antenna measurement of MPQ4371-AEC1 passes CISPR 25 Class 5. Source: Monolithic Power Systems

Figure 7 shows the radiating objects (for example, the harness or radiating traces on the PCB) at peak CISPR 25 (blue) and average CISPR 25 (yellow), where RBW = 120 kHz, fSW = 420 kHz, VIN = 13.5 V, VOUT = 3.3 V, and ILOAD = 2.5 A.

Figure 7 The biconical antenna measurement of MPQ4371-AEC1 passes CISPR 25 Class 5. Source: Monolithic Power Systems

Figure 8 shows the switch-node resonance frequencies between 250 MHz and 300 MHz (corresponding to Figure 4 and Figure 5) at peak CISPR 25 (blue) and average CISPR 25 (yellow), where RBW = 120 kHz, fSW = 420 kHz, VIN = 13.5 V, VOUT = 3.3 V, and ILOAD = 2.5 A. There is no RE that exceeds the 250 MHz to 300 MHz resonance frequency range.

Figure 8 The log periodic antenna measurement of the MPQ4371-AEC1 passes CISPR 25 Class 5. Source: Monolithic Power Systems

Figure 9 shows the 1.2 GHz switch-node resonance frequency within RE at peak CISPR 25 (blue), average CISPR 25 (yellow), and the noise level (gray), where RBW = 120 kHz, fSW = 2.2 MHz, VIN = 13.5 V, VOUT = 3.3 V, and ILOAD = 2.5 A.

Figure 9 The log periodic antenna measurement of the MPQ4323M-AEC1 step-down converter passes CISPR 25 Class 5. Source: Monolithic Power Systems

Switch-node waveform for MPQ4323M-AEC1

The MPQ4323M-AEC1’s integrated, 100 nF, hot-loop MLCCs reduce the internal parasitic inductances, which shifts the resonance frequency to higher values and reduces the resonance amplitude. Figure 10 shows an example of a fast slewing, switching converter combined with low internal parasitic inductances. This improves the switch-node waveform and reduces RE.

Figure 10 A fast-slewing switching converter combined with low parasitic inductances improves the switch-node waveform of the MPQ4323M-AEC1 step-down converter. Source: Monolithic Power Systems

Switch-node example on a 2-layer PCB

Figure 11 shows two different step-down converters soldered on the same 2-layer PCB. The left curve shows the MPQ4326-AEC1 with frequency spread spectrum modulation on a 2-layer PCB, with a switch-node resonance at 450 MHz. The right curve shows a step-down converter in a suboptimal set-up without FSS modulation and a 320 MHz resonance. The two converters are compared on the same PCB and with the same external components.

Figure 11 Two step-down converters are compared in a switch-node example on a 2-layer PCB. Source: Monolithic Power Systems

The step-down converter with the suboptimal set-up indicates undesirable resonance on the rising edge (red arrow), meaning there is a timing difference between the HS-FET and LS-FET. This resonance is caused by using a 2-layer PCB instead of a 4-layer PCB. Compared to a 4-layer PCB, a 2-layer PCB layout has higher parasitic inductances within the hot loop, which increases the resonance amplitude and changes the location of the switch-node resonance.

The increased amplitude is observed with both converters. In addition, the 2-layer PCB does not have the important solid ground layer directly under the top layer, resulting in a larger resonance amplitude and stronger RE.

FFT of step-down converters on a 2-layer PCB

Figure 12 shows the FFT of the switch-node voltage waveforms for the MPQ4326-AEC1 (with FSS modulation) and step-down converter with the suboptimal set-up (without FSS modulation) from Figure 11.

Figure 12 A fast Fourier transformation is applied to the switch-node voltage waveforms for the MPQ4326-AEC1 (with FSS modulation) and step-down converter with a suboptimal set-up (without FSS modulation). Source: Monolithic Power Systems

MPQ4326-AEC1 uses frequency spread spectrum modulation, while the step-down converter with the suboptimal set-up is set to a constant fSW. Typically, FSS modulation results in lower fundamentals and harmonics. Whether FSS modulation or a constant frequency is more advantageous depends on the requirements of the application. However, FFT shows the differences between the two methods.

MPQ4326-AEC1’s FFT shows the switch-node resonance at 450 MHz, and the step-down converter with the suboptimal set-up shows the switch-node resonance at 320 MHz. These switch-node resonance frequencies can be found in the CISPR 25 Class 5 measurements.

Understand switch-node waveform

This article analyzed the relationship between the switch-node voltage waveform and the frequency domain, using MPQ4323M-AEC1, MPQ4326-AEC1, and MPQ4371-AEC1 automotive step-down converters as examples. Understanding the switch-node waveform enables predicting PCB behavior for CISPR 25 Class 5 measurements. The measured resonance frequency shows up in RE measurements, enabling improved EMC filter design for suppressing the resonance frequency.

Furthermore, it is possible to assess expected frequency range interferences at an early stage by understanding the switch-node waveform. This helps find a suitable step-down converter according to the application specifications, shorten development times, and reduce costs by simplifying component selection for the EMC filter.

Ralf Ohmberger is a staff applications engineer at Monolithic Power Systems (MPS).

Related Content


The post Comparison of 3 step-down converters to predict EMC issues appeared first on EDN.

Multichannel driver controls automotive LEDs

Fri, 03/08/2024 - 01:12

A PWM linear LED driver, the AL1783Q from Diodes, provides independent control of brightness and color on all three of its channels. Used for automotive interior and exterior lighting, the AL1783Q delivers 250 mA per channel to support higher LED current ranges in a wider range of lighting applications.

The device allows vehicle occupants to change interior lighting colors to suit their mood. It simultaneously enables animated turn-indicator signals and exterior grill lighting for different road conditions. Three external REF pins are used to set LED current for each channel, while 40-kHz PWM provides independent dimming control.

Since higher voltage rails are often used to power vehicle subsystems, the AL1783Q operates from a 55-V rail, allowing it to accommodate increasing LED chain voltages. Protection functions include undervoltage lockout, overvoltage, and overtemperature, as well as LED open and short-circuit detection.

Qualified to AEC-Q100 requirements, the AL1783Q operates over a temperature range of -40°C to +125°C. It comes in a TSSOP-16EP package that has an exposed cooling pad for improved heat dissipation. The AL1783Q LED driver costs $0.43 each in lots of 2500 units.

AL1783Q product page

Diodes

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Multichannel driver controls automotive LEDs appeared first on EDN.
