EDN Network

Voice of the Engineer

Renesas builds RISC-V MCUs with own core

Thu, 03/28/2024 - 20:11

General-purpose 32-bit MCUs in the R9A02G021 group from Renesas employ an internally developed RISC-V CPU core. Renesas has designed and tested the new RISC-V core independently and implemented it in a commercial product that is available worldwide.

R9A02G021 MCUs enable embedded systems designers to develop low-power, cost-sensitive applications based on the RISC-V open-source instruction set architecture (ISA). The devices target such end markets as IoT sensors, consumer electronics, medical devices, small appliances, and industrial systems. They are also supported by a full-scale development environment and a network of toolchain partners.

The CPU core runs at 48 MHz and achieves a performance rating of 3.27 CoreMark/MHz. Power consumption is 162 µA/MHz when active, dropping to just 0.3 µA in standby with a wakeup time of 4 µs. Other features of the R9A02G021 group include:

  • Memory: 128 KB code flash, 16 KB SRAM, and 4 KB data flash
  • Serial communications interfaces: UART, SPI, I2C, SAU
  • Analog peripherals: 12-bit ADC and 8-bit DAC
  • Temperature range: -40°C to 125°C
  • Operating voltage range: 1.6 V to 5.5 V

Packaging options for the R9A02G021 MCUs include 16-pin WLCSP and 24-pin, 32-pin, and 48-pin QFN. Devices are available now through global distributors.

R9A02G021 product page

Renesas Electronics 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Renesas builds RISC-V MCUs with own core appeared first on EDN.

Motor control MCUs pack ample flash memory

Thu, 03/28/2024 - 20:10

Toshiba has added eight devices to the M4K group of TXZ+ 32-bit MCUs offering extended flash memory and four different packaging options. Outfitted with 512 kbytes or 1 Mbyte of code flash memory, the MCUs address the need for large program capacity in IoT motor control applications. They also offer firmware over-the-air updating.

With 1 Mbyte of code flash divided into two separate 512-kbyte areas, the MCUs enable firmware rotation using memory swapping. While instructions are being read from one area, updated code can be programmed into the other area simultaneously.

In addition to the expanded flash memory, the devices also boost RAM capacity to 64 kbytes. They are powered by an Arm Cortex-M4 core running at up to 160 MHz and provide UART, tSPI, and I2C interfaces. Three 12-bit ADCs, three advanced motor control circuits, and a vector engine allow the MCUs to control three motors, even in 64-pin packages.

M4K microcontrollers can be used to control AC motors, brushless DC motors, and inverters in home appliances, power tools, and industrial equipment. Packaging options include QFP100, LQFP100, and two different size LQFP64 types.

TXZ+ M4K group product page

Toshiba Electronic Devices & Storage 



The post Motor control MCUs pack ample flash memory appeared first on EDN.

100-V Schottky rectifiers aid efficiency

Thu, 03/28/2024 - 20:10

Twenty-eight 100-V trench Schottky rectifiers from ST increase efficiency and power density in power converters operating at high switching frequencies. Target applications for the portfolio of devices include power supplies for telecom, server, and smart metering equipment, as well as automotive LED lighting and low-voltage DC/DC converters.

According to the manufacturer, the diodes reduce rectifier losses with forward-voltage and reverse-recovery characteristics that enable increased power density with high efficiency. Forward voltage is 50 mV to 100 mV better than comparable planar diodes, depending on current and temperature conditions. Changing to these new devices can increase efficiency by 0.5%.

Variants in the family cover eight current ratings ranging from 1 A to 15 A. Multiple surface-mount package types are available in both industrial and automotive grades. Automotive parts are AEC-Q101 qualified for operation over a temperature range of -40°C to +175°C and manufactured in PPAP-capable facilities. Diodes are 100% avalanche tested in production to ensure device robustness and system reliability.

All of the parts are available now in DPAK, SOD123 flat, SOD128 flat, SMB flat, and PSMC (TO-227A) packages. Volume prices start at $0.107 for the 1-A STPST1H100ZF in the SOD123 flat package.

To access the datasheets for the 28 trench Schottky rectifiers, click here.

STMicroelectronics



The post 100-V Schottky rectifiers aid efficiency appeared first on EDN.

Buck-boost MOSFET meets USB PD 3.1 demands

Thu, 03/28/2024 - 20:09

Occupying a small footprint to ease PCB design, the AONZ66412 MOSFET from Alpha & Omega targets buck-boost converters in USB PD 3.1 EPR applications. While the 3.1 Extended Power Range specification enables power delivery up to 240 W over a USB Type-C cable and connector, the AONZ66412 addresses the most commonly used power range of up to 140 W at 28 V.

The AONZ66412 combines two 40-V N-channel MOSFETs arranged in a half-bridge configuration within a symmetric XSPairFET 5×6-mm package. When used to replace two single 5×6-mm DFN packages, the compact AONZ66412 reduces PCB area, improves efficiency, and simplifies the layout of a 4-switch buck-boost architecture.

Alpha & Omega’s XSPairFET DFN is a bottom-side source package. Each high-side and low-side MOSFET provides a maximum on-resistance of 3.8 mΩ. The source of the low-side MOSFET is directly linked to a large paddle on the lead frame. This setup enhances thermal performance by enabling direct connection of the paddle to the PCB’s ground plane. When tested, the AONZ66412 demonstrated 97% efficiency at 1 MHz under typical USB PD 3.1 EPR conditions with a 28-V input, 17.6-V output, and 8-A load.

The AONZ66412 dual MOSFET costs $1.56 each in lots of 1000 units. It is available now in production quantities with a lead time of 16 weeks.

AONZ66412 product page

Alpha & Omega Semiconductor 



The post Buck-boost MOSFET meets USB PD 3.1 demands appeared first on EDN.

Aspinity strengthens AI-based automotive security

Thu, 03/28/2024 - 20:08

Aspinity has launched a dashcam evaluation kit and a suite of smart analogML algorithms for parked vehicle monitoring. The hardware/software offerings leverage the company’s always-on AML100 analog machine learning processor. The near-zero power AML100 enables continuous monitoring for extended periods without impacting the vehicle’s battery or requiring an external power source.

The company recently demonstrated a dashcam with a single microphone and an AML100 processor. The setup uses an acoustic-only trigger and analogML algorithms trained to identify automotive security events. According to Aspinity, the solution detects events more accurately than dashcams outfitted with a standard G-sensor. Surveillance algorithms detect such events as jiggling of the door handle, a neighboring car door opening into the vehicle, a runaway shopping cart hitting the side of the car, and window glass breaking, while ignoring sounds from events unrelated to the vehicle.

Based on the AML100-REF-1 wireless, battery-operated reference module, the dashcam evaluation kit enables deployment and evaluation in the cabin of a vehicle. It consumes <50 µA in always-on mode and eliminates the video recording of false events that waste power.

To learn more about Aspinity’s AML100 monitoring solutions for automotive security, click here.

Aspinity



The post Aspinity strengthens AI-based automotive security appeared first on EDN.

Single phase mains cycle skipping controller sans harmonics

Thu, 03/28/2024 - 16:43

In electrical heating applications, resistive heaters are powered through phase angle-controlled SCR/triac circuits to vary the applied voltage/power to maintain the required temperature.

Phase angle control produces a lot of harmonics leading to power line disturbances.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1’s circuit offers a simple, cost-effective solution that introduces no harmonics: it skips a set number of full power cycles to vary the power delivered to the heaters.

Figure 1 Circuit schematic of the mains cycle-skipping controller. The controller skips a set number of full power cycles to vary the power delivered to the heaters.

In this typical design, 10 full cycles are taken as the base period. Timer U3 (555), through R2, R4, and C1, sets this base by producing output pulses at 200-ms intervals, the width of 10 full AC cycles of a 50-Hz mains (166.6 ms for a 60-Hz mains). These pulses trigger monostable U4 (555) to produce pulses whose width is adjustable within the 200-ms window via potentiometer RV1. This pulse train controls optotriac U2 (MOC3033), which includes a zero-cross detector, to trigger triac U1 (BTA25-600BW). The triac conducts for the duration of the “off” pulse widths produced by U4; these conduction periods allow the selected number of voltage cycles to pass through to the load. During the “on” pulse widths, the triac does not conduct and the voltage cycles are skipped. Simulated waveforms can be seen in Figure 2 (two full cycles skipped) and Figure 3 (five full cycles skipped).


Figure 2 Simulated waveforms with the U3 timer output (yellow), U4 timer output (blue), and heater voltage (pink). Eight full cycles are impressed on load, skipping two full cycles as decided by the RV1 potentiometer position.

Figure 3 Simulated waveforms with the U3 timer output (yellow), U4 timer output (blue), and heater voltage (pink). Five full cycles are impressed on load, skipping five full cycles as decided by another RV1 potentiometer position.

As an example, if RV1 is set for a 40-ms pulse width, which corresponds to two full cycles of a 50-Hz mains, the triac blocks those two voltage cycles and then conducts for the remaining eight full cycles, passing them to the load. Thus, two cycles are skipped, and the operation repeats. Load power is therefore controlled by skipping a selected number of voltage cycles. Because the cycles passed to the load are full cycles, unwanted harmonics are eliminated.
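As a quick sanity check of this arithmetic, the sketch below (illustrative C; the function names are hypothetical, not from any real API) maps a U4 pulse width to the number of skipped cycles and the resulting load-power fraction, assuming the 10-cycle base period described above.

```c
/* Illustrative arithmetic for the cycle-skipping controller: a 10-cycle
 * base period, with the U4 pulse width (set by RV1) determining how many
 * full mains cycles are skipped. Names are hypothetical. */

#define BASE_CYCLES 10

/* Width of one full mains cycle in ms: 20 ms at 50 Hz, ~16.7 ms at 60 Hz. */
static double cycle_ms(double mains_hz)
{
    return 1000.0 / mains_hz;
}

/* Cycles skipped for a given U4 pulse width (rounded to whole cycles). */
static int skipped_cycles(double pulse_ms, double mains_hz)
{
    return (int)(pulse_ms / cycle_ms(mains_hz) + 0.5);
}

/* Fraction of full power delivered to the heater. */
static double power_fraction(double pulse_ms, double mains_hz)
{
    return (double)(BASE_CYCLES - skipped_cycles(pulse_ms, mains_hz))
           / (double)BASE_CYCLES;
}
```

For the 40-ms example above, this yields two skipped cycles and 80% of full power delivered to the load.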

Such controllers are normally realized with an MCU and software; the novelty of this circuit is that it performs the same function without an MCU, keeping it simple and built from low-cost components.

Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.

 Related Content


The post Single phase mains cycle skipping controller sans harmonics appeared first on EDN.

Parsing PWM (DAC) performance: Part 4 – Groups of inhomogeneous duty cycles

Wed, 03/27/2024 - 16:29

Editor’s Note: This is a four-part series of DIs proposing improvements in the performance of a “traditional” PWM—one whose output is a duty cycle-variable rectangular pulse which requires filtering by a low-pass analog filter to produce a DAC. The first part suggests mitigations and eliminations of common PWM error types. The second discloses circuits driven from various Vsupply voltages to power rail-rail op amps and enable their output swings to include ground and Vsupply. The third pursues the optimization of post-PWM analog filters. This fourth part proposes modifications of the PWM output sequence that ease the requirements on those filters.

 Part 1 can be found here.

 Part 2 can be found here.

 Part 3 can be found here.

Recently, there has been a spate of design ideas (DIs) published (see Related Content) that deal with microprocessor-generated pulse width modulators driving low-pass filters to produce DACs. Approaches have been introduced which address ripple attenuation, settling time minimization, and limitations in accuracy. This is the fourth in a series of DIs proposing improvements in overall PWM-based DAC performance. Each of the series’ recommendations is implementable independently of the others. This DI addresses PWM sequence modifications which ease low pass analog filtering requirements.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The tyranny of resolution vs response time

The combination of PWM clock frequency Fclk Hz and the number of bits b of PWM resolution dictates the lowest frequency (Fclk·2^-b Hz) output component of a standard PWM. Over all the possible duty cycles, this component is also the largest and therefore the most challenging for an analog filter to suppress. For a given Fclk, the more bits of resolution, the longer the settling time will be of a filter which provides adequate suppression. But there is a way around this limitation.

Suppose a standard 8-bit PWM whose output is either 0 or 1 is configured for a duty cycle of (arbitrarily) 121/256. The first 121 states in a 256-state cycle would be 1 and the remaining 135 would be 0’s. But what if the first 128 states started with 60 ones and the last 128 states started with 61 ones? Let’s call this the “split-in-two” PWM. These two sequences have been offset in amplitude slightly so that they can be clearly seen on a graph shown in Figure 1.

Figure 1 Output sequences of standard and split-in-two 8-bit PWMs with the same clock frequency, period, and duty cycle (121/256).

The blue waveform represents the standard PWM and the orange one is the split-in-two PWM. Why might the latter be advantageous? Consider the spectra of the two PWMs seen in Figure 2.

Figure 2 Frequency content of standard and split-in-two 8-bit PWMs with the same clock frequency, period, and duty cycle (121/256).    

The energy in the first harmonic of the split-in-two PWM is negligible in comparison with that of the standard PWM. The necessary attenuation for the first harmonic has been significantly lessened, and that which was required is now applied to the harmonic at double the frequency. A less aggressive attenuation-with-frequency analog filter can now be employed, resulting in a shorter settling time in response to a change in duty cycle.

Another way to look at this is to double the split-in-two PWM period to 512 states to produce a 9-bit PWM. As shown in Figure 3, the spectra of the two PWMs are almost identical because the time domain waveforms are almost identical—they differ only in that in every other 256-state sequence, one additional one-state replaces a zero-state. The higher resolution 9-bit PWM produces a small amount of energy (less than 1%) at half the frequency of the 8-bit’s fundamental. Any analog low pass filter with adequate suppression of the 8-bit fundamental frequency will more than sufficiently attenuate the signal at half that frequency.

Figure 3 Frequency content of a standard 8-bit PWM of duty cycle 121/256 and a split-in-two 9-bit PWM of duty cycle (121.5/256). They share the same clock, but the split-in-two’s period is twice the standard PWM’s.

The super-cycle

We can think of the split-in-two as generating a “super-cycle” consisting of two cycles of 2^b states, each having at least S one-states, with 0 ≤ S < 2^b. In one cycle, one zero-state could be swapped for a one-state if the total number of ones in the super-cycle is odd. This is a (b+1)-bit PWM with a period of 2^(b+1) states. But there is no reason to stop at two. There can be a super-cycle of 2^n cycles, where n is any integer. With each cycle capable of optionally swapping one zero-state for a one-state, this leads to a PWM super-cycle with a resolution of b+n bits. But unlike standard, non-super-cycle PWMs, whose maximum spectral energy component is at Fclk/2^(b+n) Hz, the super-cycle’s is at a much higher Fclk/2^b Hz. As with the specific case of the split-in-two, this eases analog filtering requirements and results in a shorter settling time.

It’s worth thinking of a super-cycle as consisting of the sum of two different sequences. One is the S-sequence, in which every cycle consists of an identical sequence of S contiguous one-states. The other is the X-sequence, where each cycle optionally replaces the first zero-state following the last one-state with a one-state. The X-sequence has X one-states, where 0 ≤ X < 2^n. The duty cycle of the super-cycle is then (2^n·S + X)/2^(b+n).

When n = 1 for a super-cycle, there is only one cycle where an extra one-state can reside. But when n > 1, X is also greater than one and the question becomes how to distribute the X ones among the 2n cycles so as to minimize the super-cycle’s energy at low frequencies. The fine folks at Microchip who manufacture the SAM D21 microcontroller not only have figured this out for us, but they have also implemented it in hardware [1]! For this IC, it is necessary only to write the values of X and S to separate registers to implement a super-cycle PWM; the hardware does the rest unsupervised. Fortunately, it is simple for almost any microprocessor to augment a standard PWM to implement a super-cycle. For each PWM cycle, the duty cycle count must be modified so that immediately after the sequence of S ones, the first zero gets changed to a one if and only if the following C expression is true for that cycle:

(MASK & (cycleNbr * X)) > MASK - X

Here, MASK = 2^n - 1, X is as before, and cycleNbr is the numeric position of the cycle in the super-cycle. Figure 4 is a graph of the magnitudes of the lowest 32 harmonics of an n = 4, b = 8 super-cycle PWM. The graph provides evidence of the benefit of this approach.
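A minimal sketch of this rule in C is shown below (illustrative names; note that C operator precedence requires the bitwise AND to be parenthesized for the comparison to behave as intended). Summed over a whole super-cycle, the rule inserts exactly X extra one-states, so the total count of ones is 2^n·S + X.

```c
/* Sketch of the super-cycle "extra one" rule from the text (illustrative
 * names). MASK = 2^n - 1; the bitwise AND is parenthesized so it binds
 * before the comparison. */
static int extra_one(unsigned cycleNbr, unsigned X, unsigned n)
{
    unsigned MASK = (1u << n) - 1u;
    return (MASK & (cycleNbr * X)) > MASK - X;
}

/* Count the one-states in a whole super-cycle of 2^n cycles, each with a
 * baseline of S ones; the total should be 2^n * S + X, matching the duty
 * cycle (2^n * S + X) / 2^(b+n). */
static unsigned supercycle_ones(unsigned S, unsigned X, unsigned n)
{
    unsigned total = 0, cycles = 1u << n;
    for (unsigned c = 0; c < cycles; c++)
        total += S + (unsigned)extra_one(c, X, n);
    return total;
}
```

The rule is a fractional accumulator in disguise: cycleNbr·X overflows the n-bit mask exactly X times as cycleNbr runs through the 2^n cycles, which is what spreads the X extra one-states as evenly as possible across the super-cycle.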

Figure 4 First 32 harmonics of an n=4, b=8 super-cycle PWM. Spectra are displayed for X=1 through 8. (Spectra of X=9 through 15 are the same as those shown.)

The X-sequence’s energy is relatively low, having only 0 through 2^n - 1 one-states, but it also presents the lowest frequency component, Fclk/2^(n+b) Hz. The S-sequence generally contains the most energy by far (except at very small duty cycles), but its lowest frequency component is noticeably higher, at Fclk/2^b Hz. Among the X sequences, X = 1 gives the largest amplitude for its first harmonic: 2^-11 at Fclk/2^(n+b) Hz. The S-sequence’s spectrum starts at the X-sequence’s harmonic number 2^4 = 16 and produces its largest amplitude, 2/π, for that harmonic when S = 2^(b-1) = 128 (a 50% duty cycle). If this were a standard PWM (an n = 0 super-cycle, that is, no super-cycle at all, just a normal PWM), that amplitude of 2/π would appear at a frequency 16 times lower. The standard PWM presents a much more severe filtering problem: its filter would take much longer to settle in response to a duty-cycle change because of the much greater low-frequency attenuation required.

Comparing the filters for (n+b)-bit standard and super-cycle PWMs

The filtered AC steady-state time-domain contributions of both the standard and the super-cycle (with its X and S sequences) PWMs should be less than some fraction α of the voltage of the PWMs’ one-state. A reasonable value of α is 2^-(n+b+1), or ½ LSB. This translates to an attenuation factor of 1/4 at the first harmonic of the X sequence. Fortunately, even a simple two-component R-C filter meeting this requirement will sufficiently attenuate all higher X-sequence harmonics, so there are no additional constraints to meet to suppress them. The 16th X harmonic frequency is that of the first S harmonic. Its PWM energy requires an attenuation factor of (π/2)·2^-(n+b+1) at a 50% duty cycle. Again, any low-pass filter meeting this requirement will adequately attenuate the remaining S-sequence harmonics. For Fclk = 20 MHz, Figure 5 and Figure 6 are graphs of the frequency- and time-domain step responses of third-order filters (two op amps, three resistors, and three capacitors) meeting these requirements for standard 12-bit and super-cycle n = 4, b = 8 (12-bit) PWMs.

Figure 5 The frequency responses of filters for standard and super-cycle (n = 4) PWMs with 12 bits of resolution. The maxima of the peaked waveforms are the maximum responses allowed for the filters at the peaked frequencies. The filters ensure that the steady state time domain energy at their outputs is less than ½ LSB of full scale.

Figure 6 The log of the absolute value of the time responses of filters for standard and super-cycle (n = 4) PWMs with 12 bits of resolution. The much shorter settling time of the super-cycle PWM is clearly evident.

Easing low-pass analog filter requirements

When partnered with an appropriate analog filter, an approach to PWM embodiment available in hardware in an existing microprocessor [1] offers significantly shorter settling times than does a standard PWM. This approach can be implemented with the aid of a small amount of software in almost any microcontroller.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

Related Content

 References

  1. https://ww1.microchip.com/downloads/en/DeviceDoc/SAM-D21DA1-Family-Data-Sheet-DS40001882G.pdf (see section 31.6.3.3)

The post Parsing PWM (DAC) performance: Part 4 – Groups of inhomogeneous duty cycles appeared first on EDN.

Power Integrations’ gallium nitride (GaN) story

Wed, 03/27/2024 - 12:04

Gallium nitride (GaN) power semiconductors continue to push the boundaries of high-voltage electronics, as evident at this year’s Applied Power Electronics Conference (APEC) in Long Beach, California. GaN devices are moving beyond fast chargers for cell phones, tablets, and game machines into the realm of automotive, renewable energy, and industrial applications.

At APEC 2024, during his plenary presentation titled “Innovating for Sustainability and Profitability,” Power Integrations CEO Balu Balakrishnan revealed that his company had been shipping GaN switches for two to three years in high volume before anybody knew it. “We never advertised our GaN technology because we saw it as a means to an end to deliver high efficiency and performance.”

Figure 1 Balakrishnan talked about his company’s GaN history and future roadmap at APEC 2024. Source: Power Integrations

That was quite a revelation because, according to Omdia, Power Integrations was the number one supplier of GaN power semiconductors in 2022, with nearly 17% market share. Moreover, in October 2023, Power Integrations unveiled a 1,250-V GaN IC; it claims power conversion efficiency as high as 93% while enabling highly compact flyback power supplies that can deliver up to 85 W without a heatsink.

Earlier, in March 2023, the Silicon Valley-based power semiconductor supplier released a 900-V GaN IC as part of its InnoSwitch3 family of flyback switcher ICs. It delivers up to 100 W with better than 93% efficiency and eliminates the need for heat sinks.

Figure 2 The 1,250-V GaN power supply IC is part of the company’s InnoSwitch flyback switcher ICs. Source: Power Integrations

Power Integrations claims to be the first to market with high-volume shipments of GaN-based power-supply ICs in 2019. A GaN switch, integrated with a controller and everything else in a single package, was first used in a notebook adapter design. “Two customers were very suspicious, saying there is no way you can have that level of efficiency with silicon,” Balakrishnan told the APEC audience. “So, we had to tell them under a non-disclosure agreement (NDA).”

“Efficiency is going to be the mantra in power electronics for a long time,” he added. Balakrishnan also said that GaN will eventually be less expensive than silicon for high-voltage switches. “There is no fundamental reason why it won’t be cost-effective in the long run.”

However, he clarified that GaN will replace silicon only in certain areas. “Everybody thinks it will replace silicon, but GaN won’t replace silicon in controllers and digital circuitry,” he added. Balakrishnan and his engineers at Power Integrations also believe that GaN will get to the point where it’ll be very competitive with SiC while being less expensive to build.

As Balakrishnan noted, GaN has been talked about for a long time, but the challenge was operating reliably at high voltage. Its ascent to voltages as high as 900 V, 1,250 V, and potentially even higher shows that GaN is ready for the commercial limelight.

Consequently, stakes for GaN semiconductor players, including Power Integrations, are getting higher as well.

Related Content


The post Power Integrations’ gallium nitride (GaN) story appeared first on EDN.

Who knew? Wearables can be excessive skin-heat sources, too.

Tue, 03/26/2024 - 15:39

You might think that a smart watch or fitness wearable would not be a thermal concern for users. After all, they only have small rechargeable batteries and sip that battery’s energy to extend operating time as much as possible, typically at least 24 hours.

Their heat dissipation is many orders of magnitude less than that of a CPU, GPU, or other processor-core device cooking along at tens and even hundreds of watts. Nonetheless, wearables can be highly localized sources of heat and therefore cause potential skin problems.

I hadn’t thought about the extent of this localized heating on skin due to wearables until I coincidentally saw several items on the subject. The first was an IEEE conference article reposted at InCompliance magazine, “Reduced-Order Modeling of Pennes’ Bioheat Equation for Thermal Dose Analysis.” The second was an article in Electronics Cooling, “Thermal Management and Safety Regulation of Smart Watches.”

The first paper was intensely analytic with complicated thermal models and equations, and while I didn’t want to go through it in detail, I did get the overall message: you can get surprisingly high localized skin heating from a wearable.

It pointed out that the simple term “skin” actually comprises four distinct tissue layers, each unique in its geometric, thermal, and physiological properties. The outermost layer is the exposed epidermis; beneath it is the dermis, the “core” of the skin; then the subcutaneous fat (hypodermis) layer; and finally, the inner tissue of muscle and bone (Figure 1).

Figure 1 The term “skin” really refers to a four-layer structure, where each layer has distinctive material, thermal, and other properties, most of which are hard to measure. Source: Cleveland Clinic

Damage to the skin is analyzed by the extent of partial or complete necrosis (death) of each layer. While that’s more than I wanted to know, I was curious about the assessment of skin damage.

It turns out that there is, as expected, a quantitative assessment of thermally induced damage, and it is based on cumulative exposure at various temperatures. This thermal dose is estimated as cumulative equivalent minutes at 43°C, or CEM43°C, which reduces a time-and-temperature exposure history to a single number:

CEM43°C = Σ_i t_i · R^(43°C − T_i)

where T is tissue temperature, t is time, and R is a piecewise-constant function of temperature with:

R(T) = 0.25 for T ≤ 43°C and R(T) = 0.5 for T > 43°C.

So far, so good. The rest of the lengthy paper delved into models of heat flow, heat spreading through the skin, transforming surface data into three-dimensional data, and more. The analysis was complicated by the fact that heat flow through the layers is hard to measure and model, especially as the skin layers are anisotropic (the flow differs along different axes).

Cut to the chase: even a modest self-heating of the wearable can cause skin damage over time, and so must be modeled, measured, and assessed. How much heating is allowed? There are standards for that, of course, such as IEC Guide 117:2010, “Electrotechnical equipment – Temperatures of touchable hot surfaces.”

What to do?

Knowing there’s a problem is the first step to solving it. In the case of wearables, the obvious solution is to reduce dissipation even further, which would also increase run time as an added benefit. But efforts are underway to go beyond that obvious approach.

Coincident with seeing the two cited articles, I came across an article in the scholarly journal Science Advances, “Ultrathin, soft, radiative cooling interfaces for advanced thermal management in skin electronics.” A research team led by City University of Hong Kong has devised a photonic, material-based, ultrathin, soft, radiative-cooling interface (USRI) that greatly enhances heat dissipation in devices.

Their multifunctional composite polymer coating offers both radiative and non-radiative cooling capacity without using electricity, with advances in wearability and stretchability. The cooling interface coating is composed of hollow silicon dioxide (SiO2) microspheres for improving infrared radiation, along with titanium dioxide (TiO2) nanoparticles and fluorescent pigments for enhancing solar reflection. It is less than a millimeter thick, lightweight (about 1.27 g/cm²), and has robust mechanical flexibility (Figure 2).

Figure 2 Overview of the USRI-enabled thermal management for wearable electronics. (A) Exploded view of the components and assembly method of the ultrathin, soft, radiative-cooling interface (USRI). (B) Photographs of a fabricated USRI layer (i) and that attached on the wrist and hand (ii). (C) Thermal exchange processes in wearable electronics seamlessly integrated with a USRI, including radiative (thermal radiation and solar reflectance) and nonradiative (convection and conduction) contributions, as well as the internal Joule heating. (D) Comparison of cooling power from the radiative and nonradiative processes in wearable devices as a function of the above-ambient temperature caused by Joule heating. (E) Conceptual graph capturing functional advantages and potential applications of USRI in wearable and stretchable electronics. Source: City University of Hong Kong

When heat is generated in a wearable fitted with this thermal interface, it flows to the cooling interface layer and dissipates to the ambient environment through both thermal radiation and air convection. The open space above the interface layer provides a cooler heat sink and an additional thermal exchange channel.

To assess its cooling capacity, the researchers conformally coated the cooling interface layer onto a metallic resistance wire functioning as a heat source (Figure 3). At an input current of 0.5 A, a 75-μm coating dropped the wire’s temperature from 140.5°C (uncoated) to 101.3°C, while a 600-μm coating dropped it to 84.2°C, a reduction of more than 56°C. That’s fairly impressive, for sure.

Figure 3 Passive cooling for conductive interconnects in skin electronics. (A) Exploded view of a USRI-integrated flexible heating wire. (B) Photographs of the flexible heating wire before and after coating with the USRI, showing their seamless and robust integration under bending, twisting, and folding. (C) Thermal exchange processes of the USRI-coated flexible heating wire. (D and E) Measured temperature variation of the USRI-integrated flexible heating with varied interface thickness (D) and interface area (E) under different working currents. The colored shaded regions depict simulation results. (F) Image of the USRI-integrated flexible heating wire and corresponding infrared images of such devices with different thicknesses and areas. The working current was kept at 0.3 A. (G and H) Statistics of cooling temperatures of two USRI-coated flexible heating wires working at a current varying from 0.1 to 0.5 A. Both the thickness and the interface area present significant differences between the control and USRI groups (P = 0.012847 for interface thickness, P = 0.020245 for interface area, n = 3). (I) Temperature distribution of USRI-integrated flexible heating wires with varied thickness, area, and current. Source: City University of Hong Kong

Have you had to worry about excessive heat dissipation in a wearable, and the risks it might bring? Were you aware of the relevant regulatory standards for this phenomenon? How did you solve your problem?

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.

 Related Content


The post Who knew? Wearables can be excessive skin-heat sources, too. appeared first on EDN.

Power Tips #127: Using advanced control methods to increase the power density of GaN-based PFC

Mon, 03/25/2024 - 18:37

Introduction

Modern electronic systems need small, lightweight, high-efficiency power supplies. These supplies require cost-effective methods to take power from the AC power distribution grid and convert it to a form that can run the necessary electronics.

High switching frequencies are among the biggest enablers of small size. To that end, gallium nitride (GaN) switches provide an effective way to achieve these high frequencies given their low parasitic output capacitance (COSS) and rapid turn-on and turn-off times. Advanced control techniques, however, can push the power densities enabled by GaN switches even further.

In this article, I will examine an advanced control method used inside a 5-kW power factor corrector (PFC) for a server. The design uses high-performance GaN FETs to operate the power supplies at the highest practical frequency. The power supply also uses a novel control technology that extracts more performance out of the GaN FETs. The end result is a high-efficiency, small-form-factor design with higher power density.

System overview

It’s well known that the totem-pole PFC is the workhorse of a high-power, high-efficiency PFC. Figure 1 illustrates the topology.

Figure 1 Basic totem-pole PFC topology where S1 and S2 are high-frequency GaN switches and S3 and S4 are low-frequency-switching Si MOSFETs. Source: Texas Instruments

S1 and S2 are high-frequency GaN switches operating with a variable frequency between 70 kHz and 1.2 MHz. S3 and S4 are low-frequency-switching silicon MOSFETs operating at the line frequency (50 to 60 Hz).

During the positive half cycle of the AC line, S2 operates as the control FET and S1 is the synchronous rectifier. S4 is always on and S3 is always off. Figure 2 shows the interval when the inductor current is increasing because control FET S2 is on. Figure 3 shows the interval when the inductor current is discharging through synchronous rectifier S1.

Figure 2 Positive one-half cycle inductor current charge interval. Source: Texas Instruments

Figure 3 Positive one-half cycle inductor discharge interval. Source: Texas Instruments

Figure 4 and Figure 5 illustrate the same behaviors for the negative one-half cycle.

Figure 4 Negative one-half cycle inductor current charge interval. Source: Texas Instruments

Figure 5 Negative one-half cycle inductor discharge interval. Source: Texas Instruments
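The switching action above follows the standard boost relationship: in continuous conduction, the control FET's ideal duty cycle is d(t) = 1 − |v_in(t)|/V_OUT across the line cycle. A minimal Python sketch of that relationship (illustrative only; the actual design modulates frequency for ZVS rather than running a fixed-duty CCM boost):

```python
import math

def pfc_duty(v_in_rms=230.0, v_out=400.0, f_line=50.0, n_points=8):
    """Ideal CCM boost duty cycle of the control FET, sampled across
    one positive half cycle of the AC line: d(t) = 1 - |v_in(t)|/V_out."""
    duties = []
    for k in range(n_points):
        t = k / n_points * (1.0 / f_line) / 2.0  # within the half cycle
        v_in = math.sqrt(2.0) * v_in_rms * math.sin(2.0 * math.pi * f_line * t)
        duties.append(1.0 - abs(v_in) / v_out)
    return duties

print(pfc_duty())  # duty is 1.0 at the zero crossing, minimum at the line peak
```

Near the zero crossing the control FET conducts almost the full period; at the 325-V line peak the duty falls to about 0.19 for a 400-V output.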

ZVS

The use of GaN switches for S1 and S2 enables the converter to run at higher switching frequencies given the lower turn-on and turn-off losses of the switch. It is possible to achieve even higher frequencies, however, if the GaN switches can turn on with zero voltage switching (ZVS). The objective for this design is to achieve ZVS on every switching cycle for all line and load conditions. In order to do this, you will need two things:

  • Feedback to tell the controller if ZVS has been achieved
  • An algorithm that a microcontroller can execute in real time to achieve low total harmonic distortion (THD)

You can accomplish the first item through an integrated zero voltage detection (ZVD) sensor inside the GaN switches [1]. The ZVD flag works by asserting a high signal if the switch turns on with ZVS; if it does not achieve ZVS at turn-on, the ZVD signal stays low. Figure 6 and Figure 7 illustrate this behavior.

Figure 6 ZVD feedback block diagram using the LMG3425R030 GaN FET, which integrates the driver, protection, and temperature reporting, together with the TMS320F280049C MCU. Source: Texas Instruments

Figure 7 ZVD signal with ZVS (left) and without ZVS (right). The integrated ZVD sensor asserts the ZVD flag only when the switch turns on with ZVS. Source: Texas Instruments

Integrating this function inside the GaN switch provides a number of advantages: minimal component count, low latency and reliable detection of ZVS events.

In addition to the ZVD signal, you also need an algorithm capable of calculating the switch timing parameters such that you can achieve ZVS and low THD simultaneously. Figure 8 is a block diagram of the hardware needed to implement the algorithm.

Figure 8 Hardware needed for the ZVD-based control method that enables an algorithm capable of calculating the switch timing parameters to achieve ZVS and a low THD simultaneously. Source: Texas Instruments

Solving the state plane for ZVS of the resonant transitions of the GaN FET’s drain-to-source voltage (VDS) will give you the algorithm for this design. Figure 9 illustrates the GaN FET VDS, inductor current, and control signals, along with both the time-domain and state-plane plots.

Figure 9 Resonant transition state-plane solution with the GaN FET VDS, inductor current, and control signals, along with both the time-domain and state-plane plots. Source: Texas Instruments

In Figure 9’s state-plane plot:

  • “j” is the normalized current at the beginning and end of each dead-time interval
  • “m” is the normalized voltage
  • “θ” is used for the normalized timing parameters

The figure also shows the normalization relationships. The microcontroller in Figure 8 solves the state-plane system equations shown in Figure 9 such that the system achieves both ZVS and an ideal power factor. The ZVD signal provides feedback to instruct the microcontroller on how to adjust the switching frequency to meet ZVS.

Figure 10 shows the operating waveforms when the applied frequency is too low (left), ideal (center) and too high (right). You can see that both ZVD signals are present only when the applied frequency is at the ideal value; thus, varying the frequency until both FETs achieve ZVD will reveal the ideal operating point.

Figure 10 ZVD control waveforms when the applied frequency is too low (left), ideal (center) and too high (right). Source: Texas Instruments
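The tuning rule described above (vary the frequency until both FETs report ZVD) can be sketched as a simple search loop. The `zvd_flags` plant below is a toy stand-in for the real hardware feedback, and the frequency limits and step are illustrative:

```python
def find_zvs_frequency(zvd_flags, f_min=70e3, f_max=1.2e6, step=10e3):
    """Sweep the switching frequency until both ZVD flags assert,
    i.e., until both GaN FETs turn on with ZVS."""
    f = f_min
    while f <= f_max:
        if all(zvd_flags(f)):
            return f
        f += step
    return None  # no ZVS point found within the allowed range

# Toy plant: pretend both FETs achieve ZVS only between 400 kHz and 500 kHz.
flags = lambda f: (f >= 400e3, f <= 500e3)
print(find_zvs_frequency(flags))  # 400000.0
```

In the real controller this adjustment runs continuously, since the ideal frequency moves with the instantaneous line voltage and load.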

Hardware performance

Figure 11 is a photo of a two-phase 5-kW design example using GaN and the previously described algorithm.

Figure 11 Two-phase 5 kW GaN-based PFC with the hardware required to apply algorithms to achieve even higher frequencies and enhance the efficiency of the overall solution. Source: Texas Instruments

Table 1 lists the specifications for the design example.

  • AC input: 208 V to 264 V
  • Line frequency: 50 Hz to 60 Hz
  • DC output: 400 V
  • Maximum power: 5 kW
  • Holdup time at full load: 20 ms
  • THD: OCP v3
  • Electromagnetic interference: European Norm 55022 Class A
  • Operating frequency: variable, 75 kHz to 1.2 MHz
  • Microcontroller: TMS320F280049C
  • High-frequency GaN FETs: LMG3526R030
  • Low-frequency silicon FETs: IPT60R022S7XTMA1
  • Internal dimensions: 38 mm x 65 mm x 263 mm
  • Power density: 120 W/in3
  • Switching frequency: 70 kHz to 1.2 MHz

Table 1 Design specifications for the hardware example shown in Figure 11.
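As a sanity check on Table 1, the power-density figure follows from the internal dimensions and maximum power. A quick calculation lands slightly above the quoted 120 W/in3, presumably due to rounding:

```python
MM3_PER_IN3 = 25.4 ** 3  # 16,387.064 mm^3 per cubic inch

def power_density_w_per_in3(p_watts, dims_mm):
    """Power density from output power and enclosure dimensions in mm."""
    w, h, d = dims_mm
    return p_watts / ((w * h * d) / MM3_PER_IN3)

density = power_density_w_per_in3(5000, (38, 65, 263))
print(round(density, 1))  # ~126 W/in^3 versus the quoted 120 W/in^3
```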

Figure 12 shows the inductor current waveforms (ILA and ILB) and GaN FET VDS waveforms for both phases (VA and VB). The plots were taken at full power and illustrate three different operating conditions; in each case, you can see ZVS and a sinusoidal current envelope. For all three plots, VIN = 230 VRMS, VOUT = 400 V, and P = 5 kW, with scope settings of 200 V/div, 20 A/div, and 2 µs/div.

Figure 12 The inductor current waveforms (ILA and ILB) and GaN FET VDS waveforms taken at full power for: (a) VIN≪VOUT/2, (b) VIN=VOUT/2, and (c) VIN≫VOUT/2. Source: Texas Instruments

Figure 13 shows the measured efficiency and THD for a system operating with a 230VAC input across the load range.

Figure 13 Efficiency and THD of a two-phase PFC operating with a 230VAC input across the load range. Source: Texas Instruments

Reducing the footprint of a GaN power supply

GaN switches can increase the power density of a wide variety of applications by enabling faster switching frequencies. However, the addition of technologies such as advanced control algorithms can significantly reduce the footprint of a power supply even further. For more information about the reference design example discussed in this article, see reference [2].

Brent McDonald works as a system engineer for the Texas Instruments Power Supply Design Services team, where he creates reference designs for a variety of high-power applications. Brent received a bachelor’s degree in electrical engineering from the University of Wisconsin-Milwaukee, and a master’s degree, also in electrical engineering, from the University of Colorado Boulder.

Related Content

 References

  1. Texas Instruments. n.d. LMG3526R030 650-V 30-mΩ GaN FET with Integrated Driver, Protection and Zero-Voltage Detection. Accessed Jan. 22, 2024.
  2. Texas Instruments. n.d. “Variable-Frequency, ZVS, 5-kW, GaN-Based, Two-Phase Totem-Pole PFC Reference Design.” Texas Instruments reference design No. PMP40988. Accessed Jan. 22, 2024.

The post Power Tips #127: Using advanced control methods to increase the power density of GaN-based PFC appeared first on EDN.

The advantages of coreless transformer-based isolators/drivers

Mon, 03/25/2024 - 13:24

Design options allow system designers to configure their system with the right performance, reliability, and safety considerations while meeting design cost and efficiency targets. The right design options can be even more important in high-voltage and/or high-current applications. In these high-power designs, an isolation technique with several integrated features can mean the difference between a product that meets and even exceeds customer expectations and one that generates numerous customer complaints.

For example, an integrated solid-state isolator (SSI) based on coreless transformer (CT) provides galvanic isolation with several design benefits. With integrated features such as a dynamic Miller clamp (DMC), overcurrent and overtemperature protection (OTP), under-voltage lockout protection, fast turn-on, and more, an integrated SSI driver can provide essential protection and ensure proper operation and extended life for high-power systems. These integrated protection features are not available in optical-based solid-state relays (SSRs).

Combined with the appropriate power switches, the highly integrated solid-state isolators allow designers to create custom solid-state relays capable of controlling loads in excess of 1,000 V and 100 A. The CT-based isolators transfer enough energy across the isolation barrier to drive large MOSFETs or IGBTs without the added circuitry of a power supply on the isolated side. SSRs designed with these innovative protection features can be highly reliable and extremely robust.

These coreless transformer-based isolators enable ON and OFF control, acting like a relay switch without requiring a secondary side, isolated power supply. Combined with MOSFETs and IGBTs, SSIs enable cost effective, reliable, and low power solid-state relays for a variety of applications. This includes battery management systems, power supplies, power transmission and distribution, programmable logic controllers (PLCs), industrial automation, and robotics as well as smart building applications such as heating, ventilation, and air conditioning (HVAC) controllers and smart thermostats.

Energy transfer through coreless transformer

The main design feature of an SSI device is a coreless transformer, which enables power transfer of up to 10 mW across a galvanic isolation barrier. This eliminates the need for an isolated power supply for the switch, reducing bill-of-materials (BOM) volume, count, and cost, and it provides fast turn-on/off (≤ 1 µs) to ensure that the switch stays within its safe operating area (SOA).

Figure 1 Highly integrated solid-state isolators easily drive MOSFETs or IGBTs and do not require an isolated bias supply. Source: Infineon

Integrated protection

The integrated protection features of the CT-based isolators deserve further explanation. These include overcurrent and overtemperature protection (OTP), a dynamic Miller clamp, and under-voltage lockout (latch-off) protection as well as satisfying essential industry standards.

System and switch protection

Depending on the application’s needs and the product variant selected, SSIs offer overcurrent protection (OCP) as well as OTP, either via an external positive temperature coefficient (PTC) thermistor/resistor or a MOSFET’s integrated direct temperature sensor.

In case of a failure event (overcurrent or overtemperature), the SSI triggers a latch-off. Once triggered, the protection reacts quickly, turning off in less than 1 μs. Furthermore, it can support the AC-15 system tests required for electromechanical relays by IEC 60947-5-1 under appropriate operating conditions.

Overcurrent protection

When operating solid-state relays, a common problem is the handling of fast overcurrent or short circuit events in the range of 20 A/μs up to 100 A/μs. Isolation issues often result in a short circuit with an extremely high current level that is defined by the power source’s impedance and cabling resistance.

Figure 2 shows a circuit for implementing the overcurrent protection. The shunt resistor (RSh) and its inherent stray inductance (LSh) generate a voltage drop that is monitored by the current-sense comparator. Noise on the grid needs to be filtered out of the shunt signal, so an external filter (CF and RF) complements the integrated filter. When the comparator triggers, it activates the fast turn-off and latches the fault, leaving the system in a safe state.

Figure 2 The above circuitry implements overcurrent protection using an isolator driver. Source: Infineon
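The trip level set by the shunt and comparator threshold, and the corner frequency of the external RF/CF filter, follow from Ohm's law and the single-pole RC relationship. The component values below are hypothetical, chosen only to illustrate the arithmetic (the article does not specify them):

```python
import math

# Hypothetical illustration values; not from the article.
R_SHUNT = 0.005       # 5-mOhm shunt resistor (RSh)
V_THRESH = 0.25       # comparator trip threshold, volts
R_F, C_F = 1e3, 1e-9  # external noise filter components (RF, CF)

i_trip = V_THRESH / R_SHUNT                   # load current that trips the latch
f_corner = 1.0 / (2.0 * math.pi * R_F * C_F)  # filter corner frequency, Hz

print(i_trip)    # 50.0 A trip level
print(f_corner)  # ~159 kHz corner
```

The corner must sit low enough to reject switching noise yet high enough not to add meaningful delay to the sub-1-µs turn-off the latch relies on.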

Overtemperature protection

Another major known issue when operating solid-state relays is slow overload events that heat up the switches and the current sensor (shunt). Increased load current and insufficient thermal management can push the overall temperature above the power transistors’ thermal limits.

Figure 3 shows an example measurement of the overtemperature protection using an isolated driver. The SSI turns off two MOSFETs with integrated temperature sensors configured in a common-source mode. The sensing MOSFET heats up from the load current until the sensor voltage decreases below the comparator trigger threshold. As a result, the SSI’s output is turned off.

Figure 3 Isolated driver’s overtemperature protection triggers within 500 ns. Source: Infineon

The lower part of Figure 3 depicts a detailed zoom into the turn-off in this measurement with a time resolution of 500 ns per division. This reduced timeframe shows that the gate is turned off in much less than 500 ns. This means that the switched transistors do not violate their safe operating area.

Dynamic Miller clamping protection

Some SSIs also have an integrated dynamic Miller clamp to protect against spurious switching due to surge voltages and fast electric transients as well as the dv/dt of the line voltage. The dv/dt applied by the connected AC voltage creates capacitive displacement currents through the parasitic capacitances of a power transistor.

This can lead to parasitic turn-on of the power switch by increasing the voltage at its gate node during its “off” state. The dynamic Miller clamping feature ensures that the power switch remains safe in the “off” state.

When failure is not an option

When matched with the appropriate power switch, the isolator drivers enable switching designs with a much lower resistance compared to optically driven/isolated solid-state solutions. This translates to longer lifespans and lower cost of ownership in system designs. As with all solid-state isolators, the devices also offer superior performance compared to electromagnetic relays, including 40% lower turn-on power loss and increased reliability due to the elimination of moving or degrading parts.

When failure is not an option, the right choice of isolation can mean the difference between design success and failure.

Dan Callen Jr. is a senior manager at Power IC Group of Infineon Technologies.

Davide Giacomini is director of marketing at Power IC Group of Infineon Technologies.

Sameh Snene is a product applications engineer at Infineon Technologies.

Related Content


The post The advantages of coreless transformer-based isolators/drivers appeared first on EDN.

2-A Schottky rectifiers occupy tiny footprint

Fri, 03/22/2024 - 15:44

Three trench Schottky rectifiers from Diodes deliver 2 A with low forward voltage drop in chip-scale packages that require just 0.84 mm2 of PCB space. The SDT2U30CP3 (30 V/2 A), SDT2U40CP3 (40 V/2 A), and SDT2U60CP3 (60 V/2 A) can be used as blocking, boost, switching, or reverse-protection diodes in portable, mobile, and wearable devices.

The rectifiers come in 1.4×0.6-mm X3-DSN1406-2 packages, with a typical profile of 0.25 mm. According to the manufacturer, they are among the smallest in their class. Their low forward voltage drop of 480 mV maximum (580 mV for the SDT2U60CP3) minimizes conduction losses and improves efficiency. Additionally, the devices’ avalanche capability allows them to rapidly respond to voltage spikes to protect electronic circuits from damage.

The SDT2U30CP3, SDT2U40CP3, and SDT2U60CP3 rectifiers cost $0.16, $0.17, and $0.19 each, respectively, in lots of 2500 units. They are lead-free and fully compliant with RoHS 3.0 standards.

SDT2U30CP3 product page

SDT2U40CP3 product page

SDT2U60CP3 product page

Diodes


The post 2-A Schottky rectifiers occupy tiny footprint appeared first on EDN.

Kyocera AVX rolls out expansive line of capacitors

Fri, 03/22/2024 - 15:44

Wet aluminum electrolytic capacitors in the AEF series from Kyocera AVX come in 11 different case sizes with capacitance ratings from 2.2 µF to 470 µF. Voltage ratings for the V-chip (can-type) capacitors range from 6.3 VDC to 400 VDC.

Targeting a broad range of industrial and consumer electronics applications, the components can be surface-mounted on high-density PCBs. The series comprises 59 variants in case sizes spanning 0608 to 1216. They exhibit low direct current leakage (DCL) and low equivalent series resistance (ESR), which enables higher tolerance for ripple currents. Capacitance tolerance is ±20%.

AEF series capacitors are available for operation over two temperature ranges: -40°C to +105°C and -55°C to +105°C. They have a lifetime of 6000 hours at +105°C and rated voltages. The devices are supplied with pure tin terminations on 13-in. or 15-in. reels compatible with automated assembly equipment. Standard lead time is 24 weeks.

AEF series product page

Kyocera AVX 


The post Kyocera AVX rolls out expansive line of capacitors appeared first on EDN.

Plastic ARM-based microcontroller is space-ready

Fri, 03/22/2024 - 15:43

Frontgrade Technologies has developed a plastic-encapsulated version of its UT32M0R500 radiation-tolerant microcontroller aimed at space missions. Built around a 32-bit Arm Cortex-M0+ core, the plastic UT32M0R500 is set for flight grade production in July 2024 after meeting NASA’s PEM INST-001 Level 2 qualification.

Housed in a 14.5×14.5-mm, 143-pin plastic BGA package, the UT32M0R500 offers the same I/O configuration and features as its ceramic QML counterpart. It tolerates up to 50 krad of total ionizing dose (TID) radiation. For design flexibility, the device combines two independent CAN 2.0B controllers with mission read/write flash memory and system-on-chip functionality. This integration enables designers to manage board utilization while reducing both cost and complexity.

“The proliferation of satellites for LEO missions is increasing the demand for highly reliable components with efficient SWaP-C characteristics and radiation assurances,” said Dr. J. Mitch Stevison, president and CEO of Frontgrade Technologies. “Adding another plastic device to our portfolio that is qualified to NASA’s Space PEM Level 2 strengthens our position as a trusted provider of high reliability, radiation-assured devices for critical space missions.”

The UT32M0R500 is supported by Arm’s Keil suite of embedded development tools.

UT32M0R500 product page

Frontgrade Technologies


The post Plastic ARM-based microcontroller is space-ready appeared first on EDN.

Image sensor elevates smartphone HDR

Fri, 03/22/2024 - 15:42

Omnivision’s OV50K40 smartphone image sensor with TheiaCel technology achieves human eye-level high dynamic range (HDR) with a single exposure. Initially introduced in automotive image sensors, TheiaCel employs lateral overflow integration capacitors (LOFIC) to provide superior single-exposure HDR regardless of lighting conditions.

The OV50K40 50-Mpixel image sensor features a 1.2-µm pixel in a 1/1.3-in. optical format. High gain and correlated multiple sampling enable optimal performance in low-light conditions. At 50 Mpixels, the sensor has a maximum image transfer rate of 30 fps. Using 4-cell pixel binning, the OV50K40 delivers 12.5 Mpixels at 120 fps, dropping to 60 fps in HDR mode but with a fourfold increase in sensitivity.

To achieve high-speed autofocus, the OV50K40 offers quad phase detection (QPD). This enables 2×2 phase detection autofocus across the sensor’s entire image array for 100% coverage. An on-chip QPD remosaic enables full 50-Mpixel Bayer output, 8K video, and 2x crop-zoom functionality.

The OV50K40 image sensor is now in mass production.

OV50K40 product page  

Omnivision


The post Image sensor elevates smartphone HDR appeared first on EDN.

Snapdragon SoC brings AI to more smartphones

Fri, 03/22/2024 - 15:42

Qualcomm’s Snapdragon 8s Gen 3 SoC offers select features of the high-end Snapdragon 8 Gen 3 for a wider range of premium Android smartphones. The less expensive 8s Gen 3 chip provides on-device generative AI and an always-sensing image signal processor (ISP).

The SoC’s AI engine supports multimodal AI models comprising up to 10 billion parameters, including large language models (LLMs) such as Baichuan-7B, Llama 2, Gemini Nano, and Zhipu ChatGLM. Its Spectra 18-bit triple cognitive ISP offers AI-powered features like photo expansion, which intelligently fills in content beyond a capture’s original aspect ratio.

The Snapdragon 8s Gen 3 is slightly slower than the Snapdragon 8 Gen 3, and it has one less performance core. The 8s variant employs an Arm Cortex-X4 prime core running at 3 GHz, along with four performance cores operating at 2.8 GHz and three efficiency cores clocked at 2 GHz.

Snapdragon 8s Gen 3 will be adopted by key smartphone OEMs, including Honor, iQOO, Realme, Redmi, and Xiaomi. The first devices powered by the 8s Gen 3 are expected as soon as this month.

Snapdragon 8s Gen 3 product page

Qualcomm Technologies


The post Snapdragon SoC brings AI to more smartphones appeared first on EDN.

The role of cache in AI processor design

Fri, 03/22/2024 - 09:34

Artificial intelligence (AI) is making its presence felt everywhere these days, from the data centers at the Internet’s core to sensors and handheld devices like smartphones at the Internet’s edge and every point in between, such as autonomous robots and vehicles. For the purposes of this article, we recognize the term AI to embrace machine learning and deep learning.

There are two main aspects to AI: training, which is predominantly performed in data centers, and inferencing, which may be performed anywhere from the cloud down to the humblest AI-equipped sensor.

AI is a greedy consumer of two things: computational processing power and data. In the case of processing power, OpenAI, the creator of ChatGPT, published the report AI and Compute, showing that since 2012, the amount of compute used in large AI training runs has doubled every 3.4 months with no indication of slowing down.

With respect to memory, a large generative AI (GenAI) model like ChatGPT-4 may have more than a trillion parameters, all of which need to be easily accessible in a way that allows the system to handle numerous requests simultaneously. In addition, one needs to consider the vast amounts of data that must be streamed and processed.

Slow speed

Suppose we are designing a system-on-chip (SoC) device that contains one or more processor cores. We will include a relatively small amount of memory inside the device, while the bulk of the memory will reside in discrete devices outside the SoC.

The fastest type of memory is SRAM, but each SRAM cell requires six transistors, so SRAM is used sparingly inside the SoC because it consumes a tremendous amount of space and power. By comparison, DRAM requires only one transistor and capacitor per cell, which means it consumes much less space and power. Therefore, DRAM is used to create bulk storage devices outside the SoC. Although DRAM offers high capacity, it is significantly slower than SRAM.

As the process technologies used to develop integrated circuits have evolved to create smaller and smaller structures, most devices have become faster and faster. Sadly, this is not the case with the transistor-capacitor bit-cells that lie at the heart of DRAMs. In fact, due to their analog nature, the speed of bit-cells has remained largely unchanged for decades.

Having said this, the speed of DRAMs, as seen at their external interfaces, has doubled with each new generation. Since each internal access is relatively slow, the way this has been achieved is to perform a series of staggered accesses inside the device. If we assume we are reading a series of consecutive words of data, it will take a relatively long time to receive the first word, but we will see any succeeding words much faster.

This works well if we wish to stream large blocks of contiguous data because we take a one-time hit at the start of the transfer, after which subsequent accesses come at high speed. However, problems occur if we wish to perform multiple accesses to smaller chunks of data. In this case, instead of a one-time hit, we take that hit over and over again.
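The asymmetry is easy to quantify. Using the article's 70-ns first-word figure and a hypothetical per-word burst time (the article does not give one), streaming one long burst is far cheaper than fetching the same words individually:

```python
def burst_time_ns(n_words, first_word_ns=70.0, next_word_ns=1.25):
    """One burst of n contiguous words: pay the first-word latency once."""
    return first_word_ns + (n_words - 1) * next_word_ns

def scattered_time_ns(n_words, first_word_ns=70.0):
    """n separate single-word accesses: pay the latency every time."""
    return n_words * first_word_ns

print(burst_time_ns(64))      # 148.75 ns for one 64-word burst
print(scattered_time_ns(64))  # 4480.0 ns for 64 scattered reads
```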

More speed

The solution is to use high-speed SRAM to create local cache memories inside the processing device. When the processor first requests data from the DRAM, a copy of that data is stored in the processor’s cache. If the processor subsequently wishes to re-access the same data, it uses its local copy, which can be accessed much faster.

It’s common to employ multiple levels of cache inside the SoC. These are called Level 1 (L1), Level 2 (L2), and Level 3 (L3). The first cache level has the smallest capacity but the highest access speed, with each subsequent level having a higher capacity and a lower access speed. As illustrated in Figure 1, assuming a 1-GHz system clock and DDR4 DRAMs, it takes only 1.8 ns for the processor to access its L1 cache, 6.4 ns to access the L2 cache, and 26 ns to access the L3 cache. Accessing the first in a series of data words from the external DRAMs takes a whopping 70 ns (data source: Joe Chang’s Server Analysis).

Figure 1 Cache and DRAM access speeds are outlined for 1 GHz clock and DDR4 DRAM. Source: Arteris
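These latencies make the payoff of the hierarchy easy to estimate. The sketch below computes an expected memory access time from the article's figures; the hit rates are assumed for illustration and the model is deliberately simple:

```python
def expected_access_ns(levels, dram_ns=70.0):
    """Expected access time for a cache hierarchy.
    `levels` is a list of (access_ns, hit_rate) pairs from L1 outward;
    each level's latency is counted only for accesses that hit there."""
    time, p_reach = 0.0, 1.0
    for access_ns, hit_rate in levels:
        time += p_reach * hit_rate * access_ns
        p_reach *= 1.0 - hit_rate
    return time + p_reach * dram_ns  # the rest go all the way to DRAM

# Article latencies (1.8 / 6.4 / 26 ns) with assumed hit rates.
print(round(expected_access_ns([(1.8, 0.90), (6.4, 0.80), (26.0, 0.70)]), 2))  # ~2.9 ns
```

Even with modest hit rates, the average access lands under 3 ns, an order of magnitude better than going to DRAM every time.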

The role of cache in AI

There are a wide variety of AI implementation and deployment scenarios. In the case of our SoC, one possibility is to create one or more AI accelerator IPs, each containing its own internal caches. Suppose we wish to maintain cache coherence, which we can think of as keeping all copies of the data the same, with the SoC’s processor clusters. Then we will have to use a hardware cache-coherent solution in the form of a coherent interconnect, like CHI as defined in the AMBA specification and supported by the Ncore network-on-chip (NoC) IP from Arteris IP (Figure 2a).

Figure 2 The above diagram shows examples of cache in the context of AI. Source: Arteris

There is an overhead associated with maintaining cache coherence. In many cases, the AI accelerators do not need to remain cache coherent to the same extent as the processor clusters. For example, it may be that only after a large block of data has been processed by the accelerator that things need to be re-synchronized, which can be achieved under software control. The AI accelerators could employ a smaller, faster interconnect solution, such as AXI from Arm or FlexNoC from Arteris (Figure 2b).

In many cases, the developers of the accelerator IPs do not include cache in their implementation. Sometimes, the need for cache wasn’t recognized until performance evaluations began. One solution is to include a special cache IP between an AI accelerator and the interconnect to provide an IP-level performance boost (Figure 2c). Another possibility is to employ the cache IP as a last-level cache to provide an SoC-level performance boost (Figure 2d). Cache design isn’t easy, but designers can use configurable off-the-shelf solutions.

Many SoC designers tend to think of cache only in the context of processors and processor clusters. However, the advantages of cache are equally applicable to many other complex IPs, including AI accelerators. As a result, the developers of AI-centric SoCs are increasingly evaluating and deploying a variety of cache-enabled AI scenarios.

Frank Schirrmeister, VP solutions and business development at Arteris, leads activities in the automotive, data center, 5G/6G communications, mobile, aerospace and data center industry verticals. Before Arteris, Frank held various senior leadership positions at Cadence Design Systems, Synopsys and Imperas.

Related Content


The post The role of cache in AI processor design appeared first on EDN.

Workarounds (and their tradeoffs) for integrated storage constraints

Thu, 03/21/2024 - 16:14

Over the Thanksgiving 2023 holiday weekend, I decided to retire my trusty silver-color early-2015 13” MacBook Pro, which was nearing software-induced obsolescence, suffering from a Bluetooth audio bug, and more generally starting to show its age performance- and other-wise. I replaced it with a “space grey” color scheme 2020 model, still Intel x86-based, which I covered in detail in one of last month’s posts.

Over the subsequent Christmas-to-New Year’s week, once again taking advantage of holiday downtime, I decided to retire my similarly long-in-use silver late-2014 Mac mini, too. Underlying motivations were similar: pending software-induced obsolescence, plus increasingly difficult-to-overlook performance shortcomings (due in no small part to the system’s “Fusion” hybrid storage configuration). Speed limitations aside, the key advantage of this merged-technology approach had been its cost-effective high capacity: a 1-TByte HDD, visible and accessible to the user, mated behind the scenes by the operating system to 128 GBytes of flash memory “cache”.

Its successor was again Intel-based (as with its laptop-transition precursor, the last of the x86 breed) and space grey in color; a late-2018 Mac mini:

This particular model, versus its Apple Silicon successors, was notable (as I’ve mentioned before) for its comparative abundance of back-panel I/O ports:

And this specific one was especially attractive in nearly all respects (thereby rationalizing my mid-2023 purchase of it from Woot!). It was brand new, albeit not an AppleCare Warranty candidate (instead, I bought an inexpensive extended warranty from Asurion via Woot! parent company Amazon). It was only $449 plus tax after discounts. It included the speediest-available Intel Core i7-8700B 6-core (physical; 12-core virtual via HyperThreading) 3.2 GHz CPU option, capable of boost-clocking to 4.1 GHz. And it also came with 32 GBytes of 2666 MHz DDR4 SDRAM which, being user-accessible SoDIMM-based (unlike the soldered-down memory in its predecessor), was replaceable and even further upgradeable to 64 GBytes max.

Note, however, my prior allusion to this new system not being attractive in all respects. It only included a 128 GByte integrated SSD, to be precise. And, unlike this system’s RAM (or the SSD in the late 2014 Mac mini predecessor, for that matter), its internal storage capacity wasn’t user-upgradeable. I’d figured that similar to my even earlier mid-2011 Mac mini model, I could just boot from a tethered external drive instead, and that may still be true (online research is encouraging). However, this time I decided to first try some options I’d heard about for relocating portions of my app suite and other files while keeping the original O/S build internal and intact.

I’ve subsequently endured no shortage of dead-end efforts courtesy of latest operating system limitations coupled with applications’ shortsightedness, along with experiments that functionally worked but ended up being too performance-sapping or too little capacity-freeing to be practical. However, after all the gnashing of teeth, I’ve come up with a combination of techniques that will, I think, deliver a long-term usable configuration (then again, I haven’t attempted a major operating system update yet, so don’t hold me to that prediction). I’ve learned a lot along the way, which I hope will not only be helpful to other MacOS users but, thanks to MacOS’s BSD Unix underpinnings, may also be relevant to those of you running Linux, Android, Chrome OS, and other PC and embedded Unix-based operating systems.

Let’s begin with a review of my chosen external-storage hardware. Initially, I thought I’d just tether a Thunderbolt 3 external SSD (such as the 2TB Plugable drive that I picked up from B&H Photo Video on sale a year ago for $219) to the Mac mini, and that remains a feasible option:

However, I decided to “kill two birds with one stone” by beefing up the Mac mini’s expansion capabilities in the process. Specifically, I initially planned on going with one of Satechi’s aluminum stand and hubs. The baseline-feature set one that color-matches my Mac mini’s space grey scheme has plenty of convenient-access front-panel connections, but that’s it:

Its “bigger brother” additionally supports embedding a SATA (or, more recently, NVMe) M.2 format SSD, but connectivity is the same 5-or-more-recently-10 Gbps USB-C as before (ok for tethering peripherals, not so much for directly running apps from mass storage). Plus, it only came in a silver color scheme (ok for Apple Silicon Mac minis, not so much for x86-based ones):

So, what did I end up with? I share the following photo with no shortage of chagrin:

In the middle is the Mac mini. Above it is a Windows Dev Kit 2023, aka “Project Volterra,” an Arm- (Qualcomm Snapdragon 8cx Gen 3, to be precise, two SoC steppings newer than the Gen 1 in my Surface Pro X) and Windows 11-based mini PC, which I’ll say more about in a future post.

And at the bottom of the stack is my external storage solution—dual-storage, to be precise—an OWC MiniStack STX in its original matte black color scheme (it now comes in silver, too).

Does it color-match the Mac mini? No, even putting aside the glowing blue OWC-logo orb on the front panel. And speaking of the front panel, are there any easily user-accessible expansion capabilities? Again, no. In fact, the only expansion ports offered are three more Thunderbolt 3 ones around back…the fourth there connects to the computer. But Thunderbolt 3’s 40 Gbps bandwidth is precisely what drove my decision to go with the OWC MiniStack STX, aided by the fact that I’d found a gently used one on eBay at substantial discount from MSRP.

Inside, I’ve installed a 2 TByte Samsung 980 Pro PCIe 4.0 NVMe SSD which I bought for $165.59 used at Amazon Warehouse a year ago (nowadays, new ones sell for the same price…sigh…):

alongside a 2 TByte Kingston KC600 2.5” SATA SSD:

They appear as separate external drives on system bootup, and the performance results are nothing to sneeze at. Here’s the Samsung PCIe 4.0 NVMe SSD (the enclosure’s interface to the SSD, by the way, is “only” PCIe 3.0; it’s leaving storage performance potential “on the table”):

and here’s the Kingston, predictably a bit slower due to its SATA III interface and command set (therefore rationalizing why I’ve focused my implementation attention on the Samsung so far):

For comparison, here’s the Mac mini’s internal SSD:

The Samsung holds its own from a write performance standpoint but is more than 3x slower on reads, rationalizing my strategy to keep as much content as possible on internal storage. To wit, how did I decide to proceed, after quickly realizing (mid-system setup) that I’d fill up the internal available 128 GBytes well prior to getting my full desired application suite installed?

(Abortive) Step 1: Move my entire user account to external storage

Quoting from the above linked article:

In UNIX operating systems, user accounts are stored in individual folders called the user folder. Each user gets a single folder. The user folder stores all of the files associated with each user, and settings for each user. Each user folder usually has the system name of the user. Since macOS is based on UNIX, users are stored in a similar manner. At the root level of your Mac’s Startup Disk you’ll see a number of OS-controlled folders, one of which is named Users.

Move (copy first, then delete the original afterwards) an account’s folder structure elsewhere (to external storage, in this case), then let the foundation operating system know what you’ve done, and as my experience exemplifies, you can free up quite a lot of internal storage capacity.
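The copy-then-verify-then-delete sequence described above can be sketched in Python. This is a minimal illustration only, under assumed conditions (admin privileges, the target user logged out, and made-up paths); `relocate_home` is a hypothetical helper, not a macOS utility, and the final deletion of the original is deliberately left to the operator:

```python
import os
import shutil

def relocate_home(src, dst):
    """Copy a user home folder to external storage, preserving
    symlinks, then verify the copy before the operator deletes
    the original."""
    shutil.copytree(src, dst, symlinks=True)
    # Sanity check: the top-level folder contents should match
    if sorted(os.listdir(src)) != sorted(os.listdir(dst)):
        raise RuntimeError("copy verification failed")
    return dst
```

After a successful copy, you would point the account at the new location (on macOS, via the Advanced Options of the user entry in System Settings) before removing the original folder.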

Keep in mind that when you relocate your user home folder, it only moves the home folder – the rest of the OS stays where it was originally.

One other note, which applies equally to other relocation stratagems I subsequently attempted, and which perhaps goes without saying…but just to cover all the bases:

Consider that when you move your home folder to an external volume, the connection to that volume must be perfectly reliable – meaning both the drive and the cable connecting the drive to your Mac. This is because the home folder is an integral part of macOS, and it expects to be able to access files stored there instantly when needed. If the connection isn’t perfectly reliable, and the volume containing the home folder disappears even for a second, strange and undefined behavior may result. You could even lose data.

That all being said, everything worked great (with the qualifier that initial system boot latency was noticeably slower than before, albeit not egregiously so), until I noticed something odd. Microsoft’s OneDrive client indicated that it had successfully synced all the cloud-resident information in my account, but although I could then see a local clone of the OneDrive directory structure, all of the files themselves were missing, or at least invisible.

This is, it turns out, a documented side effect of Apple’s latest scheme for handling cloud storage services. External drives that self-identify as capable of being “ejectable” can’t be used as OneDrive sync destinations (unless, perhaps, you first boot the system from them…dunno). And the OneDrive sync destination is mirrored within the user’s account directory structure. My initial response was “fine, I’ll bail on OneDrive”. It turns out, however, that Dropbox (on which I’m much more reliant) is, out of operating system support necessity, going down the same implementation-change path. Scratch that idea.

Step 2: Install applications to external storage

This one seems intuitively obvious, yes? Reality proved much more complicated and ultimately limited in its effectiveness, however. Most applications I wanted to use that had standalone installers, it turns out, didn’t even give me an option to install anywhere but internal storage. And for the ones that did give me that install-redirect option…well, please take a look at this Reddit thread I started and eventually resolved, and then return to this writeup afterwards.

Wild, huh? That said, many MacOS apps don’t have separate installer programs; you just open a DMG (disk image) file and then drag the program icon inside (behind which is the full program package) to the “Applications” folder or anywhere else you choose. This led to my next idea…

Step 3: Move already-installed applications to external storage

As previously mentioned, “hiding” behind an application’s icon is the entire package structure. Generally speaking, you can easily move that package structure intact elsewhere (to external storage, for example) and it’ll still run as before. The problem, I found out, comes when you subsequently try to update such applications, specifically where a separate updater utility is involved. Take Apple’s App Store, for example. If you download and install apps using it (which is basically the only way to accomplish this) but you then move those apps elsewhere, the App Store utility can no longer “find” them for update purposes. The same goes for Microsoft’s (sizeable, alas) Office suite. In these and other cases, ongoing use of internal storage is requisite (along with trimming down the number of installed App Store- and Office suite-sourced applications to the essentials). Conversely, apps with integrated update facilities, such as Mozilla’s Firefox and Thunderbird, or those that you update by downloading and swapping in a new full-package version, upgrade fine post-move.

Step 4: Move data files, download archives, etc. to external storage

I mentioned earlier that Mozilla’s apps (for example) are well-behaved from a relocation standpoint. I was specifically referring to the programs themselves. Both Firefox and Thunderbird also create user profiles, which by default are stored within the MacOS user account folder structure, and which can be quite sizeable. My Firefox profile, for example, is just over 3 GBytes in size (including the browser cache and other temporary files), while my Thunderbird profile is nearly 9 GBytes (I’ve been using the program for a long time, and I also access email via POP3—which downloads messages and associated file attachments to my computer—vs IMAP). Fortunately, by tweaking the entries in both programs’ profiles.ini files, I’ve managed to redirect the profiles to external storage. Both programs now launch more slowly than before, due to the aforementioned degraded external drive read performance, but they then run seemingly as speedy as before, thanks to the aforementioned comparable write performance. And given that they’re perpetually running in the background as I use the computer, the launch-time delay is a one-time annoyance at each (rare) system reboot.
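The profiles.ini tweak amounts to switching a profile entry from a relative path (inside the user account) to an absolute one on external storage. Here’s a minimal sketch using Python’s configparser; the section name, keys, and external path are illustrative of the profiles.ini format rather than taken from my actual setup, and `redirect_profile` is a hypothetical helper:

```python
import configparser

def redirect_profile(ini_path, new_abs_path, section="Profile0"):
    """Point a Mozilla-style profile entry at an absolute path
    (e.g., on external storage) instead of the default relative
    location under the user account."""
    cfg = configparser.ConfigParser()
    cfg.optionxform = str  # preserve key case: IsRelative, Path
    cfg.read(ini_path)
    cfg[section]["IsRelative"] = "0"  # Path is now absolute
    cfg[section]["Path"] = new_abs_path
    with open(ini_path, "w") as f:
        # profiles.ini uses Key=Value with no surrounding spaces
        cfg.write(f, space_around_delimiters=False)
```

As always with profile surgery, edit the file only while the application is closed, and keep a backup copy of the original.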

Similarly, I’ve redirected my downloaded-files default (including a sizeable archive of program installers) to external storage, along with an encrypted virtual drive that’s necessary for day-job purposes. I find, in cases like these, that creating an alias from the old location to the new is a good reminder of what I’ve previously done, if I subsequently find myself scratching my head because I can’t find a particular file or folder.
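A Finder alias works for this breadcrumb trick, but a plain symlink does the same job and is scriptable. A sketch, with hypothetical paths and a hypothetical helper name — the guard simply refuses to overwrite anything still sitting at the old location:

```python
import os

def leave_breadcrumb(old_path, new_path):
    """After a folder has been moved to external storage, drop a
    symlink at its old location pointing at the new one, so the
    content remains findable from the familiar spot."""
    if os.path.lexists(old_path):
        raise FileExistsError(f"{old_path} still exists; move it first")
    os.symlink(new_path, old_path)
    return os.readlink(old_path)
```

One caveat consistent with the reliability warning quoted earlier: the symlink dangles whenever the external volume isn’t mounted.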

The result

By doing all the above (steps 2-4, to be precise), I’ve relocated more than 200 GBytes (~233 GBytes at the moment) of files to external storage, leaving me with nearly 25% free internal storage (~28 GBytes at the moment). See what I meant when I earlier wrote that in the absence of relocation success, I’d “fill up the available 128 GBytes well prior to getting my full desired application suite installed”? I should clarify that “nearly 25% free storage” comment, by the way…it was true until I got the bright idea to command-line install the recently released Wine 9, which restores MacOS compatibility (previously lost with the release of 64-bit-only MacOS 10.15 Catalina in October 2019). That required first command-line installing the third-party Homebrew package manager, which in turn involved installing the Xcode Command Line Tools…all of which installed by default to internal storage, eating up ~10 GBytes. (I’ll eventually reverse those steps and hope that a standalone, more svelte package installer for Wine 9 appears.)

Thoughts on my experiments and their outcomes? Usefulness to other Unix-based systems? Anything else you want to share? Let me know in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Workarounds (and their tradeoffs) for integrated storage constraints appeared first on EDN.

Silicon carbide (SiC) counterviews at APEC 2024

Thu, 03/21/2024 - 11:06

At this year’s APEC in Long Beach, California, Wolfspeed CEO Gregg Lowe’s speech was a major highlight of the conference program. Lowe, the chief of the only vertically integrated silicon carbide (SiC) company and cheerleader of this power electronics technology, didn’t disappoint.

In his plenary presentation, “The Drive for Silicon Carbide – A Look Back and the Road Ahead – APEC 2024,” he described SiC as a market hitting a major inflection point. “It’s a story of four decades of American ingenuity at work, and it’s safe to say that the transition from silicon to SiC is unstoppable.”

Figure 1 Lowe: The future of this amazing technology is only beginning to dawn on the world at large, and within the next decade or so, we will look around and wonder how we lived, traveled, and worked without it. Source: APEC

Lowe told the APEC 2024 attendees that the demand for SiC is exploding, and so is the number of applications using this wide bandgap (WBG) technology. “Technology transitions like this create moments and memories that last a lifetime, and that’s where we are with SiC right now.”

Interestingly, just before Lowe’s presentation, Balu Balakrishnan, chairman and CEO of Power Integrations, raised questions about the viability of SiC technology during his presentation titled “Innovating for Sustainability and Profitability”.

Balakrishnan’s counterviews

While telling Power Integrations’ gallium nitride (GaN) story, Balakrishnan recounted how his company started heavily investing in SiC 15 years ago and spent $65 million to develop this WBG technology. “One day, sitting in my office, while doing the math, I realized this isn’t going to work for us because of the amount of energy it takes to manufacture SiC and that the cost of SiC is so much more than silicon,” he said.

“This technology will never be as cost-effective as silicon despite its better performance because it’s such a high-temperature material, which takes a humongous amount of energy,” Balakrishnan added. “It requires expensive equipment because you manufacture SiC at very high temperatures.”

The next day, Power Integrations cancelled its SiC program and wrote off $65 million. “We decided to discontinue not because of technology, but because we believe it’s not sustainable and it’s not going to be cost-effective,” he said. “That day, we switched over to GaN and doubled down on it because it’s low-temperature, operates at temperatures similar to silicon, and mostly uses the same equipment as silicon.”

Figure 2 Balakrishnan: GaN will eventually be less expensive than silicon for high-voltage switches. Source: APEC

So, why does Power Integrations still have SiC product offerings? Balakrishnan acknowledged that SiC can go to higher voltages and power levels and is a more mature technology than GaN because it started earlier.

“There are certain applications where SiC is very attractive today, but I’ll dare to say that GaN will get there sometime in the future,” he added. “Fundamentally, there isn’t anything wrong with taking GaN to higher voltages and power levels.” He mentioned a 1,200-V GaN device Power Integrations recently announced and claimed that his company plans to announce another GaN device with an even higher voltage very soon.

Balakrishnan recognized that there are problems to be solved. “But these challenges require R&D efforts rather than a technology breakthrough,” he said. “We believe that GaN will get to the point where it’ll be very competitive with SiC while being far less expensive to build.”

Lowe’s defense

In his speech, Lowe also recognized the SiC-related cost and manufacturability issues, calling them near-term turbulence. However, he was optimistic that undersupply vs demand issues encompassing crystal boules, substrate capability, wafering, and epi will be resolved by the end of this decade.

“We will continue to realize better economic value with SiC by moving from 150-mm to 200-mm wafers, which increases the area by 1.7x and decreases the cost by about 40%,” he said. His hopes for resolving cost and manufacturability issues also seemed to rest on huge investment in SiC technology and on the automotive industry as a major catalyst.
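Lowe’s wafer-diameter numbers are easy to sanity-check: wafer area scales with the square of the diameter, so 150 mm to 200 mm gives (200/150)² ≈ 1.78, in the neighborhood of the cited 1.7x. And if the processed-wafer cost stayed roughly flat across the transition — my simplifying assumption, not a claim from the talk — the per-die cost would drop by about 44%, close to the quoted 40%:

```python
def area_ratio(d_new_mm, d_old_mm):
    """Wafer area scales with the square of the diameter."""
    return (d_new_mm / d_old_mm) ** 2

ratio = area_ratio(200, 150)    # (4/3)**2, roughly 1.78
per_die_saving = 1 - 1 / ratio  # roughly 0.44 if wafer cost stays flat
```

Edge exclusion and die gridding shift these figures slightly in practice, which is presumably why the quoted numbers are a bit more conservative than the raw geometry.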

For a reality check on these counterviews about the viability of SiC, a company active in both SiC and GaN businesses could offer a balanced perspective. Hence a stop at Navitas’ booth at APEC 2024, where the company’s VP of corporate marketing, Stephen Oliver, explained the evolution of SiC wafer costs.

He said a 6-inch SiC wafer from Cree cost nearly $3,000 in 2018. Fast forward to 2024, and a 7-inch wafer from Wolfspeed (renamed from Cree) costs about $850. Moving forward, Oliver envisions that the cost could come down to $400 by 2028 as production moves to 12-inch to 15-inch SiC wafers.

Navitas, a pioneer in the GaN space, acquired startup GeneSiC in 2022 to cater to both WBG technologies. At the show, in addition to Gen-4 GaNSense Half-Bridge ICs and GaNSafe, which incorporates circuit protection functionality, Navitas also displayed Gen-3 Fast SiC power FETs.

In the final analysis, Oliver’s viewpoint about SiC tilted toward Lowe’s pragmatism regarding SiC’s shift from 150-mm to 200-mm wafers. Recent technology history is a testament to how economies of scale have managed cost and manufacturability issues, and that’s what the SiC camp is counting on.

A huge investment in SiC device innovation and the backing of the automotive industry should also be helpful along the way.

Related Content


The post Silicon carbide (SiC) counterviews at APEC 2024 appeared first on EDN.

A self-testing GPIO

Wed, 03/20/2024 - 15:49

General purpose input-output (GPIO) pins are the simplest peripherals.

The link to an object under control (OUC) can become unreliable for many reasons: loss of contact, a short circuit, temperature stress, or condensation on the components. Sometimes the health of the link can be checked with a popular bridge chip simply by exploiting the capabilities the chip itself provides.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The bridge, such as NXP’s SC18IM700, usually provides a number of GPIOs, which are handy for implementing such a test. These GPIOs preserve all their functionality and can be used as usual after the test.

To make the test possible, the chip must have more than one GPIO. That way, the pins can be paired so that the members of each pair can poll each other.

Since GPIO activity during the test may disturb the regular functions of the OUC, one GPIO pin can be dedicated to temporarily disabling those functions. Often, when the object responds slowly enough, this precaution can be omitted.

Figure 1 shows how the idea can be implemented in the case of the SC18IM700 UART-I2C bridge.

Figure 1: Self-testing GPIO using the SC18IM700 UART-I2C bridge.

The values of resistors R1…R4 must be large enough not to lead to an unacceptably large current; on the other hand, they should provide sufficient voltage for a logic “1” on the input. The values shown in Figure 1 are good for most applications but may need to be adjusted.
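That trade-off can be bounded with two simple constraints: the resistor must be small enough that input leakage doesn’t drop the node below the logic-HIGH threshold, yet large enough to limit the current when an output drives LOW against its partner driving HIGH. A sketch of the bounding math — the supply, threshold, leakage, and current-budget figures below are illustrative assumptions, not values from the SC18IM700 datasheet:

```python
def resistor_bounds(vdd, vih, i_leak, i_max):
    """Upper bound: keep V(in) = vdd - i_leak * R above the VIH
    threshold despite input leakage.
    Lower bound: keep the worst-case contention current vdd / R
    below the allowed budget i_max."""
    r_max = (vdd - vih) / i_leak
    r_min = vdd / i_max
    return r_min, r_max

# e.g., 3.3 V supply, 2.3 V VIH, 1 uA leakage, 5 mA current budget
r_min, r_max = resistor_bounds(3.3, 2.3, 1e-6, 5e-3)
```

Any value between the two bounds (here, 660 Ω to 1 MΩ) works in principle; values toward the lower end charge node capacitance faster, which matters for the data-rate point below.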

Some difficulties may arise only with a quasi-bidirectional output configuration, since in this configuration the pin is only weakly driven when it outputs a logic HIGH. The problem may occur when the resistance of the corresponding OUC input is too low.

If the data rate of the UART output is too high to properly charge the OUC-related capacitance during the test, the rate can be decreased, or the corresponding resistor values can be reduced.

A sketch of the Python subroutine follows:

PortConf1 = 0x02
PortConf2 = 0x03

def selfTest():
    data = 0b10011001
    bridge.writeRegister(PortConf1, data)  # PortConfig1
    data = 0b10100101
    bridge.writeRegister(PortConf2, data)  # PortConfig2
    # --- write 1
    cc = 0b11001100
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()  # expect 0b11111111
    if aa != 0b11111111:
        return False  # check
    # --- write 0
    cc = 0b00000000
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b00000000:
        return False  # check
    # partners swap
    data = 0b01100110
    bridge.writeRegister(PortConf1, data)  # PortConfig1
    data = 0b01011010
    bridge.writeRegister(PortConf2, data)  # PortConfig2
    # --- write 1
    cc = 0b00110011
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b11111111:
        return False  # check
    # --- write 0
    cc = 0b00000000
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b00000000:
        return False  # check
    # check quasi-bidirectional configuration
    data = 0b01000100
    bridge.writeRegister(PortConf1, data)  # PortConfig1
    data = 0b01010000
    bridge.writeRegister(PortConf2, data)  # PortConfig2
    # --- write 1
    cc = 0b00110011
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b11111111:
        return False  # check
    # --- write 0
    cc = 0b00000000
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b00000000:
        return False  # check
    return True
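Lacking the hardware, the pairing logic itself can be exercised with a mock bridge that models each resistor link as a wire from an output pin to its partner input. The pin pairing used here (0↔2, 1↔3, 4↔6, 5↔7) is my reading of Figure 1 and is an assumption; `MockBridge` is a test stand-in, not part of any driver library:

```python
class MockBridge:
    """Simulates an SC18IM700-style 8-bit GPIO port whose pins are
    paired through resistors: an input pin reads whatever its
    partner output pin drives; an undriven input reads 0."""
    PAIRS = {0: 2, 1: 3, 4: 6, 5: 7, 2: 0, 3: 1, 6: 4, 7: 5}

    def __init__(self):
        self.outputs = set()  # pin numbers configured as outputs
        self.driven = 0       # last value written to the port

    def configure(self, output_pins):
        self.outputs = set(output_pins)

    def writeGPIO(self, value):
        self.driven = value

    def readGPIO(self):
        result = 0
        for pin in range(8):
            # An output pin reads back its own driven level; an
            # input pin reads the level driven by its partner.
            src = pin if pin in self.outputs else self.PAIRS[pin]
            if src in self.outputs and self.driven & (1 << src):
                result |= 1 << pin
        return result
```

Binding a `MockBridge` instance to the `bridge` name used by selfTest() lets the pass/fail flow above be rehearsed on a desk before touching the real I2C bridge.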

Peter Demchenko studied math at the University of Vilnius and has worked in software development.

Related Content


The post A self-testing GPIO appeared first on EDN.

Pages