EDN Network

Voice of the Engineer

Power Tips #127: Using advanced control methods to increase the power density of GaN-based PFC

Mon, 03/25/2024 - 18:37

Introduction

Modern electronic systems need small, lightweight, high-efficiency power supplies. These supplies require cost-effective methods to take power from the AC power distribution grid and convert it to a form that can run the necessary electronics.

High switching frequencies are among the biggest enablers of small size. To that end, gallium nitride (GaN) switches provide an effective way to achieve these frequencies, given their low parasitic output capacitance (COSS) and rapid turn-on and turn-off times. It is possible, however, to push the high power densities enabled by GaN switches even further through the use of advanced control techniques.

In this article, I will examine an advanced control method used inside a 5-kW power factor corrector (PFC) for a server. The design uses high-performance GaN FETs to operate the power supplies at the highest practical frequency. The power supply also uses a novel control technology that extracts more performance out of the GaN FETs. The end result is a high-efficiency, small-form-factor design with higher power density.

System overview

It’s well known that the totem-pole PFC is the workhorse of a high-power, high-efficiency PFC. Figure 1 illustrates the topology.

Figure 1 Basic totem-pole PFC topology where S1 and S2 are high-frequency GaN switches and S3 and S4 are low-frequency-switching Si MOSFETs. Source: Texas Instruments

S1 and S2 are high-frequency GaN switches operating with a variable frequency between 70 kHz and 1.2 MHz. S3 and S4 are low-frequency-switching silicon MOSFETs operating at the line frequency (50 to 60 Hz).

During the positive half cycle of the AC line, S2 operates as the control FET and S1 is the synchronous rectifier. S4 is always on and S3 is always off. Figure 2 shows the interval when the inductor current is increasing because control FET S2 is on. Figure 3 shows the interval when the inductor current is discharging through synchronous rectifier S1.

Figure 2 Positive one-half cycle inductor current charge interval. Source: Texas Instruments

Figure 3 Positive one-half cycle inductor discharge interval. Source: Texas Instruments

Figure 4 and Figure 5 illustrate the same behaviors for the negative one-half cycle.

Figure 4 Negative one-half cycle inductor current charge interval. Source: Texas Instruments

Figure 5 Negative one-half cycle inductor discharge interval. Source: Texas Instruments
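The half-cycle switching pattern described above can be summarized in a short sketch. This is purely illustrative (the function and its names are hypothetical, not controller firmware), assuming the role assignments stated in the text:

```python
def totem_pole_states(line_positive: bool, control_fet_on: bool) -> dict:
    """Map AC line polarity and the control-FET gate command to the
    on/off states of S1-S4 in a totem-pole PFC (illustrative only)."""
    if line_positive:
        # Positive half cycle: S4 always on, S3 always off,
        # S2 is the control FET, S1 the synchronous rectifier.
        return {"S1": not control_fet_on, "S2": control_fet_on,
                "S3": False, "S4": True}
    # Negative half cycle: the roles of each pair swap.
    return {"S1": control_fet_on, "S2": not control_fet_on,
            "S3": True, "S4": False}
```

During the positive half cycle, for example, commanding the control FET on charges the inductor through S2 while S1 stays off, matching Figure 2.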

ZVS

The use of GaN switches for S1 and S2 enables the converter to run at higher switching frequencies given the lower turn-on and turn-off losses of the switch. It is possible to achieve even higher frequencies, however, if the GaN switches can turn on with zero voltage switching (ZVS). The objective for this design is to achieve ZVS on every switching cycle for all line and load conditions. In order to do this, you will need two things:

  • Feedback to tell the controller if ZVS has been achieved
  • An algorithm that a microcontroller can execute in real time to achieve low total harmonic distortion (THD)

You can accomplish the first item through an integrated zero voltage detection (ZVD) sensor inside the GaN switches [1]. The ZVD flag works by asserting a high signal if the switch turns on with ZVS; if it does not achieve ZVS at turn-on, the ZVD signal stays low. Figure 6 and Figure 7 illustrate this behavior.

Figure 6 ZVD feedback block diagram showing the LMG3425R030 GaN FET (with integrated driver, protection, and temperature reporting) and the TMS320F280049C MCU. Source: Texas Instruments

Figure 7 ZVD signal with ZVS (left) and ZVD signal without ZVS (right). The integrated ZVD sensor enables a ZVD flag that can be seen if the switch turns on with ZVS. Source: Texas Instruments

Integrating this function inside the GaN switch provides a number of advantages: minimal component count, low latency and reliable detection of ZVS events.

In addition to the ZVD signal, you also need an algorithm capable of calculating the switch timing parameters such that you can achieve ZVS and low THD simultaneously. Figure 8 is a block diagram of the hardware needed to implement the algorithm.

Figure 8 Hardware needed for the ZVD-based control method that enables an algorithm capable of calculating the switch timing parameters to achieve ZVS and a low THD simultaneously. Source: Texas Instruments

Solving the state plane for ZVS of the resonant transitions of the GaN FET’s drain-to-source voltage (VDS) will give you the algorithm for this design. Figure 9 illustrates the GaN FET VDS, inductor current, and control signals, along with both the time-domain and state-plane plots.

Figure 9 Resonant transition state-plane solution with the GaN FET VDS, inductor current, and control signals, along with both the time-domain and state-plane plots. Source: Texas Instruments

In Figure 9’s state-plane plot:

  • “j” is the normalized current at the beginning and end of each dead-time interval
  • “m” is the normalized voltage
  • “θ” is used for the normalized timing parameters

The figure also shows the normalization relationships. The microcontroller in Figure 8 solves the state-plane system equations shown in Figure 9 such that the system achieves both ZVS and an ideal power factor. The ZVD signal provides feedback to instruct the microcontroller on how to adjust the switching frequency to meet ZVS.

Figure 10 shows the operating waveforms when the applied frequency is too low (left), ideal (center) and too high (right). You can see that both ZVD signals are present only when the applied frequency is at the ideal value; thus, varying the frequency until both ZVD flags assert will reveal the ideal operating point.

Figure 10 ZVD control waveforms when the applied frequency is too low (left), ideal (center) and too high (right). Source: Texas Instruments
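The search behavior implied by Figure 10 can be sketched as a simple iteration. This is a hypothetical simplification: the real controller solves the state-plane equations, and the direction chosen for each failure case below is an assumption for illustration only:

```python
def adjust_frequency(freq_hz: float, zvd_s1: bool, zvd_s2: bool,
                     step_hz: float = 1_000.0,
                     f_min: float = 70e3, f_max: float = 1.2e6) -> float:
    """One iteration of a simplified ZVD-driven frequency search.

    When both ZVD flags assert, the converter is at the ideal operating
    point and the frequency is held.  Otherwise the frequency is nudged
    and clamped to the converter's 70 kHz-1.2 MHz range.  Which flag
    pattern maps to which direction is assumed here, not taken from
    the actual algorithm.
    """
    if zvd_s1 and zvd_s2:
        return freq_hz                          # both FETs achieved ZVS: hold
    if not zvd_s1 and not zvd_s2:
        return min(freq_hz + step_hz, f_max)    # assumed: frequency too low
    return max(freq_hz - step_hz, f_min)        # assumed: frequency too high
```

Repeating this each switching cycle converges on the frequency at which both FETs turn on with ZVS.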

Hardware performance

Figure 11 is a photo of a two-phase 5-kW design example using GaN and the previously described algorithm.

Figure 11 Two-phase 5 kW GaN-based PFC with the hardware required to apply algorithms to achieve even higher frequencies and enhance the efficiency of the overall solution. Source: Texas Instruments

Table 1 lists the specifications for the design example.

Parameter                      Value
AC input                       208 V-264 V
Line frequency                 50-60 Hz
DC output                      400 V
Maximum power                  5 kW
Holdup time at full load       20 ms
THD                            OCP v3
Electromagnetic interference   European Norm 55022 Class A
Operating frequency            Variable, 75 kHz-1.2 MHz
Microcontroller                TMS320F280049C
High-frequency GaN FETs        LMG3526R030
Low-frequency silicon FETs     IPT60R022S7XTMA1
Internal dimensions            38 mm x 65 mm x 263 mm
Power density                  120 W/in3
Switching frequency            70 kHz-1.2 MHz

Table 1 Design specifications for the hardware example used in Figure 11.

Figure 12 shows the inductor current waveforms (ILA and ILB) and GaN FET VDS waveforms for both phases (VA and VB). The plots are at full power and illustrate three different operating conditions. In each case, you can see ZVS and a sinusoidal current envelope. The conditions for all three plots are VIN = 230VRMS, VOUT = 400V, and P = 5kW; the scope settings are 200V/div, 20A/div, and 2µs/div.

Figure 12 The inductor current waveforms (ILA and ILB) and GaN FET VDS waveforms taken at full power for: (a) VIN≪VOUT/2, (b) VIN=VOUT/2, and (c) VIN≫VOUT/2. Source: Texas Instruments

Figure 13 shows the measured efficiency and THD for a system operating with a 230VAC input across the load range.

Figure 13 Efficiency and THD of a two-phase PFC operating with a 230VAC input across the load range. Source: Texas Instruments

 Reducing the footprint of a GaN power supply

GaN switches can increase the power density of a wide variety of applications by enabling faster switching frequencies. However, the addition of technologies such as advanced control algorithms can significantly reduce the footprint of a power supply even further. For more information about the reference design example discussed in this article, see reference [2].

Brent McDonald works as a system engineer for the Texas Instruments Power Supply Design Services team, where he creates reference designs for a variety of high-power applications. Brent received a bachelor’s degree in electrical engineering from the University of Wisconsin-Milwaukee, and a master’s degree, also in electrical engineering, from the University of Colorado Boulder.


 References

  1. Texas Instruments. n.d. LMG3526R030 650-V 30-mΩ GaN FET with Integrated Driver, Protection and Zero-Voltage Detection. Accessed Jan. 22, 2024.
  2. Texas Instruments. n.d. “Variable-Frequency, ZVS, 5-kW, GaN-Based, Two-Phase Totem-Pole PFC Reference Design.” Texas Instruments reference design No. PMP40988. Accessed Jan. 22, 2024.

The advantages of coreless transformer-based isolators/drivers

Mon, 03/25/2024 - 13:24

Design options allow system designers to configure their system with the right performance, reliability, and safety considerations while meeting design cost and efficiency targets. The right design options can be even more important in high-voltage and/or high-current applications. In these high-power designs, an isolation technique with several integrated features can mean the difference between a product that meets and even exceeds customer expectations and one that generates numerous customer complaints.

For example, an integrated solid-state isolator (SSI) based on coreless transformer (CT) provides galvanic isolation with several design benefits. With integrated features such as a dynamic Miller clamp (DMC), overcurrent and overtemperature protection (OTP), under-voltage lockout protection, fast turn-on, and more, an integrated SSI driver can provide essential protection and ensure proper operation and extended life for high-power systems. These integrated protection features are not available in optical-based solid-state relays (SSRs).

Combined with the appropriate power switches, these highly integrated solid-state isolators allow designers to create custom solid-state relays capable of controlling loads in excess of 1,000 V and 100 A. The CT-based isolators transfer energy across the isolation barrier and can drive large MOSFETs or IGBTs without the added circuitry of a power supply on the isolated side. SSRs designed with these protection features can be highly reliable and extremely robust.

These coreless transformer-based isolators enable ON and OFF control, acting like a relay switch without requiring a secondary side, isolated power supply. Combined with MOSFETs and IGBTs, SSIs enable cost effective, reliable, and low power solid-state relays for a variety of applications. This includes battery management systems, power supplies, power transmission and distribution, programmable logic controllers (PLCs), industrial automation, and robotics as well as smart building applications such as heating, ventilation, and air conditioning (HVAC) controllers and smart thermostats.

Energy transfer through coreless transformer

The main design feature of an SSI device is a coreless transformer, which enables power transfer of up to 10 mW across a galvanic isolation barrier. This eliminates the need for an isolated power supply for the switch, reducing bill-of-materials (BOM) count, volume, and cost, and it provides fast turn-on/off (≤ 1 µs) to ensure that the switch stays within its safe operating area (SOA).

Figure 1 Highly integrated solid-state isolators easily drive MOSFETs or IGBTs and do not require an isolated bias supply. Source: Infineon

Integrated protection

The integrated protection features of the CT-based isolators deserve further explanation. These include overcurrent and overtemperature protection (OTP), a dynamic Miller clamp, and under-voltage lockout (latch-off) protection as well as satisfying essential industry standards.

System and switch protection

Depending on the application's needs and the product variant selected, SSIs offer overcurrent protection (OCP) as well as OTP, either via an external positive temperature coefficient (PTC) thermistor/resistor or a MOSFET's integrated direct temperature sensor.

In case of a failure event (overcurrent or overtemperature), the SSI triggers a latch-off. Once triggered, the protection reacts quickly, turning off in less than 1 μs. Furthermore, it can support the AC-15 system tests required for electromechanical relays according to IEC 60947-5-1 under appropriate operating conditions.

Overcurrent protection

When operating solid-state relays, a common problem is the handling of fast overcurrent or short circuit events in the range of 20 A/μs up to 100 A/μs. Isolation issues often result in a short circuit with an extremely high current level that is defined by the power source’s impedance and cabling resistance.

Figure 2 shows a circuit for implementing the overcurrent protection. The shunt resistor (RSh) and its inherent stray inductance (LSh) generate a voltage drop that is monitored by the current sense comparator. Noise on the grid needs to be filtered out from the shunt signal, so an external filter (CF and RF) complements the integrated filter. When the comparator triggers, it activates the fast turn-off and latches the fault, leaving the system in a safe state.

Figure 2 The above circuitry implements overcurrent protection using an isolator driver. Source: Infineon
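The two sizing relationships behind this circuit are straightforward. A quick sketch, using hypothetical component values (not taken from any Infineon datasheet):

```python
import math

def ocp_trip_current_a(v_threshold_v: float, r_shunt_ohm: float) -> float:
    """Load current at which the sense comparator trips: I = Vth / RSh."""
    return v_threshold_v / r_shunt_ohm

def filter_cutoff_hz(r_f_ohm: float, c_f_farad: float) -> float:
    """-3 dB cutoff of the external RC noise filter (RF, CF)."""
    return 1.0 / (2.0 * math.pi * r_f_ohm * c_f_farad)

# Hypothetical example: a 5 mOhm shunt with a 100 mV comparator
# threshold latches the relay off at 20 A.
trip_a = ocp_trip_current_a(0.1, 0.005)   # -> 20.0 A
```

The filter cutoff must sit well below the 20-100 A/μs slew rates mentioned above being translated into shunt voltage, yet high enough not to delay the sub-microsecond turn-off.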

Overtemperature protection

Another major known issue when operating solid-state relays is slow overload events that heat up the switches and the current sensor (shunt). Increased load current and insufficient thermal management can additionally shift the overall temperature above the power transistor's thermal limits.

Figure 3 shows an example measurement of the overtemperature protection using an isolated driver. The SSI turns off two MOSFETs with integrated temperature sensors configured in a common-source mode. The sensing MOSFET heats up from the load current until the sensor voltage decreases below the comparator trigger threshold. As a result, the SSI’s output is turned off.

Figure 3 Isolated driver’s overtemperature protection triggers within 500 ns. Source: Infineon

The lower part of Figure 3 depicts a detailed zoom into the turn-off in this measurement with a time resolution of 500 ns per division. This reduced timeframe shows that the gate is turned off in much less than 500 ns. This means that the switched transistors do not violate their safe operating area.

Dynamic Miller clamping protection

Some SSIs also have an integrated dynamic Miller clamp to protect against spurious switching due to surge voltages and fast electric transients as well as the dv/dt of the line voltage. The dv/dt applied by the connected AC voltage creates capacitive displacement currents through the parasitic capacitances of a power transistor.

This can lead to parasitic turn-on of the power switch by increasing the voltage at its gate node during its “off” state. The dynamic Miller clamping feature ensures that the power switch remains safe in the “off” state.

When failure is not an option

When matched with the appropriate power switch, the isolator drivers enable switching designs with a much lower resistance compared to optically driven/isolated solid-state solutions. This translates to longer lifespans and lower cost of ownership in system designs. As with all solid-state isolators, the devices also offer superior performance compared to electromagnetic relays, including 40% lower turn-on power loss and increased reliability due to the elimination of moving or degrading parts.

When failure is not an option, the right choice of isolation can mean the difference between design success and failure.

Dan Callen Jr. is a senior manager at Power IC Group of Infineon Technologies.

Davide Giacomini is director of marketing at Power IC Group of Infineon Technologies.

Sameh Snene is a product applications engineer at Infineon Technologies.


2-A Schottky rectifiers occupy tiny footprint

Fri, 03/22/2024 - 15:44

Three trench Schottky rectifiers from Diodes deliver 2 A with low forward voltage drop in chip-scale packages that require just 0.84 mm2 of PCB space. The SDT2U30CP3 (30 V/2 A), SDT2U40CP3 (40 V/2 A), and SDT2U60CP3 (60 V/2 A) can be used as blocking, boost, switching, or reverse-protection diodes in portable, mobile, and wearable devices.

The rectifiers come in 1.4×0.6-mm X3-DSN1406-2 packages, with a typical profile of 0.25 mm. According to the manufacturer, they are among the smallest in their class. Their low forward voltage drop of 480 mV maximum (580 mV for the SDT2U60CP3) minimizes conduction losses and improves efficiency. Additionally, the devices’ avalanche capability allows them to rapidly respond to voltage spikes to protect electronic circuits from damage.
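The efficiency claim follows directly from the forward drop. A first-order estimate (the helper function is illustrative, not from a Diodes design note):

```python
def conduction_loss_w(v_f: float, i_avg_a: float) -> float:
    """First-order diode conduction loss: P ~= VF x Iavg."""
    return v_f * i_avg_a

# At the 2 A rating, the 480 mV maximum drop dissipates about 0.96 W,
# versus about 1.16 W at the SDT2U60CP3's 580 mV maximum.
p_30v = conduction_loss_w(0.48, 2.0)
p_60v = conduction_loss_w(0.58, 2.0)
```

In a 0.84 mm2 chip-scale package, shaving even a few hundred milliwatts of dissipation matters for thermal design.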

The SDT2U30CP3, SDT2U40CP3, and SDT2U60CP3 rectifiers cost $0.16, $0.17, and $0.19 each, respectively, in lots of 2500 units. They are lead-free and fully compliant with RoHS 3.0 standards.

SDT2U30CP3 product page

SDT2U40CP3 product page

SDT2U60CP3 product page

Diodes

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


Kyocera AVX rolls out expansive line of capacitors

Fri, 03/22/2024 - 15:44

Wet aluminum electrolytic capacitors in the AEF series from Kyocera AVX come in 11 different case sizes with capacitance ratings from 2.2 µF to 470 µF. Voltage ratings for the V-chip (can-type) capacitors range from 6.3 VDC to 400 VDC.

Targeting a broad range of industrial and consumer electronics applications, the components can be surface-mounted on high-density PCBs. The series comprises 59 variants in case sizes spanning 0608 to 1216. They exhibit low direct current leakage (DCL) and low equivalent series resistance (ESR), which enables higher tolerance for ripple currents. Capacitance tolerance is ±20%.

AEF series capacitors are available for operation over two temperature ranges: -40°C to +105°C and -55°C to +105°C. They have a lifetime of 6000 hours at +105°C and rated voltages. The devices are supplied with pure tin terminations on 13-in. or 15-in. reels compatible with automated assembly equipment. Standard lead time is 24 weeks.

AEF series product page

Kyocera AVX 



Plastic ARM-based microcontroller is space-ready

Fri, 03/22/2024 - 15:43

Frontgrade Technologies has developed a plastic-encapsulated version of its UT32M0R500 radiation-tolerant microcontroller aimed at space missions. Built around a 32-bit Arm Cortex-M0+ core, the plastic UT32M0R500 is set for flight grade production in July 2024 after meeting NASA’s PEM INST-001 Level 2 qualification.

Housed in a 14.5×14.5-mm, 143-pin plastic BGA package, the UT32M0R500 offers the same I/O configuration and features as its ceramic QML counterpart. It tolerates up to 50 krads of total ionizing dose (TID) radiation. For design flexibility, the device combines two independent CAN 2.0B controllers with mission read/write flash memory and system-on-chip functionality. This integration enables designers to manage board utilization while reducing both cost and complexity.

“The proliferation of satellites for LEO missions is increasing the demand for highly reliable components with efficient SWaP-C characteristics and radiation assurances,” said Dr. J. Mitch Stevison, president and CEO of Frontgrade Technologies. “Adding another plastic device to our portfolio that is qualified to NASA’s Space PEM Level 2 strengthens our position as a trusted provider of high reliability, radiation-assured devices for critical space missions.”

The UT32M0R500 is supported by Arm’s Keil suite of embedded development tools.

UT32M0R500 product page

Frontgrade Technologies



Image sensor elevates smartphone HDR

Fri, 03/22/2024 - 15:42

Omnivision’s OV50K40 smartphone image sensor with TheiaCel technology achieves human eye-level high dynamic range (HDR) with a single exposure. Initially introduced in automotive image sensors, TheiaCel employs lateral overflow integration capacitors (LOFIC) to provide superior single-exposure HDR, regardless of lighting conditions.

The OV50K40 50-Mpixel image sensor features a 1.2-µm pixel in a 1/1.3-in. optical format. High gain and correlated multiple sampling enable optimal performance in low-light conditions. At 50 Mpixels, the sensor has a maximum image transfer rate of 30 fps. Using 4-cell pixel binning, the OV50K40 delivers 12.5 Mpixels at 120 fps, dropping to 60 fps in HDR mode but with a fourfold increase in sensitivity.

To achieve high-speed autofocus, the OV50K40 offers quad phase detection (QPD). This enables 2×2 phase detection autofocus across the sensor’s entire image array for 100% coverage. An on-chip QPD remosaic enables full 50-Mpixel Bayer output, 8K video, and 2x crop-zoom functionality.

The OV50K40 image sensor is now in mass production.

OV50K40 product page  

Omnivision



Snapdragon SoC brings AI to more smartphones

Fri, 03/22/2024 - 15:42

Qualcomm’s Snapdragon 8s Gen 3 SoC offers select features of the high-end Snapdragon 8 Gen 3 for a wider range of premium Android smartphones. The less expensive 8s Gen 3 chip provides on-device generative AI and an always-sensing image signal processor (ISP).

The SoC’s AI engine supports multimodal AI models comprising up to 10 billion parameters, including large language models (LLMs) such as Baichuan-7B, Llama 2, Gemini Nano, and Zhipu ChatGLM. Its Spectra 18-bit triple cognitive ISP offers AI-powered features like photo expansion, which intelligently fills in content beyond a capture’s original aspect ratio.

The Snapdragon 8s Gen 3 is slightly slower than the Snapdragon 8 Gen 3, and it has one less performance core. The 8s variant employs an Arm Cortex-X4 prime core running at 3 GHz, along with four performance cores operating at 2.8 GHz and three efficiency cores clocked at 2 GHz.

Snapdragon 8s Gen 3 will be adopted by key smartphone OEMs, including Honor, iQOO, Realme, Redmi, and Xiaomi. The first devices powered by the 8s Gen 3 are expected as soon as this month.

Snapdragon 8s Gen 3 product page

Qualcomm Technologies



The role of cache in AI processor design

Fri, 03/22/2024 - 09:34

Artificial intelligence (AI) is making its presence felt everywhere these days, from the data centers at the Internet’s core to sensors and handheld devices like smartphones at the Internet’s edge and every point in between, such as autonomous robots and vehicles. For the purposes of this article, we recognize the term AI to embrace machine learning and deep learning.

There are two main aspects to AI: training, which is predominantly performed in data centers, and inferencing, which may be performed anywhere from the cloud down to the humblest AI-equipped sensor.

AI is a greedy consumer of two things: computational processing power and data. In the case of processing power, OpenAI, the creator of ChatGPT, published the report AI and Compute, showing that since 2012, the amount of compute used in large AI training runs has doubled every 3.4 months with no indication of slowing down.

With respect to memory, a large generative AI (GenAI) model like ChatGPT-4 may have more than a trillion parameters, all of which need to be easily accessible in a way that allows the system to handle numerous requests simultaneously. In addition, one needs to consider the vast amounts of data that need to be streamed and processed.

Slow speed

Suppose we are designing a system-on-chip (SoC) device that contains one or more processor cores. We will include a relatively small amount of memory inside the device, while the bulk of the memory will reside in discrete devices outside the SoC.

The fastest type of memory is SRAM, but each SRAM cell requires six transistors, so SRAM is used sparingly inside the SoC because it consumes a tremendous amount of space and power. By comparison, DRAM requires only one transistor and capacitor per cell, which means it consumes much less space and power. Therefore, DRAM is used to create bulk storage devices outside the SoC. Although DRAM offers high capacity, it is significantly slower than SRAM.

As the process technologies used to develop integrated circuits have evolved to create smaller and smaller structures, most devices have become faster and faster. Sadly, this is not the case with the transistor-capacitor bit-cells that lie at the heart of DRAMs. In fact, due to their analog nature, the speed of bit-cells has remained largely unchanged for decades.

Having said this, the speed of DRAMs, as seen at their external interfaces, has doubled with each new generation. Since each internal access is relatively slow, the way this has been achieved is to perform a series of staggered accesses inside the device. If we assume we are reading a series of consecutive words of data, it will take a relatively long time to receive the first word, but we will see any succeeding words much faster.

This works well if we wish to stream large blocks of contiguous data because we take a one-time hit at the start of the transfer, after which subsequent accesses come at high speed. However, problems occur if we wish to perform multiple accesses to smaller chunks of data. In this case, instead of a one-time hit, we take that hit over and over again.
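The asymmetry between streaming and scattered access can be put in numbers. A minimal cost model, using the 70 ns first-access figure cited in the next section and an assumed illustrative per-word burst time:

```python
def burst_read_ns(words: int, first_word_ns: float = 70.0,
                  next_word_ns: float = 1.25) -> float:
    """Time to read consecutive DRAM words: one first-word penalty,
    then fast burst delivery.  70 ns matches the DDR4 figure quoted
    below; the 1.25 ns burst rate is an assumption for illustration."""
    return first_word_ns + (words - 1) * next_word_ns

# One 64-word stream pays the penalty once:
streamed = burst_read_ns(64)        # 148.75 ns total
# Sixteen scattered 4-word reads pay it sixteen times:
scattered = 16 * burst_read_ns(4)   # 1180.0 ns for the same 64 words
```

Same data volume, roughly an 8x difference, which is exactly the "hit taken over and over again" described above.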

More speed

The solution is to use high-speed SRAM to create local cache memories inside the processing device. When the processor first requests data from the DRAM, a copy of that data is stored in the processor’s cache. If the processor subsequently wishes to re-access the same data, it uses its local copy, which can be accessed much faster.

It’s common to employ multiple levels of cache inside the SoC. These are called Level 1 (L1), Level 2 (L2), and Level 3 (L3). The first cache level has the smallest capacity but the highest access speed, with each subsequent level having a higher capacity and a lower access speed. As illustrated in Figure 1, assuming a 1-GHz system clock and DDR4 DRAMs, it takes only 1.8 ns for the processor to access its L1 cache, 6.4 ns to access the L2 cache, and 26 ns to access the L3 cache. Accessing the first in a series of data words from the external DRAMs takes a whopping 70 ns (data source: Joe Chang’s Server Analysis).

Figure 1 Cache and DRAM access speeds are outlined for 1 GHz clock and DDR4 DRAM. Source: Arteris
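These latencies combine into an average memory access time (AMAT) via the standard recurrence AMAT = L1 + m1·(L2 + m2·(L3 + m3·DRAM)). A short sketch using Figure 1's latencies; the miss rates in the example are assumed for illustration, not measured:

```python
def amat_ns(latencies_ns, miss_rates, dram_ns: float = 70.0) -> float:
    """Average memory access time for a cache hierarchy:
    AMAT = L1 + m1*(L2 + m2*(L3 + m3*DRAM)).

    `latencies_ns` and `miss_rates` are ordered from L1 outward;
    a miss at the last level falls through to DRAM.
    """
    t = dram_ns
    # Fold the hierarchy from the outermost level inward.
    for latency, miss in zip(reversed(list(latencies_ns)),
                             reversed(list(miss_rates))):
        t = latency + miss * t
    return t

# Figure 1's latencies with assumed miss rates of 10/20/30%:
# amat_ns([1.8, 6.4, 26.0], [0.1, 0.2, 0.3]) -> 3.38 ns
```

Even with modest hit rates, the effective access time lands near the L1 figure rather than the 70 ns DRAM penalty, which is the entire argument for caching.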

The role of cache in AI

There are a wide variety of AI implementation and deployment scenarios. In the case of our SoC, one possibility is to create one or more AI accelerator IPs, each containing its own internal caches. Suppose we wish to maintain cache coherence, which we can think of as keeping all copies of the data the same, between the accelerators and the SoC’s processor clusters. Then we will have to use a hardware cache-coherent solution in the form of a coherent interconnect, like CHI as defined in the AMBA specification and supported by Ncore network-on-chip (NoC) IP from Arteris IP (Figure 2a).

Figure 2 The above diagram shows examples of cache in the context of AI. Source: Arteris

There is an overhead associated with maintaining cache coherence. In many cases, the AI accelerators do not need to remain cache coherent to the same extent as the processor clusters. For example, it may be that only after a large block of data has been processed by the accelerator that things need to be re-synchronized, which can be achieved under software control. The AI accelerators could employ a smaller, faster interconnect solution, such as AXI from Arm or FlexNoC from Arteris (Figure 2b).

In many cases, the developers of the accelerator IPs do not include cache in their implementation. Sometimes, the need for cache wasn’t recognized until performance evaluations began. One solution is to include a special cache IP between an AI accelerator and the interconnect to provide an IP-level performance boost (Figure 2c). Another possibility is to employ the cache IP as a last-level cache to provide an SoC-level performance boost (Figure 2d). Cache design isn’t easy, but designers can use configurable off-the-shelf solutions.

Many SoC designers tend to think of cache only in the context of processors and processor clusters. However, the advantages of cache are equally applicable to many other complex IPs, including AI accelerators. As a result, the developers of AI-centric SoCs are increasingly evaluating and deploying a variety of cache-enabled AI scenarios.

Frank Schirrmeister, VP solutions and business development at Arteris, leads activities in the automotive, data center, 5G/6G communications, mobile, aerospace and data center industry verticals. Before Arteris, Frank held various senior leadership positions at Cadence Design Systems, Synopsys and Imperas.


Workarounds (and their tradeoffs) for integrated storage constraints

Thu, 03/21/2024 - 16:14

Over the Thanksgiving 2023 holiday weekend, I decided to retire my trusty silver-color early-2015 13” MacBook Pro, which was nearing software-induced obsolescence, suffering from a Bluetooth audio bug, and more generally starting to show its age performance- and other-wise. I replaced it with a “space grey” color scheme 2020 model, still Intel x86-based, which I covered in detail in one of last month’s posts.

Over the subsequent Christmas-to-New Year’s week, once again taking advantage of holiday downtime, I decided to retire my similarly long-in-use silver late-2014 Mac mini, too. Underlying motivations were similar; pending software-induced obsolescence, plus increasingly difficult-to-overlook performance shortcomings (due in no small part to the system’s “Fusion” hybrid storage configuration). Speed limitations aside, the key advantage of this merged-technology approach had been its cost-effective high capacity: a 1 TByte HDD, visible and accessible to the user, behind-the-scenes mated by the operating system to 128 GBytes of flash memory “cache”.

Its successor was again Intel-based (as with its laptop-transition precursor, the last of the x86 breed) and space grey in color; a late-2018 Mac mini:

This particular model, versus its Apple Silicon successors, was notable (as I’ve mentioned before) for its comparative abundance of back-panel I/O ports:

And this specific one was especially attractive in nearly all respects (thereby rationalizing my mid-2023 purchase of it from Woot!). It was brand new, albeit not an AppleCare Warranty candidate (instead, I bought an inexpensive extended warranty from Asurion via Woot! parent company Amazon). It was only $449 plus tax after discounts. It included the speediest-available Intel Core i7-8700B 6-core (12-thread via Hyper-Threading) 3.2 GHz CPU option, capable of boost-clocking to 4.1 GHz. And it also came with 32 GBytes of 2666 MHz DDR4 SDRAM which, being user-accessible SoDIMM-based (unlike the soldered-down memory in its predecessor), was replaceable and even further upgradeable to 64 GBytes max.

Note, however, my prior allusion to this new system not being attractive in all respects. It only included a 128 GByte integrated SSD, to be precise. And, unlike this system’s RAM (or the SSD in the late 2014 Mac mini predecessor, for that matter), its internal storage capacity wasn’t user-upgradeable. I’d figured that similar to my even earlier mid-2011 Mac mini model, I could just boot from a tethered external drive instead, and that may still be true (online research is encouraging). However, this time I decided to first try some options I’d heard about for relocating portions of my app suite and other files while keeping the original O/S build internal and intact.

I’ve subsequently endured no shortage of dead-end efforts courtesy of latest operating system limitations coupled with applications’ shortsightedness, along with experiments that functionally worked but ended up being too performance-sapping or too little capacity-freeing to be practical. However, after all the gnashing of teeth, I’ve come up with a combination of techniques that will, I think, deliver a long-term usable configuration (then again, I haven’t attempted a major operating system update yet, so don’t hold me to that prediction). I’ve learned a lot along the way, which I hope will not only be helpful to other MacOS users but, thanks to MacOS’s BSD Unix underpinnings, may also be relevant to those of you running Linux, Android, Chrome OS, and other PC and embedded Unix-based operating systems.

Let’s begin with a review of my chosen external-storage hardware. Initially, I thought I’d just tether a Thunderbolt 3 external SSD (such as the 2TB Plugable drive that I picked up from B&H Photo Video on sale a year ago for $219) to the Mac mini, and that remains a feasible option:

However, I decided to “kill two birds with one stone” by beefing up the Mac mini’s expansion capabilities in the process. Specifically, I initially planned on going with one of Satechi’s aluminum stand and hubs. The baseline-feature set one that color-matches my Mac mini’s space grey scheme has plenty of convenient-access front-panel connections, but that’s it:

Its “bigger brother” additionally supports embedding a SATA (or, more recently, NVMe) M.2 format SSD, but connectivity is the same USB-C as before, at 5 Gbps (10 Gbps on more recent versions): ok for tethering peripherals, not so much for directly running apps from mass storage. Plus, it only came in a silver color scheme (ok for Apple Silicon Mac minis, not so much for x86-based ones):

So, what did I end up with? I share the following photo with no shortage of chagrin:

In the middle is the Mac mini. Above it is a Windows Dev Kit 2023, aka “Project Volterra,” an Arm- (Qualcomm Snapdragon 8cx Gen 3, to be precise, two SoC steppings newer than the Gen 1 in my Surface Pro X) and Windows 11-based mini PC, which I’ll say more about in a future post.

And at the bottom of the stack is my external storage solution—dual-storage, to be precise—an OWC MiniStack STX in its original matte black color scheme (it now comes in silver, too).

Does it color-match the Mac mini? No, even putting aside the glowing blue OWC-logo orb on the front panel. And speaking of the front panel, are there any easily user-accessible expansion capabilities? Again, no. In fact, the only expansion ports offered are three more Thunderbolt 3 ones around back…the fourth there connects to the computer. But Thunderbolt 3’s 40 Gbps bandwidth is precisely what drove my decision to go with the OWC MiniStack STX, aided by the fact that I’d found a gently used one on eBay at substantial discount from MSRP.

Inside, I’ve installed a 2 TByte Samsung 980 Pro PCIe 4.0 NVMe SSD which I bought for $165.59 used at Amazon Warehouse a year ago (nowadays, new ones sell for the same price…sigh…):

alongside a 2 TByte Kingston KC600 2.5” SATA SSD:

They appear as separate external drives on system bootup, and the performance results are nothing to sneeze at. Here’s the Samsung NVMe PCIe 4.0 SSD (the enclosure’s interface to the SSD, by the way, is “only” PCIe 3.0; it’s leaving storage performance potential “on the table”):

and here’s the Kingston, predictably a bit slower due to its SATA III interface and command set (therefore rationalizing why I’ve focused my implementation attention on the Samsung so far):

For comparison, here’s the Mac mini’s internal SSD:

The Samsung holds its own from a write performance standpoint but is more than 3x slower on reads, rationalizing my strategy to keep as much content as possible on internal storage. To wit, how did I decide to proceed, after quickly realizing (mid-system setup) that I’d fill up the available internal 128 GBytes well prior to getting my full desired application suite installed?

(Abortive) Step 1: Move my entire user account to external storage

Quoting from the above linked article:

In UNIX operating systems, user accounts are stored in individual folders called the user folder. Each user gets a single folder. The user folder stores all of the files associated with each user, and settings for each user. Each user folder usually has the system name of the user. Since macOS is based on UNIX, users are stored in a similar manner. At the root level of your Mac’s Startup Disk you’ll see a number of OS-controlled folders, one of which is named Users.

Move (copy first, then delete the original afterwards) an account’s folder structure elsewhere (to external storage, in this case), then let the underlying operating system know what you’ve done, and, as my experience exemplifies, you can free up quite a lot of internal storage capacity.

Keep in mind that when you relocate your user home folder, it only moves the home folder – the rest of the OS stays where it was originally.

One other note, which applies equally to other relocation stratagems I subsequently attempted, and which perhaps goes without saying…but just to cover all the bases:

Consider that when you move your home folder to an external volume, the connection to that volume must be perfectly reliable – meaning both the drive and the cable connecting the drive to your Mac. This is because the home folder is an integral part of macOS, and it expects to be able to access files stored there instantly when needed. If the connection isn’t perfectly reliable, and the volume containing the home folder disappears even for a second, strange and undefined behavior may result. You could even lose data.

That all being said, everything worked great (with the qualifier that initial system boot latency was noticeably slower than before, albeit not egregiously so), until I noticed something odd. Microsoft’s OneDrive client indicated that it had successfully synced all the cloud-resident information in my account, but although I could then see a local clone of the OneDrive directory structure, all of the files themselves were missing, or at least invisible.

This is, it turns out, a documented side effect of Apple’s latest scheme for handling cloud storage services. External drives that self-identify as capable of being “ejectable” can’t be used as OneDrive sync destinations (unless, perhaps, you first boot the system from them…dunno). And the OneDrive sync destination is mirrored within the user’s account directory structure. My initial response was “fine, I’ll bail on OneDrive”. It turns out, however, that Dropbox (on which I’m much more reliant) is, out of operating system support necessity, going down the same implementation-change path. Scratch that idea.

Step 2: Install applications to external storage

This one seems intuitively obvious, yes? Reality proved much more complicated and ultimately limited in its effectiveness, however. Most applications I wanted to use that had standalone installers, it turns out, didn’t even give me an option to install anywhere but internal storage. And for the ones that did give me that install-redirect option…well, please take a look at this Reddit thread I started and eventually resolved, and then return to this writeup afterwards.

Wild, huh? That said, many MacOS apps don’t have separate installer programs; you just open a DMG (disk image) file and then drag the program icon inside (behind which is the full program package) to the “Applications” folder or anywhere else you choose. This led to my next idea…

Step 3: Move already-installed applications to external storage

As previously mentioned, “hiding” behind an application’s icon is the entire package structure. Generally speaking, you can easily move that package structure intact elsewhere (to external storage, for example) and it’ll still run as before. The problem, I found out, comes when you subsequently try to update such applications, specifically where a separate updater utility is involved. Take Apple’s App Store, for example. If you download and install apps using it (which is basically the only way to accomplish this) but you then move those apps elsewhere, the App Store utility can no longer “find” them for update purposes. The same goes for Microsoft’s (sizeable, alas) Office suite. In these and other cases, ongoing use of internal storage is requisite (along with trimming down the number of installed App Store- and Office suite-sourced applications to the essentials). Conversely, apps with integrated update facilities, such as Mozilla’s Firefox and Thunderbird, or those that you update by downloading and swapping in a new full-package version, upgrade fine post-move.

Step 4: Move data files, download archives, etc. to external storage

I mentioned earlier that Mozilla’s apps (for example) are well-behaved from a relocation standpoint. I was specifically referring to the programs themselves. Both Firefox and Thunderbird also create user profiles, which by default are stored within the MacOS user account folder structure, and which can be quite sizeable. My Firefox profile, for example, is just over 3 GBytes in size (including the browser cache and other temporary files), while my Thunderbird profile is nearly 9 GBytes (I’ve been using the program for a long time, and I also access email via POP3—which downloads messages and associated file attachments to my computer—vs IMAP). Fortunately, by tweaking the entries in both programs’ profiles.ini files, I’ve managed to redirect the profiles to external storage. Both programs now launch more slowly than before, due to the aforementioned degraded external drive read performance, but they then run seemingly as speedy as before, thanks to the aforementioned comparable write performance. And given that they’re perpetually running in the background as I use the computer, the launch-time delay is a one-time annoyance at each (rare) system reboot.
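For the record, the redirect is a small edit to profiles.ini (on MacOS, it lives under ~/Library/Application Support/Firefox/): set IsRelative to 0 and point Path at an absolute location. The volume and profile-folder names below are hypothetical stand-ins, not my actual setup:

```
[Profile0]
Name=default-release
IsRelative=0
Path=/Volumes/ExternalSSD/MozillaProfiles/abcd1234.default-release
Default=1
```

Thunderbird’s profiles.ini works the same way.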

Similarly, I’ve redirected my downloaded-files default (including a sizeable archive of program installers) to external storage, along with an encrypted virtual drive that’s necessary for day-job purposes. I find, in cases like these, that creating an alias from the old location to the new is a good reminder of what I’ve previously done, if I subsequently find myself scratching my head because I can’t find a particular file or folder.
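For command-line folks, a Unix symbolic link (a close cousin of the Finder alias) serves the same breadcrumb purpose. The sketch below demonstrates the idea with throwaway temp directories; substitute your real folder and external-volume paths:

```shell
# Make a throwaway "new" and "old" location (illustrative stand-ins
# for, say, /Volumes/ExternalSSD/Downloads and ~/Downloads).
new="$(mktemp -d)/Downloads"
mkdir -p "$new"
old="$(mktemp -d)/Downloads"

# Leave a pointer at the old name that resolves to the new home.
ln -s "$new" "$old"
readlink "$old"    # prints the real location the link points to
```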

The result

By doing all the above (steps 2-4, to be precise), I’ve relocated more than 200 GBytes (~233 GBytes at the moment, to be precise) of files to external storage, leaving me with nearly 25% free in my internal storage (~28 GBytes at the moment, to be precise). See what I meant when I earlier wrote that in the absence of relocation success, I’d “fill up the available 128 GBytes well prior to getting my full desired application suite installed”? I should clarify that “nearly 25% free storage” comment, by the way…it was true until I got the bright idea to command-line install recently released Wine 9, which restores MacOS compatibility (previously lost with the release of 64-bit-only MacOS 10.15 Catalina in October 2019)…which required that I first command-line install the third-party Homebrew package manager…which also involved command-line installing the Xcode Command Line Tools…all of which installed by default to internal storage, eating up ~10 GBytes (I’ll eventually reverse my steps and await a standalone, more svelte package installer for Wine 9 to hopefully come).

Thoughts on my experiments and their outcomes? Usefulness to other Unix-based systems? Anything else you want to share? Let me know in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content


The post Workarounds (and their tradeoffs) for integrated storage constraints appeared first on EDN.

Silicon carbide (SiC) counterviews at APEC 2024

Thu, 03/21/2024 - 11:06

At this year’s APEC in Long Beach, California, Wolfspeed CEO Gregg Lowe’s speech was a major highlight of the conference program. Lowe, the chief of the only vertically integrated silicon carbide (SiC) company and cheerleader of this power electronics technology, didn’t disappoint.

In his plenary presentation, “The Drive for Silicon Carbide – A Look Back and the Road Ahead – APEC 2024,” he called SiC a market hitting a major inflection point. “It’s a story of four decades of American ingenuity at work, and it’s safe to say that the transition from silicon to SiC is unstoppable.”

Figure 1 Lowe: The future of this amazing technology is only beginning to dawn on the world at large, and within the next decade or so, we will look around and wonder how we lived, traveled, and worked without it. Source: APEC

Lowe told the APEC 2024 attendees that the demand for SiC is exploding, and so is the number of applications using this wide bandgap (WBG) technology. “Technology transitions like this create moments and memories that last a lifetime, and that’s where we are with SiC right now.”

Interestingly, just before Lowe’s presentation, Balu Balakrishnan, chairman and CEO of Power Integrations, raised questions about the viability of SiC technology during his presentation titled “Innovating for Sustainability and Profitability”.

Balakrishnan’s counterviews

While telling Power Integrations’ gallium nitride (GaN) story, Balakrishnan narrated how his company started heavily investing in SiC 15 years ago and spent $65 million to develop this WBG technology. “One day, sitting in my office, while doing the math, I realized this isn’t going to work for us because of the amount of energy it takes to manufacture SiC and that the cost of SiC is so much more than silicon,” he said.

“This technology will never be as cost-effective as silicon despite its better performance because it’s such a high-temperature material, which takes a humongous amount of energy,” Balakrishnan added. “It requires expensive equipment because you manufacture SiC at very high temperatures.”

The next day, Power Integrations cancelled its SiC program and wrote off $65 million. “We decided to discontinue not because of technology, but because we believe it’s not sustainable and it’s not going to be cost-effective,” he said. “That day, we switched over to GaN and doubled down on it because it’s low-temperature, operates at temperatures similar to silicon, and mostly uses the same equipment as silicon.”

Figure 2 Balakrishnan: GaN will eventually be less expensive than silicon for high-voltage switches. Source: APEC

So, why does Power Integrations still have SiC product offerings? Balakrishnan acknowledged that SiC can go to higher voltages and power levels and is a more mature technology than GaN because it started earlier.

“There are certain applications where SiC is very attractive today, but I’ll dare to say that GaN will get there sometime in the future,” he added. “Fundamentally, there isn’t anything wrong with taking GaN to higher voltages and power levels.” He mentioned a 1,200-V GaN device Power Integrations recently announced and claimed that his company plans to announce another GaN device with an even higher voltage very soon.

Balakrishnan recognized that there are problems to be solved. “But these challenges require R&D efforts rather than a technology breakthrough,” he said. “We believe that GaN will get to the point where it’ll be very competitive with SiC while being far less expensive to build.”

Lowe’s defense

In his speech, Lowe also recognized the SiC-related cost and manufacturability issues, calling them near-term turbulence. However, he was optimistic that undersupply vs demand issues encompassing crystal boules, substrate capability, wafering, and epi will be resolved by the end of this decade.

“We will continue to realize better economic value with SiC by moving from 150-mm to 200-mm wafers, which increases the area by 1.7x and decreases the cost by about 40%,” he said. His hopes for resolving cost and manufacturability issues also seemed to lie in a huge investment in SiC technology and the automotive industry as a major catalyst.

For a reality check on these counterviews about the viability of SiC, a company active in both SiC and GaN businesses could offer a balanced perspective. Hence a stop at Navitas’ booth at APEC 2024, where the company’s VP of corporate marketing, Stephen Oliver, explained the evolution of SiC wafer costs.

He said a 6-inch SiC wafer from Cree cost nearly $3,000 in 2018. Fast forward to 2024, a 7-inch wafer from Wolfspeed (renamed from Cree) costs about $850. Moving forward, Oliver envisions that the cost could come down to $400 by 2028 while being built on 12-inch to 15-inch SiC wafers.

Navitas, a pioneer in the GaN space, acquired startup GeneSiC in 2022 to cater to both WBG technologies. At the show, in addition to Gen-4 GaNSense Half-Bridge ICs and GaNSafe, which incorporates circuit protection functionality, Navitas also displayed Gen-3 Fast SiC power FETs.

In the final analysis, Oliver’s viewpoint about SiC tilted toward Lowe’s pragmatism regarding SiC’s shift from 150-mm to 200-mm wafers. Recent technology history is a testament to how economies of scale can manage cost and manufacturability issues, and that’s what the SiC camp is counting on.

A huge investment in SiC device innovation and the backing of the automotive industry should also be helpful along the way.

Related Content


The post Silicon carbide (SiC) counterviews at APEC 2024 appeared first on EDN.

A self-testing GPIO

Wed, 03/20/2024 - 15:49

General purpose input-output (GPIO) pins are the simplest peripherals.

The link to an object under control (OUC) may become unreliable for many reasons: loss of contact, a short circuit, temperature stress, or vapor condensation on the components. Sometimes the link can be verified with a popular bridge chip by simply exploiting the possibilities provided by the chip itself.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The bridge, such as NXP’s SC18IM700, usually provides a number of GPIO pins, which are handy for implementing such a test. These GPIOs retain all their functionality and can be used as usual after the test.

To make the test possible, the chip must have more than one GPIO. The pins can then be paired, giving the members of each pair the opportunity to poll each other.

Since GPIO activity during the test may disrupt the regular functions of the OUC, one of the GPIO pins can be assigned to temporarily inhibit those functions. Very often, when the controlled object is fairly inertial, this inhibition may be omitted.

Figure 1 shows how the idea can be implemented in the case of the SC18IM700 UART-I2C bridge.

Figure 1: Self-testing GPIO using the SC18IM700 UART-I2C bridge.

The values of resistors R1…R4 must be large enough not to lead to unacceptably large currents; on the other hand, they should provide sufficient voltage for a logic “1” at the input. The values shown in Figure 1 are good for most applications but may need to be adjusted.

Some difficulties may arise only with a quasi-bidirectional output configuration, since in this configuration the pin is only weakly driven when the port outputs a logic HIGH. The problem may occur when the resistance of the corresponding OUC input is too low.

If the data rate of the UART output is too high for proper charging of the OUC-related capacitance during the test, the rate can be decreased, or the corresponding resistor values can be reduced.

A sketch of the Python subroutine follows:

PortConf1 = 0x02
PortConf2 = 0x03

def selfTest():
    # 'bridge' is the host-side driver object for the SC18IM700 (defined elsewhere)
    # configure the GPIO pairs: one side drives, the other listens
    data = 0b10011001
    bridge.writeRegister(PortConf1, data)  # PortConfig1
    data = 0b10100101
    bridge.writeRegister(PortConf2, data)  # PortConfig2
    # --- write 1
    cc = 0b11001100
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()  # expect 0b11111111
    if aa != 0b11111111:
        return False  # check
    # --- write 0
    cc = 0b00000000
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b00000000:
        return False  # check
    # partners swap roles
    data = 0b01100110
    bridge.writeRegister(PortConf1, data)  # PortConfig1
    data = 0b01011010
    bridge.writeRegister(PortConf2, data)  # PortConfig2
    # --- write 1
    cc = 0b00110011
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b11111111:
        return False  # check
    # --- write 0
    cc = 0b00000000
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b00000000:
        return False  # check
    # check quasi-bidirectional configuration
    data = 0b01000100
    bridge.writeRegister(PortConf1, data)  # PortConfig1
    data = 0b01010000
    bridge.writeRegister(PortConf2, data)  # PortConfig2
    # --- write 1
    cc = 0b00110011
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b11111111:
        return False  # check
    # --- write 0
    cc = 0b00000000
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b00000000:
        return False  # check
    return True

Peter Demchenko studied math at the University of Vilnius and has worked in software development.

Related Content


The post A self-testing GPIO appeared first on EDN.

15-bit voltage-to-time ADC for “Proper Function” anemometer linearization

Tue, 03/19/2024 - 15:55

A while back I published a simple design idea for a thermal airspeed sensor based on a self-heated Darlington transistor pair. The resulting sensor is simple, sensitive, and solid-state, but suffers from a radically nonlinear airspeed response, as shown in Figure 1.

Figure 1 The Vout versus airspeed response of the thermal sensor is very nonlinear.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Veteran design idea contributor Jordan Dimitrov has provided an elegant computational numerical solution for the problem that makes the final result nearly perfectly linear. He details it in Proper function linearizes a hot transistor anemometer with less than 0.2 % error.

However, a consequence of performing linearization in the digital domain, after analog-to-digital conversion, is a significant increase in required ADC resolution, e.g., from 11 bits to 15. Here’s why…

Acquisition of a linear 0 to 2000 fpm airspeed signal resolved to 1 fpm would require an ADC resolution of 1 in 2000 = 11 bits. But inspection of Figure 1’s curve reveals that, while the full scale span of the airspeed signal is 5 V, the signal change associated with an airspeed increment of 1999 fpm to 2000 fpm is only 0.2 mV. Thus, to keep the former on scale while resolving the latter, needs a minimum ADC resolution of:

 1 in (5 / 0.0002) = 1 in 25,000 = 14.6 bits
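That arithmetic is easy to double-check numerically; a quick Python sanity check of the figures above:

```python
import math

full_scale_v = 5.0          # 0 to 5 V sensor output span
smallest_delta_v = 0.0002   # 0.2 mV change from 1999 to 2000 fpm

levels = full_scale_v / smallest_delta_v  # distinguishable levels needed
bits = math.log2(levels)                  # equivalent ADC resolution

print(levels, bits)  # ~25000 levels, ~14.6 bits
```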

15-bit (and higher resolution) ADCs are neither rare nor especially expensive, but they’re not usually integrated peripherals inside microcontrollers as mentioned in Mr. Dimitrov’s article. So, it seems plausible that a significant cost might be associated with provision of an ADC with resolution adequate for his design. I wondered about what alternatives might exist.

Here’s a design for a simple and cheap high-resolution ADC built around an old, inexpensive, and widely available friend: the 555 analog timer chip. 

See Figure 2 for the schematic.

Figure 2 High resolution voltage-to-time ADC suitable for self-heated transistor anemometer linearization. An asterisk denotes precision components (1% tolerance).

 Signal acquisition begins with the R2, R3, U1 summation network combining the 0 to 5 V input signal with U1’s 2.5 V precision reference to form:

V1 = (Vin + 2.5 V)/2 = 1.25 V to 3.75 V = (1 to 3) * 1.25 V

 V1 accumulates on C1 between conversion cycles with a time constant of:

(R2R3/(R2 + R3) + R1)C1 = 1.1 MΩ * 0.039 µF = 42.9 ms

 Thus, for 16 bit accuracy, a minimum settling time is required of:

42.9 ms * LOGe(2^16) ≈ 480 ms

 The actual conversion cycle can then be started by inputting a CONVERT command pulse (>2.5 V amplitude and >1 µs duration) to the 555 Vth (threshold) pin 6 as illustrated in Figure 3.

 Figure 3 ADC cycle begins with a CONVERT Vth pulse that generates an OUT pulse of duration Tout = LOGe(V1 / 1.25 V)R1C1.

The OUT pulse (low true) begins with the rising edge of CONVERT and is coincident with the 555 Dch (discharge) pin 7 being driven to zero volts, beginning the discharge of C1 from V1 to the 555 trigger voltage (Vtrg = Vc/2 = 1.25 V) on pin 7. The duration of C1 discharge and Tout, accumulated digitally (a counter with 16 bits and 1 µs resolution is adequate) by a suitable microcontroller, are given by:

Tout = LOGe(V1 / 1.25 V)R1C1 = LOGe(V1 / 1.25 V) 1M * 0.039 µF

= LOGe((Vin + 2.5 V) / 2.5 V) 39 ms

= LOGe(1) 39 ms = 0 for Vin = 0

= LOGe(3) 39 ms = 42.85 ms for Vin = 5 V

At the end of Tout, Dch is released so the recharge of C1 can commence, and the conversion result:

(N = 1 MHz * Tout)

is available for linearization computation. The math to decode and recover Vin is given by:

Vin = 2.5 V (EXP(N / 39000) – 1)
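As a numerical sanity check on the two equations above, here’s a short Python sketch (nominal component values assumed exact, and the 1 µs counter quantization ignored):

```python
import math

R1C1_US = 39000.0  # R1*C1 = 1 MΩ * 0.039 µF = 39 ms, expressed in µs

def counts(vin):
    """Count N accumulated by the 1 MHz counter: N = 1 MHz * Tout."""
    return math.log((vin + 2.5) / 2.5) * R1C1_US

def decode(n):
    """Recover Vin from the count: Vin = 2.5 V * (exp(N/39000) - 1)."""
    return 2.5 * (math.exp(n / R1C1_US) - 1.0)

print(counts(0.0))          # 0.0: Vin = 0 gives Tout = 0
print(counts(5.0))          # ~42846: LOGe(3) * 39000 µs, i.e. ~42.85 ms
print(decode(counts(5.0)))  # ~5.0: the round trip recovers Vin
```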

A final word. You may be wondering about something. Earlier I said a resolution of 1 part in 25000 = 14.6 bits would be needed to quantify the Vin delta between 1999 and 2000 fpm. So, what’s all this 42850 = 15.4 bits stuff?

The 42850 thing arises from the fact that the instantaneous slope (rate of change = dV/dT) of the C1 discharge curve is proportional to the voltage across, and therefore the current through, R1. For a full-scale input of Vin = 5 V, this parameter changes by a factor of 3, from V1 = 3.75 V and 3.75 µA at the beginning of the conversion cycle to only 1.25 V and 1.25 µA at the end. This variation in dV/dT causes a proportional but opposite variation in resolution. Consequently, to achieve the desired 25000:1 resolution at Vin = 5 V, a higher average resolution is needed.

The necessary resolution-factor bump is √3 = 1.732…, of which 42850 / 25000 = 1.714 is a rough-and-ready, but adequate, approximation.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content


The post 15-bit voltage-to-time ADC for “Proper Function” anemometer linearization appeared first on EDN.

Harnessing the promise of ultracapacitors for next-gen EVs

Tue, 03/19/2024 - 13:40

Electronics design engineers bear the responsibility of overcoming the world’s concerns with electric vehicle (EV) power sources. Lithium-ion batteries are heavy, put pressure on natural resources, and are sometimes slow to charge. The logical next step in EV development is using ultracapacitors as a complementary power source when there are not enough batteries to go around, allowing electrification to scale and softening the drawbacks of modern charging electronics.

Looking to ultracapacitors may remove some of the market uncertainties surrounding other EV power sources. Electrostatic storage provides higher capacitance than the chemical method of conventional EV batteries. Additionally, the designs remove several rare metals from the composition, making specific materials less challenging to acquire.

Ultracapacitors may not match the energy density of chemical cells, but their disruptively long cycle life and lightning-fast fueling could make EV ownership more attractive. Whereas repeated charges produce notable degradation in other batteries, ultracapacitors can endure over 1 million charge and discharge cycles before noticeable damage.

Car manufacturers can install ultracapacitors alongside batteries for supplementary power. The energy boost is ideal for large-capacity fleet vehicles driving long ranges; climbing steep inclines, they sometimes need instantaneous power bursts rather than waiting for a chemical reaction. The two technologies working in conjunction reduce strain on both, extending their life cycles.

Ultracapacitor commercialization

Interest in ultracapacitors has picked up over the last several years. In 2020, an Estonian manufacturer received $161 million in new contracts for individual and public transportation needs. This signals that electronics design engineers must create robust, accessible ultracapacitors to meet increasing demand and combat the climate crisis.

Lithium-ion batteries have an advantage over all other EV power sources because of their density, even if their heft and life span negatively affect their reputation. They are still the go-to device for auto manufacturers. Engineers must consider these design aspects for future ultracapacitor blueprints:

  • Materials with higher surface area and greater capacitance
  • Electrolytes with higher conductivity using additives
  • Thermal management for improved temperature regulation and reduced runaway
  • Seamless compatibility when integrated with other batteries
  • Porous electrode designs for increased performance

Additionally, these specs inform engineers on how to size the ultracapacitor for driving applications. Everything from maximum voltage potential to discharge duration affects sizing, and the result must be communicated to OEMs so they can integrate it into manufacturing.
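As a rough illustration of that sizing exercise, a first-order estimate follows from the stored-energy relation E = 0.5 * C * (Vmax^2 - Vmin^2). All numbers in this Python sketch are hypothetical, chosen only to show the arithmetic, not drawn from any real vehicle:

```python
def required_capacitance(power_w, duration_s, v_max, v_min):
    """Farads needed to deliver a constant power burst while the bank
    discharges from v_max down to v_min: E = 0.5 * C * (Vmax^2 - Vmin^2)."""
    energy_j = power_w * duration_s
    return 2.0 * energy_j / (v_max**2 - v_min**2)

# Hypothetical hill-climb assist: 30 kW for 10 s on a 400 V bus
# allowed to sag to 200 V during the burst.
print(required_capacitance(30e3, 10.0, 400.0, 200.0))  # 5.0 farads
```

Maximum current, ESR losses, and thermal limits would all tighten this first-order number in a real design.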

What’s next for design engineers

Electronics design engineers must collaborate with renewable energy experts to make the transition to market-friendly ultracapacitors a reality. Engineers must validate a design’s electromagnetic compatibility and signal integrity. These efforts only matter if power providers are consistent and reliable to support charging infrastructure.

Grid stability with high frequency and voltage is the foundation for success, so communicating ultracapacitor design needs to the renewable sector is critical. Similarly, while one of the selling points for ultracapacitors is their charging time, there are few options for fueling these vehicles. Stations must be equipped with local battery packs instead of directly connected to the grid to prevent overloads and shutdowns.

The final frontier electronics design engineers could explore is a vehicle capable of running solely on an ultracapacitor. For now, pairing ultracapacitors with other batteries is the next step. Research and development should explore their potential as a sole power source, though that does not appear feasible in 2024’s developmental landscape.

Electrification still depends on lithium-ion to remain commercially viable, and it is the most cost-effective option for consumers right now. However, engineers’ circuit designs and prototypes for ultracapacitors show a bright future for the EV industry. This power source, alongside other battery options, will lead to more comprehensive compliance considerations, intersector collaboration, and cost optimization for the EV market.

Ellie Gabel is a freelance writer as well as an associate editor at Revolutionized.

Related Content


The post Harnessing the promise of ultracapacitors for next-gen EVs appeared first on EDN.

Single button load switches on the chip 222

Mon, 03/18/2024 - 16:39

The 222-microcircuit project described earlier in [1, 2] is an analog of the 555 timer IC. Its main purpose is the generation of rectangular pulses with an adjustable duty cycle and independent frequency control. Such a chip is not produced industrially, although it is not difficult to assemble a prototype from two comparators and five resistors, as shown in Figure 1, which also gives the pin configuration, pin functions, and a typical application circuit. Several devices that can be built around the 222 chip are also given in [1, 2].

Figure 1 The internal structure of the 222 chip project, its pin configuration and functions, and a typical application circuit.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Simple switching devices can be built around the 222 chip that turn on when a start button is pressed briefly and turn off when the same button is held down longer. Figure 2 shows a diagram of such a device.

Figure 2 Switching device controlled by one button.

In the initial state, a fixed voltage from the resistive divider R1, R2 is applied to the Cx input of the chip (pin 2). The voltage at the control input ADJ (pin 5) is zero, as is the voltage at the PWM output (pin 4). Transistor Q1 is off and the load Rload is de-energized. Capacitor C1 charges through the contacts of button S1 to the supply voltage. When S1 is pressed briefly, the voltage from the charged capacitor C1 is applied to the ADJ input (pin 5). The voltage at the PWM output (pin 4) rises to the supply voltage and is fed back through resistor R3 to the ADJ input (pin 5), latching the chip: a constant high level appears at its output and remains there. Transistor Q1 turns on and connects the load to the power source.

To turn off the load, press the S1 button again and hold it down longer. Capacitor C1 discharges through resistors R5 and R6 to a voltage below the switching threshold of the 222 chip, and the device returns to its original state.
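The required hold time follows from the RC discharge of C1 through R5 and R6: the chip switches back once the capacitor voltage falls below its threshold. A minimal sketch; the article gives no component values, so the numbers below are purely illustrative:

```python
import math

def hold_time_s(c_farads, r_ohms, v_start, v_threshold):
    """Time for an RC discharge from v_start to fall to v_threshold:
    v(t) = v_start * exp(-t / (R*C))  =>  t = R*C * ln(v_start / v_threshold)."""
    return r_ohms * c_farads * math.log(v_start / v_threshold)

# Illustrative values (not from the article): C1 = 10 uF discharging
# through R5 + R6 = 100 kOhm total, from a 5 V supply down to an
# assumed Vcc/3 (about 1.67 V) switching threshold.
t = hold_time_s(10e-6, 100e3, 5.0, 5.0 / 3)
print(f"hold S1 for about {t:.2f} s")  # ~1.10 s
```

Scaling C1 or R5 + R6 thus sets how long a "long press" must be relative to the brief turn-on press.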

The second version of the device, Figure 3, works on a different principle. Pressing the S1 button switches the state of the U1 222 chip and connects the load to the power source. The device returns to its original state when the S2 button is pressed briefly. Formally, this is a two-button device that performs the role of a thyristor.

Figure 3 A pseudo-thyristor device on a 222 chip.

Figure 4 shows a combined load-control scheme. The load can be turned on and off by pressing the S1 button for a short or long time, respectively. The load can also be turned off by briefly interrupting the supply voltage with the S2 button.

Figure 4 A combined device for push-button switching of the load on and off using the 222 chip.

Michael A. Shustov is a doctor of technical sciences, candidate of chemical sciences and the author of over 750 printed works in the field of electronics, chemistry, physics, geology, medicine, and history.

Related Content

References

  1. Shustov M.A. “Chip 222 – alternative 555. PWM generator with independent frequency control”, International Journal of Circuits and Electronics, 2021, V. 6, P. 23–31. Pub. Date: 06 September 2021. https://www.iaras.org/iaras/home/computer-science-communications/caijce/chip-222-alternative-555-pwm-generator-with-independent-frequency-control
  2. Shustov M.A. “Adjustable threshold devices on a chip 222”, Radioamateur (BY), 2023, No. 6, pp. 20–21.

The post Single button load switches on the chip 222 appeared first on EDN.

After TSMC fab in Japan, advanced packaging facility is next

Mon, 03/18/2024 - 13:15

Japan’s efforts to reboot its chip industry are likely to get another boost: an advanced packaging facility set up by TSMC. That seems a logical extension of TSMC’s $7 billion front-end chip manufacturing fab built in Kumamoto on Japan’s southern island of Kyushu.

In other words, a back-end packaging facility will follow the front-end fab to complement the chip manufacturing ecosystem in Japan amid considerations to diversify semiconductor supply chains beyond Taiwan due to geopolitical tensions. Trade media has been abuzz about TSMC setting up an advanced packaging plant and a new Reuters report supports this premise.

Notably, TSMC already set up an advanced packaging R&D center in Ibaraki prefecture, northeast of Tokyo, in 2021. The demand for advanced semiconductor packaging has surged due to high-end chips serving artificial intelligence (AI) and high-performance computing (HPC) applications. The rise of chiplets has also brought advanced packaging technologies into the limelight.

The above factors are prompting TSMC, the world’s largest semiconductor foundry, to plan additional packaging capacity; in fact, it’s already working to set up a new packaging facility in Chiayi, southern Taiwan. However, as TrendForce analyst Joanne Chiao notes, TSMC’s advanced packaging facility in Japan will likely be limited in scale. That’s mainly because most of TSMC’s packaging customers are based in the United States.

Figure 1 TSMC’s advanced packaging technology encompasses front-end 3D stacking techniques such as chip-on-wafer (CoW) and wafer-on-wafer (WoW) as well as back-end packaging technologies like integrated fan-out (InFO) and chip-on-wafer-on-substrate (CoWoS). Source: TSMC

With this new plant, TSMC’s CoWoS packaging technology will be transferred to Japan. CoWoS is a 2.5D wafer-level packaging technology developed by TSMC that allows multiple dies to be integrated on a single substrate, providing higher performance and integration density than traditional packaging technologies. Currently, TSMC’s CoWoS packaging capacity is based entirely in Taiwan.

Figure 2 In CoWoS, multiple silicon dies are integrated on a passive silicon interposer, which acts as a communication layer for the active die on top. Source: TSMC

On TSMC’s part, the packaging facility in Japan will have closer access to the country’s leading semiconductor materials and equipment suppliers and a solid customer base. TSMC will also enjoy the generous subsidies from the Japanese government, which aims to revitalize the local semiconductor industry after losing ground to South Korea and Taiwan.

Finally, as the Reuters report notes, no decision on the scale and timeline of building the advanced packaging facility has been made yet. TSMC also declined to comment on this story. Still, with the construction of the TSMC fab in Kumamoto, industry observers firmly believe that Taiwan’s mega fab will inevitably set up an advanced packaging facility in Japan.

Related Content


The post After TSMC fab in Japan, advanced packaging facility is next appeared first on EDN.

AI boom and the politics of HBM memory chips

Fri, 03/15/2024 - 12:13

The high-bandwidth memory (HBM) landscape, steadily growing in importance for its critical pairing with artificial intelligence (AI) processors, is ready to move to its next iteration, HBM3e, increasing data transfer rate and peak memory bandwidth by 44%. Here, SK hynix, which launched the first HBM chip in 2013, is also the first to receive HBM3e validation for Nvidia’s H200 AI hardware.

HBM is a high-performance memory that stacks chips on top of one another and connects them with through-silicon vias (TSVs) for faster and more energy-efficient data processing. The demand for HBM memory chips has boomed with the growing popularity of generative AI. However, it’s currently facing a supply bottleneck caused by both packaging constraints and the inherently long production cycle of HBM.

Figure 1 SK hynix aims to maintain its lead by releasing an HBM3e device with 16 layers of DRAM and a single-stack speed of up to 1,280 GB/s.

According to TrendForce, 2024 will mark the transition from HBM3 to HBM3e, and SK hynix is leading the pack with HBM3e validation in the first quarter of this year. It’s worth mentioning that SK hynix is currently the primary supplier of HBM3 memory chips for Nvidia’s H100 AI solutions.
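The bandwidth figures quoted in this article follow directly from the stack’s interface width and per-pin data rate. A minimal sketch; the per-pin rates below are representative generation figures, not taken from any one vendor’s datasheet:

```python
def stack_bandwidth_gb_s(bus_width_bits, pin_rate_gbps):
    """Peak per-stack bandwidth in GB/s: width (bits) * per-pin rate (Gb/s) / 8."""
    return bus_width_bits * pin_rate_gbps / 8

# HBM generations use a 1024-bit interface per stack.
hbm3  = stack_bandwidth_gb_s(1024, 6.4)   # 819.2 GB/s
hbm3e = stack_bandwidth_gb_s(1024, 9.2)   # 1177.6 GB/s -> the ~44% uplift cited above
print(f"HBM3 ~{hbm3:.0f} GB/s, HBM3e ~{hbm3e:.0f} GB/s (+{hbm3e/hbm3 - 1:.0%})")
# A 10 Gb/s pin rate yields the 1,280 GB/s figure some vendors quote:
print(f"{stack_bandwidth_gb_s(1024, 10.0):.0f} GB/s")
```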

Samsung, now fighting back to make up for lost ground, has received certification for AMD’s MI300 series AI accelerators. That’s a significant breakthrough for the Suwon, South Korea-based memory supplier, as production of AMD’s AI accelerators is expected to scale up later this year.

Micron, which largely missed the HBM opportunity, is also catching up by launching the next iteration, HBM3e, for Nvidia’s H200 GPUs by the end of the first quarter in 2024. Nvidia’s H200 GPUs will start shipping in the second quarter of 2024.

Figure 2 The 8H HBM3e memory offering 24 GB will be part of Nvidia’s H200 Tensor Core GPUs, which will begin shipping in the second quarter of 2024. Source: Micron

It’s important to note that when it comes to HBM technology, SK hynix has remained ahead of its two mega competitors—Micron and Samsung—since 2013, when SK hynix introduced HBM memory in partnership with AMD. It took Samsung two years to challenge its South Korean neighbor when it developed the HBM2 device in late 2015.

But the rivalry between SK hynix and Samsung is about more than merely first-mover advantage. While Samsung chose the conventional non-conductive film (NCF) technology for producing HBM chips, SK hynix switched to the mass reflow molded underfill (MR-MUF) method to address NCF limitations. According to a Reuters report, SK hynix has secured about 60-70% yield rates for its HBM3 production, while Samsung’s HBM3 production yields stand at roughly 10-20%.

The MUF process involves injecting and then hardening liquid material between layers of silicon, which in turn improves heat dissipation and production yields. Here, SK hynix teamed up with the Japanese materials engineering firm Namics while sourcing MUF materials from Nagase. SK hynix adopted the mass reflow molded underfill technique ahead of others and subsequently became the first vendor to supply HBM3 chips to Nvidia.

Recent trade media reports suggest Samsung is in contact with MUF material suppliers, though the memory supplier has vowed to stick with its NCF technology for the upcoming HBM3e chips. However, industry observers point out that Samsung’s MUF technology will not be ready until 2025 anyway. So, it’s likely that Samsung will use both NCF and MUF techniques to manufacture the latest HBM3 chips.

Both Micron and Samsung are making strides to narrow the gap with SK hynix as the industry moves from HBM3 to HBM3e memory chips. Samsung, for instance, has announced that it has developed an HBM3e device with 12 layers of DRAM chips, and it boasts the industry’s largest capacity of 36 GB.

Figure 3 The HBM3E 12H delivers a bandwidth of up to 1,280 GB/s and a storage capacity of 36 GB. Source: Samsung

Likewise, Idaho-based Micron claims to have started volume production of its 8-layer HBM3e device offering 24-GB capacity. As mentioned earlier, it’ll be part of Nvidia’s H200 Tensor Core units shipping in the second quarter. Still, SK hynix seems to be ahead of the pack when it comes to the most sought-after AI memory: HBM.

It made all the right moves at the right time and won Nvidia as a customer in late 2019 for pairing HBM chips with AI accelerators. No wonder engineers at SK hynix now jokingly call HBM “Hynix’s Best Memory”.

Related Content


The post AI boom and the politics of HBM memory chips appeared first on EDN.

Scalable MCUs tout MPU-like performance

Thu, 03/14/2024 - 20:46

Outfitted with an Arm Cortex-M7 core running at up to 600 MHz, ST’s STM32H7R/S MCUs provide the performance, scalability, and security of a microprocessor. They embed 64 kbytes of bootflash and 620 kbytes of SRAM on-chip to speed execution, while fast external memory interfaces run at up to 200 MHz.

The STM32H7R and STM32H7S microcontrollers come with powerful security features, including protection against physical attacks, memory protection, code isolation to protect the application at runtime, and platform authentication. Additionally, STM32H7S devices provide an immutable root of trust, debug authentication, and hardware cryptographic accelerators. With these features, the MCUs target security certifications up to SESIP3 and PSA Level 3.

The lines are further divided into general-purpose MCUs (STM32H7R3/S3) and those with enhanced graphics-handling capabilities (STM32H7R7/S7). With their dedicated NeoChrom GPU, these MCUs deliver rich colors, animation, and 3D-like effects. Developers can share software between the two lines for efficient use of project resources and to achieve faster time-to-market for new products.

STM32H7R/S MCUs are scheduled to enter volume production starting in April 2024. Sample requests and pricing information are available from local ST sales offices. For more information about the STM32H7R/S general-purpose and graphics lines of MCUs, click here.

STMicroelectronics

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Scalable MCUs tout MPU-like performance appeared first on EDN.

Rad-tolerant LNA spans 2 GHz to 5 GHz

Thu, 03/14/2024 - 20:45

An off-the-shelf S-Band low-noise amplifier, the TDLNA2050SEP from Teledyne, tolerates up to 100 krads of total ionizing dose (TID) radiation. This makes the part suitable for use in high-reliability satellite communication systems and phase-array radar.

According to the manufacturer, the MMIC amplifier delivers a gain of 17.5 dB from 2 GHz to 5 GHz, while maintaining a noise figure of less than 0.4 dB and an output power (P1dB) of 19.5 dBm. The device should be biased at a VDD of +5.0 V and an IDDQ of 60 mA.
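The quoted 17.5 dB gain and sub-0.4-dB noise figure matter because the first stage dominates a receiver’s cascaded noise figure per the Friis formula. A minimal sketch; the second-stage numbers are purely illustrative assumptions, not from the datasheet:

```python
import math

def db_to_lin(db):
    """Convert a dB quantity to a linear power ratio."""
    return 10 ** (db / 10)

def cascaded_nf_db(stages):
    """Friis formula: F = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
    stages: list of (noise_figure_db, gain_db) tuples, front end first."""
    f_total = 0.0
    gain_product = 1.0
    for i, (nf_db, gain_db) in enumerate(stages):
        f = db_to_lin(nf_db)
        f_total += f if i == 0 else (f - 1) / gain_product
        gain_product *= db_to_lin(gain_db)
    return 10 * math.log10(f_total)

# First stage: the LNA's quoted 0.4 dB NF and 17.5 dB gain.
# Second stage: an assumed 8 dB NF, 20 dB gain downconverter.
nf = cascaded_nf_db([(0.4, 17.5), (8.0, 20.0)])
print(f"cascaded NF ~{nf:.2f} dB")  # ~0.76 dB
```

With 17.5 dB of front-end gain, even a noisy second stage adds only a few tenths of a dB to the system noise figure.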

The TDLNA2050SEP low-noise amplifier is built on a 90-nm enhancement-mode pHEMT process and is qualified per MIL-PRF-38534 Class K (space) or Class H (military). It comes in a 2×2×0.75-mm, 8-pin plastic DFN package.

Devices are available from Teledyne e2v HiRel or an authorized distributor.

TDLNA2050SEP product page

Teledyne e2v HiRel Electronics



The post Rad-tolerant LNA spans 2 GHz to 5 GHz appeared first on EDN.

Retimers boost PCIe 6.x connectivity

Thu, 03/14/2024 - 20:45

Astera Labs has expanded its Aries PCIe/CXL Smart DSP retimer portfolio with devices that ensure robust PCIe 6.x and CXL 3.x connectivity. Doubling bandwidth to 64 GT/s per lane with automatic link equalization, Aries 6 retimers enable critical connectivity for AI server platforms and cloud infrastructure.

The protocol-aware, low-latency retimers integrate seamlessly between a root complex and endpoints, extending the reach threefold. They maintain signal integrity by compensating channel loss up to 36 dB at 64 GT/s with PAM4 signaling. Aries 6 also boasts low power at 11 W typical for a PCIe 6.x 16-lane configuration.
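To get a feel for what a 36 dB loss budget means in trace length, divide it by the per-inch channel loss at the signaling rate’s Nyquist frequency (16 GHz for 64 GT/s PAM4). The per-inch loss below is an assumed, illustrative figure for a low-loss laminate, not a number from Astera Labs:

```python
def reach_inches(loss_budget_db, loss_per_inch_db):
    """Trace reach supported by a given channel-loss budget."""
    return loss_budget_db / loss_per_inch_db

# Assumed ~1.2 dB/inch insertion loss at 16 GHz for a low-loss PCB laminate.
unretimed = reach_inches(36, 1.2)  # 30 inches per retimed segment
print(f"~{unretimed:.0f} in of trace per 36 dB segment at 1.2 dB/in")
```

Each retimer resets the loss budget, which is how inserting one extends end-to-end reach well beyond what a single unretimed channel could span.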

Aries 6 retimers are available in 16-lane and 8-lane variants to support PCIe 6.x and PCIe 5.x applications. They also come in multiple form factors, including silicon chips, Smart Cable modules, and boards. Seamless upgrading from second-generation Aries 5 retimers to third-generation Aries 6 is facilitated through adherence to industry-standard footprints.

Astera will demonstrate the Aries 6 retimers at this month’s NVIDIA GTC 2024 AI conference and expo.

Aries 6 product page

Astera Labs 



The post Retimers boost PCIe 6.x connectivity appeared first on EDN.

Edge AI/ML models team with Arm Keil MDK

Thu, 03/14/2024 - 20:44

Embedded developers can deploy AI and ML models developed on Edge Impulse’s platform directly in Arm’s Keil microcontroller development kit (MDK). The partnership between the two companies makes it easier for engineers to collaborate with other cross-disciplinary teams to build edge AI products and bring them to market.

Keil MDK is a widely deployed software development suite used to create, build, and debug embedded applications for Arm-based microcontrollers. The Edge Impulse integration brings the company’s edge AI tools directly to the Keil ecosystem via the Common Microcontroller Software Interface Standard (CMSIS). Models developed in Edge Impulse Studio can be deployed as an Open-CMSIS Pack and imported into any Arm Keil MDK project.

Developers can improve the performance of their applications by combining Edge Impulse’s Edge Optimized Neural (EON) compiler with Arm’s latest compiler. According to Edge Impulse, the EON compiler runs models with up to 70% less RAM usage and up to 40% less flash usage. This is in addition to the savings achieved with the Arm compiler.

To get started with the Edge Impulse CMSIS Pack for the Arm Keil MDK, click here. Read more about the Arm Keil integration on the Edge Impulse blog.

Edge Impulse



The post Edge AI/ML models team with Arm Keil MDK appeared first on EDN.
