EDN Network

Voice of the Engineer

Active bridge driver replaces lossy diodes

Fri, 03/10/2023 - 01:36

Alpha & Omega’s AOZ7203AV active bridge driver controls two external N-channel MOSFETs to replace two low-side diodes in an AC/DC bridge rectifier. The self-powered dual-driver IC can help improve efficiency and reduce standby power consumption when used in AC/DC adapters and power supplies.

“Today’s power-hungry gaming laptops, game consoles, and high-performance desktops demand high efficiency from the AC-DC power supply. Using the AOZ7203AV with AOS’ low-ohmic high-voltage external MOSFETs significantly improves the efficiency of the power converter as the typical rectifier-diode forward-conduction losses are reduced by 50%. Efficiency can improve up to about 0.7% at 90 V (AC) mains voltage,” said Armin Hsu, power IC senior marketing manager at AOS.

The AOZ7203AV is the latest member of the 600-V AlphaZBL family of active bridge drivers. In addition to minimizing forward-conduction losses, the part features ultra-low operating current, integrated X-capacitor discharge (CB safety certified), and a wide operating temperature range of -40°C to +125°C. A break-before-make function prevents shoot-through current when driving the low-side high-voltage MOSFETs in an active bridge rectifier circuit.
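To see where the quoted loss reduction comes from, consider a first-order conduction-loss sketch; the current, diode drop, and on-resistance below are illustrative assumptions, not AOZ7203AV datasheet values:

```python
# First-order conduction loss for an AC/DC bridge in which the two
# low-side diodes are replaced by MOSFETs (the high-side diodes remain).
# All three values below are assumed for illustration only.
I = 1.5            # A, assumed mains current
VF = 0.9           # V, assumed diode forward drop
RDS_ON = 0.10      # ohm, assumed on-resistance of the external MOSFET

loss_diode_bridge = 2 * VF * I               # two diode drops conduct each half cycle
loss_active_bridge = VF * I + RDS_ON * I**2  # one diode drop plus one MOSFET channel

print(f"Diode bridge:  {loss_diode_bridge:.2f} W")   # 2.70 W
print(f"Active bridge: {loss_active_bridge:.2f} W")  # 1.57 W, roughly the ~50% cut quoted
```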

Available now in production quantities, the AOZ7203AV in 8-pin SO packages costs $1.68 each in lots of 1000 units.

AOZ7203AV product page

Alpha & Omega Semiconductor 


250-MHz MCUs offer scalable security

Fri, 03/10/2023 - 01:35

ST’s STM32H5 series of MCUs pairs an embedded Arm Cortex-M33 core running at 250 MHz with the STM32Trust TEE Secure Manager. Aimed at the mid-range class of MCU-based applications, these devices deliver 375 DMIPS and an EEMBC CoreMark industry-reference score of 1023.

In addition to the security provided by the Cortex-M33 core’s TrustZone architecture, ST offers scalable security services accessed via an industry-standard API. The STM32Trust TEE Secure Manager, developed with partner ProvenRun, allows developers to choose from a range of services to achieve the required level of security assurance.

STM32H5 MCUs raise dynamic efficiency to 61 µA/MHz in switched mode (SMPS) and 120 µA/MHz running the linear (LDO) converter (VDD = 3.3 V and 25°C) in run mode with peripherals off. Power management lets developers optimize performance versus power consumption in all operating modes.
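For scale, those dynamic-efficiency figures translate directly into run-mode supply current; a quick arithmetic check at the full 250 MHz:

```python
# Run-mode current implied by the quoted efficiency figures
# (VDD = 3.3 V, 25 degC, peripherals off).
F_MHZ = 250          # Cortex-M33 clock
SMPS_UA_PER_MHZ = 61
LDO_UA_PER_MHZ = 120

i_smps_ma = SMPS_UA_PER_MHZ * F_MHZ / 1000  # 15.25 mA in switched mode
i_ldo_ma = LDO_UA_PER_MHZ * F_MHZ / 1000    # 30.0 mA with the LDO
p_smps_mw = i_smps_ma * 3.3                 # ~50 mW core power at 3.3 V

print(f"SMPS: {i_smps_ma:.2f} mA, LDO: {i_ldo_ma:.1f} mA, ~{p_smps_mw:.0f} mW")
```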

Prices for the STM32H5 MCUs with 128 kbytes of flash memory start at $1.44 in lots of 10,000 units. Devices with 2 Mbytes of flash are priced from $2.93 each. Mass production is beginning now, starting with the STM32H503 and STM32H563. The full lineup and package choices will be introduced in June.

STM32H5 series product page

STMicroelectronics


Analyzing energy loss in a capacitor circuit

Wed, 03/08/2023 - 18:26

We often run into the situation where there are two capacitors, one of them charged up to some voltage level and the other at some lower voltage level, which, for purposes of this discussion, we will take to be zero volts. Then, we close a switch connected as shown in Figure 1.

Figure 1 Diagram of a circuit switching two capacitors together where the voltage at t = 0 is Va for C1 and 0 for C2. Then the switch is closed and, as time moves to infinity, both C1 and C2 will be charged to voltage Vb. Source: John Dunn

If we wait until the end of time, the voltages of the two capacitors will eventually become equal to each other. If on the other hand, we only wait for some reasonable time lying within our personal life spans, the two capacitors’ voltages will get pretty close to each other. How long that process will seem to take will be a function of the circuit’s time constant which will itself be some value that is proportional to the resistance value of “R”.

There will be some loss of energy during the described process which we can examine in two ways, first by the conservation of charge (Figure 2) and then by calculus (Figure 3).

Figure 2 First analysis with the conservation of charge where the energy loss is worked out algebraically.

Figure 3 Second analysis yields the same result as the first and is done with calculus where total energy loss is equivalent to the resistor energy and is ultimately derived from the resistor’s power.
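Condensing the algebra of the two figures: conservation of charge fixes the final voltage, and the difference between initial and final stored energy gives the loss. Note that R appears nowhere in the result:

$$V_b = \frac{C_1}{C_1 + C_2}\,V_a$$

$$E_{lost} = \frac{1}{2}C_1 V_a^2 - \frac{1}{2}(C_1 + C_2)V_b^2 = \frac{1}{2}\,\frac{C_1 C_2}{C_1 + C_2}\,V_a^2$$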

The semi-obvious conundrum someone might notice is that energy is lost in this circuit even if the value of R goes to zero. Therefore, where does that lost energy go?

Someone once suggested to me that energy loss for R = 0 comes about via the magical radiation of a pulse of electromagnetic energy. Say “no” to that idea. The two energy loss analyses yield the same result, which is that given unlimited time, the energy loss is of the amount shown and is invariant with respect to the value of resistance, R.

If we allow R to go to zero, the energy loss remains constant, but it happens more quickly which means the event’s power level rises, heading for infinity, as the event’s time duration shrinks while R is heading for zero. Through all that, the energy loss remains constant (Figure 4).

Figure 4 A statement of the final revealed truth.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).


TSMC’s 3-nm progress report: Better than expected

Wed, 03/08/2023 - 14:49

TSMC, which vowed to kickstart its 3-nm process node in the second half of 2022, barely made it, cutting the ribbon on this cutting-edge manufacturing node on December 29 at its expanded fabrication unit in Southern Taiwan Science Park (STSP). Nearly six months after Samsung began 3-nm chip production based on gate-all-around (GAA) technology, TSMC has successfully conducted its full-node advance from a 5-nm to a 3-nm chip manufacturing process based on tried-and-tested FinFET transistor architecture.

According to media reports, Apple has secured 100% of the initial supply of N3, TSMC’s first-generation, baseline 3-nm process. Early reports suggest that the N3 process yield could be as high as 80%. Next, TSMC plans to move to a more advanced 3-nm version, N3E, in the second half of 2023. That’s when other TSMC customers—AMD, Intel, and Qualcomm—plan to adopt the 3-nm process for their chips.

N3, which uses an ultra-complex 24-layer multi-pattern extreme ultraviolet (EUV) lithography process, offers the higher logic density of the two. On the other hand, N3E, which uses a simpler 19-layer single-pattern technology, is easier to produce and is less expensive. It’ll also use less power and clock higher compared to the baseline N3 process.

TSMC’s CEO C. C. Wei expects the 3-nm manufacturing process to generate end products worth more than $1.5 trillion within five years of volume production. For now, however, N3 wafers cost around $20,000, compared to $16,000 for TSMC’s 5-nm node, called N5. That’s partly why chip developers besides Apple are sitting on the sidelines while waiting for the more economical N3E process node.

Figure 1 TSMC’s first 3-nm node, known as N3, claims to offer up to 15% improvement in performance at the same power or 30% more efficiency at the same speed when compared to the company’s 5-nm node. Source: TSMC

Compared to its 5-nm node, which TSMC launched in 2020, the mega-fab claims that its N3 process offers 60% to 70% higher logic density and 15% higher performance while consuming 30% to 35% less power. Here, it’s important to note that TSMC’s enhanced version of N5, which Apple calls 4 nm, essentially produces 5 nm chips. Moving from this souped-up 5-nm process to the new 3-nm process will also have substantial benefits, including lower power draw and much higher density with 60% or more additional chip logic and cache in the same area.

3 nm at the center of smartphone wars

TSMC’s 3-nm technology will likely be used in Apple’s A17 system-on-chip (SoC), which is expected to power the iPhone 15 Pro and iPhone 15 Pro Max models to be launched later this year. Here, 3-nm technology is expected to deliver a 35% power efficiency improvement over the 4-nm technology used to make the A16 SoC for the iPhone 14 Pro and Pro Max.

According to media reports, Apple could also build its M3 chips around TSMC’s 3-nm technology for MacBook Air to be launched later in 2023. Likewise, for MacBook Pros, to be launched in 2024, M3 Pro and M3 Max chips are also expected to utilize the 3-nm manufacturing process node.

Apple, TSMC’s largest customer accounting for 25% of its revenue, has reportedly booked the entire N3 supply. On the other hand, Apple’s major competitors on the smartphone processor frontier, Qualcomm and MediaTek, have decided to wait for TSMC’s second-generation 3-nm technology, N3E, which will be available later this year. Here, it’s worth noting that while Apple’s custom-designed SoCs are incorporated in its own end-products, companies like Qualcomm and MediaTek have to make a profit on the chips alone.

That’s probably why smartphone processor vendors like Qualcomm and MediaTek are waiting for a more economical N3E node to arrive later this year. That will inevitably push Android phones powered by chips fabbed on the 3-nm node to 2024 and beyond, further extending Apple’s lead in the smartphone market. There are conflicting reports about Samsung’s Exynos 2300 chip, which could use Samsung’s 3-nm process; it’s not widely expected to appear in the S23 or other high-end Galaxy phones.

Figure 2 The N3E process node is expected to start commercial production in the second half of 2023. Source: TSMC

The N3E process node is also on the radar of GPU suppliers like AMD and Nvidia for applications such as PC gaming.

Raising Arizona

From a technological standpoint, things are going well for TSMC on this burgeoning 3-nm process, and the ramp for the N3 node has been smooth so far. TSMC senior executives call N3 a blockbuster node. The company’s CEO Wei also expects the 3-nm process to be more profitable than its predecessor 5-nm node. However, the advent of the 3-nm node coincides with TSMC’s expansion in new regions and its supply chain woes amid international trade restrictions and sanctions.

TSMC founder Morris Chang captured the tension beneath the company’s new fab order when he proclaimed that globalization was “almost dead.” The company’s CEO Wei also complained about geopolitical conflict, which according to him, has created an environment in which TSMC could no longer sell wafers across the world.

Figure 3 TSMC is also building 3-nm manufacturing capacity at its fab in Arizona. Source: TSMC

TSMC’s new fab in Arizona—announced in 2020—will start producing 4-nm chips in 2024. It will be followed by the construction of a second fab that will begin producing 3-nm chips in 2026. That means the United States will get hold of the 3-nm fabrication process node nearly three years after Taiwan.

Likewise, the United States will get TSMC’s 4-nm fabrication technology two years after it was made available in Taiwan. It’s worth mentioning here that while TSMC is expanding its fabrication facilities in the United States and beyond, it’s committed to keeping the best chip manufacturing technology in its Taiwan fabs.

A trillion-dollar business

TSMC plans to spend between $32 billion and $36 billion on capital expenditure this year to meet the rising demand for contract chip manufacturing, and 70% of this budget will be spent on advanced nodes: 7 nm and smaller. Here, while TSMC’s 2-nm node built around the new GAA technology is likely to be unveiled in 2025 or so, the 3-nm process looks like its primary advanced node for the foreseeable future.

A DigiTimes report claims that TSMC has scaled up 3-nm process capacity at a gradual pace with monthly output set to reach 45,000 wafers in March 2023. And according to TSMC chairman Mark Liu, the 3-nm node’s yields are comparable to those of the previous generation at the same point in the manufacturing cycle.

That, along with Apple being the first chip developer to adopt this cutting-edge process node, shows that TSMC’s 3-nm process technology is on track. Though the semiconductor behemoth’s baseline N3 node revenues are at single-digit percentages with Apple as its sole customer, they still amount to multiple billions of dollars. Once N3’s enhanced version, N3E, comes on board later this year and other large chip developers targeting high-performance computing (HPC) and smartphone applications join the 3-nm fray, the node could grow into a trillion-dollar business within several years.


Considerations in the selection of UV LEDs for germicidal applications

Tue, 03/07/2023 - 15:19

LEDs emitting light in the range of 200 nm to 400 nm are classified as UV LEDs. Within this range there are three bands: UV-A with wavelengths from 315 nm to 400 nm, UV-B with wavelengths from 280 nm to 315 nm, and UV-C with wavelengths from 200 nm to 280 nm. UV-C LEDs are of special interest in germicidal applications, as they are positioned to replace the incumbent low-pressure mercury vapor lamp. As such, a growing number of LED source and LED lighting manufacturers are offering UV-C LED products. There are several characteristics that should be considered when evaluating an LED-based system against a lighting system employing mercury vapor lamps.

One of the most important considerations in terms of total cost of ownership is overall system efficiency. While a mercury vapor lamp may have a higher lamp efficiency, the overall system performance is dictated by several other factors as well. The first is spectral response. The spectral response of a UV-C mercury vapor lamp peaks at approximately 185 nm and 254 nm. These are fixed emissions that cannot be adjusted. Studies have shown that the optimal wavelength band for the disruption of microorganism RNA and DNA is about 265 nm (Figure 1).

Figure 1 The low and medium pressure mercury vapor lamp spectral response compared to E. coli germicidal effectiveness. Source: Wikipedia

UV-C LEDs are currently available at several different wavelengths including 265 nm, thereby optimizing the efficiency of the lighting system (Figure 2).

Figure 2 The UV-C LED (265 nm) spectral response compared to the E. coli germicidal effectiveness. Source: Wikipedia

A second factor, wall-plug efficiency, is defined as the ratio of output radiant power to input electrical power. Currently, the wall-plug efficiency of mercury vapor lamps exceeds that of UV-C LED-based systems. However, this advantage may be negated by the significantly lower typical lifetime of mercury vapor lamps. A commonly accepted metric for LED luminaire lifetime is L70 (the time at which light emission has degraded to 70% of its initial value). Similarly, UV-C products are characterized by an R70 metric. The R70 of typical mercury vapor lamps ranges between 2,000 and 8,000 hours, whereas a properly designed LED-based product can achieve an R70 of 10,000 hours.

One last consideration in terms of overall cost of ownership is warm-up time. Mercury vapor lamp warm-up times can vary from 1 to 5 minutes, creating an incentive to leave the lamps powered at all times. By contrast UV-C LEDs, like all other LEDs, can be cycled indefinitely with instantaneous on and off times, meaning that they need only be deployed when circumstances require.

Other factors that do not directly involve cost of ownership include environmental factors (mercury versus no mercury), physical size (LED-based products can be small enough to fit into spaces that mercury vapor lamps cannot), as well as safety related considerations and required input power (high voltage for mercury vapor versus low voltage for LEDs).

In addition, there are factors to consider when comparing one UV-C LED lighting system against another. As mentioned above, UV-C LEDs can be designed for emission at essentially any wavelength. An article in the New England Journal of Medicine published in 2020 provides a virucidal efficiency curve for UV wavelengths. Unsurprisingly, the virucidal efficiency diminishes with an increasing positive or negative delta from the optimal wavelength of 265 nm. Applying this factor to the R70 of a given product provides a better picture of the effectiveness of that product over its expected lifetime.

The R70 of a product is directly related to the epitaxial material used in the LED die fabrication. UV LEDs are produced using an epitaxial material composed of gallium nitride, aluminum gallium nitride, or aluminum nitride. A higher aluminum content translates to shorter wavelengths but also lower lifetimes. Therefore, even though LEDs with lower aluminum content are less optimal in terms of virucidal efficiency, they can provide better overall performance in the long term due to a higher R70.

Speaking of R70, it is not uncommon to see an R70 specification composed of a single number, for example, “R70 = 10,000 hours”. This specification is deficient because it lacks both a reference temperature and an input current. The long-term performance of UV-C LEDs, like that of all other LEDs, is inversely related to the junction temperature of the LED die; that is, the higher the junction temperature, the more quickly the LED will degrade. Junction temperature depends on both the ambient temperature and the input current. A complete R70 specification will include both of these parameters, for example, R70 = 10,000 hours at 25°C and 100 mA.

One last consideration in evaluating UV-C LED-based products is the paradigm shift that these products represent. LED luminaires for general lighting—because of their endless variety of form factors, their wavelength and color tuning capabilities, and their ability to easily integrate with other building systems—have transformed our collective understanding of what a light source should look like and do. Due to the increased interest in implementation of germicidal systems, a similar transformation is poised to occur in the world of UV-C LED products. We may soon see products in all sorts of new applications in facilities and other environments, made possible by the improving performance and flexibility of UV-C LEDs.

Yoelit Hiebert has worked in the field of LED lighting for over 10 years and has experience in both the manufacturing and end-user sides of the industry.


Metrics are good, but insight is best

Tue, 03/07/2023 - 14:32

Two questions face every engineer when the schematic is complete: “Will my schematic fit in the given PCB area?” and “How long will it take to do the layout?”

I have faced these same questions for decades, and about 10 years ago I finally had the time to take a statistical approach to the issue [1]. This involved writing a script that takes the schematic bill of materials (BOM), including each part’s footprint area, pin count, and quantity, and outputs the part-to-PCB area utilization ratio and the total number of pins to be routed (Figure 1).
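A minimal sketch of such a script in Python; the CSV column names here are hypothetical, since the article doesn’t specify the original script’s input format:

```python
import csv

def bom_stats(bom_path, board_area_mm2):
    """Sum part areas and pin counts from a BOM and report area utilization.

    Assumes CSV columns 'area_mm2', 'pins', and 'qty'; these names are
    illustrative, not taken from the original script.
    """
    total_area = 0.0
    total_pins = 0
    with open(bom_path, newline="") as f:
        for row in csv.DictReader(f):
            qty = int(row["qty"])
            total_area += float(row["area_mm2"]) * qty
            total_pins += int(row["pins"]) * qty
    return total_area / board_area_mm2, total_pins

# Example: a 100 x 80 mm board blank
utilization, pins = bom_stats("design_bom.csv", 100 * 80)
print(f"Utilization: {utilization:.0%}, pins to route: {pins}")
```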

There have been a few (very few) attempts at PCB estimation reported in the past, and some of those methods use much more complex criteria as inputs to their estimators. My training in “analysis of variance” [2] taught me to keep it simple and start with the (probably) most important things first. The parts’ footprint-to-PCB-area ratio and the total number of pins in the design seemed like a good start, based on the type of work I do most.

Figure 1 A schematic and a board blank; the question any designer faces is: “Will it fit?” Source: Steve Hageman

Analysis of my typical design

I do a lot of baseband analog and RF design, but these designs always have some sort of logic control, ranging from something as simple as latches to more complex control FPGAs or microprocessors, so everything is a mixed-signal design. While I may have to length-match some traces, this is not a large percentage of my designs. RF traces take longer to lay out than baseband signals, but I didn’t break them out separately in my first-order analysis.

Even if the schematic is not complete and you are working just from a block diagram, with experience a decent estimate of the parts required can be made. This is especially true today, when you can look at a myriad of reference designs and simply count the parts they require (Figure 2). This is 1000% better than some wild guess, which always tends to be low.

Figure 2 Simple analysis of the total part area, total PCB area, and pin count can provide a simple answer to the question posed in Figure 1. Source: Steve Hageman

Results of analysis

Over the course of the next 10 or so PCB designs, I had enough information to start to make sense of the data. The first interesting point is to have a good sense of the ratio of parts to PCB areas upfront instead of guessing. This directly leads to understanding the number of layers and via technology needed. Naturally, large amounts of RF circuitry will also drive the PCB and via technology required. As the PCB gets dense, adding routing layers makes the routing job easier and if the design starts to get very dense, then using via technology like via in pad helps routing as well.

At first, I thought I might have to break out the baseband and RF designs into different estimates, but a very interesting fact emerged. I found that the layout time for a reasonably dense PCB hardly depended on what I was routing, rather it was driven almost totally by the number of pins that needed to be routed.

Even though it takes longer to route an RF trace than a baseband trace, the total time is still driven by the number of pins that have to be routed. That is because a baseband IC has almost all of its pins needing individual routes, while a typical RF IC has a lot of ground pins that are pretty much routed automatically by the copper layer fills. So while each RF trace takes longer, the myriad of ground pins on a typical RF IC makes up for it, and on a per-part basis the time per pin came out the same no matter what technology I was routing.

This resulted in a very simple two-variable metric:

  1. It takes me the same average amount of time to route each pin irrespective of what kind of signal it is.
  2. A modifier has to be made if:
    1. The density of the PCB starts to get too high. This is an exponential modifier. As the density ratio gets higher, the time to route goes up exponentially.
    2. Any constraints on layer count, or vias, or weird PCB shapes, etc. make the design more difficult to route.

The analysis result was quite surprising for me, as it all came down to the total number of pins in the design and was irrespective of the specific technology being routed.

I came up with a metric of: X hours minimum to start a PCB design, which includes setting up the project and finally getting all the output files together to build the design, plus Y minutes per pin for a 0 to Z% density PCB; outside of these constraints, some modifiers need to be added.

The final estimation equation is:

Time Estimate = X + (Y × total pin count)
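Put together, the metric looks something like the sketch below. The default X, Y, and Z values are placeholders rather than the author’s actual numbers (the article deliberately leaves them out), and the exact shape of the exponential density penalty is an assumption:

```python
import math

def estimate_layout_hours(pin_count, density_ratio,
                          setup_hours=8.0,     # X: placeholder value
                          min_per_pin=1.0,     # Y: placeholder value
                          density_limit=0.5):  # Z: placeholder value
    """Estimate PCB layout time from pin count and area utilization.

    Above the density limit, an exponential penalty kicks in; its exact
    form is a guess, since the article only says routing time "goes up
    exponentially" with density.
    """
    hours = setup_hours + (min_per_pin * pin_count) / 60.0
    if density_ratio > density_limit:
        hours *= math.exp(4.0 * (density_ratio - density_limit))  # assumed penalty
    return hours

# Example: 850 pins at 45% utilization
print(f"{estimate_layout_hours(850, 0.45):.1f} hours")
```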

My X, Y, and Z metrics are for me and the work I usually do; you will need to determine what these metrics are for yourself and if they even apply to your specific line of work.

Application to firmware/software

I also typically write the firmware driver layers for my designs and, by applying the same analysis principles outlined above, I have found that it takes on average T number of hours to write and test the IC device drivers. Some of the device drivers will take shorter amounts of time because of reuse, or because there are manufacturer-supplied code examples, and some will take longer because of unforeseen problems. This, however, all tends to average out over any given project.

Likewise, turn on and test is roughly equivalent to a certain amount of time to verify the testing of each firmware driver and each IC. This too will even out over the course of a typical project.

I found this result similar to what other people have found in different disciplines as well. For instance, Allen Holub, et al., have found that one can estimate software projects simply by counting the number of user stories [3]. I wager that many seemingly complex human endeavors can be accurately estimated in a similar manner. BUT don’t just take previously published results on faith; you probably can’t skip the data acquisition and analysis steps, because they confirm whether you are on the right track and give you the insight to determine whether the results apply to your specific workflow.

Using simple tools to get a good handle on things

It seems simplistic and obvious now that the analysis is done; however, it was anything but obvious what the outcome would be when I started this project. This is another case of applying an “analysis of variance” methodology [4] to a problem to get a handle on a process, be it design or production. As I found in my previous column on “managing chaos”, a few very simple tools can make a difference in understanding and getting a good handle on things.

Steve Hageman has been a confirmed “Analog-Crazy” since about the fifth grade. He has had the pleasure of designing op amps, switched-mode power supplies, gigahertz-sampling oscilloscopes, lock-in amplifiers, radio receivers, RF circuits to 50 GHz, and test equipment for digital wireless products. Steve knows that all modern designs can’t be done with Rs, Ls, and Cs, so he dabbles with programming PCs and embedded systems just enough to get the job done.


References

  1. Hageman, Steve, “Estimating PCB Design Time and Complexity”, PCB Design 007, March 1, 2011
  2. Hageman, Steve, “Managing Chaos”, EDN February 7, 2023. https://www.edn.com/managing-chaos/
  3. Holub, Allen, “No Estimates”, July 5, 2015. https://www.youtube.com/watch?v=QVBlnCTu9Ms
  4. Wheeler, Donald J, “Understanding Variation: The Key To Managing Chaos”, 1993, SPC Press, Knoxville, TN, ISBN: 0-945320-35-3

Infineon’s WBG catch-up with GaN Systems acquisition

Mon, 03/06/2023 - 18:27

At a dinner held during CES 2023 in January, I had the opportunity to talk with a senior Infineon executive sitting next to me about the acquisition ambitions the company had publicly expressed for this year. The rationale: while the semiconductor industry is going through a downward cycle, it’s a good window of opportunity to acquire strategic assets at a good price.

A few days later, when I sat down with GaN Systems CEO Jim Witham to get a sense of what’s new and latest in gallium nitride (GaN) semiconductor technology, what I found most striking was his remark about how quickly this market has emerged over the past decade or so. “If you go back five years, people were asking when GaN will happen; now that GaN devices have established themselves in the market, it’s a little late to join the GaN party,” Witham said. Now it’s really about market penetration, he added.

At that moment, I instantly began thinking about large power semiconductor players and what they will do to address this market reality. One company that came to mind was Microchip, which acquired Microsemi in 2018 to get hold of silicon carbide (SiC) assets. Very good timing, indeed.

Another supplier of wide bandgap (WBG) semiconductors, UnitedSiC, was acquired by Qorvo in 2021. But what about other big analog and power semiconductor suppliers? What also came to mind was Infineon’s failed attempt to acquire Cree’s SiC business, Wolfspeed, back in 2017. Cree later renamed itself after its most successful brand, Wolfspeed, and its SiC business has grown rapidly since then.

Figure 1 GaN semiconductors facilitate higher power density, higher energy efficiency, and smaller device form factor. Source: Infineon Technologies

Well, Infineon Technologies has answered the call by snapping up GaN Systems for $830 million. Industry watchers believe this could trigger a consolidation wave in the emerging WBG semiconductor industry. Besides Ottawa, Canada-based GaN Systems, other GaN semiconductor specialists include Cambridge GaN Devices, Efficient Power Conversion (EPC), Navitas Semiconductor, Transphorm, and Vanguard International Semiconductor.

Acquisition merits

While we’ve seen several SiC-centric acquisitions in recent years, Infineon’s deal marks the first major GaN asset purchase. And, given GaN’s crucial significance in power electronics, it’s very likely that it won’t be the last.

For Infineon, the number one power semiconductor business in terms of market share, it’s also critical to fill the missing GaN link in its power semiconductor portfolio. Though not a prolific acquisition player, Infineon has made two highly successful deals in recent years. First, in 2014, it bought power semiconductor pioneer International Rectifier to bolster its automotive offerings. The acquisition also brought Infineon some GaN technology assets as an add-on to the deal.

Next, in 2019, Infineon acquired Cypress Semiconductor to further boost its portfolio for the automotive markets. Both deals—International Rectifier and Cypress—have gone well for Infineon. So, given Infineon’s track record and prevailing market conditions, the GaN Systems purchase looks like a timely call.

Figure 2 GaN semiconductors have been incorporated into Canoo’s on-board charger (OBC) to convert AC power from the wall receptacle into the DC power that charges the EV battery. Source: GAN Systems

Besides strengthening Infineon’s GaN technology roadmap, the deal provides the German chipmaker timely access to applications such as mobile charging, data center power supplies, residential solar inverters, and on-board chargers (OBCs) for electric vehicles (EVs). As GaN Systems chief Witham points out, the deal will also combine Infineon’s in-house manufacturing with GaN Systems’ foundry capacity; TSMC is GaN Systems’ manufacturing partner.

Here, it’s important to note that in February 2022, Infineon announced it would double down on WBG manufacturing by investing more than €2 billion in a new front-end fab in Kulim, Malaysia. The first wafers will leave the fab in the second half of 2024, complementing Infineon’s existing WBG manufacturing capacity at its fab in Villach, Austria.

End of another chip startup story

It’s the end of the road for the Canadian startup founded in Ottawa in 2006. Girvan Patterson and John Roberts co-founded GaN Systems after seeing GaN as an opportunity and growth avenue in the power industry, especially in data centers, industrial motors, and mobile chargers. Ottawa, the Canadian capital, hosted a research facility, the National Research Council (NRC), which gave the upstart access to a small GaN fab, enabling fast learning cycles. “That’s why we could develop working GaN transistors fast enough,” Witham told EDN at CES 2023.

Figure 3 GaN Systems, based in Ottawa, has more than 200 employees.

GaN Systems has been a success story in the rapidly emerging WBG semiconductor arena. GaN semiconductors are now being targeted at a wide range of power applications due to their smaller form factor and energy-saving credentials, enabled by higher power density and energy efficiency. According to market research firm Yole, GaN revenue for power applications is expected to grow at a 56% CAGR to approximately $2 billion by 2027.

GaN Systems claims it’s the only GaN semiconductor company with a production program for automotive. At CES 2023, it displayed an OBC from Canoo and a DC/DC converter from Vitesco for EVs. The adoption of GaN devices in EVs is at a tipping point, as Infineon CEO Jochen Hanebeck acknowledged in the press release announcing this acquisition.

All this makes the GaN Systems buy an interesting acquisition.


Freeing a three-way LED light bulb’s insides from their captivity

Mon, 03/06/2023 - 15:10

To quote a past blog post of mine from mid-2019:

Some of my most popular blogs and teardowns have a common theme … LED light bulbs! You all sure love your incandescent-successor illumination sources!

Well, here’s another one ;-). At the beginning of this year, one of the archaic three-way incandescent bulbs in our living room table lamps fully (more on that adverb shortly) failed. Exemplifying that I too love incandescent-successor illumination sources, not to mention that I bleed green, I replaced it with an LED-based successor. And while I was at it, I replaced its equally geriatric (but still fully functional) companion in the other table lamp, relegating the incandescent predecessor to “emergency” spares storage.

The motivation here was admittedly at least partly aesthetic; since I was replacing the dead incandescent anyway, I also migrated from a “soft” (sometimes also called “warm”) white 3000K bulb to a more book reading-friendly 4000K “natural” (aka “cool” or “neutral”) white one. And since it’d look odd to have two lamps in the same room with different color temperature illumination sources…well…

Specifically, I found a three-way LED four-pack in claimed “Used-Like New” condition on Amazon Warehouse for less than $13, which was also less than half the brand-new four-pack equivalent price. Alas, as is sometimes the case with stuff acquired from the Warehouse area of Amazon’s website, while three of the bulbs worked fine, the fourth one was as dead as a doornail (or, if you prefer, a parrot). All’s good; Amazon promptly partial-refunded ¼ of what I paid to cover the cost of the nonfunctional luminary, and I didn’t even need to send it back. Prompting the one-word thought…you guessed it…“teardown!”

Part of my analysis motivation was personal. I’d conceptually already figured out how three-way incandescents worked; there were two filaments inside, with varying lumen light output capabilities, and the socket switch selected between “off”, “low-output filament”, “high-output filament” and “low-and-high output filaments at the same time” (strictly speaking, therefore, they’re actually “four-way” bulbs). Common combinations are 30/60/100W and 50/100/150W; to my earlier “fully” adverb foreshadowing, when one filament burned out the other would still sometimes work for a while…which is why I must confess to having a couple of other three-way incandescents also in “emergency” spares storage, one with masking tape stuck to it with the phrase “50W-only” scrawled on it, and the other adorned with “100W-only”. Thrifty of me, eh?

But how did the one/other/both/none switching work at the socket? And how did the operation translate to an LED bulb-based implementation? That’s what I wanted to find out, and to share with you. Voila our victim, without further ado:

Here’s where the initial functional action happens, at the base:

You might not notice what’s different, until you see this comparative image of a conventional (albeit dimmable) LED light bulb also in my teardown pile:

(By the way, notice the hint of what looks like a filament structure inside this dimmable LED bulb. You’re going to have to wait for a future teardown to find out more about that!)

See the extra contact ring in the three-way? It’s the key. Here’s what an incandescent three-way equivalent looks like, courtesy of Wikipedia’s entry:

And here are a couple of Wikipedia-provided images of an associated three-way socket:

I’d always figured that the three-way operational concept was pretty simple, but as the bulk of lights in my residences have either been traditional two-way (on-and-off) or fully variable dimmable, I’d just never bothered digging into the details until now. Here’s Wikipedia’s take:

A standard screw lamp socket has only two electrical contacts. In the center of the bottom of a standard socket is the hot contact (contact one in [earlier] photo), which typically looks like a small metal tongue bent over. The threaded metal shell is itself the neutral contact (contact three in photo). When a standard bulb is screwed into a standard socket, a matching contact on the bottom of the bulb presses against the metal tongue in the center of the socket, creating the live connection. The metal threads of the bulb base touch the socket shell, which creates the neutral connection, and this is how the electrical circuit is completed.

 A 3-way socket has three electrical contacts. In addition to the two contacts of the standard socket, a third contact is added. This contact is positioned off-center in the bottom of the socket (contact two in photo). This extra contact matches a ring-shaped contact on the bottom of a 3-way bulb, which creates the connection for the second filament inside the bulb… The center contact of the bulb typically connects to the medium-power filament, and the ring connects to the low-power filament. Thus, if a 3-way bulb is screwed into a standard light socket that has only a center contact, only the medium-power filament operates.

Here’s an animation of how it all plays out:
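Since the animation doesn’t reproduce here, the switching sequence boils down to the following table (the wattages assume the 50/100/150W combination mentioned earlier):

Switch position    Ring contact (low filament)    Center contact (medium filament)    Output
1 (off)            off                            off                                 none
2 (low)            on                             off                                 50W equivalent
3 (medium)         off                            on                                  100W equivalent
4 (high)           on                             on                                  150W equivalent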

That’s all well and good, but LED light bulbs don’t have filaments. So…??? This particular bulb doesn’t work, as you already know, and I’m not going to rip the globe off a perfectly good one to observe it in illuminated operation, but maybe if we look inside this one, we can hazard a reasonable guess. Onward and inward:

This is interesting. At the center is an MT7606E high-current precision linear LED driver from a Chinese company called Maxic Technology, a four-contact female connector (whose currently connected male equivalent, as you can see from the pins’ ends sticking through, is presumably soldered to the PCB in the bulb sleeve), and a passive (a resistor, if the PCB markings are correct). Surrounding them are two concentric rings of LEDs: 15 for the inner, and 36 for the outer. My guess is that the inner ring handles the 50W-equivalent “low” setting, while the outer does 100W-equivalent “medium” duties, and all LEDs illuminate for “high”.

Let’s keep going (new LED light bulbs are certainly easier to disassemble than their forebears!):

All we get is a peek at the PCB inside; for a fuller view, we first need to twist off the base:

Notice the three wires extending from the PCB to the bulb end; the white one goes to the center “hot” contact while the grey one goes to the off-center “hot” contact ring. Meanwhile the black “neutral” wire press-connects to the base during bulb assembly. Snip the first two wires and the base separates from the bulb end, providing a fuller perspective of both:

There we go:

Disconnecting the PCB from the LED plate gives us unobstructed visages of both sides of both:

Several big caps and a transformer up top:

And on the other side, a diode, two more resistors, two MB10F bridge rectifiers and a mystery IC labeled “MT7712S”. Presumably it’s also from Maxic Technology, but I can’t find any information on it online; knowledgeable reader insights are as-always welcome!

And that’s all she (or, more accurately in this case, he) wrote! I hope you’ve found this teardown “illuminating” (insert sad trombone sound). Sound off with your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


Intel expands ecosystem with quantum computing SDK

Fri, 03/03/2023 - 17:03

Intel’s Quantum SDK version 1.0 is a full quantum computer in simulation that allows developers to program quantum algorithms. The kit features a C++ programming interface using an industry-standard low-level virtual machine (LLVM) compiler toolchain. It also interfaces with Intel’s quantum hardware, including the Horse Ridge II control chip and quantum spin qubit chip, scheduled for release later this year.

“The Intel Quantum SDK helps programmers get ready for future large-scale commercial quantum computers,” said Anne Matsuura, director of Quantum Applications & Architecture, Intel Labs. “It will not only help developers learn how to create quantum algorithms and applications in simulation, but it will also advance the industry by creating a community of developers that will accelerate the development of applications, so they are ready when Intel’s quantum hardware becomes available.”

The development kit features a quantum runtime environment optimized for executing hybrid quantum-classical algorithms. Developers have the choice of two target backends for simulating qubits to either represent a higher number of generic qubits or Intel hardware.

The first backend is the Intel Quantum Simulator (IQS), an open-source generic qubit simulator capable of 32 qubits on a single node and more than 40 qubits on multiple nodes. The second is a target backend that simulates Intel quantum dot qubit hardware and enables compact model simulation of Intel silicon spin qubits.

The Quantum SDK 1.0 is available now on the OneAPI Intel Dev Cloud. Registration is required. To read more about Intel’s quantum computing efforts, click here.

Intel 


Nonlinear models ease GaN device simulation

Fri, 03/03/2023 - 17:03

Gallium Semi has released a library of nonlinear models for all of its broadband dual flat no-lead (DFN) plastic and air cavity ceramic (ACC) packaged GaN transistors. The 22 models are designed and validated with broadband S-parameters and load pull measurements to ensure accurate simulations.

Gallium’s GaN-based semiconductor devices can be used in a broad range of applications, including 5G mobile communication, aerospace, defense, industrial, scientific, and medical. Discrete transistors are available with varying output power levels and frequency ranges.

The library is available free of charge for qualified customers and is compatible with both the Cadence AWR Design Environment and the Keysight PathWave Advanced Design System (ADS) software. For product information and to request access to the models, click here.

Gallium Semiconductor


GaN system-in-package targets USB-C PD adapters

Fri, 03/03/2023 - 17:03

Weltrend announced its first GaN-based system-in-package integrating an AC/DC controller and Transphorm’s 240-mΩ, 650-V SuperGaN FET. The WT7162RHUG24A is intended for USB Type-C Power Delivery (PD) adapters used to charge smartphones, tablets, laptops, and other smart devices from 45 W to 100 W.

The multimode flyback PWM controller and GaN FET are housed together in a surface-mount 24-pin, 8×8-mm QFN package, achieving a power density of 26 W/in³. Key specifications include a peak power efficiency of greater than 93%, a maximum operating frequency of 180 kHz, and a wide output voltage range. The device is compliant with USB PD 3.0, which includes a programmable power supply (PPS) function that allows the output voltage to be adjusted in increments as small as 20 mV from 3.3 V to 21 V.

The WT7162RHUG24A operates in quasi-resonant (QR) mode during heavy load and discontinuous conduction mode with valley switching during light load. Built-in protection features include brown-out, overvoltage, overcurrent, and output short circuit.

Transphorm will showcase the Weltrend system-in-package at the 2023 Applied Power Electronics Conference (APEC 2023). Samples of the WT7162RHUG24A SiP will be available in the second quarter of 2023.

WT7162RHUG24A product page

Weltrend Semiconductor 

Transphorm


pureLiFi readies light antenna module for smartphones

Fri, 03/03/2023 - 16:19

Light Antenna ONE from pureLiFi leverages Light Fidelity (LiFi) wireless communication technology to transmit data over visible light. Smaller than a dime, the antenna module is poised to enable LiFi communication in millions of connected devices and smartphones.

A light antenna is an optoelectrical antenna that integrates into an end product the same way as a conventional RF antenna, enabling LiFi in connected devices. By harnessing the light spectrum, LiFi can deliver faster, more reliable, and more secure wireless communications compared to conventional technologies such as Wi-Fi and 5G.

The Light Antenna ONE will be compliant with the upcoming IEEE 802.11bb Light Communication standard, which is in its final stages of ratification. According to pureLiFi, the antenna module is optimized for performance, size, and cost to meet the production requirements of smartphone and connected-device manufacturers. The company believes this will enable LiFi integration at scale.

pureLiFi introduced the Light Antenna ONE at the recent World Mobile Congress (MWC 2023). A datasheet was not available at the time of this announcement.

pureLiFi


RedCap module seeks to expand 5G IoT reach

Fri, 03/03/2023 - 16:18

Quectel unveiled the Rx255C series of 5G NR Reduced Capability (RedCap) modules based on Qualcomm’s Snapdragon X35 5G modem-RF device. The wireless modules provide low-latency 5G communication while offering considerable savings in size, energy, and cost.

The RedCap specification introduced in 3GPP 5G NR Release 17 addresses wireless devices that do not require full 5G NR capabilities. RedCap devices will help drive the reach of 5G technology into a variety of new business verticals and mobile broadband scenarios, such as entry-level broadband, compute, industrial automation, smart city, smart energy, and smart wearables.

Rx255C modules support 5G standalone (SA) mode and a maximum bandwidth of 20 MHz on the sub-6 GHz frequency band. The series is backward-compatible with LTE networks and covers nearly all the mainstream carriers worldwide. Modules provide a theoretical peak downlink data rate of approximately 220 Mbps and an uplink data rate of about 100 Mbps.

To optimize the cost and size of the Rx255C modules, customers can choose the number of antennas, reduce the transmitting and receiving bandwidth, and select optional 64QAM/256QAM modulation. A wide range of interfaces is available, including PCIe 2.0 and USB 2.0, as well as supplementary functions like Voice over LTE and firmware over-the-air upgrades.

Quectel debuted the Rx255C series at the recent Mobile World Congress (MWC 2023). Engineering samples of the Rx255C will be available in the first half of 2023. A datasheet was not available at the time of this announcement.

Quectel Wireless Solutions


Silicon carbide (SiC) and the road to 800-V electric vehicles

Thu, 03/02/2023 - 18:43

The move from 400-V to 800-V battery systems in electric vehicles (EVs) brings silicon carbide (SiC) semiconductors to the fore in traction inverters, on-board chargers (OBCs), and DC/DC converters. According to market research firm IDTechEx, two drivers are critical in the move from 350-400 V to 800 V powertrains: higher power levels of DC fast charging (DCFC) like 350 kW and drive cycle efficiency gains.

However, while 350-kW DCFCs aren’t widely available yet, drive cycle efficiency gains are already crucial in reducing power losses and downsizing high-voltage cabling in EVs. In particular, the move to SiC MOSFETs can lead to 5-10% efficiency gains, which can downsize the expensive battery, save costs, and improve the vehicle’s range. Other potential areas for drive cycle efficiency encompass battery chemistry, high-voltage cable reduction per vehicle, and improved motor design.

Figure 1 The emergence of wide bandgap (WBG) materials like SiC will transform power density, energy efficiency, and packaging in power system designs for EVs. Source: IDTechEx

Many carmakers and tier 1 suppliers are embracing 800-V drive systems to achieve much faster charging and help reduce EV weight. The 800-V drivetrains, at twice the voltage of today’s 400-V systems, can cut charging times in half. Take the Hyundai Ioniq 5 and Kia EV6, which can draw 200 kW and go from a 10% to an 80% charge in 18 minutes. As a result, 800-V EVs will reduce range anxiety by enabling much faster charging times.
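The arithmetic behind the cabling and loss benefit is straightforward; the 350-kW figure is the DCFC level cited earlier, while the cable resistance is an illustrative assumption:

```python
P = 350_000      # W, the DC fast-charging power level cited earlier
R_CABLE = 0.01   # ohm, illustrative cable resistance (assumed)

for v in (400, 800):
    i = P / v              # current required at this bus voltage
    loss = i**2 * R_CABLE  # I^2*R conduction loss in the cable
    print(f"{v} V: {i:.0f} A, cable loss {loss/1000:.1f} kW")

# Doubling the voltage halves the current and quarters the I^2*R loss,
# which is what lets designers downsize the high-voltage cabling.
```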

However, the IDTechEx press release notes that the move to 800 V in EVs has been a mixed bag. Lucid Air, the first 900-V EV in production, sold around 7,000 units in 2022 after setting an initial target of 20,000 cars. Likewise, Porsche’s Taycan sales declined in 2022. In both cases, parts shortages and supply chain woes like wire harness shortages due to the Russia-Ukraine war have been widely linked to commercial pitfalls.

On the other hand, Hyundai’s 800-V vehicles, the Ioniq 5 and Kia EV6, doubled their sales in 2022, selling around 70,000 units in a year while taking EVs out of the luxury segment and into the mainstream. Here, it’s worth mentioning that Hyundai spent 2022 diversifying its SiC supply chains, bolstering its existing relationships with Infineon and Vitesco, and signing new deals with onsemi and STMicroelectronics.

Figure 2 SiC devices, which have a higher voltage rating in relation to their die size, are rapidly gaining traction in EV applications and fast-charging EV infrastructures. Source: STMicroelectronics

The success of Hyundai’s EVs, when seen in conjunction with the automaker’s efforts to bolster SiC-related supply chains, points toward this WBG technology’s crucial role in the future of 800-V cars. Especially when drive cycle efficiency comes to the fore of powertrain design centered around the power density, energy efficiency, and reliability attributes.

The move to 800-V cars is another sign that the time has come for high-voltage WBG power electronics, and here, the role of SiC semiconductors will be vital for high-voltage EV batteries and chargers.


RMS stands for: Remember, RMS measurements are slippery

Thu, 03/02/2023 - 16:43

The RMS value of a time-dependent voltage or current is based on the simple concept of equality of power dissipation; however, its theoretical calculation or instrument implementation may not be easy. Reference 1 explains the principles of RMS measurements and compares the performance of 20 digital multimeters (DMMs). The results are alarming: many units report lower RMS values than expected, and for some instruments, the numbers are almost half of what they should be.

Twenty years later, the traps of RMS measurements are still there. Technology has made a large step forward, and most people trust the readings of their true-RMS multimeters and simulation tools without verification. However, there are important details. This article aims to improve awareness of RMS measurements.

The generic formula to calculate the RMS value of a signal is:

$$V_{RMS} = \sqrt{\frac{1}{T}\int_{0}^{T} v^{2}(t)\,dt} \qquad (1)$$

The calculation takes three steps: squaring the AC signal, finding the mean (also called the average or DC) value of the squared signal, and taking the square root of that mean; thus the name RMS.

The easiest way to verify if an instrument does the right job is to apply a square-wave signal to it, as the expected output can be calculated with simple graphical analysis. Figure 1 shows an example with a 5-V single polarity signal. Recall that the average value calculation transforms the time dependent signal into a box that spans the whole period of the signal and has the same area as the area under the signal; the height of the box is the average value. We end up with the RMS value of 3.535 V.


Figure 1 A graphical calculation of the RMS value.
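The same result can be checked numerically; the signal below matches Figure 1, a 0-to-5-V square wave at 50% duty cycle:

```python
import math

V_HIGH, V_LOW, DUTY = 5.0, 0.0, 0.5  # the Figure 1 signal

# Mean of the squared signal, then the square root of that mean
mean_square = DUTY * V_HIGH**2 + (1 - DUTY) * V_LOW**2
v_rms = math.sqrt(mean_square)
print(f"{v_rms:.3f} V")  # 3.536 V, matching the graphical result
```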

Let us check if the equipment produces the same result: Figure 2 shows the performance of a simulation tool. The square-wave signal is connected to a DC voltmeter on the left and to a probe and an AC voltmeter on the right. The DC value is correct (be careful, as this is the DC value of the input, not of the squared input signal), and the RMS value reported by the probe is correct; however, the reading of the AC voltmeter is far below expectations.

Figure 2 The RMS numbers reported by the probe and the AC voltmeter do not match in the simulation.

Things become clear when we remove the DC component in the input signal and calculate the expected RMS value. The graphical analysis in Figure 3 yields the value of 2.5 V.

Figure 3 RMS calculation when the input signal has no DC component in it.

Obviously, the AC voltmeter silently blocks the DC component of the input signal and defines the RMS value of the modified signal.

You can still get the correct RMS result by plugging the readings of the two meters in Figure 2 into the following formula:

$$V_{RMS} = \sqrt{V_{DC}^{2} + V_{AC}^{2}} \qquad (2)$$

The same problem appears with many digital multimeters and some digital oscilloscopes. They produce correct RMS values for DC-free signals and wrong values when the signal has a DC component in it. Some oscilloscopes offer two options for RMS measurements called DC RMS and AC RMS which can be confusing. How can one survive this?

The fastest way is to test the instrument with a square-wave signal. Oscilloscopes have a built-in generator that provides a unipolar square-wave signal for adjusting probe compensation. Connect that signal to the input, select DC coupling, and measure VMAX and VMIN. Using the two results, calculate the mean and the RMS value of the signal as presented in Figure 2. Then measure the mean and RMS values; the two numbers should differ. Check if they match the calculated values. Now subtract the mean value from VMAX and VMIN and calculate the RMS value as presented in Figure 3. Select AC coupling; the mean and the RMS readings should be very close. Check if the numbers match the calculated values.

DMMs need an external signal to test DC and AC performance. The easiest way is to apply the probe-compensation signal of a scope to the meter. The advantage is that you already know the expected results. Be careful: the unit may display slightly different numbers than calculated even though it blocks the DC component of the signal. This is due to measurement error, an inherent feature of every measurement.

If you want to skip manual calculations, you can use the simple circuit presented in Figure 4. Similar to the circuit in Reference 2, it generates three signals with different duty cycles. The table displays expected values of VDC, VAC and VRMS for each signal.

Figure 4 A simple circuit can provide three test signals with a frequency of 1 kHz and different duty cycles (D). The values of VDC, VAC and VRMS change accordingly.

Use the circuit and the table to tell whether you can take the RMS reading of your instrument as is, or if you must measure VDC and VAC separately and use formula (2) to calculate VRMS.

You can also use a function generator to create these signals. Make sure the signal spans from 0 to 5 V, otherwise you cannot rely on the numbers in the table.
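If you want to recompute the expected table values yourself, they follow in closed form from the same graphical analysis: for a 0-to-5-V square wave with duty cycle D, VDC = 5D, VRMS = 5√D, and VAC = 5√(D(1−D)). A short sketch:

```python
import numpy as np

A = 5.0   # the signal spans 0 to 5 V
for D in (0.25, 0.50, 0.75):
    vdc = A * D                        # average (DC) value
    vrms = A * np.sqrt(D)              # true RMS of the unipolar square wave
    vac = A * np.sqrt(D * (1 - D))     # what a DC-blocking meter reports
    print(f"D = {D:.2f}: Vdc = {vdc:.3f} V, Vac = {vac:.3f} V, Vrms = {vrms:.3f} V")
```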

So, be careful with RMS measurements, as wrong results can lead to wrong conclusions and wrong decisions. As a rule, it takes only a (very) short time to fix such problems. To avoid sleepless nights, make sure you understand how your equipment works before you start measuring.

A recent video (Reference 3) provides examples of RMS measurements with many instruments and various waveforms, crest factors and frequencies. Give it a watch!

References

  1. Williams, J. and T. Owen, “Understanding and selecting rms voltmeters,” EDN, May 11, 2000, pp. 54-58.
  2. Stofka, M., “Waveform generator produces 25 and 75% duty cycles,” EDN, Mar 18, 2010.
  3. “Measuring RMS Voltages: How true is your True RMS multimeter really?”, https://www.youtube.com/watch?v=jhy_kfhwwbo

Jordan Dimitrov is an electrical engineer & PhD with 30 years of experience. He teaches electrical and electronics courses at a Toronto community college.


The post RMS stands for: Remember, RMS measurements are slippery appeared first on EDN.

Apple’s latest product pronouncements: No launch events, just news announcements

Wed, 03/01/2023 - 16:40

Back in early October 2022, a month after Apple’s most recent product launch event, I wrote:

I’m wondering if Apple will be skipping its second fall event this year, instead choosing to announce whatever (if anything) it plans to only via press releases. And if so, I wonder what this might say about Apple’s current technology and product health: is it having difficulties taping out higher-end Pro and Max variants of the M2 SoC unveiled in June, following in the footsteps of the M1 and its M1 Pro and Max bigger siblings, for example? And/or is foundry partner TSMC having problems getting them to yield sufficiently to allow for system production based on them? And speaking of M2-based systems, is an Apple Silicon-based successor to the legacy x86-based Mac Pro and/or iMac Pro still in the cards for 2022? Inquiring minds want to know.

I instead filled that early-October editorial “slot” with coverage of the latest from Amazon and Google. As it turns out, I was right; there (atypically) was no second Apple fall event in 2022 (although as you’ll soon see, there’s intriguing evidence that one was originally planned). Instead, since September all we’ve gotten is a string of press releases, which is…interesting. And as I write these words in late February 2023, there’s been no hint of an upcoming (typical) spring event, either. Judging from the steady stream of leaks, Apple seemingly continues to work on its upcoming “mixed reality” headset, initially rumored to be released this spring, although the latest prognostications peg it for this summer’s yearly Worldwide Developer Conference (WWDC). Truth be told, WWDC is where I always suspected the “reveal” would happen, since new product platforms rely heavily on third-party app and service embraces for success.

That all said, Apple’s press release-unveiled products are still interesting, at least somewhat and at least some of them. I’ll begin with the M2 Pro and Max follow-ons to the baseline M2 SoC unveiled at last June’s WWDC. For chronological comparison purposes, here’s when the M1 SoC family rolled out:

The dual-die interposer-linked M2 Ultra (not to mention the rumored quad-die M2 Extreme) hasn’t arrived yet and, if rumors are to be believed, may never appear. But the M2 Pro and Max are here, as of mid-January. Conceptually, the evolutionary path is reminiscent of the earlier M1 family trajectory:

  • The “Pro” variant has more CPU cores than the baseline precursor, with an altered performance-vs-efficiency core proportion, along with more graphics cores, while
  • The “Max” variant is identical to its “Pro” sibling from a CPU subsystem standpoint, albeit with a doubled-up graphics subsystem.

Both devices’ architectural upgrades come with commensurate larger transistor-count budgets, with all three family members on a common “second-generation 5 nm” lithography foundation (and benefitting from common M2-family enhancements over M1 predecessors). Specifically:

M2:

  • 8-core CPU (4 “performance”, 4 “efficiency”)
  • Up-to-10 (functional) graphics cores
  • 20 billion transistors

M2 Pro:

  • Up-to-12 (functional)-core CPU (6 or 8 “performance”, 4 “efficiency”)
  • Up-to-19 (functional) graphics cores
  • 40 billion transistors

M2 Max:

  • Up-to-12 (functional)-core CPU (6 or 8 “performance”, 4 “efficiency”)
  • Up-to-38 (functional) graphics cores
  • 67 billion transistors

If you’re wondering why I repeatedly clarified these sets of specs with the word “(functional)”, allow me to explain. If you look at a die shot of the M2 Pro, for example, you’ll see all 12 CPU cores and 19 graphics cores present there. But each of them takes up a not-insignificant chunk of silicon area, leading to a statistically not-insignificant probability that a fabrication flaw (particulate, etc.) will end up within one of them versus elsewhere on the die. In such a case, instead of discarding the entire sliver of silicon, Apple and its manufacturing partners “fuse” off the flawed circuitry and ship the chip as a lower-core-count, lower-priced product option.

This methodology, also commonly used (for example) by graphics chip companies, is why the M2 Pros within Apple’s systems come in versions with both six- and eight-“performance” CPU cores, along with sixteen- and nineteen-graphics core allocations (currently, at least: additional future proliferations are of course also possible). The SoC’s “efficiency” CPU cores, in contrast, are not only few in number but also comparatively tiny, and are therefore comparatively immune to flaw-induced failures. Such a strategy might seem wasteful at first glance, and in a sense, it is: you end up solely with large-die chips that consume more foundry wafers than a number of smaller-die chips would require. But you also reduce the number of “line items” that not only have to be designed upfront but must also be ongoingly managed throughout the supply chain.

About that “19” graphics core count; yes, I agree, it’s an odd number to see for us engineers who are used to encountering only even counts of things (and power-of-two ones, at that). But according to Apple, that’s the actual number of cores on the die, and the optimum number from a layout efficiency standpoint. And regarding my earlier question, “Is Apple having difficulties taping out higher-end Pro and Max variants of the M2 SoC unveiled in June?”, in fairness I should point out that my last-fall concerns here were for naught. The delay from the launch of the M2 SoC (including systems containing it) to that of the M2 Pro/Max ended up being seven months; the prior launch cadence from the M1 to the M1 Pro/Max was 11 months.

About those systems…Apple also unveiled several in January: 14” and 16” MacBook Pros based on the M2 Pro and Max SoCs:

along with upgraded variants of the Apple Silicon-based Mac mini containing both M2 and M2 Pro processor options:

The M2 Pro version of the Mac mini, whose backside is shown here, was most interesting to me. For one thing, as you can see, it (unlike its M1 and base M2 siblings) addressed the comparative I/O and maximum-driven-displays shortcomings versus the prior-generation Intel-based Mac mini that I’ve mentioned before. For another, benchmark testing I’ve seen suggests that it’s a compelling alternative to the M1 Max version of the Mac Studio introduced a year ago, while at the same time being not only more svelte but also lower priced. This isn’t the only situation where the Mac Studio seems to already be getting squeezed…hold that thought a bit.

But speaking of price, unless you can’t afford anything more, I’d suggest staying away from the lowest-end M2 Mac mini. Not only does it contain only 8 GBytes of RAM and a 256 GByte SSD (both non-upgradeable), the SSD is (following in the footsteps of the M2 MacBook Air and MacBook Pro) slower than the same-capacity SSD in the M1 precursor. Regular readers already know that I’m not shy about shaming Apple for exhibiting obsolescence by design whenever I think it’s happening, but in this case I think the root cause is more innocent.

As I’ve mentioned before, a larger number of simultaneously accessible flash memory die (and chips based on them) within an SSD sometimes leads to higher overall performance, assuming the media management controller is capable of exploiting this parallelism potential. Balance that, however, against the fact that memory chips are getting increasingly dense thanks to shrinking fabrication lithographies (along with ever-more exotic multi-level cell data storage techniques, although I don’t think the latter is a factor in this particular case). The lowest-capacity SSD in the M2 Mac mini has half the chips of that in the comparable-capacity M1 predecessor, leading (I suspect) to the retrograde speed.

As I alluded to in a recent teardown, Apple also resurrected the full-size HomePod in January:

If you’re thinking that it looks just like the first-generation HomePod, introduced in mid-2017 and discontinued in the spring of 2021, you’d be almost right; they’re near-identical save for slight dimensional variances. And addressing my earlier criticism, the redesigned version does come with a $50 lower price tag ($299 for the second-gen versus $349 for the first-gen), although consider that the competitive high-end Amazon Echo Studio is still $100 less ($199) regular-priced, and is regularly promotion-discounted even further. Presumably to lower the BOM cost, Apple reduced the number of tweeters from seven (first-gen) to five (second-gen); that said, reviewers report that the two versions sound near-identical. The second-gen unit also back-steps from a Wi-Fi perspective, from 802.11ac to 802.11n. And adding further insult to injury, the two generations can’t be paired to each other in a stereo configuration, presumably due at least in part to their differing SoC foundations (iPhone-tailored A8 vs watch-tailored S7). Or maybe Apple just wants you to retire your first-gen HomePod and buy a replacement, too…

Oh, by the way…I earlier “teased” that although there was no second Apple fall event in 2022, there’s intriguing evidence that one was originally planned. Here’s the back story. Along with the various press releases published in January 2023, Apple also unveiled a slick 18-minute accompanying video reminiscent of the ones done at prior virtual events, and focused on the new M2-based systems (reminiscent of the systems launched at prior October events):

That video, it turns out, contained metadata dating from October 2022. Eureka!

Thus concludes January 2023. But these weren’t the only products Apple’s released since its last event in early September 2022. October 2022 did bring us a few more press releases: of a new baseline iPad in four (vibrant, so say Apple’s marketeers) color options:

Upgraded 11” and 12.9” iPad Pros based on the M2 SoC (at a time, candidly, when the performance potential of their M1-based precursors is still largely going untapped, even with latest-generation iPadOS 16):

And a next-generation Apple TV 4K that starts at $129, in contrast to (for example) Roku’s high-end 2022-model Ultra set-top box, regularly priced $30 less than this ($99.99) and, like the Echo Studio I mentioned earlier, frequently promo-priced even notably lower:

As you can probably tell, I was underwhelmed.

This all leads to “what’s next” ponderings. I’ve already mentioned the “mixed reality” headset. But what about computers, and the chips they’re based on? As I discussed in June 2022, and is still the case as I write these words nearly a year later, Apple has blown way past its original “two year” forecast to convert its entire computing product line from Intel to Apple Silicon. With the aforementioned release of the M2 Pro-based Mac mini, one more Intel-based bowling pin has fallen. But the Intel-based Mac Pro remains standing. And even if Apple forcefully topples it over, its customers might not be pleased with what takes its place.

Here’s my take. For years now, the bulk of Apple’s computing portfolio has consisted of products (various MacBook variants, along with the iMac and Mac mini) that were:

  • Modestly powerful at-best
  • Often power- (if battery-operated) and/or thermal- (if small form factor) constrained, and
  • Rarely if ever end-user upgraded, assuming they could even be

Turns out, Apple’s Arm-based SoCs were pretty much perfect for such scenarios. But high-end professional (content-creation, etc.) systems like the Mac Pro are a completely different beast. Their users have long been “trained”, and therefore now have the reasonable expectation, that not only will these systems be wicked-fast (trading off power consumption and heat dissipation in the process), they’ll also be long-term upgradeable: higher-capacity mass storage, more RAM, newer and more powerful graphics cards, oodles of expansion busses and I/O, etc.

Those aspirations, unfortunately for Apple, don’t line up well against systems based on highly integrated SoCs with non-upgradeable graphics and dollops of memory. And here’s another thing: I already mentioned earlier that the latest Mac minis are squeezing the just-introduced-a-year-ago Mac Studio from below. I’m assuming that, given how long Apple’s been promising us an Arm-based Mac Pro, the company will sooner or later be compelled to follow through and deliver something. But if it’s as non-upgradeable as I suspect it will be, won’t it also squeeze the Mac Studio from above? And if so, why’d Apple even bother introducing the Mac Studio? Then again, we’ve been here before: the company was forced (IMHO) to unveil a “Pro” iMac variant to fill the time gap between the underwhelming “trash can” Mac Pro and its belated successor.

Other rumors include a finally-15” variant of the MacBook Air and an SoC-upgraded (24”?) iMac, one of the first Apple Silicon-based computers Apple released but, as I write these words, subsequently still stuck at the M1 generation. We shall see. Until next time (WWDC?), I look forward to your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


The post Apple’s latest product pronouncements: No launch events, just news announcements appeared first on EDN.

Closing the gaps in your digital oscilloscope waveforms

Tue, 02/28/2023 - 18:17

It is no secret that digital oscilloscopes are sampled data instruments and do not acquire a continuous record of the input signal. We know from sampling theory that the input signal, properly sampled at greater than twice the signal bandwidth, can be recovered from the acquired samples. So, how do the samples stored in the acquisition memory get converted into a continuous signal? Also, how can measurements made on sampled data values be accurate? Finally, how can we measure time intervals smaller than the sampling period? The answer to these questions is simple: interpolation!

 Display interpolation

Interpolation is the addition of computed sample points between the acquired signal samples. This increases the effective sample rate but does not improve the bandwidth of the acquired signal.  The effect of interpolation is to fill in the gaps in the waveform as shown in Figure 1.

Figure 1 The top trace shows a signal rendered showing only the real sample points.  The lower trace shows the same signal with interpolation turned on. The interpolated points fill in the gaps between the real sample points which are highlighted as intensified dots.

Most digital oscilloscopes offer a choice of either of two interpolation processes: linear or sin(x)/x interpolation for display interpolation. The interpolation method is generally selected in the input setup. In the oscilloscope used in the example, interpolation is individually controlled for each input channel; in other oscilloscopes, interpolation is global, affecting all acquisition channels. Linear interpolation basically assumes that a straight line connects the real samples. This can be implemented by convolving a triangular window function with the signal. One way to do this is to use an appropriately configured digital filter.
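As an illustration of that digital-filter view (a sketch, not any scope’s actual implementation), linear interpolation by a factor L can be written as zero-stuffing followed by convolution with a triangular kernel:

```python
import numpy as np

def linear_interpolate(samples, L):
    """Upsample by L: insert L-1 zeros between samples, then convolve with a
    triangular window. The result passes exactly through the real samples
    with straight lines between them, i.e., linear interpolation."""
    stuffed = np.zeros(len(samples) * L)
    stuffed[::L] = samples
    kernel = 1.0 - np.abs(np.arange(-L + 1, L)) / L   # triangle, peak 1, width 2L-1
    return np.convolve(stuffed, kernel)[L - 1 : L - 1 + len(stuffed)]

print(linear_interpolate(np.array([0.0, 1.0, 0.0]), 4))  # ramps up, then back down
```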

Sin(x)/x interpolation convolves a sin(x)/x function with the signal. The sin(x)/x, or SINC function, in the time domain has a frequency spectrum of a low pass filter as seen in Figure 2.

Figure 2 The sin(x)/x function in the time domain (upper trace) has a low pass filter response in the frequency domain (lower trace).

The bandwidth of the sin(x)/x frequency response is the reciprocal of the period of the oscillation in the sin(x)/x function. Since convolution in the time domain is multiplication in the frequency domain, the sin(x)/x interpolation is basically a low pass filtering operation. 
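For reference, the underlying math can be sketched directly as the Whittaker–Shannon reconstruction sum; this brute-force version (again, not how a real scope implements its filter) evaluates the band-limited reconstruction at arbitrary times:

```python
import numpy as np

def sinc_interpolate(samples, fs, t):
    """Whittaker-Shannon reconstruction: evaluate the band-limited signal
    implied by `samples` (acquired at rate fs) at arbitrary times t.
    np.sinc(x) is sin(pi*x)/(pi*x), so each term is x[n]*sinc(fs*t - n)."""
    n = np.arange(len(samples))
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return np.sinc(fs * t[:, None] - n) @ samples
```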

Both the linear and sin(x)/x interpolation methods have increased validity as the ratio of the sample rate to the bandwidth (the oversampling ratio) increases. Interpolation always improves as the sample rate is made higher for a given bandwidth. There are, however, some differences in performance. Linear interpolation works very well when the oversampling ratio is at least ten-to-one. Examples showing linear interpolation with different oversampling ratios are shown in Figure 3.

Figure 3 Examples of the performance of the linear interpolator on a 500 MHz sine wave with oversampling ratios of 20:1 (top left), 10:1 (middle left), 5:1 (bottom left), 2:1 (top right). The persistence display (center right) of the 2:1 case shows that it is still a sinewave.

While not visually ‘pretty’, all versions are technically correct. If infinite display persistence is turned on, the discontinuous-looking waveforms will trace out the original sinewave as varying phases of the signal are sampled. The use of persistence to view a history of multiple acquisitions is an operating hint that can be useful when dealing with sampled waveforms with low oversampling ratios.

Sin(x)/x interpolation works very well with oversampling ratios greater than two-to-one. It does have issues if the oversampling ratio drops below two-to-one, as shown in Figure 4.

Figure 4 Comparing linear (top traces) and sin(x)/x interpolation (bottom traces) on a step function with a 27ns risetime with sampling rates of 250MS/s (left traces) and 25 MS/s (right traces).

The step function is a lower frequency signal that has high frequency components due to the transition in the middle. The 27ns rise time of the step has a nominal bandwidth of 13 MHz. Both interpolation methods work fine at the 250 MS/s sampling rate, approximately a 20:1 oversampling. The 25 MS/s rate, with a sample period of 40ns per point, is slightly less than a 2:1 oversampling ratio. The linear interpolator has only a single sample on the edge and will not define the risetime correctly but the waveshape is basically correct. The sin(x)/x interpolator is operating below the Nyquist limit and is showing pre-shoot and overshoot that is not really on the waveform, an effect called “Gibbs Ears”. So, it is important to keep an eye on the sampling rate and make sure it is greater than the Nyquist limit when using any interpolator.

Interpolation math function

The oscilloscope used in this article offers interpolation as a math function as well. The math function version includes linear, sin(x)/x, and cubic interpolation. Cubic interpolation fits a third order polynomial between samples. Its performance, in terms of computational speed, is intermediate between sin(x)/x and linear interpolation. The interpolation math function allows a user selectable interpolation factor between 2 and 50 interpolated samples between acquired sample points. Figure 5 shows an example of a 5:1 interpolation using the math function.

Figure 5 The controls for the interpolator math function setup to increase the number of samples by a factor of five using a cubic interpolator.

The interpolator math function offers greater flexibility, with a wide range of up-sampling factors and controls to customize the interpolation filter. Unlike the input channel interpolator, the math function allows viewing of both the input and output of the interpolator simultaneously to check for proper responses.

The interpolation math function allows users to increase the number of samples in a waveform, which can be useful before applying the signal to a digital filter where the cutoff frequency of the filter is a function of the sampling rate. It is also useful in characterizing waveform measurements, as discussed in the next section.
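The scope’s exact cubic kernel isn’t documented here, but a cubic-spline up-sampling sketch (using SciPy as an assumed stand-in for the math function) shows the same kind of 5:1 operation as in Figure 5:

```python
import numpy as np
from scipy.interpolate import CubicSpline

fs = 50e6                                  # assumed 50-MS/s acquisition
t_acq = np.arange(64) / fs
acq = np.sin(2 * np.pi * 1e6 * t_acq)      # acquired samples of a 1-MHz sine

spline = CubicSpline(t_acq, acq)           # fit cubic segments between samples
t_up = np.linspace(t_acq[0], t_acq[-1], 64 * 5)   # 5:1 denser time base
up = spline(t_up)                          # up-sampled record, as in Figure 5
```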

Measurement interpolation

Timing measurements in oscilloscopes are performed by finding the time at which the waveform crosses a voltage threshold. The time between crossings of the same slope yields a period measurement. Similarly, the difference in crossing time between edges with opposite slopes gives a width measurement. In many cases, the rise time of the signal is very fast, and with a sampling rate of, say, 20 GS/s there are only a few samples on the edge. Simply drawing a line between the samples around the threshold is the most obvious choice for finding the crossing; however, this can lead to large errors when the samples are not symmetrically spaced on either side of the threshold. Interpolation is used internally during measurements to locate measurement threshold crossings more exactly, with a precision much better than sample-period intervals. The measurement process uses a dual interpolation, where cubic interpolation is used to add samples between the acquired samples, and the threshold crossing time is found by linear interpolation between the two interpolated samples on either side of the threshold, as shown in Figure 6.

Figure 6 Using cubic interpolation in combination with linear interpolation to increase the time resolution of the internal timing measurements in a digital oscilloscope.

Time is measured at the point where the waveform amplitude crosses a predefined threshold. Samples are spaced at the sample interval (50 ps at 20 GS/s for this example). Cubic interpolation is applied to the waveform, followed by linear interpolation of the points nearest the crossing to find the exact time of the threshold crossing. The resultant measurement has much greater time resolution than the raw samples spaced at the sampling period. Cubic interpolation is used because it provides greater computational efficiency, combining accurate sample insertion with greater calculation speed than sin(x)/x interpolation.
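A rough software analogue of this dual-interpolation scheme, under the assumption of a cubic-spline densifying stage, might look like this:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def rising_crossing_time(t, v, threshold, upsample=10):
    """Mimic the dual-interpolation measurement: densify the record with a
    cubic interpolant, then linearly interpolate between the two dense
    samples that straddle the threshold on a rising edge."""
    dense_t = np.linspace(t[0], t[-1], len(t) * upsample)
    dense_v = CubicSpline(t, v)(dense_t)
    # index of the first dense sample pair that brackets the threshold
    idx = np.where((dense_v[:-1] < threshold) & (dense_v[1:] >= threshold))[0][0]
    t0, t1 = dense_t[idx], dense_t[idx + 1]
    v0, v1 = dense_v[idx], dense_v[idx + 1]
    return t0 + (threshold - v0) * (t1 - t0) / (v1 - v0)  # final linear step
```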

 Timebase interpolator

A less familiar but even more important interpolator is the one that measures the sub-sample time delay between the trigger event and the sample clock. Generally, the trigger event is asynchronous with the oscilloscope’s sample clock, so the sampling phase, or horizontal offset, of each acquisition is random. If you were to histogram the time from the trigger to the first sample, it would exhibit a uniform distribution between zero and one sample period. Due to the random horizontal offset, a persistence display of multiple waveforms shows all possible locations of the sample points, as was shown in Figure 3.

A stable triggered display requires that each acquired waveform trace be aligned with the trigger point in exactly the same time location. For a timebase with no trigger delay offset, the trigger location is usually at time zero. Measuring the time difference between the trigger and the sample clock is done using a device called a time-to-digital converter (TDC), basically a high-resolution counter, to measure the time delay. This time delay is the horizontal offset of the waveform. When the waveform is displayed, the horizontal offset is used to line up the triggers from multiple acquisitions, Figure 7 shows six acquisitions of a complex waveform.

Figure 7 Six acquisitions of an ultrasonic waveform (top grid) are each horizontally expanded using the zoom traces to show the horizontal offset for each trace in the lower grid. The labels Z1 through Z6 point to the real sample points before each trigger which is marked by the cursor at t=0.

The area about the trigger was expanded using horizontal zoom to see the range of variation in the horizontal offsets of the six acquisitions. The sample period is 20 ns (50 MS/s). For the six acquisitions, the horizontal offset varied between 2.5 ns and 17.7 ns before the trigger at t=0. This is within the one-sample-period range previously discussed. The time resolution of the TDC depends on the specific oscilloscope model and is related to the oscilloscope’s maximum sample rate. The oscilloscope specification that summarizes TDC performance is “trigger and interpolator jitter”. For high-performance oscilloscopes, that specification is typically less than 2 ps rms. Oscilloscope designers have improved this using software-assisted triggering, reducing this specification to less than 0.1 ps. The use of the TDC along with software-assisted triggering makes precise measurement of time-related events like jitter possible. Without the TDC hardware and software, time measurement resolution would be limited to the sampling period.
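Conceptually, the correction is simple: each record’s time axis is shifted by its TDC-measured offset so that every trigger lands at t = 0. A toy sketch with assumed values:

```python
import numpy as np

rng = np.random.default_rng(0)
Ts = 20e-9                                  # 20-ns sample period (50 MS/s), as in Figure 7
# TDC-measured trigger-to-first-sample delays for six acquisitions:
# uniformly distributed over one sample period (illustrative values)
offsets = rng.uniform(0, Ts, 6)

for k, off in enumerate(offsets, start=1):
    t = np.arange(1000) * Ts - off          # shifted time axis puts the trigger at t = 0
    print(f"acq {k}: first sample at t = {t[0] * 1e9:6.1f} ns")
```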

Conclusion

Interpolation is an extremely useful tool in an oscilloscope. It is a method of filling the gaps in sampled data records, usually applied to improve measurement accuracy or display interpretation.

Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.


The post Closing the gaps in your digital oscilloscope waveforms appeared first on EDN.

Thermoelectric generators are getting more R&D attention

Mon, 02/27/2023 - 18:16

Thermoelectric generators (TEGs) use heat—or more accurately, temperature differences—and the well-known Seebeck effect to generate electricity. Their applications range from energy harvesting of available heat and especially “wasted” heat in industrial and other situations, to being the heat-to-electrical energy converter using radioactive-based powers sources for spacecraft in radioisotope thermal generators (RTGs).

TEG-based RTGs use the heat of natural decay of plutonium-238. They have been used in nearly every space mission since 1961 (see References) as well as for remote Earth-based applications. They don’t get a lot of attention compared to highly visible, clean-looking, often dazzling solar panels in space, but the reality is that solar panels alone can’t provide adequate power themselves, even for many orbiting or close-to-Earth missions. The option of electrochemical batteries is a non-starter as they don’t function in the intense cold of space, which is at about 2.7 K if there is no solar-heating effect.

TEGs, as with most energy-harvesting transducers and arrangements, seem like a good idea in principle, since you are getting something worthwhile for almost nothing. In practice, however, they have several drawbacks: they are relatively hard to manufacture (especially in bulk), and they are inefficient (around 10%). That efficiency figure, while low, is often acceptable when the heat would otherwise be wasted, or there is no other viable choice.

We usually associate the Seebeck effect with bimetallic-junction thermocouples and temperature measurement rather than energy capture. In fact, many heat-recovery TEG devices use highly doped semiconductors made from bismuth telluride (Bi2Te3), lead telluride (PbTe), calcium manganese oxide (Ca2Mn3O8), as well as other materials, depending on application and temperature.

The other problem with TEGs is that they are hard to manufacture in quantity and difficult to produce inexpensively. These shortcomings are also incentives for researchers to see what enhancements or improvements can be made in their materials and production processes, as two very different projects clearly demonstrate.

Project 1

A team led by researchers at the University of Notre Dame (Indiana) addressed the problem that TEGs generally lack a high-throughput processing method, and so they developed a much-faster way to create high-performance devices. They used machine-learning techniques to optimize sintering the thermoelectric materials rapidly while maintaining their high thermoelectric properties, Figure 1.

Figure 1 The researchers use a three-stage interactive process, with (i) laser-driven sintering followed by (ii) assessment of thermoelectric properties and then (iii) Bayesian optimization, leading back to (i). Source: University of Notre Dame

The novel process uses intense pulsed light to sinter thermoelectric material in less than a second, while conventional sintering in thermal ovens can take hours. The team sped up this method of turning nanoparticle inks into flexible devices by using machine learning to determine the optimum conditions for the ultrafast but complex sintering process.

They integrated high-throughput experimentation and Bayesian optimization (BO) to accelerate the discovery of the optimum sintering conditions of silver–selenide TE films using an ultrafast intense pulsed light (flash) sintering technique. Due to the nature of the high-dimensional optimization problem of flash sintering processes, a Gaussian process regression (GPR) machine learning model was used to rapidly recommend the optimum flash sintering variables based on Bayesian expected improvement, Figure 2.

Figure 2 The feature-feature correlation matrix of the top features guides the improvement process. Source: University of Notre Dame
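For readers unfamiliar with the mechanics, a generic sketch of GPR-based expected-improvement selection is shown below (using scikit-learn as an assumed stand-in for the team’s tooling; the variables and objective here are illustrative, not the actual flash-sintering setup):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(candidates, gpr, y_best, xi=0.01):
    """Score candidate settings by how much improvement the GPR model
    expects over the best result measured so far."""
    mu, sigma = gpr.predict(candidates, return_std=True)
    z = (mu - y_best - xi) / np.maximum(sigma, 1e-12)
    return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# X: sintering variables tried so far (e.g., pulse energy, duration), normalized
# y: measured objective (e.g., power factor) per trial -- stand-in data with a peak
X = np.random.rand(8, 2)
y = -np.sum((X - 0.6)**2, axis=1)

gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
grid = np.random.rand(1000, 2)              # candidate settings to score
next_trial = grid[np.argmax(expected_improvement(grid, gpr, y.max()))]
print("next sintering settings to try:", next_trial)
```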

They produced a flexible TE film with an ultrahigh power factor of 2205 μW/(m·K²) and a zT of 1.1 at 300 K; zT is a dimensionless figure of merit, where $zT = S^{2}\rho^{-1}\kappa^{-1}T$, calculated from the Seebeck coefficient (S), electrical resistivity (ρ), and thermal conductivity (κ). The sintering time was less than one second, which is several orders of magnitude shorter than that of conventional thermal-sintering techniques.
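As a worked example of the figure of merit (with illustrative material values chosen to be consistent with the reported PF and zT, not taken from the paper):

```python
# Illustrative material values (assumed, not from the paper) chosen to be
# consistent with the reported film performance
S     = 150e-6    # Seebeck coefficient, V/K
rho   = 1.02e-5   # electrical resistivity, ohm*m
kappa = 0.60      # thermal conductivity, W/(m*K)
T     = 300.0     # absolute temperature, K

PF = S**2 / rho                # power factor, W/(m*K^2)
zT = S**2 * T / (rho * kappa)  # dimensionless figure of merit
print(f"PF = {PF * 1e6:.0f} uW/(m*K^2), zT = {zT:.2f}")   # ~2206 and ~1.10
```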

The films also showed excellent flexibility with 92% retention of the power factor (PF) after one-thousand bending cycles with a 5-mm bending radius, Figure 3. In addition, a wearable thermoelectric generator based on the flash-sintered films generates a very competitive power density of 0.5 mW/cm2 at a temperature difference of 10 K.

Figure 3 The flexibility test of the flash-sintered films under different bending angles demonstrated the film’s resilience and longevity.  Source: University of Notre Dame

They believe that ultrafast flash sintering assisted by machine learning will make it possible to produce high-performance devices much faster and at far lower cost than possible at present. The work is described in detail in their 12-page paper “Machine learning-assisted ultrafast flash sintering of high-performance and flexible silver–selenide thermoelectric devices” published in the journal Energy & Environmental Science; there is also a posted 17-page Supplementary Information file which provides additional insight and information.

Project 2

A team at the prestigious Karlsruhe Institute of Technology (KIT) (Germany) has developed a way to produce TEGs using printable thermoelectric polymers and composite materials using a low-cost, fully screen-printed flexible design. Using a unique two-step “origami-style” folding technique, they produced a mechanically stable 3D cuboidal device from a 2D layout printed on a thin flexible substrate with thermoelectric inks based on PEDOT [poly(3,4-ethylene dioxythiophene)] nanowires and a TiS2: Hexylamine-complex material, Figure 4.

Figure 4 Details of the fabrication and folding technique. [Yellow: n-type material, blue: p-type material, gray: substrate material. Arrows indicate the current flow through the device resulting from an applied temperature difference (hot side: red, cold side: cyan). Dashed arrows indicate folding procedures.] a) 2D layout of four thermocouples on a substrate with an extra strip of unprinted substrate. b) Origami folded TEG with four thermocouples with inlaid substrate material for electrical insulation of the thermocouples. Source: Karlsruhe Institute of Technology

The device’s architecture resulted in a high thermocouple density of 190 units per cm² by using the thin substrate as electrical insulation between the thermoelectric elements, yielding a high power output of 47.8 µW/cm² from a 30 K temperature difference. The device properties are adjustable via the print layout, and the thermal impedance of the TEGs can be tuned over several orders of magnitude, thus enabling matching of the thermal impedance to any heat source, Figure 5.

Figure 5 a) A 2D print layout for an origami TEG with 254 p-legs (blue) and 253 n-legs (yellow) (green: overlapping area) arranged in a checkerboard pattern of 13 columns by 39 rows. b) Screen printed TEGs with TiS2 as n-type material and PEDOT as p-type material with extended contact fields of PEDOT deposited by calligraphy. c) First folding step stacking all columns plus one extra strip of substrate. d) Fully folded thermoelectric ribbon. e) Thermoelectric ribbon creased at the fold lines. f) Fully folded thermoelectric generator fixed with a Kapton ribbon. Source: Karlsruhe Institute of Technology

They tested the units under various conditions, Figure 6. The output power at the maximum power point (MPP) was high enough to supply low-power electronic circuits. The output power increased with ΔT², reaching 243 µW for ΔT = 60 K. Even for ΔT = 30 K, this device generated PMPP = 63.4 µW and an open-circuit voltage Voc = 534 mV, corresponding to a power density of 47.8 µW/cm², while the internal resistance is 1124 Ω.

Figure 6 a) A histogram of the internal electrical resistance of the devices unfolded after printing (light) and after the origami folding (dark). b) TEG characterization setup with two copper blocks as thermal contacts. c) Open-circuit voltage versus applied temperature difference for TEG #6. d) I–V characteristics and output power versus voltage for different applied-temperature differences at TEG #6. e)  Output power versus electrical load for different applied-temperature differences at TEG #6. f) Histogram of the maximal output power and the open-circuit voltages of all produced TEGs at ∆T = 30 K. Source: Karlsruhe Institute of Technology
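Those numbers are self-consistent under the standard maximum-power-transfer relation for a matched resistive load, PMPP = Voc²/(4Rint); a quick check:

```python
V_oc  = 0.534    # reported open-circuit voltage at dT = 30 K, volts
R_int = 1124.0   # reported internal resistance, ohms

P_mpp = V_oc**2 / (4 * R_int)                 # matched-load (MPP) output power
print(f"P_MPP at dT = 30 K: {P_mpp * 1e6:.1f} uW")          # ~63.4 uW, as reported
# Voc scales roughly linearly with dT, so P_MPP scales with dT^2:
print(f"predicted at dT = 60 K: {4 * P_mpp * 1e6:.0f} uW")  # ~254 uW vs. 243 uW measured
```

The small gap between the predicted ~254 µW and the measured 243 µW at ΔT = 60 K is plausibly explained by temperature-dependent internal resistance.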

As a practical test and demonstration of the usefulness, they built a self-powered weather station measuring ambient temperature, humidity, and pressure using off-the-shelf components including a Bosch BME280 sensor and a Texas Instruments power-management IC, all reporting via a BLE (Bluetooth Low Energy) interface.

Full details on their process, the deep analysis of  the material-science physics behind it, and their test arrangements and results are in the eight-page paper “Fully printed origami thermoelectric generators for energy-harvesting” published in Nature; there’s also a 13-page Supplementary Information file which has further analysis as well as full weather-station construction detail, plus a 30-second video of the first stage of the production process.

Have you ever used a TEG, other than a basic bimetallic thermocouple, for energy harvesting or power capture? Did it work out technically or were there unexpected issues which made it an inadequate choice for “free” power?

References – TEGs and RTGs

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.


The post Thermoelectric generators are getting more R&D attention appeared first on EDN.

PUF update: New IP bypasses the need for ID enrollment

Mon, 02/27/2023 - 14:48

Security is no longer an afterthought in embedded systems, especially the connected devices serving the Internet of Things (IoT), and it’s apparent from traction that new technologies like physically unclonable function (PUF) are getting in chips spanning from microcontrollers to high-performance FPGAs. The PUF technology facilitates root-of-trust in an easy, cost-effective, and flexible manner without needing to store keys.

Figure 1 PUF exploits the variations inherent in the device to produce a unique, unclonable response from the device to a given input. Source: Secure-IC

However, while PUFs have been introduced to generate specific key numbers for a chip, it’s challenging to guarantee a low probability of identical IDs across separate chips. According to Secure-IC, a Cesson-Sévigné, France-based security solutions provider for embedded systems and connected objects, about 90% of PUF technologies cannot function independently due to their subpar performance. As a result, PUFs require an extensive enrollment phase and a rebuilding phase to improve the quality of the ID or key.

In short, PUFs can only serve as a reliable security source with an enrollment phase for cryptographic key construction. And enrollment is a costly process, since each chip must be personalized on its own. It comprises four steps: lengthy measurements, characterization, helper data derivation and, eventually, helper data programming. That does not fit the efficient personalization flow required at the test stage when producing chips at scale.
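For context on why enrollment is needed at all, a classic helper-data scheme is the code-offset fuzzy extractor; the toy sketch below (using a simple repetition code, and in no way Secure-IC’s method) shows the enrollment and rebuilding phases that the new IP claims to eliminate:

```python
N_REP = 15   # repetition-code length per key bit (toy choice)

def enroll(puf_bits, key_bits):
    """Enrollment (code-offset construction): store helper data = PUF response
    XOR repetition codeword of the key. Runs once per chip, on the tester."""
    helper = []
    for i, k in enumerate(key_bits):
        block = puf_bits[i * N_REP:(i + 1) * N_REP]
        helper.append([b ^ k for b in block])  # codeword of k = k repeated N_REP times
    return helper

def rebuild(noisy_puf_bits, helper):
    """Rebuilding phase: XOR a fresh (noisy) PUF read-out with the helper data,
    then majority-vote each block to correct the bit errors."""
    key = []
    for i, h in enumerate(helper):
        block = noisy_puf_bits[i * N_REP:(i + 1) * N_REP]
        ones = sum(b ^ hb for b, hb in zip(block, h))
        key.append(1 if ones > N_REP // 2 else 0)
    return key
```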

Moreover, the need for enrollment leaves the door open to hackers trying to subvert the enrollment, for instance, by forcing all the bits of the key to be the same. To address the challenges related to enrollment and rebuilding phases, high costs, and concerns regarding the system’s vulnerability to attacks, Secure-IC has joined hands with hardware and software security specialist Trasna to introduce a PUF solution that does not require any enrollment phase nor a rebuilding phase.

Figure 2 The new PUF IP eliminates the need for an enrollment phase for cryptographic key construction. Source: Secure-IC

The new PUF IP can generate one or a few unique IDs or keys working straight out of the box. These unique IDs can serve as the foundation for secure booting of the chip, root-of-trust, and lifecycle management.

This development shows how PUFs are overcoming design hurdles and making headway in the IoT security realm despite being a new technology. The new PUF IP from Secure-IC, which complies with the ISO/IEC 20897 cybersecurity standard, has been integrated into Trasna’s system-on-chip (SoC) solution serving narrowband IoT (NB-IoT) applications.

PUFs are being streamlined for integration into chips aiming to bolster their security credentials. Embedded World 2023 will be a good place to gauge their design progression and their place in future SoCs and chiplets.


The post PUF update: New IP bypasses the need for ID enrollment appeared first on EDN.

Dynamic vector threading for vRAN, massive MIMO in 5G

Fri, 02/24/2023 - 16:28

Now that we’re getting comfortable with 5G, network operators are already planning for 5G-Advanced, release 18 of the 3GPP standard. The capabilities enabled by this new release—extended reality, centimeter-level positioning, and microsecond-level timing outdoors and indoors—will create an explosion in compute demand in Radio Access Network (RAN) infrastructure. Consider fixed wireless access for consumers and businesses.

Here, beamforming through massive MIMO for remote radio units (RRUs) must manage heavy yet variable traffic, while user equipment (UE) must support carrier aggregation. Both need more channel capacity. So, solutions must be greener, high performance and low latency, more efficient in managing variable loads, and more cost effective to support widescale deployment.

Figure 1 5G networks are evolving in several vectors, all pointing toward network openness and sophistication. Source: ABI Research

As a result, 5G infrastructure equipment builders want all the power, performance, and unit cost advantages of chips, plus all these added capabilities in a more efficient package. Start with virtualized RAN (vRAN) components that offer the promise of higher efficiency by being able to run multiple links simultaneously on one compute platform.

Virtual RANs and vector processing

The vRAN components aim to deliver on decade-old goals of centralized RAN: economies of scale, more flexibility in suppliers and central management of many-link, high-volume traffic through software. We know how to virtualize jobs on big general-purpose CPUs, so the solution to this need might seem self-evident. Except that those platforms are expensive, power hungry, and inefficient in the signal processing at the heart of wireless designs.

On the other hand, embedded DSPs with big vector processors are expressly designed for speed and low power in signal-processing tasks such as beamforming, but historically have not supported dynamic workload sharing across multiple tasks. Adding more capacity required adding more cores, sometimes large clusters of them, or at best allowed a static form of sharing through pre-determined core partitioning.

The bottleneck is vector processing since vector computation units (VCUs) occupy the bulk of the area in a vector DSP. Using this resource as efficiently as possible is essential to maximize virtualized RAN capacity. The default approach of doubling up cores to handle two channels requires a separate VCU per channel. But at any one time, software in one channel might require vector arithmetic support where the other might be running scalar operations; one VCU would be idle in those cycles.

Now imagine a single VCU serving both channels with two vector arithmetic units and two register files. An arbitrator decides dynamically how best to use these resources based on channel demands. If both channels need vector arithmetic in the same cycle, the operations are directed to the appropriate vector ALU and register files. If only one channel needs vector support, the calculation can be striped across both vector units, accelerating computation.

Dynamic vector threading

This method for managing vector operations between two independent tasks looks very much like execution threading, maximizing use of a fixed compute resource to handle one or more than one simultaneous task. This technique, dynamic vector threading (DVT), allocates vector operations per cycle to either one or two arithmetic units (in this instance).

Figure 2 DVT maximizes use of a fixed compute resource to handle one or more than one simultaneous task. Source: CEVA

You can imagine this concept being extended to more threads, even further optimizing VCU utilization across variable channel loads since vector operations in independent threads are typically not synchronized.

Support for DVT requires several extensions to traditional vector processing. Operations must be serviced by a wide vector arithmetic unit, allowing for, say, 128 or more MAC operations per cycle. The VCU must also provide a vector register file for each thread so that vector register context is stored independently per thread. A vector arbitration unit provides for scheduling vector operations, effectively through competition between the threads.

How does this capability support virtualized RAN? At absolute peak load, signal processing requirements on such a platform will continue to be served as satisfactorily as they would be on a dual-core DSP, each with a separate VCU. When one channel needs vector arithmetic and the other channel is quiet or occupied in scalar processing, the first channel completes vector cycles faster by using the full vector capacity. That delivers higher average throughput in a smaller footprint than two DSP cores.
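A toy cycle-level model (with an assumed 50% per-channel vector demand, not figures from CEVA) illustrates why a shared, arbitrated VCU beats static partitioning on average:

```python
import random

random.seed(1)
CYCLES = 100_000
P_VEC = 0.5    # assumed probability that a channel issues a vector op each cycle

static_halves_busy = 0   # dual-core style: each channel owns half the vector hardware
dvt_halves_busy = 0      # DVT: an idle half can be lent to the other channel

for _ in range(CYCLES):
    demand = [random.random() < P_VEC, random.random() < P_VEC]
    static_halves_busy += sum(demand)   # an idle channel wastes its dedicated half
    if any(demand):
        dvt_halves_busy += 2            # arbitration keeps both halves busy: shared
                                        # when both ask, striped when only one asks

print(f"static vector-hardware utilization: {static_halves_busy / (2 * CYCLES):.0%}")
print(f"DVT vector-hardware utilization:    {dvt_halves_busy / (2 * CYCLES):.0%}")
```

The single-active-channel cycles are exactly where DVT wins: the active channel stripes across both halves and finishes its vector work in half the cycles.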

DSPs with DVT in virtualized RANs

Another example of how DVT supports more efficient baseband processing can be seen in 5G-Advanced RRUs. These devices must support massive MIMO handling for beamforming. A massive MIMO-based RRU will be expected to support up to 128 active antenna units, including support for multiple users and carriers. This implies massive compute requirements at the radio device, which become much more efficient with DVT. In UEs (terminals and CPEs supporting fixed wireless access), carrier aggregation also benefits from DVT. So, DVT benefits both ends of the cellular network: infrastructure and UEs.

It might still be tempting to think of big general-purpose processors as the right answer to these virtualization needs but, in signal-processing paths, that could be a backwards step. We cannot forget that there were good reasons the infrastructure equipment makers switched over to ASICs with embedded DSPs. Competitive fixed wireless access solutions need to explore the benefits of DSP-based ASICs to leverage support for dynamic vector threading.

Nir Shapira is business development director for mobile broadband business unit at CEVA.


The post Dynamic vector threading for vRAN, massive MIMO in 5G appeared first on EDN.
