EDN Network

Voice of the Engineer

Portable thermal anemometer hot-transistor bias compensation nulls battery discharge droop

Fri, 05/31/2024 - 15:00

All thermal anemometers work by inferring air speed from measurements of thermal impedance (Z) between a heated sensor and the surrounding air:

Z = T / P         (1)

Where P is the power dissipated by the sensor and T is the temperature difference between the sensor and ambient.
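As a quick numeric illustration of Equation 1, here is a minimal Python sketch; the 50°C rise and the 0.25 W to 0.35 W power levels are borrowed from figures cited later in this article and are illustrative only.

```python
# Equation 1: thermal impedance Z (deg C per watt) from temperature rise and power.
def thermal_impedance(t_diff_c, power_w):
    """Z = T / P, with T in deg C and P in watts."""
    return t_diff_c / power_w

# Illustrative readings: more airflow means more power is needed for the same
# temperature rise, i.e., a lower thermal impedance.
print(thermal_impedance(t_diff_c=50.0, power_w=0.25))  # still air      -> 200 C/W
print(thermal_impedance(t_diff_c=50.0, power_w=0.35))  # strong airflow -> ~143 C/W
```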

Wow the engineering world with your unique design: Design Ideas Submission Guide

There are two basic schemes for doing this.

  1. Hold P constant and measure the resulting temperature difference T
  2. Hold T constant and measure the power P required to do it

An example of the constant power type can be found in “Nonlinearities of Darlington airflow sensor and VFC compensate each other”…

…and examples of the constant temperature type can be found in “Linearized portable anemometer with thermostated Darlington pair”…

…and in Figure 1

Figure 1’s anemometer is unusual because it melds the sensor transistor into a direct PFC (Power to Frequency Converter) loop.

Figure 1 Constant-temperature anemometer with direct power-to-frequency conversion.

To understand how the Figure 1 circuit works, consider the case of zero airflow. You use ZERO trimmer R2 to set the quiescent base-bias currents for Q1 and ambient reference Q2. With the proper adjustment, Q1’s temperature rise (~50°C) in still air, caused by collector power dissipation, reduces Q1’s VBE (by ~2 mV/°C) to equal or slightly below Q2’s. The noninverting input of comparator U1a is then slightly less positive than the inverting input. The output therefore switches low, holding C1 discharged and resetting multivibrator U1b, whose output goes high.

This condition does two things: It forces Fout = 0 and holds Q3 off.
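A quick back-of-the-envelope check of that zero-flow balance follows; the ~50°C rise and ~2 mV/°C tempco are from the text above, while the 0.65-V quiescent VBE is simply an assumed typical value for a small-signal silicon transistor.

```python
# Zero-flow balance: Q1's self-heating lowers its VBE until it sits at or just
# below Q2's, which is what the ZERO trimmer (R2) adjustment establishes.
TEMPCO_V_PER_C = -2e-3   # VBE temperature coefficient, ~-2 mV/deg C (from the article)
TEMP_RISE_C = 50.0       # Q1 self-heating above ambient in still air (from the article)
VBE_AMBIENT = 0.65       # assumed room-temperature VBE of Q1/Q2, volts

vbe_q1_hot = VBE_AMBIENT + TEMPCO_V_PER_C * TEMP_RISE_C
print(f"Q2 (ambient) VBE ≈ {VBE_AMBIENT * 1e3:.0f} mV")
print(f"Q1 (heated)  VBE ≈ {vbe_q1_hot * 1e3:.0f} mV, about "
      f"{abs(TEMPCO_V_PER_C * TEMP_RISE_C) * 1e3:.0f} mV lower")
```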

Now let’s blow some air at Q1. The resulting increase in cooling tends to reduce Q1’s temperature, causing its Vbe to increase relative to that of Q2. This reverses the comparison between U1a’s inputs, releasing the reset on C1. C1 then charges through R9 and turns on Q3, driving a t = 700-µsec pulse to Q1’s base through CALIBRATE trimmer R3.

The resulting pulse of collector current forced in Q1 is given by Equation 2 (where hFE = Q1 current gain and Rcal = R3 + R4):

IC = hFE IB = hFE V/Rcal      (2)

This deposits a quantum of heat on Q1’s junction:

t P = t IC V = t IB hFE V = t (V/Rcal) hFE V = t hFE V²/Rcal      (3)

which tends to return Q1’s temperature to a value warm enough to restore the original zero-flow voltage balance with ambient-sensor Q2. Until Q1 achieves that temperature, U1 continues to oscillate, cycle Q3 on, and pump heat into Q1.

Thus, a feedback loop is established that acts to maintain a constant temperature differential between Q1 and Q2. The average frequency appearing at U1b’s output is therefore proportional to the extra power required to heat Q1. The maximum output frequency for the circuit values in Figure 1 is 1 kHz. Appropriate adjustment of R3 establishes almost any desired full-scale flow. Temperature tracking between the Q1 and Q2 Vbe voltages provides good compensation for changes in ambient temperature.
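Pulling Equations 2 and 3 together with the loop behavior just described gives a simple power-to-frequency model. In the sketch below, the 700-µs pulse width, 1-kHz full scale, and 5-V nominal supply come from the article; hFE and Rcal are assumed values, chosen only so that the full-scale number lands near the 200-to-350-mW power draw described next (Figure 2).

```python
# Simple model of the power-to-frequency loop: each Q3 pulse deposits the
# Equation-3 quantum of heat, so the average replacement power is Fout times
# that quantum.

T_PULSE = 700e-6   # heat-pulse width, seconds (from the article)
V_SUPPLY = 5.0     # nominal 4xAA supply, volts
HFE = 200.0        # assumed Q1 current gain (illustrative)
R_CAL = 10e3       # assumed R3 + R4 (CALIBRATE setting), ohms (illustrative)

def heat_per_pulse(v=V_SUPPLY):
    """Equation 3: joules deposited on Q1's junction per 700-us pulse."""
    return T_PULSE * HFE * v**2 / R_CAL

def avg_heating_power(fout_hz, v=V_SUPPLY):
    """Average power pumped into Q1 when the loop runs at Fout."""
    return fout_hz * heat_per_pulse(v)

for f in (0, 250, 500, 1000):    # 1 kHz is full scale for the Figure 1 values
    print(f"Fout = {f:4d} Hz -> Q1 replacement heating ≈ {avg_heating_power(f)*1e3:.0f} mW")
```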

The direct connection of Q1 to the power rail results in efficient (>90%) power utilization, so while power draw is (by definition!) dependent on airflow, as shown in Figure 2, it’s typically a modest 200 to 350 mW.

Figure 2 Q1 power draw versus air flow is typically a modest 200 to 350 mW.

In fact, power consumption is low enough that portable battery operation, with a cheap multimeter for frequency readout, looked attractive. An inexpensive stack of four AA alkaline batteries promised tens of hours of continuous operation, which could equate to hundreds of air velocity readings. However, as shown in Figure 3, powering Figure 1 directly from the battery wouldn’t work very well, due to the ±20% roll-off of battery voltage during discharge.

Figure 3 Typical AA cell discharge droop curves with an undesirable ±20% roll-off of battery voltage during discharge, resulting in a degradation of anemometer calibration accuracy.

The resulting degradation of anemometer calibration accuracy would be extreme, especially considering Equation 4:

t P = t IC V = t IB hFE V = t (V/Rcal) hFE V = t hFE V²/Rcal      (4)

that shows the square-law dependence of Q1 heating on supply voltage!
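The severity of that square-law term is easy to see numerically. The sketch below simply evaluates (V/Vnom)² across the 5 ±1 V span shown later in Figure 5; the percentages are plain ratios, not measured data.

```python
# Square-law sensitivity of Equation 4: per-pulse heating scales with V^2,
# so the battery droop of Figure 3 shifts calibration badly if uncompensated.

V_NOM = 5.0                      # nominal supply, volts
for v in (6.0, 5.0, 4.0):        # roughly the 5 +/-1 V span of Figure 5
    change_pct = ((v / V_NOM) ** 2 - 1.0) * 100.0
    print(f"V = {v:.1f} V -> per-pulse heating {change_pct:+.0f}% vs nominal")
# +44% at 6 V and -36% at 4 V: the error the Figure 4 bias servo (U2, A1,
# R11-R14) is intended to (mostly) null.
```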

Meanwhile, the seemingly obvious remedy of supply voltage regulation wouldn’t be very attractive either, due to the resulting impact on complexity, efficiency, and cost. Fortunately, Figure 4 shows an alternative simple, cheap, and efficient solution: base bias compensation.

Figure 4 Figure 1’s anemometer modified with U2, A1, and R11–R14 to servo Q1 and Q2 bias currents to (mostly) null the effects of battery voltage droop.

Figure 5 shows the resulting compensated power curve (black) versus what would result without it (red): better than an order of magnitude improvement!

Figure 5 Nulled (black) and uncompensated (red) Q1 heating versus battery voltage droop (5 ±1 volts).

 Still not perfect, but arguably good enough.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.



Infineon expands GaN HEMT lineup

Fri, 05/31/2024 - 00:13

Infineon is adding two more families of high and medium voltage GaN transistors to its portfolio of CoolGaN HEMTs spanning 40 V to 700 V. According to the company, this expansion will enable customers to use gallium nitride in a broader array of applications that help drive digitalization and decarbonization.

G5 and G3 generations of CoolGaN devices are manufactured on 8-in. in-house foundry processes in Malaysia and Austria. The 650-V G5 family addresses applications in consumer, data center, industrial, and solar markets. The medium voltage G3 transistor series supports four voltage classes: 60 V, 80 V, 100 V, and 120 V. It also includes a 40-V bidirectional switch. The G3 family targets motor drive, telecom, data center, solar, and consumer applications.

The CoolGaN 650-V G5 will be available in Q4 2024. The medium voltage CoolGaN G3 will be available in Q3 2024. Samples are available now. For more information on Infineon’s CoolGaN high-electron-mobility transistors (HEMTs), click here.

Infineon Technologies 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


5G mMIMO predriver offers high gain

Fri, 05/31/2024 - 00:13

A massive MIMO (mMIMO) predriver from Qorvo, the QPA9822 provides gain of 39 dB at 3.5 GHz and output power of 28 dBm P1dB. The linear driver amplifier enables wideband 5G NR instantaneous signal bandwidths of up to 530 MHz. This makes it well-suited for the n77 band used for 5G deployment and other mMIMO applications.

The QPA9822 is internally matched to 50 Ω over the entire operating frequency range of 3.3 GHz to 4.2 GHz. It offers an enable/disable function through the VEN pin for time-division duplexing (TDD) operation.

The part, which operates from a 5-V supply, is housed in a 3×3-mm, 16-pin surface-mount package. The QPA9822 is both footprint and pin-compatible with the company’s QPA9122M driver amplifier to allow easy integration into existing and new designs.

Use the link to the product page below to request a datasheet or to order samples.

QPA9822 product page  

Qorvo

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


Thin electrolytic capacitor achieves low ESR

Fri, 05/31/2024 - 00:13

Joining Murata’s ECAS series of polymer aluminum electrolytic capacitors is a device that boasts low equivalent series resistance (ESR) of 4.5 mΩ. This tiny capacitor, designated the ECASD40E477M4R5KA0, is housed in a 7.3×4.3-mm surface-mount case with a maximum height of only 2.0 mm.

Despite its diminutive size, the part provides a capacitance of 470 µF ±20%, and capacitance remains stable when DC voltage is applied. The device can be used to smooth or even out voltage fluctuations in a variety of power supply circuits. As a smoothing capacitor, the ECASD40E477M4R5KA0 helps ensure a stable power supply for CPUs, GPUs, and FPGAs in servers, accelerators, and laptop PCs.

In addition to an ESR of 4.5 mΩ measured at 100 kHz and +25°C, the capacitor offers a rated voltage of 2.5 VDC and leakage current of 117.5 µA. Operating temperature range is -40°C to +105°C.

The ECASD40E477M4R5KA0 is now in mass production. Samples are available upon request (registration required). For more information about the ECAS series of polymer aluminum electrolytic capacitors, click here.

ECASD40E477M4R5KA0 product page

Murata Manufacturing 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


Board set kickstarts wireless charging design

Fri, 05/31/2024 - 00:13

A pair of 50-W Qi-compatible development boards from ST enables rapid wireless charging using the company’s ST Super Charge (STSC) protocol. The STEVAL-WBC2TX50 transmitter and STEVAL-WLC98RX receiver boards accelerate the development of wireless charging for products ranging from medical and industrial equipment to home appliances and computer peripherals.

By employing the STSC protocol, the transmitter board delivers up to 50 W of output power at a faster wireless charging rate than standard protocols used with smartphones and similar devices. The board also supports the Qi 1.3 5-W Baseline Power Profile (BPP) and 15-W Extended Power Profile (EPP) specifications. Onboard components include an Arm Cortex-M0-based transmitter system-in-package, application-specific front end, and MOSFET gate drivers. ST’s STSAFE-A110 secure element provides Qi authentication.

Like the transmitter board, the receiver board also offers up to 50 W of charging power, full STSC capability, and BPP/EPP charging. Its adaptive rectifier configuration (ARC) mode extends charging distance by up to 50% to allow lower-cost coils and configuration flexibility. The board’s wireless power receiver IC, based on an Arm Cortex-M3 processor, features a synchronous rectifier power stage that provides a programmable output voltage up to 20 V.

Prices for the STEVAL-WBC2TX50 transmitter and STEVAL-WLC98RX receiver start at $109.03 and $113.93, respectively. Both boards are available from the ST eStore.

STEVAL-WBC2TX50 transmitter product page

STEVAL-WLC98RX receiver product page

STMicroelectronics

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


RF amplifier series gains 300-W model

Fri, 05/31/2024 - 00:12

The latest addition to the R&S BBA300 family of RF amplifiers delivers output power of 300 W P1dB or software-adjustable saturation power up to 450 W. Operating at up to 6 GHz, the broadband amplifier can generate the high field strengths required for critical test environments, making it useful for EMC, OTA coexistence, and RF component testing.

The 300-W model is available in both the BBA300-CDE and BBA300-DE series, which have respective continuous frequency ranges of 380 MHz to 6 GHz and 1 GHz to 6 GHz. This wide frequency range enables the instrument to cover GSM, LTE, and 5G/NR mobile communication standards, as well as WLAN, Bluetooth, and Zigbee wireless standards. The amp also supports continuous sweeping of RF signals across the entire frequency range.

The PK1 software option offers two tools for tailoring the RF output signal: bias point adjustment, which allows toggling between class A and class AB, and a choice between maximum output power or high mismatch tolerance.

To request a price quote for the BBA300 RF amplifier, use the link to the product page below.

BBA300 product page

Rohde & Schwarz  

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


Has Malaysia’s ‘semiconductor moment’ finally arrived?

Thu, 05/30/2024 - 16:43

Malaysia and Taiwan were among the early semiconductor outposts during the late 1960s when U.S. companies like Intel began to outsource their assembly and test operations to Asia. Over the past half century, while Taiwan has reached the design and manufacturing pinnacle, Malaysia has mostly remained busy with back-end tasks related to chip assembly, packaging, and testing.

Malaysia—which currently accounts for 13% of the semiconductor packaging, assembly, and testing market—is now looking to position itself as a global IC design and manufacturing hub amid U.S. restrictions on China’s chip industry. According to a report published in Reuters, the Malaysian government plans to pour $107 billion into its semiconductor industry for IC design, advanced packaging, and manufacturing equipment for semiconductor chips.

Malaysia, long seeking to move beyond back-end chip assembly and testing and into high-value, front-end design work, is confident that time is now on its side. It’s worth noting here that Malaysia isn’t merely eyeing U.S. or western semiconductor outfits; chip firms in China aiming to diversify supply chains are also considering Malaysia for packaging and assembly operations as well as setting up design centers.

While Intel is setting up a $7 billion advanced packaging plant and Infineon is building a $5.4 billion power semiconductors fab in Malaysia, a Reuters report provides details of Chinese chip firms tapping Malaysian partners to assemble a portion of their high-end chips in the wake of U.S. sanctions.

Take the case of Xfusion, formerly a Huawei unit, joining hands with NationGate to assemble GPU servers in Malaysia and thus avoid U.S. sanctions. Likewise, chip assembly and testing firm TongFu Microelectronics is building a new facility in Malaysia in a joint venture with AMD. Next, RISC-V processor firm StarFive is setting up a design center in Penang.

However, Malaysia will need immaculate execution, in addition to pouring money into its ambitions to move up the semiconductor ladder, as other destinations like India and Vietnam are also vying for a stake in chip design and manufacturing services. Moreover, while U.S. restrictions on China’s chip industry bring Malaysia new possibilities, it’s important to note that the country has been trying to move beyond back-end chip assembly and testing and into high-value front-end design work for quite some time.

So, while China’s chip outfits moving to Malaysia will add weight to the country’s efforts to become a semiconductor hub in Asia, it will still require strong execution in addition to tax breaks, subsidies, and visa exemption fees. Malaysia has an experienced workforce and sophisticated equipment, critical elements in the semiconductor design recipe.

What’s required next is a few promising startups in semiconductor design and advanced packaging domains, as hinted by Malaysian Prime Minister Anwar Ibrahim during his policy speech.



Contactless electric bell on a gradient relay

Thu, 05/30/2024 - 16:35

The operation of this contactless electric bell is based on a change in the electrical resistance of a temperature-sensitive element (thermistor) when a finger approaches or touches the bell button. To rule out continuous ringing, the device uses a gradient relay, which turns on the bell only in response to a brief change (increase) in the temperature of the thermosensitive element.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The operation of the contactless electric bell is based on the use of a gradient relay [1–3] with a temperature-sensitive sensor. When a finger approaches the temperature-sensitive sensor (thermistor), its temperature rises and, consequently, its resistance changes. The gradient relay is activated, turning on the bell. The sensitivity of the device is such that even a small local change in the sensor’s temperature triggers the bell. After the finger is removed, the resistance of the thermistor returns to its original value and the bell is switched off.

The use of such a device is especially relevant during epidemics, since avoiding contact with a shared, possibly contaminated button makes the transmission of viruses and microbes less likely.

The contactless electric bell in Figure 1 is made using the comparator U1.1 of the LM339 chip. The device works as follows.

Figure 1 Electrical circuit of the non-contact doorbell.

It is desirable to choose a 1:1 ratio for the resistive divider formed by R1 and Rsens. In the initial state, when the device is switched on, the voltage at the junction of R1 and Rsens, and therefore at both inputs of comparator U1.1, is approximately half of the supply voltage, so the voltage at the output of the comparator is zero. The thermistor Rsens is the element that provides a contactless change in the state of the resistive divider of the input circuit.

If you bring your finger close to the thermosensitive element, resistor Rsens, its resistance will change; you can even just breathe on it. This unbalances the voltages at the comparator inputs. Thanks to capacitor C1, the voltage at the right terminal of resistor R3 remains unchanged for some time, while the voltage at the left terminal of R3 changes, allowing the comparator to switch.
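The sketch below models that gradient-detection behavior in discrete time. The time constant, voltage step, and comparator threshold are illustrative assumptions rather than values taken from Figure 1, and it assumes the divider voltage rises when Rsens warms (for the opposite polarity you would swap the comparator inputs, as noted below).

```python
# Minimal discrete-time sketch of the gradient-relay idea: C1 holds a delayed
# copy of the divider voltage, so a fast change in Rsens trips comparator U1.1,
# while slow drift (or a finger simply left in place) eventually does not.

TAU = 2.0          # assumed R3*C1 time constant, seconds
DT = 0.05          # simulation time step, seconds
THRESHOLD = 0.05   # assumed input difference needed to switch the comparator, V

def divider_voltage(t):
    """Half-supply at rest (~2.5 V); rises to 2.7 V while a finger warms Rsens."""
    return 2.7 if 2.0 <= t <= 6.0 else 2.5

v_hold = divider_voltage(0.0)                 # C1 starts charged to the rest value
for n in range(int(10.0 / DT)):
    t = n * DT
    v_in = divider_voltage(t)
    bell_on = (v_in - v_hold) > THRESHOLD     # comparator high -> VT1 on -> bell rings
    if n % 20 == 0:                           # report once per simulated second
        print(f"t={t:4.1f} s  v_in={v_in:.2f} V  v_hold={v_hold:.2f} V  bell={'ON' if bell_on else 'off'}")
    v_hold += (v_in - v_hold) * DT / TAU      # C1 slowly tracks the divider
```

Running it shows the bell switching on when the divider voltage jumps, then switching off again once C1 catches up, even if the finger stays, which is exactly the continuous-ringing exclusion described above.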

A high logic-level voltage appears at the output of the comparator. This voltage is applied to the base of output transistor VT1 (a BC547 or its analog), which turns on and connects the bell (an HCM1612X electromagnetic sound generator with integrated oscillator) to the power source. When you move your finger away from Rsens, the resistance of the thermistor returns to its original value, the device returns to its initial state, and the bell is disconnected.

Thermistors with either a positive or a negative temperature coefficient of resistance can be used for Rsens; the device will work in either case. To ensure proper operation, you may have to swap the inputs of comparator U1.1 (pins 4 and 5).

Michael A. Shustov is a doctor of technical sciences, candidate of chemical sciences and the author of over 800 printed works in the field of electronics, chemistry, physics, geology, medicine, and history.


References

  1. Shustov M.A. “Gradient relay”. Radioamateur (BY). 2000. No. 10. pp. 28–29.
  2. Shustov M.A., Shustov A.M. “Gradient Detector a new device for the monitoring and control of the signal deviations”. Elektor Electronica Fast Forward Start-Up Guide 2016–2017. 2017. pp. 44–47.
  3. Shustov M.A., Shustov A.M. “Electronic Circuits for All”. London, Elektor International Media BV, 2017, 397 p.; “Elektronika za sve: Priručnik praktične elektronike”. Niš: Agencija EHO, 2017; 2018, 392 St. (Serbia).

Microsoft’s Build 2024: Silicon and associated systems come to the fore

Wed, 05/29/2024 - 14:00

Microsoft’s yearly Build developer conference took place last Tuesday-Thursday, May 21-23 (as I write these words on Memorial Day), and was rife with AI-themed announcements spanning mobile-to-enterprise software and services.

Curiously, however, many of these announcements were derived from, and in general the most notable news (IMHO) came from, a media-only event held one day earlier, on Monday, May 20. There, Microsoft and its longstanding Arm-based silicon partner Qualcomm co-announced the long-telegraphed Snapdragon X Elite and Plus SoCs along with Surface Laptop and Pro systems based on them. Notably, too, Microsoft-branded computers weren’t the only ones on the stage this time; Acer, Asus, Dell, HP, Lenovo and Samsung unveiled ‘em, too.

To assess the importance of last week’s news, let’s begin with a few history lessons. First off, a personal one: as longtime readers may recall, I’ve long covered and owned Windows-on-Arm operating systems and computers, beginning with my NVIDIA Tegra 3 SoC-based Surface with Windows RT more than a decade back:

Three years ago, I acquired (and still regularly use, including upgrading it to Windows 11 Pro) a Surface Pro X powered by the Snapdragon 8cx SC8180X-based, Microsoft-branded SQ1 SoC:

More recently, I bought off eBay a gently used, modestly discounted “Project Volterra” system (officially: Windows Dev Kit 2023) running a Qualcomm Snapdragon 8cx Gen 3 (SQ3) SoC:

And even more recently, as you can read about in more detail from my just-published coverage, I generationally backstepped, snagging off Woot! (at substantial discount) a used example of Microsoft and Qualcomm’s first developer-tailored stab at Windows-on-Arm, the ECS LIVA Mini Box QC710 Desktop, based on a prior-generation Snapdragon 7c SC7180 SoC:

So, you could say that I’ve got no shortage of experience with Windows-on-Arm, complete with no shortage of scars, most caused by software shortcomings. Windows RT, for example, relied exclusively on Arm-compiled applications (further complicated by an exclusive Microsoft Store online distribution scheme); unsurprisingly, the available software suite garnered little adoption beyond Microsoft’s own titles.

With Windows 10 for Arm, as I complained about in detail at the time, while an emulation layer for x86-compiled content did exist, both its performance and inherent breadth and depth of functionality were subpar…so much so that Microsoft ended up pulling the plug on Windows 10 and focusing ongoing development on the Windows 11 for Arm successor, which has proven far more robust.

Here’s another personal narrative related to this post’s primary topic coverage: last fall, I mentioned that I’d acquired two generations’ successors to my long-used Surface Pro 5 hybrid:

A primary-plus-spare Surface Pro 7+:

 notably for backwards-compatibility with my Kensington docking station:

and the long-term transition destination, a pair of Surface Pro 8s:

What I didn’t buy instead, although it was already available at the time, was the Surface Pro 9. That’s because I wanted my successor systems to be cellular data-capable, and the only Surface 9 variants that supported this particular feature (albeit at a 5G cellular capability uptick compared to the LTE support in what I ended up getting instead) were Arm-based, with what I felt was insufficient upgrade differentiation from my existing Surface Pro X.

Flash forward to a bit more than two months ago, and Microsoft introduced the Surface Pro 10, along with the Surface Laptop 6. They’re both based on Intel Meteor Lake CPUs with integrated NPU (neural processing) cores, reflected in the dedicated Copilot key on each model’s keyboard. Copilot (introduced at last year’s Build), for those of you who don’t already know, is the OpenAI GPT-derived chatbot successor to Microsoft’s now-shuttered Cortana. But here’s an interesting thing, at least to me: the Surface Pro 10 and Surface Laptop 6 are both explicitly positioned as “For Business” devices, therefore sold exclusively to businesses and commercial customers, not available to consumers (at least through normal direct retail channels…note that I got my prior-generation SP7+ and SP8 “For Business” units via eBay resellers).

What about next-generation consumer models? The answer to that question chronologically catches us up to last week’s news. Microsoft’s new Surface Pro 11 (complete with a redesigned keyboard that can be used standalone and an optional OLED screen) and Surface Laptop 7, along with the newly unveiled systems from other Microsoft-partner OEMs, are exclusively Qualcomm Snapdragon X-based, which I suspect you’ll agree represents quite a sizeable bet (and gamble). They’re also labeled as being Copilot+ systems (an upgrade to the earlier Copilot nomenclature), reflective of the fact that Snapdragon X SoCs’ NPUs tout 40 TOPS (trillions of, or “tera”, operations per second) performance. Intel’s Meteor Lake SoC, unveiled last September, is “only” capable of 10 TOPs, for example…which may explain why, last Monday, the very same day, Intel “coincidentally” released a sneak peek of its next-generation Lunar Lake architecture, also claimed Copilot+ NPU performance-capable and coming later this year.

Accompanying the new systems’ latest-generation Arm-based silicon foundations is a further evolution of their x86 code-on-Arm virtualization subsystem, which Microsoft has now branded Prism and is analogous to Apple’s Rosetta technology (the latter first used to run PowerPC binaries on Intel microprocessors, now for x86 binaries on Apple Silicon SoCs), along with other Arm-friendly Windows 11 replumbing. Stating the likely already obvious, Microsoft’s ramped-up Windows-on-Arm push is a seeming reaction to Apple’s systems’ notably improved power consumption/performance/form factor/etc. results subsequent to that company’s own earlier Arm-based embrace. To wit, Microsoft did an interesting half-step a bit more than a year ago when it officially sanctioned running Windows-for-Arm virtualized on Apple Silicon Macs.

Speaking of virtualization, I have no doubt, based both on track record and personal experience, that Prism is capable technology that will continue to improve going forward, since Microsoft has lengthy experience with numerous emulation and virtualization schemes such as:

  • Virtual PC, which enabled running x86-based Windows on PowerPC Macs, and
  • Windows Virtual PC (aka Windows XP Mode), for running Windows XP as a virtualized guest on a Windows 7 Host
  • The more recent, conceptually similar Windows Subsystem for Linux
  • And several generations’ worth of virtualization for prior-generation Xbox titles on newer-generation Xbox consoles, both based on instruction set-compatible and -incompatible CPUs.

To wit, I wonder how Prism is going to play out. Clearly, no matter how robust the emulation and virtualization support, its implementation will be inefficient in comparison to “native” applications. So, I’m assuming that Microsoft will encourage its developers to in-parallel code for both the x86 and Arm versions of Windows, perhaps via an Apple-reminiscent dual-mode “Universal” scheme (in combination with “destination-tailored” downloads from online stores). But, supplier embarrassment and sensationalist press hypothesizing aside, I seriously doubt that Microsoft intends to turn its back on x86 in any big (or even little) way any time soon (in contrast to Apple’s abrupt change in course, in no small part thereby explaining its success in motivating its developer community to rapidly embrace Apple Silicon). Developing for multiple CPU architectures and O/S version foundations requires incremental time, effort, and expense; if you’re an x86 Windows coder and Prism works passably, why expend the extra “lift”?

Further evidence of Apple being in Microsoft’s gunsights comes from the direct call-outs that company officials made last week, particularly against Apple’s MacBook Air. Such comparative assessments are a bit dubious, for at least a couple of reasons. First off, Microsoft neglected to openly reveal that both its and OEM partners’ systems contained fans, whereas the MacBook Air is fan-less; a comparison to the fan-inclusive and otherwise more thermally robust MacBook Pro would be fairer. Plus, although initial comparative benchmarks are seemingly impressive, even against the latest-generation Apple M4 SoC, there’s also anecdotal evidence that Snapdragon X system firmware may sense that a benchmark is being run and allow the CPU to briefly exceed normal thermal spec limits. Any reality behind the comparative hype, both in an absolute and relative sense, will come out once systems are in users’ hands, of course.

So why is Microsoft requiring a standalone NPU core, and specifically such a robust one, in processors that it allows to be branded as Copilot+? While CPUs and GPUs already in systems are alternatively capable of handling various deep learning inference operations, they’re less efficient in doing so in comparison to a focused-function NPU alternative, translating to both lower effective performance and higher energy consumption. Plus, running inference on a CPU or GPU steals away cycles from other applications and operations that could alternatively use them, particularly those for which a NPU isn’t a relevant alternative. One visibly touted example is “Recall”, a newly added Windows 11 feature which, quoting from Microsoft’s website:

…uses Copilot+ PC advanced processing capabilities to take images of your active screen every few seconds. The snapshots are encrypted and saved on your PC’s hard drive. You can use Recall to locate the content you have viewed on your PC using search or on a timeline bar that allows you to scroll through your snapshots. Once you find the snapshot that you were looking for in Recall, it will be analyzed and offer you options to interact with the content.

Recall will also enable you to open the snapshot in the original application in which it was created, and, as Recall is refined over time, it will open the actual source document, website, or email in a screenshot. This functionality will be improved during Recall’s preview phase.

Copilot+ PC storage size determines the number of snapshots that Recall can take and store. The minimum hard drive space needed to run Recall is 256 GB, and 50 GB of space must be available. The default allocation for Recall on a device with 256 GB will be 25 GB, which can store approximately 3 months of snapshots. You can increase the storage allocation for Recall in your PC Settings. Old snapshots will be deleted once you use your allocated storage, allowing new ones to be stored.

Creepy? Seemingly, yes. But at least it runs completely (according to Microsoft, at least) on the edge computing device, with no “cloud” storage or other involvement, thus addressing privacy.

Here’s another example, admittedly a bit more “niche” but more compelling (IMHO) in exemplifying my earlier conceptual explanation. As I most recently discussed in my CES 2024 coverage, upscaling can decrease the GPU “horsepower” required to render a given-resolution scene to the screen. Such an approach works credibly, however, only if it comes with no frame rate reduction, image artifacts, or other quality degradations. AI-based upscalers are particularly robust in this regard. And, as discussed and demonstrated at Build, Microsoft’s Automatic Super Resolution (ASR) algorithm runs on the Snapdragon X Elite NPU, leaving the (integrated!) GPU free to focus on its primary polygon and pixel rendering tasks.

That all said, at least one looming storm cloud threatens to rain on this Windows-on-Arm parade. A quick history lesson: NUVIA was a small startup founded in 2019 by ex-Apple and Google employees, in the former case coming from the team that developed the A-series SoCs used in Apple’s smartphones and other devices (and with a direct lineage to the M-series SoCs subsequently included in Apple Silicon-based Macs). Apple predictably sued NUVIA that same year for breach of contract and claimed poaching of employees, only to withdraw the lawsuit in early 2023…but that’s an aside, and anyway, I’m getting chronologically ahead of myself.

NUVIA used part of its investment funding to acquire an architecture license from Arm. A quote from a decade-plus-back writeup at SemiAccurate (along with additional reporting from AnandTech), which, as far as I can tell, remains accurate, explains (with typos fixed by yours truly):

On top of the pyramid is both the highest cost and lowest licensee count option…This one is called an architectural license, and you don’t actually get a core; instead, you get a set of specs for a core and a compatibility test suite. With all of the license tiers below it, you get a complete core or other product that you can plug-in to your design with varying degrees of effort, but you cannot change the design itself. If you license a Cortex-A15 you get exactly the same Cortex-A15 that the other licensees get. It may be built with very different surroundings and built on a different process, but the logic is the same. Architectural licensees conversely receive a set of specs and a testing suite that they have to pass; the rest is up to them. If they want to make a processor that is faster, slower, more efficient, smaller, or anything else than the one Arm supplies, this is the license they need to get.

Said more concisely, architecture licensed cores need to fully support a given Arm instruction set generation, but how they implement that instruction set support is completely up to the developer. Cores like those now found in Snapdragon X were already under development under NUVIA’s architecture license when Qualcomm acquired the company for $1.4B in early 2021. And ironically, at the time of the NUVIA acquisition, Qualcomm already had its own Arm architecture license, which it was using to develop its own Kryo-branded cores.

Nevertheless, Arm filed a lawsuit against Qualcomm in late summer 2022. Per coverage at the time from The Register (here’s a more recent follow-up writeup from the same source):

Arm has accused Qualcomm of being in breach of its licenses, and wants the American giant to fulfill its obligations under those agreements, such as destroying its Nuvia CPU designs, plus cough up compensation…

According to Arm…the licenses it granted Nuvia could not be transferred to and used by its new parent Qualcomm without Arm’s permission. Arm says Qualcomm did not, even after months of negotiations, obtain this consent, and that Qualcomm appeared to be focused on putting Nuvia’s custom CPU designs into its own line of chips without permission.

That led to Arm terminating its licenses with Nuvia in early 2022, requiring Qualcomm to destroy and stop using Nuvia’s designs derived from those agreements. It’s claimed that Qualcomm’s top lawyer wrote to Arm confirming it would abide by the termination.

However, says Arm, it appeared from subsequent press reports that Qualcomm may not have destroyed the core designs and still intended to use the blueprints and technology it acquired with Nuvia for its personal device and server chips, allegedly in a breach of contract with Arm…

Arm says individual licenses are specific to individual licensees and their use cases and situations, and can’t be automatically transferred without Arm’s consent.

According to people familiar with the matter, Nuvia was on a higher royalty rate to Arm than Qualcomm, and that Qualcomm hoped to use Nuvia’s technology on its lower rate rather than pay the higher rate. It’s said that Arm wasn’t happy about that, and wanted Qualcomm to pay more to use those blueprints it helped Nuvia develop.

Qualcomm should have negotiated a royalty rate with Arm for the Nuvia tech, and obtained permission to use Nuvia’s CPU core designs in its range of chips, and failed to do so, it is alleged, and is now being sued.

As I write these words, the lawsuit is still active. When will it be resolved, and how? Who knows? All I can say with some degree of certainty, likely stating the obvious in the process, is:

  • Qualcomm is highly motivated for Snapdragon X to succeed, for a variety of reasons
  • Arm is equally motivated for not only Snapdragon X but also other rumored under-development Windows-on-Arm SoCs to succeed (NVIDIA, for example, is one obvious rumored candidate, given both its past history in this particular space and its existing Arm-based SoCs for servers, as is its public partner MediaTek)
  • And their common partner Microsoft is also equally motivated for Arm-based Copilot+ systems (with Qualcomm the lead example) to succeed.

In closing, a couple of other silicon-related comments:

And with that, and closing in on 3,000 words, I’m going to wrap up for today. Let me know your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.



What’s a “thermal jumper” do, anyway?

Tue, 05/28/2024 - 16:57

I’ve always been interested in simple-looking components which solve well-defined, clear, bounded problems. One carpentry example I encountered and used many years ago is formally known as a hanger bolt, Figure 1.

Figure 1 (left) The schematic of the hanger bolt shows it interfaces a wood-screw thread with a machine-screw thread; (right) the hanger bolt allows a wooden furniture element to be connected to a metal fitting. Source: Plaster and Disaster

One end has a wood-screw thread and the other has a machine-screw thread for a nut or threaded fitting. It’s the mechanical “interface” between a wooden element such as a table leg and a metal mounting bracket.

There’s even a specialized version that features a reversed (left-hand) thread on the machine-screw side, used for suspending construction wiring or metal assemblies from wood. These reverse-thread hanger bolts solve a subtle problem, where the continuous rotation of an assembly would cause a standard right-hand threaded fastener to unscrew, while a left-hand fastener would remain securely in place.

There are also clever electrical components, of course. Given the number of years I’ve been “hanging around” electronic components, circuits, and systems, I thought I was somewhat familiar with, or at least aware of, just about all of these, especially those related to management and removal of heat. I’ve had a long affinity for heat sinks, Figure 2, as well as heat pipes (yes, I know that sounds weird). They do one thing, they do it well, they’re reliable, they don’t push back, and they don’t need software, initialization, attention, or periodic upgrades.

Figure 2 Three of the heat sinks I have collected over the years: (left) slip-on “wings” for a TO-5 can transistor; (middle) heat sink designed for the Intel Pentium II from the late 1990s; (right) a large heat sink for a power-converter module. Source: Bill Schweber

Imagine my surprise when I saw a press release (“TMJ Thermal Jumpers Help Lower Temperatures for High Power Supplies”) from Stackpole Electronics, Inc. (SEI) for a component whose name and function were new to me: the “surface-mount thermal jumper resistor”, or simply “thermal jumper”, Figure 3. The word “resistor” definitely had me confused there, so I clicked over to the data sheet (“TMJ Series Surface Mount Thermal Jumper Chip Resistor”) but found that it had all the facts related to ratings, size, and so on, but did not have the “story” on applications.

Figure 3 The thermal jumper is very plain and gives no hint as to its function. Source: Stackpole Electronics, Inc.

Next step was a quick Google search and, not surprisingly, saw several pages of links to clothing outerwear thermal jumpers designed to keep you warm in cool but not cold weather. Eventually, I reached a page of technical links when I saw this entry from another component vendor (Vishay), which stated it clearly: “a thermal jumper allows the connecting of high-power devices to heat sinks without grounding or otherwise electrically connecting the devices.”

OK, now it made sense, or at least started to do so.

The thermal jumper uses an aluminum-nitride (AlN) substrate with high thermal conductivity to provide a low (but not zero) thermal-resistance path for thermal energy (heat) to get away from its source to a nearby heat sink of some type. At the same time, it offers a high insulation resistance between its electrical terminals.

This jumper is the thermal analog to a zero-ohm resistor. As that name indicates, the zero-ohm device looks like a conventional resistor but is actually a short circuit. It’s used as a machine-insertable jumper to work around PC board-layout challenges (especially on single-sided boards), as a placeholder when a board has multiple configurations, or to obscure circuit specifics by camouflaging some details.

I still wasn’t sure about how to actually use this component, but an application video (“ThermaWick® Thermal Jumper Demo”) from Vishay showed how it functions as a tiny bridge from a resistor as heat source to a nearby PCB copper area functioning as a heat sink, Figure 4.

Figure 4 The test arrangement has a one-watt resistor without heat sinking on the left side, and an identical resistor but with thermal jumper and PC-board copper as heat sink on the right side. Source: Vishay Intertechnology

Using a Fluke thermal imager, the video showed the resistor without the thermal jumper was at about 140°C while the one with the jumper and the modest heat-sink area was at 100°C, a significant 40°C difference (of course, the difference is also a function of the size of the associated PCB copper acting as a heat sink).

Figure 5 The left-right temperature differential between the resistors was about 40°C. Source: Vishay Intertechnology
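Treating those demo readings as steady-state numbers gives a rough feel for the thermal resistances involved. The sketch below assumes the resistor is dissipating its rated 1 W and that ambient is about 25°C; neither figure is stated explicitly in the video, so treat the results as estimates.

```python
# Effective resistor-to-ambient thermal resistance implied by the Vishay demo.
P_W = 1.0                # assumed dissipation of the one-watt resistor, watts
T_AMBIENT_C = 25.0       # assumed ambient temperature, deg C
T_NO_JUMPER_C = 140.0    # measured without the thermal jumper (from the video)
T_WITH_JUMPER_C = 100.0  # measured with jumper + PCB copper heat sink

theta_bare = (T_NO_JUMPER_C - T_AMBIENT_C) / P_W    # ~115 C/W
theta_sunk = (T_WITH_JUMPER_C - T_AMBIENT_C) / P_W  # ~75 C/W

print(f"effective thermal resistance: {theta_bare:.0f} C/W bare, "
      f"{theta_sunk:.0f} C/W with the jumper and copper pour")
# The reduction reflects the added parallel heat path through the AlN jumper
# into the copper, achieved while keeping the two nodes electrically isolated.
```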

The thermal jumper is an effective way to solve a specific class of problems. Of course, although it is simple in appearance and function, creating it is not: it takes engineers, production specialists, material experts, and people skilled in many other disciplines to make it happen and do so in volume production.

Have you ever found a small, unassuming passive or active electrical or mechanical component that is simple and clever, and at the same time solves a pesky problem? Did it “save the day” and resolve a problem that was causing you to lose sleep, to use a cliché?

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.



Resurrecting a diminutive, elementary Arm-based PC

Mon, 05/27/2024 - 18:43

I’ll admit upfront that there’s more than a bit of irony in the topic I’m about to cover today. As I write these words on Saturday, April 20, Qualcomm is rumored to next week be giving the next public update on the Snapdragon X family, the latest generation of its series of SoCs for computing applications, and following up on last October’s initial unveil.

In-between then and now, the company has collaborated with media partners on a series of performance “sneak peeks”, including more recent ones that, per the applications showcased, are of particular personal interest. And it’s a poorly kept secret at this point that Microsoft plans to roll out its next-generation Qualcomm- and Arm-based mobile computers exactly one month from now, again as I write these words (stay tuned for timely coverage to come on this topic).

My personal experience with Windows-on-Arm is longstanding and extensive, beginning with Microsoft’s Surface RT more than a decade back, which was Arm-based but wasn’t Qualcomm-based (it instead ran on a NVIDIA Tegra 3 SoC) and that I tore down after it eventually died. And in my current computing stable are two “Windows 11 on Arm64 (i.e., AArch64)” systems based on the current-generation Qualcomm Snapdragon architecture, a Microsoft Surface Pro X tablet/laptop hybrid running the SQ1 SoC (a clock-boosted Snapdragon 8cx SC8180X):

and a Windows Dev Kit 2023 (aka “Project Volterra”) desktop based on the SQ3 (Snapdragon 8cx Gen3) SoC, for which I provided a visual “sneak peek” a month back (as I’m writing this):

But what I’m covering today is Microsoft and Qualcomm’s first developer-tailored stab at Windows-on-Arm, the ECS LIVA Mini Box QC710 Desktop, based on a prior-generation Snapdragon 7c SC7180 SoC:

I went into this particular acquisition and hands-on evaluation with eyes wide open. I was already aware, for example, of the sloth-like performance of which other reviewers had already complained. To wit, note that Microsoft’s documentation refers to the QC710 as the “perfect testbed for Windows on Snapdragon (ARM) application developers” (italicized emphasis mine) vs an actual code development platform. Considering the QC710’s testing-focused aspirations, its anemic specs both in an absolute sense and versus the Project Volterra successor such as:

  • Only 4 GBytes of RAM, and
  • A 64 GByte eMMC SSD

neither user-upgradeable, to boot (bad pun intended), make at least a bit more sense than they would otherwise…if your code runs smoothly on this, it’ll run on anything, I guess?

So, why’d I take the purchase plunge anyway? For one thing, I’ve always been intrigued by the platform’s diminutive (119 x 116.6 x 35 mm/1.38” x 4.69” x 4.59”, and 230g/0.5 lb.) hockey puck-like form factor:

For another, it comes bundled with a 30W USB-C power supply. Right now, in fact, I’m reliably running mine off the 27W PSU (at top in the following photos) that normally recharges my 11” iPad Pro, believe it or not:

In fact, I recently (and accidentally) learned, when I plugged the wrong end of the USB-C cable into the QC710, that I could even boot it off the iPad Pro’s built-in battery, although the boot process understandably didn’t get very far (the QC710 got confused when it tried to access the iPad Pro’s unknown-format storage).

Price was another notable factor. The QC710 originally cost $219. When I got mine, it was down to $59.27 in open-box condition. And, speaking of “open box”, once I stumbled across initial evidence of the issues I’ll cover in this writeup, Woot! offered me $30 in compensation to keep it in lieu of sending it back (where it’d likely have just ended up in a landfill).

I figured it’d make an interesting single-function PC acting as a Roon server and tethered to external storage over USB 3.2 Gen1 Type-A, 10/100 RJ45 and/or Wi-Fi 5 (802.11ac 2×2 MIMO, to be precise), for example (although scant system memory, not to mention limited CPU horsepower, might prove problematic). If nothing else, it’d be a decent entry-level donation to someone else. And yes, fundamental engineering curiosity was also an acquisition factor.

Here are some pics of my particular device, as usual starting with box shots:

and now of the QC710 itself, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

and its accompanying power supply:

So, what was that “initial evidence of issues” that I previously mentioned? In the spirit of “a picture paints a thousand words”, here’s what greeted me the first time I booted the QC710:

The QC710 originally shipped with Windows 10 Home, which doesn’t support BitLocker mass storage encryption. Apparently, though, the previous owner upgraded it to the Pro variant of either Windows 10 or Windows 11, and then either attempted to factory-reset the partition before returning it or Woot! did it prior to resale. Regardless, without the BitLocker key I wasn’t going to be able to get to the existing O/S build. And, by the way, about that “Press the Windows key” statement at the bottom of the screenshot? No go; neither the keyboard nor mouse I had plugged into the system’s two USB-A ports worked. The root issue wasn’t hardware; I stumbled onto the fact that if I hit “ESC” as soon as I saw the initial firmware boot screen:

I’d instead end up in Qualcomm’s BDS (Boot Device Selection) menu, from which the keyboard worked fine until Windows attempted to launch. BDS isn’t a cursor-amenable GUI, as you can see, but the mouse was lit up underneath and was presumably also functional outside of Windows.

Alas, I had no BDS documentation, therefore no idea what to do next. Hold that thought.

“No problem,” I figured, “I’ll just reinstall a fresh copy of Windows for Arm” (at additional license key expense, but I digress). Problem, actually: There are currently only two ways to get an ISO of Windows for Arm to put on a USB flash drive. One, which I didn’t try, at least directly (again, hold that thought) involves enrolling as a Windows Insider. The other leverages an unsanctioned-by-Microsoft but slick site called UUP Dump, which I did try. And before any of you ask “what about Microsoft’s Media Creation Tool?”…I tried that too, from both of my Arm-based Windows 11 systems. Both times I ended up with Windows…for x86 CPUs.

So, I went the UUP Dump route instead, trying both Windows 10 and 11, both of which conceptually worked great. In combination with Rufus, I ended up with bootable installer USB flash drives which the QC710 recognized fine. And although I was left with only one free USB-A port, a USB hub attached to it enabled me to connect both my keyboard and mouse. But in both installation-attempt cases, although I ended up at the initial setup screen:

I couldn’t get any further because the keyboard and mouse again weren’t functional. And yes, I even tried separately powering the USB hub versus relying on system power supplied over USB-A. I realized at that point (and my colleague later confirmed) that, for reasons that remain baffling to me, the complete Qualcomm hardware driver stack isn’t natively bundled within the O/S installer. Obviously, USB mass storage support was enabled (therefore the boot-from-flash stick success) and baseline (at minimum) graphics were also functional, enabling me to see the setup screen. But no keyboard or mouse support? Really?

About “my colleague”…the only thing left that I could think to do was to “throw a Hail Mary pass”…which thankfully ended up getting caught and turned into a touchdown (complete with a spike in the end zone). As I was doing initial research on the QC710 with the thought of perhaps doing a teardown on it (an aspiration which I may yet realize, especially if I can convince myself that it’d be nondestructive and cosmetically preserving) I searched the Internet to see if anyone else had already done one. I didn’t find much on the QC710 at all, and most of the little that I did uncover ended up being underwhelming-results reviews. But I struck gold when I stumbled across a detailed product page (even more detailed now, subsequent to our interaction) from a seasoned and very knowledgeable engineer named Rafael Rivera. The tagline on his LinkedIn profile, “Forward engineer by day, reverse engineer by night”, pretty much sums it up. 😀

I “out of the blue” emailed Rafael a quick summary of who I was and my situation with the  QC710, and he rapidly and enthusiastically responded with willingness to help after pulling his system out of storage and refreshing his memory on its details and quirks (his initial writeup was published in mid-November 2021). His suggested first step was a set of instructions (all now documented on his web page) that would:

  1. Use the Qualcomm BDS utility to put the QC710 in UEFI Shell mode, then mount the QC710 SSD’s main partition as an external USB-cabled drive from another Windows machine (I used my Surface Pro 7+)
  2. Remotely reformat that partition, and then
  3. Remotely use Microsoft’s DISM utility to first put a fresh Windows “build” on that partition and then augment the “build” with the Qualcomm driver suite he’d also published to his web page.

Problem 1: I was able to remote-mount the QC710 partition from my Surface Pro 7+, but when I tried to reformat the new drive letter from within Windows Explorer, it disappeared from view never to return…although something had changed as the QC710 boot screen was now different:

At Rafael’s suggestion, I tried Windows’ Disk Management utility instead, which did the trick (it turned out that my earlier attempt had wiped the partition’s existing contents but the QC710 SSD then unmounted itself prior to reformat completion).

Problem 2: But when I then tried to run DISM using the instructions he sent me, I kept getting the following:

Error: 87
The parameter is incorrect.

In comparing notes afterwards, Rafael and I realized that since I was running a “stock” Windows 11 build on the Surface Pro 7+ versus his newer Developer build, my version of DISM was older (and apparently buggier) than his. But at this point, the only thing to do was to pack up the QC710 and ship it to him for onsite diagnosis. He got it on a Friday afternoon and that same night initially reported back that DISM ran fine for him, and he was able to get the main partition rebuilt problem-free.

Shortly thereafter, however, he sent me another reply, noting that the system still wasn’t booting. He ended up spending a good chunk of his weekend working on the QC710, in the process discovering that two other SSD partitions, the EFI System Partition (ESP) and the Boot Configuration Data (BCD), also required re-creation. The following commentary from him will likely be helpful to anyone else striving to follow in his footsteps:

We needed to also rebuild/repair the EFI system boot partition and recovery partition using standard tools, like diskpart and bcdboot. (To mount all partitions on the storage medium, as opposed to just the Windows basic data partitions, I used UsbfnMsdApp.efi -m “eMMC User”.)

When the system was back in my hands a couple of days later, it had an un-activated Insider Dev channel build of Windows 11 on it and was in default Out of Box Experience (OOBE) mode:

And yes, the keyboard was recognized this time (and the mouse, too) 😀

After going through the usual setup steps, I had a fully functional Windows 11 system:

which to date has received a handful of big-and-small updates:

Abundant thanks to Rafael, such a trooper that he even said “thanks for the challenge” after!

As for Microsoft and Qualcomm (and other Arm licensees)…I completely understand the underlying motivation for you to be investing so long and significantly in the Windows-on-Arm effort as an end-user alternative to x86 hegemony. It’s at the root of why I’ve been following the project for as long and in-depth as I have. But I was again reminded of its relative immaturity a couple of days ago when, striving to cut myself free from my kludgy wired keyboard and mouse-plus-USB hub setup for the QC710, I picked up an on-sale Microsoft All-in-One Media Keyboard:

but then had to search for, download and install the Microsoft Mouse and Keyboard Center app in order to get the trackpad to act as anything other than a rudimentary mouse (but hey, at least an ARM64 version of the app was available!).

I’ve had the occasional peripheral not work out of box (OOB) when I did an x86-based PC build in the past, but it was usually something relatively “obscure” like an optimized graphics driver set or a Wi-Fi or Bluetooth driver stack. That said, using the standard initial Windows build I was still able to passably drive the display and otherwise get Windows to a functional initial state where I could then connect to the Internet to download and install the additional software I’d need (over wired Ethernet, for example). And for goodness’ sake, the keyboard and mouse always worked OOB, at least to an elementary degree!

Even though Windows on Arm has far fewer hardware building blocks (and combinations of them) that it currently needs to support versus the legacy x86 alternative, it still seemingly undershoots even a modest modicum of functionality. And that it’s apparently so easy to corrupt a mass storage device’s partition contents to such a degree that the system containing it is rendered braindead in the absence of expert heavy lifting is equally troubling. Try, try again!

Sound off with your thoughts in the comments, please, readers. And thanks again for everything, Rafael!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content


The post Resurrecting a diminutive, elementary Arm-based PC appeared first on EDN.

Samsung’s memory chip business: Trouble in paradise?

Mon, 05/27/2024 - 04:00

The week of 20 May 2024 has been quite eventful for Samsung’s semiconductor unit, the world’s largest producer of memory chips like DRAMs, SRAMs, NAND flash, and NOR flash. Early in the week, an unexpected changing of the guard at Samsung’s semiconductor business rocked the industry.

When Samsung abruptly replaced its semiconductor business chief Kyung Kye-hyun with DRAM and flash memory veteran Jun Young-hyun, the transition was mostly credited to the “chip crisis” associated with Samsung being a laggard in the high bandwidth memory (HBM) business, where SK hynix has become the market leader.

Figure 1 Jun Young-hyun led Samsung’s memory chip business from 2014 to 2017 after working on the development of DRAM and flash memory chips. Source: The Chosun Daily

It’s worth noting that management reshuffles at Samsung are usually announced at the start of the year. However, being seen as a laggard in HBM technology has pushed the memory kingpin into a desperate position, and the appointment of a new chip unit head mostly reflects that sense of crisis at the world’s largest memory chip supplier.

HBM, a customized memory product, has enjoyed explosive growth in artificial intelligence (AI) applications due to its suitability for training AI models like ChatGPT. HBM, where DRAM chips are vertically stacked to save space and reduce power consumption, helps process massive amounts of data produced by complex AI applications.

SK hynix, Samsung’s Korean memory chip rival, produced its first HBM chip in 2013. Since then, it has continuously invested in developing this memory technology while bolstering manufacturing yield. According to media reports, SK hynix’s HBM production capacity is fully booked through 2025.

SK hynix is also the main supplier of HBM chips to Nvidia, which commands nearly 80% of the GPU market for AI applications, a market in which HBM chips are strategically paired with AI processors like GPUs to overcome data bottlenecks. Samsung, by contrast, is still catching up on HBM technology and is known to be in the process of qualifying its HBM chips for Nvidia AI processors.

During Nvidia’s annual GPU Technology Conference (GTC), held in March 2024 in San Jose, California, the company’s co-founder and CEO Jensen Huang endorsed Samsung’s HBM3e chips, then undergoing verification at Nvidia, with a note reading “Jensen Approved” next to Samsung’s 12-layer HBM3e device on display on the GTC 2024 show floor.

HBM test at Nvidia

While the start of the week stunned the industry with an unusual reshuffle at the top, the end of the week came with a bigger surprise. According to a report published by Reuters on Friday, 24 May, Samsung’s HBM chips failed to pass Nvidia’s tests for pairing with its GPUs due to heat and power consumption issues.

In another report published in The Chosun Daily that day, Professor Kwon Seok-joon of the Department of Chemical Engineering at Sungkyunkwan University said that Samsung has not been able to fully manage quality control of through-silicon vias (TSVs) for packaging HBM memory chips. In other words, achieving high yield in packaging multiple DRAM layers has been challenging. Another insider pointed to reports that the power consumption of Samsung’s HBM3e samples is more than double that of SK hynix’s.
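As a rough, back-of-the-envelope illustration of why yield is so unforgiving in stacked packaging (the per-step figure here is a hypothetical assumption, not a Samsung number): if each of the N bonding/TSV steps in an N-layer stack succeeds with probability y, the whole stack survives with probability

Y = y^N, so for y = 0.99 and N = 12, Y = 0.99^12 ≈ 0.89

and a seemingly small slip to y = 0.97 per step cuts the stack yield to about 0.97^12 ≈ 0.69. Every added layer multiplies the opportunities for a stack-killing defect.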

Figure 2 According to the article published in Reuters, a test for Samsung’s 8-layer and 12-layer HBM3e memory chips failed in April 2024. Source: Samsung Electronics

While Nvidia declined to comment on this story, Samsung was quick to state that the situation has not been concluded and that testing is still ongoing. The South Korean memory chipmaker added that HBM, a specialized memory product, requires optimization through close collaboration with customers. Jeff Kim, head of research at KB Securities, quoted in the Reuters story, acknowledged that while Samsung had anticipated passing Nvidia’s tests quickly, a specialized product like HBM could take some time to go through customers’ performance evaluations.

Still, it’s a setback for Samsung that could work to the advantage of SK hynix and Micron, the other players in the high-stakes HBM game. Micron, which claims that its HBM3e consumes 30% less power than its competitors’, has announced that its 24-GB, 8-layer HBM3e memory chips will be part of Nvidia’s H200 Tensor Core GPUs, breaking SK hynix’s previous exclusivity as the sole HBM supplier for Nvidia’s AI processors.

A rude awakening?

Samsung’s laggard position in HBM won’t be the only worry for incoming chief Jun. Despite the recovery in memory prices, Samsung’s semiconductor business is lagging in competitiveness on several fronts. According to another Reuters report, Samsung’s high-density DRAM and NAND flash products are no longer ahead of the competition.

Next, the Korean tech heavyweight’s foundry operation is struggling to catch up with market leader TSMC. Samsung’s chip contract-manufacturing business has struggled to win big customers, while TSMC remains far ahead in overall market share. Then there is the global AI wave, in which, HBM woes aside, Samsung is still struggling to find its place.

Samsung is known for its fierce competitiveness, and the appointment of a new chief for its semiconductor unit signals that it means business. The Korean tech giant faces an uphill battle in catching up in HBM technology, but one thing is for sure: Samsung is no stranger to navigating hot waters.

Related Content


The post Samsung’s memory chip business: Trouble in paradise? appeared first on EDN.

Single event upset and recovery

Fri, 05/24/2024 - 16:35

The effects of cosmic rays were once discussed in “Doubled-up MOSFETs”.

The idea was that component redundancy (paired MOSFETs in that case) would allow one MOSFET to keep functioning even if its partner in a switched-mode power supply were disabled from normal switching by a cosmic ray event, a single event upset, or SEU (Figure 1).

Figure 1 An SEU from a cosmic ray can lead to component failure.

However, an SEU doesn’t necessarily have to come from a cosmic ray. CMOS integrated circuits are sometimes seen to latch up for no apparent reason. The latch-up event comes about from internal four-layer structures that look very much like SCRs and which, when triggered, can virtually short-circuit the +Vcc rail pin to ground. Unlike the power MOSFET situation, component redundancy may not be possible. In such a case, SEU recovery may be the answer.

Figure 2 is conceptual, but it is derived from actual circuitry that was used in a more complex design. 

Figure 2 The SEU recovery concept, where the circuitry in green is latch-up prone.

The basic idea is that Q1, Q2, et al. in green represent a latch-up-prone integrated circuit, probably CMOS, while V1 et al. in blue represent a latch-up trigger. An RC pair in yellow delays the latch-up recovery process so that the recovery scenario can be more easily seen on the scope; we will shortly remove that RC pair.

When the IC latches up, it drags down the output of the +5-volt regulator. When that voltage falls below the comparator threshold, +3 volts as shown here, the comparator sends a drive pulse to the power MOSFET which further lowers the rail voltage to where the IC latch cannot be sustained. When the power MOSFET turns off again, the +5-volt regulator output voltage returns to normal.
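For a rough sense of the time scales involved (the component values, and the exact placement of the RC, are assumptions chosen purely for illustration, not values from the actual design): if the RC-filtered node starts at the normal 5-volt level, decays toward the latched-up rail level (call it roughly 1 volt), and the comparator trips at 3 volts, the added delay is approximately

t = RC ln((5 − 1)/(3 − 1)) = RC ln(2) ≈ 0.69 RC

so a hypothetical 100-kΩ/0.1-µF pair would stretch the recovery event by roughly 7 ms, plenty long enough to capture comfortably on a scope.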

If we now remove that RC delay, the scenario proceeds the same way, but in this simulation it all happens too fast for the saturation voltage of the latched-up device to be viewable in the scope display (Figure 3).

Figure 3 SEU recovery with the RC delay removed.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

 Related Content


The post Single event upset and recovery appeared first on EDN.

Why HBM memory and AI processors are happy together

Fri, 05/24/2024 - 09:18

High bandwidth memory (HBM) chips have become a game changer in artificial intelligence (AI) applications by efficiently handling complex algorithms with high memory requirements. They became a major building block in AI applications by addressing a critical bottleneck: memory bandwidth.

Figure 1 HBM comprises a stack of DRAM chips linked vertically by interconnects called TSVs. The stack of memory chips sits on top of a logic chip that acts as the interface to the processor. Source: Gen AI Experts

Jinhyun Kim, principal engineer at Samsung Electronics’ memory product planning team, acknowledges that the mainstreaming of AI and machine learning (ML) inference has led to the mainstreaming of HBM. But how did this love affair between AI and HBM begin in the first place?

As Jim Handy, principal analyst with Objective Analysis, put it, GPUs and AI accelerators have an unbelievable hunger for bandwidth, and HBM gets them where they want to go. “If you tried doing it with DDR, you’d end up having to have multiple processors instead of just one to do the same job, and the processor cost would end up more than offsetting what you saved in the DRAM.”

DRAM chips struggle to keep pace with the ever-increasing demands of complex AI models, which require massive amounts of data to be processed simultaneously. On the other hand, HBM chips, which offer significantly higher bandwidth than traditional DRAM by employing a 3D stacking architecture, facilitate shorter data paths and faster communication between the processor and memory.

That allows AI applications to train on larger and more complex datasets, which in turn, leads to more accurate and powerful models. Moreover, as a memory interface for 3D-stacked DRAM, HBM uses less power in a form factor that’s significantly smaller than DDR4 or GDDR5 by stacking as many as eight DRAM dies with an optional base die that can include buffer circuitry and test logic.

Next, each new generation of HBM incorporates improvements that coincide with launches of the latest GPUs, CPUs, and FPGAs. For instance, with HBM3, bandwidth jumped to 819 GB/s and maximum density per HBM stack increased to 24 GB to manage larger datasets.
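That headline bandwidth figure follows directly from the interface width and per-pin data rate. An HBM3 stack presents a 1024-bit-wide interface running at 6.4 Gb/s per pin, so:

1024 bits × 6.4 Gb/s = 6553.6 Gb/s ≈ 819.2 GB/s per stack

which is why a GPU surrounded by several HBM stacks can reach multi-TB/s aggregate memory bandwidth.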

Figure 2 Host devices like GPUs and FPGAs in AI designs have embraced HBM due to their higher bandwidth needs. Source: Micron

The neural networks in AI applications require a significant amount of data both for processing and training, and training sets alone are growing about 10 times annually. That means the need for HBM is likely to grow further.

It’s important to note that the market for HBM chips is still evolving and that HBM chips are not limited to AI applications. These memory chips are increasingly finding sockets in applications serving high-performance computing (HPC) and data centers.

Related Content


The post Why HBM memory and AI processors are happy together appeared first on EDN.

Demo board provides dual-motor control

Thu, 05/23/2024 - 20:03

ST’s demonstration board controls two three-phase brushless motors using an onboard STSPIN32G4 controller with an embedded MCU. The controller’s integrated MCU is based on a 32-bit Arm Cortex-M4 core, which delivers the processing power to manage both motors simultaneously.

The EVSPIN32G4-DUAL demo board can be used for developing industrial and consumer products, ranging from multi-axis factory automation systems to garden and power tools. It is capable of executing complex algorithms, like field-oriented control (FOC), in real time. MCU peripherals support sensored or sensorless FOC, as well as advanced position and torque control algorithms.
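For reference, the heart of sensored or sensorless FOC is a pair of reference-frame transforms the Cortex-M4 must evaluate every PWM cycle (the amplitude-invariant textbook form is shown here; the exact convention in ST’s firmware may differ). With a balanced motor where ia + ib + ic = 0, the Clarke transform maps the measured phase currents onto a stationary two-axis frame, and the Park transform then rotates that frame with the rotor angle θ:

iα = ia,  iβ = (ia + 2 ib)/√3
id = iα cos θ + iβ sin θ,  iq = −iα sin θ + iβ cos θ

The resulting id and iq are near-DC quantities that the current-loop PI controllers regulate, which is what makes running two such control loops simultaneously largely a matter of raw MCU throughput.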

Along with the integrated gate driver of the STSPIN32G4 controller, the board employs an additional STDRIVE101 gate driver. The two power stages deliver up to 10 A with a maximum supply voltage of 74 V. Built-in safety features include drain-source voltage monitoring, cross-conduction prevention, several thermal protection mechanisms, and undervoltage lockout.

The EVSPIN32G4-DUAL demo board is available now with a single-unit price of $177.62.

EVSPIN32G4-DUAL product page

STMicroelectronics

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Demo board provides dual-motor control appeared first on EDN.

Microchip grows rad-tolerant MCU portfolio

Thu, 05/23/2024 - 20:02

Offering high radiation tolerance, the SAMD21RT MCU from Microchip is capable of operating in the harsh environments found in space. The device, which is based on a 32-bit Arm Cortex-M0+ core running at up to 48 MHz, also meets the stringent size and weight constraints critical for space applications.

The SAMD21RT operates over a temperature range of -40°C to +125°C and tolerates up to 50 krads of total ionizing dose (TID) radiation. It also provides single event latch-up (SEL) immunity of up to 78 MeV·cm²/mg. Operating voltage is 3 V to 3.6 V.

Occupying a footprint of just 10×10 mm, the SAMD21RT MCU packs 128 kbytes of flash memory and 16 kbytes of SRAM in its 64-pin plastic or ceramic QFP package. It furnishes multiple peripherals, including a 12-bit ADC with up to 20 channels, a 10-bit DAC, 12-channel DMA controller, two analog comparators, and various timer/counters. To conserve power, the SAMD21RT offers idle and standby sleep modes.

Limited samples of the SAMD21RT microcontroller are available by contacting a Microchip sales representative.

SAMD21RT product page

Microchip Technology 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Microchip grows rad-tolerant MCU portfolio appeared first on EDN.

Dual-channel gate drivers fit IGBT modules

Thu, 05/23/2024 - 20:02

Scale-iFlex XLT plug-and-play dual-channel gate drivers from Power Integrations operate IGBT modules with blocking voltages of up to 2.3 kV. These ready-to-use drivers work with LV100 (Mitsubishi), XHP 2 (Infineon), and equivalent IGBT modules used in wind, energy storage, and solar renewable energy installations.

Each driver board features an electrical interface, a built-in DC/DC power supply, and negative temperature coefficient (NTC) readout for isolated temperature measurement of the power module. According to the manufacturer, NTC data reporting increases reliability and module utilization by as much as 30%. It also reduces hardware complexity, eliminating multiple cables, connectors, and additional isolation circuitry.
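As a reminder of how an NTC readout like this becomes a module temperature (the R25 and beta values below are generic illustrations, not figures from the Scale-iFlex XLT or any particular power module): with the simple beta model,

T = 1 / (1/T25 + (1/B) ln(R/R25))

a hypothetical 5-kΩ (at 25°C, i.e., T25 = 298 K) NTC with B = 3435 K that reads 1 kΩ corresponds to roughly 347 K, or about 73°C.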

The dual-channel gate drivers support three IGBT voltage classes: 1200 V, 1700 V, and 2300 V. They have a maximum switching frequency of 25 kHz and operate over a temperature range of -40°C to +85°C. Output power is 1 W per channel at maximum ambient temperature. Protection features include short circuit, soft shutdown, and undervoltage lockout.
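A quick way to see what the 1-W-per-channel rating buys (the gate charge and gate voltage swing below are hypothetical, chosen only for illustration): the average gate-drive power is approximately

P = Qg × ΔVge × fsw

so a large IGBT module with a total gate charge of, say, 4 µC driven over a 25-V swing consumes 4 µC × 25 V × 10 kHz = 1 W per channel at 10 kHz. The 25-kHz maximum switching frequency is therefore only fully usable with devices of correspondingly smaller gate charge.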

Scale-iFlex XLT gate drivers are now available for sampling.

Scale-iFlex XLT product page

Power Integrations

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Dual-channel gate drivers fit IGBT modules appeared first on EDN.

SiC MOSFETs reside in 7-pin D2Pak

Thu, 05/23/2024 - 20:02

Nexperia now offers 1200-V SiC MOSFETs in 7-pin D2Pak (TO-263-7) plastic packages with on-resistance values of 30 mΩ, 40 mΩ, 60 mΩ, and 80 mΩ. With the release of the NSF0xx120D7A0 series of SiC MOSFETs, the company is addressing the need for high-performance SiC switches in surface-mount packages like the D2Pak-7.

The N-channel devices can be used in various industrial applications, including electric vehicle charging, uninterruptible power supplies, photovoltaic inverters, and motor drives. Nexperia states its process technology ensures that its SiC MOSFETs offer industry-leading temperature stability: the parts’ nominal RDS(ON) increases by only 38% over an operating temperature range of +25°C to +175°C. In addition, a tight gate-source threshold voltage tolerance allows the discrete MOSFETs to share current evenly when connected in parallel.
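In concrete terms for the 30-mΩ part (a straightforward application of the stated 38% rise, with the 10-A operating current assumed purely for illustration):

RDS(ON) at 175°C ≈ 30 mΩ × 1.38 ≈ 41 mΩ

so conduction loss at 10 A climbs from I²R = (10 A)² × 30 mΩ = 3.0 W at 25°C to about 4.1 W at 175°C.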

The MOSFET’s TO-263 single-ended surface-mount package has 7 leads with a 1.27-mm pitch and occupies a footprint area of 189.2 mm². A Kelvin source pin speeds commutation and improves switching.

For more information about the NSF0xx120D7A0 series of SiC MOSFETs in the TO-263-7 package, click here.

Nexperia

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post SiC MOSFETs reside in 7-pin D2Pak appeared first on EDN.

Gate driver duo optimizes GaN FET design

Thu, 05/23/2024 - 20:01

A two-chip set from Allegro delivers isolated gate drive for e-mode GaN FETs in multiple applications and topologies. Comprising the AHV85000 and AHV85040, the pair of ICs is the third product in the company’s high-voltage Power-Thru portfolio, transmitting both the PWM signal and bias power through a single external isolation transformer. This eliminates the need for an external auxiliary bias supply or high-side bootstrap.

Expanding on Allegro’s Power-Thru technology, the combo chipset offers the same benefits found in its existing gate drivers, but relocates the isolation transformer from internal to external. By doing so, the AHV85000 and AHV85040 afford greater design flexibility for isolation, power, and layout, as engineers can choose a transformer based on their design requirements. They are well-suited for use in clean energy applications, such as solar inverters and EV charging, as well as data center power supplies.

The AHV85000 and AHV85040 form the primary-side transmitter and secondary-side receiver of an isolated GaN FET gate driver. Together, they simplify system design and reduce EMI through reduced total common-mode capacitance. The chipset also enables the driving of a floating switch at any location in a switching power topology.

The AHV85000 and AHV85040 are sold as a two-chip set. Each chip comes in a 3×3-mm, 10-pin DFN surface-mount package. The parts are available through Allegro’s distributor network.

AHV85000/40 product page

Allegro Microsystems 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Gate driver duo optimizes GaN FET design appeared first on EDN.

Analog TV transmitter—analog problem

Thu, 05/23/2024 - 16:25

In the late 1980s the television station I worked at was still using an early 1970s transmitter, an RCA TT-50FH (50 kW, Series F, High-band VHF).

The transmitter was made with three cabinets: two 25 kW amplifiers, A and B, on the left and right, and a control cabinet with aural and visual exciters and intermediate power amplifiers (IPAs) in the center. The amplifier outputs were combined externally to produce the full 50 kW (Figure 1).

Figure 1 The TV transmitter was made with three cabinets: two 25 kW amplifiers, A and B, on the left and right, and a control cabinet with aural and visual exciters and intermediate power amplifiers (IPAs) in the center. 

Every four or five months we’d notice intermittent black lines running through the video. Apparently, this had been an ongoing problem for several years, with the problem originating in the A amplifier. The transmitter supervisor brought me out to the transmitter site, and we’d use his standard procedure, as follows:

  • Split the transmitter, so amplifier B fed the antenna and amplifier A fed the dummy load.
  • Slide the IPA chassis out from the center cabinet and remove its top.
  • Turn all of the adjustments on the IPA to a minimum.
  • Follow the IPA procedure in the maintenance manual to set up the IPA for proper operation.
  • Close up the IPA, slide the chassis back in place, and recombine the transmitter amplifiers.

This worked every time, eliminating the black lines for another few months.

After I saw this happen two or three times, I got a little suspicious, especially since the IPA adjustments always ended up exactly where they had started. I asked the transmitter supervisor how he came up with the fix. He learned it from his predecessor, who probably learned it from his predecessor.

This fix didn’t seem right. It felt more like a bad connection than an electronic component failure.

I took a look in the back of the transmitter, at the IPA’s connections. The IPA used a loop-through input, which allows one signal to feed multiple devices. If that capability isn’t needed, the loop-through output is terminated with a 75-ohm resistor matching the characteristic impedance of the coax cable.
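As a quick transmission-line reminder of why that termination, and a solid ground return for it, matter: the reflection coefficient at the end of the cable is

Γ = (ZL − Z0)/(ZL + Z0)

With a good 75-Ω load on 75-Ω coax, ZL = Z0 and Γ = 0, so nothing reflects. Leave the loop-through effectively open (or degrade its ground path) and ZL grows large, pushing Γ toward +1, with most of the signal bouncing back toward the source, exactly the sort of misbehavior that shows up as artifacts in the picture.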

In more modern equipment, if you consider the 1980s modern, the loop-through is made with a pair of BNC connectors on a circuit board. In this transmitter, RCA built the device with N-connectors on a bracket. See Figure 2.

Figure 2 The IPA used a loop-through input, which allows one signal to feed multiple devices. This was built with N-connectors on a bracket, with the output terminated by a 75-ohm resistor.

When I checked the connections, I found the cables were tight on the chassis-mounted jacks, but the jacks themselves were not. The hex nuts on the rear of the bracket had loosened up over the years, so the ground connection, which depended on the metal bracket, was poor. We tightened the nuts, and the transmitter behaved itself for the rest of its life, well into the 1990s.

So why did the standard procedure fix the problem for a while each time? It didn’t, of course. It was the sliding back and forth of the chassis that was shaking up the cables and connectors and restoring a good ground connection, even if only slightly and only for a while.

Those were the good old days. With analog TV you could see the problem in the video or hear it in the audio. HDTV transmitters, on the other hand, just go dark and silent when there’s a problem. But those are stories for another day.

Robert Yankowitz retired as Chief Engineer at a television station in Boston, Massachusetts, where he had worked for 23 years. Prior to that, he worked for 15 years at a station in Providence, Rhode Island.

Related Content


The post Analog TV transmitter—analog problem appeared first on EDN.
