EDN Network
Simulator tool tests Microchip SiC power devices

The MPLAB SiC power simulator from Microchip evaluates the company’s SiC power devices and modules across various topologies during the design phase. Developed in collaboration with Plexim, the MPLAB SiC power simulator is a PLECS-based software environment that serves as an online tool that eliminates the need to purchase a simulation license.
By providing valuable benchmark data, the simulation tool helps accelerate the design process of common power converter topologies in DC/AC, AC/DC, and DC/DC applications before committing the design to hardware. It also reduces component selection time. A power electronics designer deciding between a 25-mΩ and 40-mΩ SiC MOSFET for a three-phase active front-end converter can get immediate simulation results, such as average power dissipation and peak junction temperature of the devices.
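As a first-order sketch of the tradeoff such a simulation quantifies, conduction loss scales with the square of RMS current times on-resistance. The full tool also models switching losses and junction temperature; the operating point below is an illustrative assumption, not data from the simulator.

```python
# First-order conduction-loss comparison between two SiC MOSFET options.
# The RMS current is an assumed, illustrative operating point.

def conduction_loss_w(i_rms_a: float, rds_on_mohm: float) -> float:
    """Average conduction loss in watts: P = I_rms^2 * Rds(on)."""
    return i_rms_a ** 2 * rds_on_mohm * 1e-3

I_RMS = 20.0  # assumed per-device RMS current, in amps
for rds in (25, 40):
    print(f"{rds} mOhm device: {conduction_loss_w(I_RMS, rds):.1f} W")
```

At this operating point, the 40-mΩ device dissipates 60% more conduction loss, which is exactly the kind of difference the simulator resolves alongside switching loss and thermal behavior.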
The free MPLAB SiC power simulator can be used to design power systems for e-mobility, sustainability, and industrial applications such as electric vehicles, on/off-board charging, power supplies, and battery storage systems.
To access the complimentary MPLAB SiC power simulator, click here. Design resources for Microchip’s SiC-based hardware and software can be found here.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Simulator tool tests Microchip SiC power devices appeared first on EDN.
Supermicro launches NVIDIA-powered AI development system

At the heart of Supermicro’s AI development platform are four NVIDIA A100 80-GB GPUs to accelerate a wide range of AI and HPC workloads. The system also leverages two 4th Gen Intel Xeon Gold 6444Y processors running at a base clock rate of 3.6 GHz. Self-contained liquid cooling for all CPUs and GPUs offers whisper-quiet operation (approximately 30 dB) in office and data center environments.
Designated the SYS-751GE-TNRT-NV1, the deskside system delivers over 2 petaflops of AI performance. It comes preloaded with the Ubuntu 22.04 LTS operating system and the NVIDIA AI Enterprise software suite. Also included are 512 GB of DDR5 memory, six 1.9-TB drives providing a total of 11.4 TB of storage, and an NVIDIA ConnectX-6 DX network adapter.
The SYS-751GE-TNRT-NV1 platform comes with a three-year subscription license for NVIDIA AI Enterprise. This support and service subscription provides access to an extensive library of full-stack software, including AI workflows, frameworks, and over 50 NVIDIA pre-trained models.
SYS-751GE-TNRT-NV1 product page
The post Supermicro launches NVIDIA-powered AI development system appeared first on EDN.
Keysight grows e-mobility test platform

Component-level and field test tools join Keysight’s e-mobility test portfolio to support the entire electric vehicle charging development cycle. These tools improve interoperability between electric vehicle (EV) and electric vehicle supply equipment (EVSE) products through conformance testing and type approvals by focusing on the complete range of communication protocols employed by the Combined Charging System (CCS) standards.
The new e-mobility charging test solutions include:
- SL1550A EV/EVSE charging communication interface tester for performing component-level testing of electric vehicle and supply equipment communication controllers
- SL1556A CCS charging protocol tracer for seamless observation of the CCS communication channel between EV and EVSE in the lab or in the field
- SL156xA EV/EVSE charging test robot series to automate the HMI interactions between the charging test system and the system under test
The company also offers smart charging emulation software. This customizable and configurable emulation environment enables scenario- and functional-driven EV and EVSE tests based on CCS Basic, CCS Extended, and CCS Advanced profiles supporting DIN 70121, ISO 15118-2/-3, and ISO 15118-20/-3.
For more information about Keysight’s e-mobility test systems and software, click here.
The post Keysight grows e-mobility test platform appeared first on EDN.
System enables hybrid quantum-classical computing

NVIDIA debuted the DGX Quantum, a system for researchers working in high-performance, low-latency quantum-classical computing, at its GTC 2023 developers conference. The GPU-accelerated system blends NVIDIA’s Grace Hopper Superchip and CUDA Quantum open-source programming model with the OPX+ quantum controller from Quantum Machines.
The hardware/software platform allows developers to build powerful applications that combine quantum computing with state-of-the-art classical computing, while adding capabilities for calibration, control, quantum error correction, and hybrid algorithms. Connected via a PCIe cable, the Grace Hopper system and OPX+ enable sub-microsecond latency between GPUs and quantum processing units (QPUs).
NVIDIA’s Grace Hopper integrates the Hopper architecture GPU with the new Grace CPU. It delivers up to 10X higher performance for applications running terabytes of data. OPX+ brings real-time classical compute engines into the heart of the quantum control stack to maximize any QPU and open new possibilities in quantum algorithms. Both Grace Hopper and OPX+ can be scaled to fit the size of the system, from a few-qubit QPU to a quantum-accelerated supercomputer.
DGX Quantum also equips developers with CUDA Quantum, a hybrid quantum-classical computing software stack that enables integration and programming of QPUs, GPUs, and CPUs in one system.
The post System enables hybrid quantum-classical computing appeared first on EDN.
Power Tips #115: How GaN switch integration enables low THD and high efficiency in PFC

Meeting the need for cost-effective power factor correction (PFC) solutions that improve light-load and peak efficiency while shrinking passive components is difficult with conventional continuous conduction mode (CCM) control. Engineers are conducting significant research into complex multimode solutions to address these concerns [1], [2]; these approaches are attractive because they shrink the inductor while simultaneously improving efficiency through soft switching at lighter loads.
But in this power tip, I will present a new approach to achieving high efficiency and low total harmonic distortion (THD) that does not require the use of a complex multimode control algorithm and achieves zero switching losses under all operating conditions. This approach uses a high-performance gallium-nitride (GaN) switch with an integrated flag that indicates whether the switch turns on with zero voltage switching (ZVS). This approach enables high-efficiency ZVS under all operating conditions while simultaneously forcing the THD very low.
Topology
The topology used for this system is the integrated triangular current mode (iTCM) totem-pole PFC [3]. For high-power and high-efficiency systems, the totem-pole PFC offers a distinct advantage for conduction losses. The TCM version of this topology enforces ZVS by making sure that the inductor current always goes sufficiently negative before the switch turns on [4]. Figure 1 illustrates the iTCM version of totem-pole PFC.
Figure 1 The iTCM topology, showing AC line frequency current envelopes.
The difference between the TCM converter and the iTCM converter is the presence of Lb1, Lb2 and Cb. During normal operation, the voltage across Cb is equal to the input voltage Vac. Two phases operating 180 degrees out of phase take advantage of ripple current cancellation and reduce the root-mean-square current stress in Cb. Lb1 and Lb2 are sized to only process the high-frequency AC ripple current necessary for TCM operation. This removes the DC bias required for the inductor used in TCM, as defined in [4]. Ferrite cores for Lb1 and Lb2 help ensure low losses in the presence of the high flux swings necessary for ZVS. Lg1 and Lg2 are larger in value (as much as 10 times larger) than Lb1 and Lb2, which prevents most of the high-frequency current from flowing into the input source and subsequently reduces electromagnetic interference (EMI). In addition, the reduced ripple current in Lg1 and Lg2 enables the possible use of lower-cost core materials. Figure 1 also illustrates the ripple current envelopes for several key branches.
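Treating the Lg and Lb paths as an inductive current divider gives a rough feel for why the larger Lg keeps high-frequency ripple out of the source. The inductance values below are illustrative assumptions consistent with the "as much as 10 times larger" ratio, not values from the reference design.

```python
# Rough estimate of how the high-frequency ripple current divides between
# the Lb (ferrite) branch and the larger Lg (line-side) branch.
# Values are illustrative assumptions.

def lg_ripple_fraction(lb_uh: float, lg_uh: float) -> float:
    """Approximate fraction of HF ripple current flowing through Lg."""
    return lb_uh / (lb_uh + lg_uh)

frac = lg_ripple_fraction(lb_uh=10.0, lg_uh=100.0)
print(f"~{frac:.1%} of the HF ripple reaches the source through Lg")
```

With Lg at ten times Lb, only about a tenth of the high-frequency ripple reaches the input, which is what relaxes the EMI filtering burden and the Lg core-material requirement.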
Control
Control is facilitated by the Texas Instruments (TI) TMS320F280049C microcontroller and LMG3526R030 GaN field-effect transistors (FETs). These FETs have an integrated zero-voltage-detection (ZVD) signal that is asserted anytime the switch turns on with ZVS. The microcontroller uses the ZVD information to adjust the switch timing parameters so that the switch turns on with just enough current to achieve ZVS. For simplicity, Figure 2 illustrates a one-phase iTCM PFC converter; Table 1 defines the key variables used in this figure. The microcontroller uses an algorithm that solves the exact set of differential equations for the system. These equations use conditions that enforce ZVS on both switches and force the current to equal the current command. The equations are accurate provided that the system is operating with the right amount of ZVS for both switches. When operating correctly, the algorithm yields the timing parameters for 0% THD and an optimal amount of ZVS. To facilitate the ZVS condition, each switch (S1 and S2) reports its ZVS turn-on status on a cycle-by-cycle basis back to the microcontroller. In Figure 2, Vhs,zvd and Vls,zvd denote the ZVD reporting.
Figure 2 A single-phase iTCM schematic with control signals.
Table 1 Switch timing parameters and definitions.
Figure 3 illustrates the ZVD timing adjustment process. During every switching cycle, the microcontroller calculates the switch timing parameters (ton, toff, trp, and trv) based on the ZVD signal’s cumulative history. Figure 3b shows the system operating at the ideal frequency. By ideal, I mean that the THD is 0%, and you have the perfect amount of ZVS for the high- and low-side FETs. Figure 3a shows what happens when the operating frequency is 50 kHz lower than the ideal. Notice that the high-side FET loses ZVS (as indicated by the loss of the high-side ZVD signal), while the low-side FET has more negative current than is necessary to achieve ZVS. The result is a loss of efficiency and a distorted power factor. Figure 3c occurs when the operating frequency is 50 kHz higher than the ideal. In this case, the high-side FET has ZVS but the low-side FET loses ZVS. Again, there is a clear loss of efficiency and distortion.
Figure 3 ZVD behavior with low fs (a); ideal fs (b); and high fs (c).
Based on the presence or absence of the ZVD signal, the controller can increase or decrease the frequency to push the system to the optimum operating point. In this way, the control effort acts like an integrator that attempts to find the best operating frequency. The optimum will occur when the system is hovering right on the threshold of just barely getting ZVS every cycle.
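The integrator-like behavior described above can be sketched in a few lines: each switching cycle, the controller nudges the frequency based on which ZVD flags were asserted. This is a minimal sketch of the concept only; the step size, starting frequency, and function name are illustrative assumptions, not TI's actual firmware.

```python
# Minimal sketch of the ZVD-driven frequency adjustment (per Figure 3).
# Step size and frequencies are illustrative assumptions.

def adjust_fs(fs_hz: float, hs_zvd: bool, ls_zvd: bool,
              step_hz: float = 1_000.0) -> float:
    if not hs_zvd:      # high-side lost ZVS: frequency too low (Figure 3a)
        return fs_hz + step_hz
    if not ls_zvd:      # low-side lost ZVS: frequency too high (Figure 3c)
        return fs_hz - step_hz
    return fs_hz        # both ZVD flags present: hold at optimum (Figure 3b)

fs = adjust_fs(150e3, hs_zvd=False, ls_zvd=True)  # nudged upward
```

Run cycle after cycle, this update hovers the system right at the threshold of just barely achieving ZVS, which is the optimum operating point described above.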
Prototype performance
Figure 4 shows a prototype built with the topology and algorithm I’ve discussed so far.
Figure 4 A 400-V, 5-kW prototype with a power density of 120 W/in3.
Table 2 summarizes the specifications and important component values for the prototype.
Table 2 System specifications and important components
Figure 5 shows the prototype’s measurement nodes and Figure 6 illustrates the system waveforms of the prototype operating under full power (5 kW). The switch-node currents, IL,A and IL,B, are the sum of the current in Lg and Lb for their respective branch. The zoom section of the plot shows the waveform detail during the positive half cycle. The current waveforms have an ideal triangular shape, with just enough negative current to achieve ZVS as demonstrated by switch-node voltages VA and VB. Furthermore, the sinusoidal envelope of the current waveform suggests a low THD.
Figure 5 Prototype measurement nodes
Figure 6 System waveforms of the prototype operating under full power (Vin = Vout/2, load = 5 kW, Vin = 230 Vac, Vout = 400 V).
Figure 7 shows the measured efficiency and THD across the load range. The efficiency peaks above 99% and stays above 98.5% for almost the entire load range. The THD has a maximum of 10% and is below 5% for most of the load range. To optimize performance, the unit sheds or adds a phase at approximately 2 kW.
Figure 7 The prototype efficiency and THD across the load range.
Achieving a high efficiency and low THD for a totem-pole PFC
You can use the ZVD signal to control the operating frequency of a totem-pole PFC converter to achieve high efficiency and low THD. For more information about this approach, as well as a simulation model for the system, see the Variable-Frequency, ZVS, 5-kW, GaN-Based, Two-Phase Totem-Pole PFC Reference Design.
Brent McDonald is a system engineer for the Texas Instruments Power Supply Design Services team. He received a bachelor's degree in electrical engineering from the University of Wisconsin-Milwaukee and a master's degree, also in electrical engineering, from the University of Colorado Boulder.
Related Content
- Power Tips #114: A potential firmware mistake may lead to control instability
- Power Tips #113: Two simple isolated power options for 8 W or less
- Power Tips #112: Onboard fixtures for fault testing
- Power Tips #111: Why current sensing is a must in collaborative, mobile robots
- PFC totem pole architecture and GaN combine for high power and efficiency
- GaN transistors for efficient power conversion: buck converters
References
- Fernandes, Ryan, and Olivier Trescases. “A Multimode 1-MHz PFC Front End with Digital Peak Current Modulation.” Published in IEEE Transactions on Power Electronics 31, no. 8 (August 2016): pp. 5694-5708. doi: 10.1109/TPEL.2015.2499194.
- Lim, Shu Fan, and Ashwin M. Khambadkone. “A Multimode Digital Control Scheme for Boost PFC with Higher Efficiency and Power Factor at Light Load.” Published in 2012 Twenty-Seventh Annual IEEE Applied Power Electronics Conference and Exposition (APEC), Feb. 5-9, 2012, pp. 291-298. doi: 10.1109/APEC.2012.6165833.
- Rothmund, Daniel, Dominik Bortis, Jonas Huber, Davide Biadene, and Johann W. Kolar. “10kV SiC-Based Bidirectional Soft-Switching Single-Phase AC/DC Converter Concept for Medium-Voltage Solid-State Transformers.” Published in 2017 IEEE 8th International Symposium on Power Electronics for Distributed Generation Systems (PEDG), April 17-20, 2017, pp. 1-8. doi: 10.1109/PEDG.2017.7972488.
- Liu, Zhengyang. 2017. “Characterization and Application of Wide-Band-Gap Devices for High Frequency Power Conversion.” Ph.D. dissertation, Virginia Polytechnic Institute and State University. http://hdl.handle.net/10919/77959.
The post Power Tips #115: How GaN switch integration enables low THD and high efficiency in PFC appeared first on EDN.
Ungluing a GaN charger

USB-output battery chargers for smartphones, tablets and a host of other tech widgets are perpetually hot (I’m talking metaphorically here) sellers. This is particularly the case now that Apple, Google and other suppliers are no longer bundling them with their widgets, claiming that we consumers have already collected way more of them than we need (translation: “we’ve brainstormed a rationale that lets us shift the charger-acquisition burden to the consumers, lowering our BOM costs and boosting our profits”).
They now offer USB-C outputs, translating to higher output power than was the case in the conventional USB-output past. This transition enables them to deliver timely recharges to devices with large battery packs, such as laptops, and even to do double duty as AC adapters. But it also means that they run hotter (I'm now using the term literally) than before, boosting their size and therefore their available surface area for passive thermal dissipation purposes.
Thankfully, now-mainstream gallium nitride (GaN) transistors, which my colleague (more accurately: my boss) Majeed Ahmad regularly writes about, are enabling chargers/adapters that slow, halt, and in some cases, even reverse the increasing-volume trend initiated by their silicon transistor-based precursors. To quote Wikipedia:
GaN transistors are suitable for high frequency, high voltage, high temperature and high efficiency applications.[citation needed] GaN is efficient at transferring current, and this ultimately means that less energy is lost to heat…The higher efficiency and high power density of integrated GaN power ICs allows them to reduce the size, weight and component count of applications including mobile and laptop chargers, consumer electronics, computing equipment and electric vehicles.
And re my earlier “now-mainstream” comment, they’re now available at prices that match, if not undershoot (on a promotion basis, at least), those silicon transistor-based precursors. Enter today’s teardown subject, a 30W USB-C charger from a company called VOLTME that I bought at the beginning of the year for (and in fact as I write these words is still being sold priced at) only $9.99. Here’s a stock image (dimensions are 1.2×1.3×1.2 inches, and it weighs 1.5 ounces):
The manufacturer claims that “By adopting the latest GaN III tech, VOLTME 30W USB-C GaN Charger reduces the size of our chargers by 63% without compromising power.” As proof, it visually stacks its charger up against a conventional Apple 30W unit:
And VOLTME’s charger comes in five colors: black, blue, green, purple grey and white…not translucent, alas, although the company crafted a concept image to give a peek at the insides:
Here are my own comparative size-assessment shots, next to a just-purchased ($10.99 on sale) conventional silicon transistor-based Best Buy Insignia 30W USB-C charger (for which, yes, I also have future-teardown plans) with dimensions of 1.43×1.33×1.33 inches, and a United States penny (0.75 inches/19.05 mm in diameter):
Note that the VOLTME unit has non-collapsible AC prongs, whereas those of the Insignia unit fold up for transport convenience.
I’d also intended to visually compare the VOLTME GaN charger to an Aukey 27W conventional unit I’d bought in mid-2019, which currently keeps my iPad Pro juiced up, but I remembered my aspirations only after completing the VOLTME dissection. So, here’s the Aukey alongside the Insignia instead; by means of the transitive property of (in)equality, perhaps you can mentally translate the two sets of images into the VOLTME-vs-Aukey appraisal I originally intended:
Stepping back in time, here are some shots of the product packaging:
And its contents: the charger, inside a protective baggie, along with two slips of literature.
Notice anything missing? Yep, the charging cable. In fairness to VOLTME, depending on the particular user requirement, the optimum bundled cable could be any of a number of “USB-C-to” options: USB-C, conventional USB, Lightning, micro USB, etc. Still, echoing what I said before: “we’ve brainstormed a rationale that lets us shift the cable-acquisition burden to the consumers, lowering our BOM costs and boosting our profits”. Cynical, aren’t I?
Finally, some shots of our patient standalone. Front first:
That “PD” marking indicates that the charger supports Power Delivery mode, with multiple voltage/current output options pre-negotiated between the connected charger and to-be-powered device. Specifically, quoting from VOLTME’s specs:
Input: 100-240V~ 0.8A 50/60Hz
Output: 3.3-11V 3A (PPS) / 5V 3A / 9V 3A / 12V 2.5A / 15V 2A / 20V 1.5A (30W Max)
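A quick sanity check on that label: each fixed profile lands at or under the 30 W rating, with the higher-voltage profiles trading current for voltage at a constant power ceiling. The pairs below are transcribed from the spec above (the PPS range is omitted for simplicity).

```python
# The charger's fixed PD profiles as (volts, amps) pairs, checked
# against the 30 W rating printed on the label.
profiles = {"5V": (5.0, 3.0), "9V": (9.0, 3.0), "12V": (12.0, 2.5),
            "15V": (15.0, 2.0), "20V": (20.0, 1.5)}
for name, (v, i) in profiles.items():
    print(f"{name}: {v * i:.0f} W")   # none exceeds 30 W
```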
To wit, here’s the backside marking summary:
Top:
Bottom:
And finally, both sides:
Now let’s dive inside. The front side edging ended up being just plastic molding, but the backside was more productive:
Here’s the inside of the plug. Those two contacts at the bottom proximity-press against matching contacts on the PCB:
Speaking of which…I pushed through from the front side USB-C opening:
Wow, that’s a lot of thermal goop:
And it’s adhesive, to boot, versus easier-to-remove paste:
There’s a plastic flap on the bottom which, when moved aside, reveals another IC underneath:
I shared a mid-teardown photo of the glue-infested insides with my colleague (and buddy) Aalyia, and she wisely-I-strongly-suspect pointed out to me, "Aren't the leads of electrolytic caps generally a bit finicky and break with a bit of vibration? This could be their engineering/thermal management solution to it." Thoughts, readers?
Clearly, that glue was going to have to go if I held any hope of seeing and identifying anything component-wise other than the two already-exposed PCB areas. Research suggested I give the charger a soak in either isopropyl (“rubbing”) alcohol or acetone; the former was the less caustic of the two options, and I decided to try it first. I only had the 70%-concentration stuff on hand (albeit plenty of it; during COVID we stocked up on both it and aloe vera in case we needed to make our own hand sanitizer), but a quick trip to Walmart secured me a 91%-concentration stock, which research indicated was results-preferable:
Time for a bath…
You’ll notice from the purple-tint stream coming from the charger in the above image that the dunk had a near-immediate effect. My initial excitement in seeing this response became more muted when I realized it was just the hand-scribbled mark coming off the transformer tape. Nevertheless, after a few-hour bath the glue was at least softened somewhat. The tedious application of a small flathead screwdriver along with the end of an unfolded paper clip got me (most of) the rest of the way there:
The bulk of that flap I mentioned before was located on the other side of the PCB, as it turns out, where it (buried in grey glue) acted to insulation-separate several of the components. Here it is standalone after it fell out during the glue-removal process:
Much of the componentry in the earlier shots is intuitively obvious to the engineers out there; the yellow tape-wrapped transformer, for example, several electrolytic capacitors and a wire-wound inductor, plus a bunch of other PCB-mounted passives. Speaking of the PCB(s), however, let’s focus more attention on these. First, here’s the one on the left side of the charger (if you’re looking at it from the front):
In this particular shot, I intentionally oriented the charger with the USB-C connector pointed down versus to the right, so that you can more easily read the PCB’s dominant IC markings:
SGP12
KA09070
2143GAF
If you do a straight Google search on any of those three product-mark character sequences, you’ll (unless you’re more adept at Google searches than me, which is assuredly always a possibility) end up with no meaningful results. Do a Google Image search on KA09070, on the other hand, and you’ll eventually figure out (more directly if, unlike me, you can read Chinese) that this particular chip is the HL9554 GaN Integrated Primary Side PWM Controller from a company called Elevation Semiconductor. Why the package markings have no seeming relevance to the product name is beyond me.
And what of the bottom-side PCB (USB-C plug to the left in this orientation)?
Well, I was also able to figure out one IC on it (note, too, the earlier mentioned contacts along the right side that press-mate into the AC plug). Unfortunately, there are at least four other chips that I couldn’t identify, so I’m still going to need some reader help.
The rectangular SOP IC toward the bottom labeled U1 and marked “EL1018” is a four-lead optocoupler (“an infrared emitting diode, optically coupled to a phototransistor detector”) from Everlight Americas. But what of the chip on the right side, for example, PCB-labeled BO1 (what does “BO” even mean, other than “body odor”?) and marked as follows?
WRABS
20M
2C34H
Next, what about the rectangular IC at the top, labeled U3 and marked with this cryptic text?
EAL34
KA16472
2141GAI
The second-line "KA" similarity to the aforementioned PWM controller suggests to me that this chip may also be from Elevation Semiconductor, but I'm at a loss.
What of the smaller square IC below it, labeled Q1 (suggestive of a transistor-related function)? The markings on the package are faint, therefore not discernable in the photo, so you’ll need to take my word that they’re as follows:
3016M
DT10A
And the small rectangular IC marked U4 and labeled 9MUK? Inquiring minds want to know.
Your assistance in figuring out the identities of any or all of these mystery chips is greatly appreciated; at minimum, one of them must handle USB charging protocol negotiation and subsequent management. And more generally, your thoughts in the comments on anything I have (or haven’t) mentioned in this teardown are as-always welcome!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Freeing a three-way LED light bulb’s insides from their captivity
- A short primer on USB Type-C PD 3.0 specification and design
- Teardown: Cell-phone charger: nice idea done right
- Teardown: Wireless charging pad is tough to crack
- Disassembling a wireless charger with a magnetic personality
- Teardown: Pixel Stand offers faster-than-Qi wireless charging for (some) Google fans
The post Ungluing a GaN charger appeared first on EDN.
Using a MOSFET as a thermostatic heater

The MOSFET in Figure 1 is used as both a heater and a temperature sensor in a thermostatic circuit.
Figure 1 Circuit diagram for using a MOSFET as a thermostatic heater.
The circuit can be used as a tiny thermostat for some biological structures in a Petri dish (a typical set temperature is 30°C to 50°C); other uses may include plastic cutting/welding, thermostating of electronic components, and even soft soldering, since the maximum working temperature for Si MOSFETs is around 175°C, and for silicon carbide (SiC) MOSFETs it can be far higher.
To function properly in this circuit, MOSFET Q1 should have a so-called "parasitic" diode within its structure (the diode's cathode is connected to the drain of the nFET). Almost all power MOSFETs have this diode (in any case, you can check for its existence in the datasheet). The circuit uses this diode as a temperature sensor (the temperature coefficient is about −2 mV/°C for silicon).
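That tempco translates a measured forward-voltage shift into a temperature estimate in a straightforward way. The sketch below assumes a hypothetical calibration point (forward voltage at a known temperature), which in practice you would measure for the actual device.

```python
# Estimating junction temperature from the body-diode forward voltage,
# using the ~-2 mV/degC silicon tempco mentioned above.
# The calibration point is an assumed, illustrative value.

TEMPCO_V_PER_C = -2e-3        # silicon diode temperature coefficient, V/degC
VF_CAL, T_CAL = 0.60, 25.0    # assumed calibration: 0.60 V at 25 degC

def diode_temp_c(vf_measured: float) -> float:
    """Temperature implied by a measured forward voltage."""
    return T_CAL + (vf_measured - VF_CAL) / TEMPCO_V_PER_C

print(diode_temp_c(0.55))  # a 50 mV drop implies roughly 50 degC
```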
During the negative half-wave of the input AC, while the MOSFET Q1 is OFF, the negative voltage on the “parasitic” diode charges the capacitor C1 through the Schottky diode D3; as a matter of fact, these components create an envelope detector. (This part of the circuit can also be interpreted as an S/H circuit.)
The thing is, the typical forward voltage of a "parasitic" Si diode is 0.3 V to 0.5 V higher than that of a Schottky, hence the maximum negative voltage on C1 may be about -0.3 V to -0.5 V. Resistors R6 and R7 are part of the envelope detector; they also level-shift this negative value to a positive one, making it appropriate for the TL431. To make this possible, the LM317 voltage regulator provides a positive voltage for the level shift.
The set temperature of the circuit can be changed simply by changing the output voltage of the regulator (varying the values of R8 or R9).
The main role of resistor R1 is to limit transient currents to values that are safe for both MOSFET Q1 and diode D2. Nevertheless, if the application allows, this role can be broadened to extend the circuit's functionality: R1 can serve as one more heating spot. Remember, however, that this spot has no thermal sensor inside, so regulation in its vicinity will be far cruder.
During the following positive half-wave, the negative voltage saved on C1 makes the TL431 determine whether the MOSFET Q1 has to be ON or OFF.
When Q1 is ON, the circuit around the pnp transistor Q3 maintains the drain voltage of Q1 very close to the voltage on R4. This is because MOSFET Q1 and transistor Q3 together constitute a negative-feedback amplifier, which sets the operating point of Q1 through the ratio of R3 to R4.
As shown in Figure 1, MOSFET Q1 with resistor R1 on the one hand, and resistors R3 and R4 on the other, make up a bridge circuit, which restores its equilibrium when the drain voltage is equal, or close, to the base voltage of Q3.
Playing with the R3/R4 ratio allows you to change the ratio of the heat dissipated by Q1 and R1.
When R3 = R4, the electrical power dissipated in Q1 and R1 is equal; in general, R1 can be used as an additional heater for a larger object when Q1 alone can't provide sufficient heating.
In any case, keep in mind the maximum ratings of Q1, D2, and R1.
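The bridge relationship above implies a simple power split: feedback holds Q1's drain at the fraction of the supply set by the R3/R4 divider, and the remainder of the voltage (at the same series current) falls on R1. The sketch below assumes which divider arm maps to the drain based on the R3 = R4 symmetry; the supply voltage and current are illustrative values, not taken from the schematic.

```python
# Power split between heater elements Q1 and R1 set by the R3/R4 bridge.
# The divider-arm assignment and operating values are assumptions.

def power_split(v_supply: float, i_a: float, r3: float, r4: float):
    """Returns (P_Q1, P_R1) in watts for a given R3/R4 divider."""
    v_ds = v_supply * r4 / (r3 + r4)      # drain voltage enforced by feedback
    return v_ds * i_a, (v_supply - v_ds) * i_a

p_q1, p_r1 = power_split(12.0, 1.0, r3=1e3, r4=1e3)
print(p_q1, p_r1)  # equal dissipation when R3 == R4
```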
These relations between time constants should be observed:
(R6 + R7)·C1 ≫ T/2 ≫ R1·C1,
where T is the period of the input AC.
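The ordering above says the envelope detector must hold its value across many half-cycles while still charging quickly within one. A quick numeric check with an illustrative set of component values (assumptions, not the schematic's actual values) at 50 Hz mains:

```python
# Sanity check of (R6+R7)*C1 >> T/2 >> R1*C1 for assumed component values.

F_LINE = 50.0                 # mains frequency, Hz
T_HALF = 1.0 / F_LINE / 2     # half-period: 10 ms
C1 = 1e-6                     # assumed 1 uF
R6_PLUS_R7 = 1e6              # assumed 1 Mohm total
R1 = 100.0                    # assumed 100 ohm

tau_hold = R6_PLUS_R7 * C1    # ~1 s: slow discharge of the held value
tau_charge = R1 * C1          # ~0.1 ms: fast charge each negative half-wave
print(tau_hold > 10 * T_HALF > 100 * tau_charge)  # True
```

Interpreting "much greater than" as a factor of ten or more, these values satisfy the relation with margin on both sides.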
When using the heater for a critical application at high temperatures, caution should be observed, since some SiC MOSFETs can be unreliable [1].
Note: since the minimum operating voltage of the TL431 is about 0.9 V to 1 V, the minimum gate threshold voltage of Q1 (at the maximum working temperature!) should be higher than this value.
—Peter Demchenko studied math at the University of Vilnius and has worked in software development.
Related Content
- Measure junction temperature using the MOSFET body diode on a PG pin
- Use a transistor as a heater
- Inverted MOSFET helps 555 oscillator ignore power supply and temp variations
- Transistor ∆VBE-based oscillator measures absolute temperature
- A safe adjustable regulator
Reference
- Lelis, Aivars J., et al. “High-Temperature Reliability of SiC Power MOSFETs.” Materials Science Forum, vol. 679–680, Trans Tech Publications, Ltd., Mar. 2011, pp. 599–602. Crossref, doi:10.4028/www.scientific.net/msf.679-680.599.
The post Using a MOSFET as a thermostatic heater appeared first on EDN.
Analysis software identifies SATCOM interference

To mitigate SATCOM service degradation, real-time spectrum analysis (RTSA) software from Keysight delivers up to 2 GHz of RTSA bandwidth. Running on the company’s N9042B UXA signal analyzer, the RTSA software allows satellite network operators to monitor satellite signals and interference to ensure the highest quality of service to users.
The RTSA test application enables the N9042B to conduct continuous, gapless capture and analysis of elusive and transient signals via an optical data interface (ODI). Multi-threaded and parallelized RTSA measurement with up to 2 GHz bandwidth minimizes the time gap between processing/rendering and re-capturing signals. This reduces analysis time and improves the probability of intercept, while ODI streaming at up to 2 GHz to RAID storage enables the capture of hours of signal recordings for analysis.
The N9042B signal/spectrum analyzer tests millimeter-wave performance in 5G, satellite, and radar systems. When combined with the V3050A frequency extender for unbanded coverage to 110 GHz, the U9361 receiver calibrator, M9484B VXG signal generator, and PathWave X-Series and PathWave vector signal analysis measurement applications, the N9042B provides 2-GHz real-time spectrum monitoring for satellite communication systems.
The post Analysis software identifies SATCOM interference appeared first on EDN.
Entry-level MCUs pack 32-bit performance

Renesas has expanded its RA family of 32-bit MCUs with two entry-level groups based on an Arm Cortex-M33 core with Arm TrustZone technology. The 100-MHz RA4E2 group and 200-MHz RA6E2 group are optimized for power efficiency and offer an easy upgrade path to other members of the RA family.
Along with 40 kbytes of SRAM, the RA4E2 group provides 128 kbytes of flash memory, while the RA6E2 group provides up to 256 kbytes of flash memory. On-chip connectivity options include CAN FD, USB 2.0, QSPI, HDMI CEC, SSI, and I3C. The MCUs also furnish a 12-bit ADC, 12-bit DAC, and PWM timer. Small QFP, QFN, and BGA packages make the microcontrollers suitable for sensing, gaming, wearables, and appliances.
MCUs in the RA4E2 group come in a choice of five packages ranging from 32 to 64 pins and measuring as small as 4×4 mm. Active current consumption when executing from flash memory at 100 MHz is 82 µA/MHz. Comprising 10 variants, the RA6E2 MCUs consume 80 µA/MHz in active mode when executing at 200 MHz.
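As a quick sanity check, those per-MHz figures translate into total active current as follows (a back-of-the-envelope sketch; supply voltage, and hence power, isn't specified in the announcement):

```python
# Back-of-the-envelope active current from the quoted uA/MHz figures.
def active_current_ma(ua_per_mhz, freq_mhz):
    """Total active current in mA at a given core clock."""
    return ua_per_mhz * freq_mhz / 1000.0

ra4e2 = active_current_ma(82, 100)   # RA4E2: 82 uA/MHz at 100 MHz
ra6e2 = active_current_ma(80, 200)   # RA6E2: 80 uA/MHz at 200 MHz
print(f"RA4E2 active current: {ra4e2:.1f} mA")   # 8.2 mA
print(f"RA6E2 active current: {ra6e2:.1f} mA")   # 16.0 mA
```

Note that the faster RA6E2 draws roughly twice the current despite its slightly better per-MHz efficiency, simply because it clocks twice as fast.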
All of the RA4E2 and RA6E2 MCUs are available today. Evaluation kits and prototyping boards are also available for both MCU groups.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Entry-level MCUs pack 32-bit performance appeared first on EDN.
Piezo sounder driver helps maximize SPL

Diodes’ PAM8906, a driver IC with a built-in synchronous boost converter, is capable of driving a piezoelectric sounder with outputs up to 36 VPP. The driver maintains high sound pressure level (SPL) output, while an auto on/off function prolongs battery-powered operation.
For application flexibility, the PAM8906 operates with either an external pulse-width modulation (PWM) input or in self-excitation mode. This allows it to be used with a variety of sounders in such devices as smoke alarms, air humidifiers, handheld GPS devices, security alarms, medical devices, and home appliances.
The driver provides a choice of three output voltage variants: 20 VPP, 24 VPP, and 36 VPP. Unlike charge-pump-based alternatives, the PAM8906 maintains its output even as battery voltage drops over time. Using a small 0.47-µH inductor, the driver’s boost converter switches at a fixed frequency of 1.8 MHz, with a quiescent current of less than 1 µA.
The PAM8906 sounder driver comes in a 10-pin MSOP package and costs $0.37 each in lots of 1000 units.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Piezo sounder driver helps maximize SPL appeared first on EDN.
Secure wireless MCU eases automotive designs

The NCV-RSL15 MCU from onsemi brings Bluetooth 5.2 wireless connectivity, embedded security, and ultra-low power consumption to automotive applications. According to the manufacturer, the NCV-RSL15 is certified by the EEMBC as the industry’s lowest-power secure wireless microcontroller. The device features a proprietary smart sense power mode to minimize energy use and conserve battery life.
An Arm Cortex−M33 processor core with TrustZone Armv8−M security extensions forms the basis of the NCV-RSL15’s security platform. Arm’s CryptoCell security subsystem provides hardware-based root-of-trust secure boot, user-accessible hardware-accelerated cryptographic algorithms, and firmware-over-the-air updates.
Four low-power modes are available to reduce power consumption, while maintaining system responsiveness. These include sleep, standby, smart sense, and idle. Smart sense mode takes advantage of the low power capability of sleep mode, but allows some digital and analog peripherals to remain active with minimal processor intervention. Power specifications for the NCV-RSL15 include:
- Sleep Mode (GPIO Wakeup) @ 3 V VBAT: 36 nA
- Sleep Mode (Crystal Oscillator, RTC Timer Wakeup) @ 3 V VBAT: 81 nA
- Peak Rx Current 1 Mbps @ 3 V VBAT: 2.7 mA
- Peak Tx Current 0 dBm Output Power @ 3 V VBAT: 4.3 mA
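To illustrate what currents this low imply, here’s a back-of-the-envelope sketch of idealized, sleep-only battery life. The 225-mAh CR2032-class capacity is an assumption (the announcement names no battery), and real lifetime would be dominated by wake events and cell self-discharge, neither of which is modeled here:

```python
# Idealized sleep-only lifetime estimate from the quoted sleep currents.
# Battery capacity is a hypothetical CR2032-class figure (225 mAh); in
# practice, wake events and cell self-discharge dominate the budget.
def sleep_lifetime_years(capacity_mah, sleep_current_na):
    hours = capacity_mah / (sleep_current_na * 1e-6)  # nA -> mA
    return hours / (24 * 365)

print(f"GPIO-wakeup sleep (36 nA): {sleep_lifetime_years(225, 36):.0f} years")
print(f"RTC-wakeup sleep (81 nA):  {sleep_lifetime_years(225, 81):.0f} years")
```

The absurdly long results (centuries) simply show that at these sleep currents, the MCU is no longer the limiting factor in battery life.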
With its rich library of sample code, a software development kit for the NCV-RSL15 provides a springboard for application development. Typical applications for the NCV-RSL15 microcontroller include keyless vehicle access using a fob or smartphone, tire pressure monitoring systems (TPMS), tire monitoring systems (TMS), and seat belt detection.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Secure wireless MCU eases automotive designs appeared first on EDN.
LoRa transceiver offers global connectivity

A low-power LoRa transceiver from Semtech, the LR1121 provides flexible multiband communication for IoT endpoints anywhere in the world. The device supports terrestrial ISM band communications in the sub-GHz and global 2.4-GHz spectrum, as well as the S-band for satellite connectivity.
The LoRa Connect LR1121 is pin-compatible with Semtech LoRa Edge asset management chips, allowing module makers such as Murata to have a single hardware design for a wide range of applications. Murata’s Type 2GT module supports the LoRa Edge LR1110 and LR1120 devices, as well as the LoRa Connect LR1121. This turnkey solution enables lower cost assembly and faster time to market with a pre-certified LR1121 variant readily available.
Likewise, integrated passive devices (IPDs), like those from Johanson Technology, can readily be used alongside the LR1121. Johanson’s IPD replaces a number of RF passives, not only reducing footprint, but also minimizing design iterations and speeding time to market.
Modules and reference designs developed in partnership with Johanson and Murata are ready to launch. The LR1121 LoRa transceiver is available through Semtech’s distributor network. To learn more about the LR1121 LoRa transceiver, click here.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post LoRa transceiver offers global connectivity appeared first on EDN.
4 basic considerations in migrating to cloud-based EDA tools

As the benefits of Moore’s Law diminish while design complexity increases, chip designers need cost- and time-efficient solutions that still deliver exceptional performance and functionality at lower power. That’s why cloud-based electronic design automation (EDA) solutions are gaining popularity among chip designers, and why the cloud is becoming key to furthering innovation and productivity.
Among the many advantages cloud technologies offer chip designers are reduced system maintenance costs, advanced storage and compute resources, and fast ramp-up with flexible pay-as-you-go models that help not only during peak usage periods but throughout the entire chip design flow. As in-house compute resources continue to hit their limits, the cloud has proven its ability to scale design and verification capabilities for better quality, lower cost, and faster time to results.
Figure 1 Cloud computing is emerging as a viable platform for IC design and verification tasks. Source: Synopsys
However, as the journey to the cloud accelerates, designers will need to carefully examine their cloud technologies to achieve optimal results. There are four key factors designers should consider when turning to cloud-based technologies that result in better, faster, and cheaper semiconductors.
- Data transfer and management
When migrating EDA workloads to the cloud, a key consideration is determining what data is transferred in and out of the cloud. Solutions that reduce data transfer overhead will deliver accelerated time to results and increased productivity.
While there are a variety of models useful for managing both on-premise and cloud environments, the most straightforward model is to migrate the required data to the cloud. In the data management process, the cloud environment must be capable of replicating the on-premise environment, and that starts with determining and cataloging the dependencies for a design.
Cloud data transfer must be swift and resilient to ensure no data is lost when it’s transferred from on-premises storage to the cloud. Cloud storage will need to provide the flexibility to scale in and out based on the requirements of the design and verification tasks.
Storage efficiency is another component that must be considered when designing EDA solutions. Cloud providers also grant flexibility on the type of storage, with cost playing a critical role in the decision-making process for engineering teams. EDA solutions are being engineered to leverage distributed storage, block storage, and in-memory compute that improve the turnaround time while lowering the total cost of ownership.
- Cloud security
Data security concerns are one of the primary reasons why the semiconductor industry has been slow to adopt cloud technologies. To ensure sensitive data is protected, there must be a robust data governance plan that identifies who is given access to what type of data, accompanied by powerful access and identity management measures.
Cloud infrastructure vendors are experienced in implementing security into their infrastructure, applications, and operations. These providers have been employing the most modern security measures to protect their data centers while delivering their commitment on redundancy and high system uptime. Working closely with cloud security vendors, EDA vendors can adapt their technologies to securely run their workloads while preventing the risk of data leakage.
Above all, chip and system designers should seek EDA vendors that have adapted to cloud environments while offering encryption, troubleshooting tools, and next-generation monitoring capabilities.
Figure 2 Design engineers must ensure that they use cloud platforms offering robust data transport and security capabilities. Source: Synopsys
- Accessibility and deployment
EDA workloads are defined by high-performance computing and NFS-heavy storage, with strong dependencies on system libraries, tool environments, and hardware. Chip designers accustomed to how their EDA flows run on-premises benefit from faster turnaround via on-demand, near-real-time provisioning and a far better, consumer-grade user experience when using cloud solutions.
It’s also important for chip designers to consider the investment of time to set up the cloud, from establishing network connectivity to managing their firewall. Additionally, asking questions such as how the design team would access cloud-based tools, how they can best visualize what and how current resources are being utilized, and how much faster certain tasks could be completed are important.
Today, we are also seeing artificial intelligence (AI) emerge as a significant player in the chip design process, increasing the efficiency of on-premises compute environments and scaling compute on demand in the cloud to enhance power, performance, and area (PPA).
- Scalability and cloud architecture
The ability to scale compute infrastructure is one of the top reasons designers are migrating EDA workloads to the cloud. Many compute-intensive tasks are best handled by breaking them into smaller parts distributed across compute and storage resources, which is very much a cloud-native approach.
EDA flows are supported by robust scheduling with streamlined storage to adequately manage distributed workloads. Many EDA tools have been re-architected to scale to thousands of cores along with distributed schedulers that can efficiently use these resources. Another advantage of the cloud is the availability of hybrid scaling, allowing workloads on-the-cloud or on-premises, depending on what is needed for the task.
When leveraging cloud technologies, re-architecting EDA solutions is imperative. Similar to how EDA products have welcomed multi-processing and multi-threaded opportunities, EDA solutions must do the same with cloud architecture. As chip designers begin their journey to the cloud, welcoming new technologies like distributed storage, distributed computing, and more will lead to greater innovation.
Arun Venkatachar is VP of AI, Cloud & Central Engineering at Synopsys.
Related Content
- EDA not yet ready for cloud computing
- Automotive deep learning platform shifts to cloud
- How EDA workloads inside the cloud reinvigorate chip design
- Chip design in the cloud? Now you can have pay-as-you go EDA
- Design inside the cloud: From components to boards to EDA tools
The post 4 basic considerations in migrating to cloud-based EDA tools appeared first on EDN.
Set DC input-stage gain/attenuation over a 96dB range with PWM

Pulse width modulation (PWM) is a simple and inexpensive (therefore popular!) way to implement moderate performance (e.g., 8 bit resolution low speed) digital to analog conversion, but improvising cheap DACs isn’t the only thing PWM can do. For example, Figure 1’s circuit illustrates using PWM to digitally set the analog gain of a versatile, robust, high (1 MΩ) input impedance, buffered output, DC input stage over a ~16 bit = 65280:1 = -48 dB to +48 dB attenuation/gain range.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Figure 1 A PWM controlled amplifier/attenuator DC input stage.
Here’s how it works: An 8-bit PWM control signal derived from a 1-MHz clock (T = 256-µs period) runs three synchronous HC4053 SPDT CMOS switches designated U1a, U1b, and U1c. Its duty cycle G/T ranges from 0.4% to 99.6% as G goes from 1 to 255 µs.
U1b acts as a programmable input attenuator by steering the I = Vin/R1 input current alternately to ground or op-amp A1’s summing point, creating an input scale factor of (G/T)(Vin/R1) that’s programmable from near zero, (1/256)(Vin/R1) at G = 1 µs, to near unity, (255/256)(Vin/R1) at G = 255 µs. Additionally, because of the near-zero summing-point potential maintained at U1 pin 15 by current steering, the accommodated Vin voltage range is very wide—limited mostly by R1’s voltage-withstand capability, which is typically 200 V for a ¼-W axial-lead 1-MΩ resistor. Simultaneously, the millivolt-range signal levels maintained across U1b’s switch elements (several orders of magnitude less than datasheet test conditions) reduce switch-related leakage currents to well under 1 nA, minimizing leakage-related offset voltages to negligible levels despite the megohm R1.
Meanwhile, U1a works to selectively steer current feedback from A1’s output to its summing point via R2 with a programmable factor of (1 – G/T), yielding a net V/I gain of –R2/(1 – G/T), while maintaining similar leakage-minimizing millivolt voltage differentials across U1a’s switches.
The net effect makes A1’s voltage gain = -(R2/R1)(G/T)/(1 – G/T) = -(G/T)/(1 – G/T).
As G varies from 1 µs to 255 µs, there’s the stated –(1/256)/(1 – 1/256) to –(255/256)/(1 – 255/256) = –1/255 to –255 gain range, a span of roughly 96 dB—but what about that pesky minus sign and infamous PWM ripple?
Both signal inversion and ripple suppression are performed by the sample-and-hold function implemented by U1c and A2, yielding a final ripple-free Vout/Vin = (G/T)/(1 – G/T), as graphed linearly in Figure 2 and logarithmically in Figure 3.
Figure 2 The linear gain plot (Red = 0 to 5 and Blue = 0 to 255).
Figure 3 The log gain plot.
The positive (Vdd) and negative (Vee) power rails are non-critical and noise-insensitive but ideally should be at least roughly symmetrical and will typically be +5 V and -5 V, respectively. Total current draw is less than 2 mA. Both C1 and C3 should be low-leakage types, polystyrene is suggested. Response time to an input or gain set step is somewhat gain dependent but is typically ~2 ms. Note that the R1C1 time constant is ~4T = 1 ms. Neither is exactly what you’d call lightning fast, but we are after all talking about PWM!
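The gain expression above is easy to tabulate. This short sketch, using the circuit’s T = 256 µs, reproduces the roughly 96-dB span between the G = 1-µs and G = 255-µs settings:

```python
import math

T = 256  # PWM period in microseconds (8-bit resolution, 1-MHz clock)

def gain(G):
    """Ideal stage gain magnitude for PWM high-time G (in us): (G/T)/(1 - G/T)."""
    d = G / T
    return d / (1 - d)

g_min, g_max = gain(1), gain(255)          # 1/255 and 255
span_db = 20 * math.log10(g_max / g_min)   # ~96 dB overall span
print(f"gain range: {g_min:.5f} to {g_max:.0f}, span = {span_db:.1f} dB")
```

Note how the (1 – G/T) feedback term makes the gain nonlinear in G: half-scale (G = 128 µs) gives unity gain, with attenuation below and amplification above.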
Stephen Woodward’s relationship with EDN’s DI column goes back quite a ways. In all, a total of 64 submissions have been accepted since his first contribution was published in 1974.
Related Content
- Cancel PWM DAC ripple with analog subtraction but no inverter
- Cancel PWM DAC ripple with analog subtraction—revisited
- Cancel PWM DAC ripple and power supply noise
- 555 triangle generator with adjustable frequency, waveshape, and amplitude; and more!
The post Set DC input-stage gain/attenuation over a 96dB range with PWM appeared first on EDN.
Thunderbolt: Good industry-standard intentions undone by proprietary implementations

As long-time readers may recall, last year I had an underwhelming experience in attempting to assemble a personal workstation based on an AMD Threadripper 3960X processor foundation. In the “post-mortem” piece, I noted that while I wasn’t sure if I’d hold onto the motherboard for another attempt down the road (I’m still undecided), I definitely intended to reuse the two add-in cards (AICs) originally included with the motherboard in other future system builds. I’m speaking here of the hardware RAID-capable multi-M.2 NVMe SSD card:
and, specific to this particular writeup, the Thunderbolt 3 card:
But…you know the saying, “The road to hell is paved with good intentions?” Here’s what I wrote in that post-mortem, as a teaser for this more in-depth treatment of the topic:
The actualization of my TB3 card plans ended up being somewhat convoluted and restrictive, courtesy of OEMs’ proprietary implementations of the supposed industry standard. Stay tuned for a focused-topic piece to come soon, with more details on my travails and solutions.
I’d been curious to try out Intel’s “hybrid” 12th generation Alder Lake CPUs (more recently superseded by tweaked 13th-generation Raptor Lake successors), comprising a mix of “P” (performance) and “E” (efficiency) processor cores conceptually similar to Arm’s big.LITTLE architecture. And when I saw the ASUS Prime B660-PLUS D4 motherboard on sale at Newegg for $109.99 last summer, I thought I’d found the cost-effective and otherwise perfect platform to put both aspirations—next-gen CPU experimentation and AIC reuse—into action. The ASUS motherboard handled 12th gen (and even the newer 13th gen) Intel CPUs. It supported cost-effective DDR4 SDRAM. And it even had an integrated Thunderbolt 4 header and BIOS support!
Unfortunately, however, that plan didn’t pan out. Two weeks after ordering the motherboard, I’d returned it. Part of the reason, admittedly, was that it was a full-size ATX board; all other factors equal (or even similar), my preferences lean toward the more svelte microATX form factor. But the bigger issue was, it turns out, that Thunderbolt-supportive motherboards and Thunderbolt-supportive AICs aren’t necessarily supportive of each other. Allow me to explain.
ASUS has to date developed (I think) three generations’ worth of Thunderbolt AICs:
- The Thunderbolt 2 (TB2)-based ThunderboltEX II and ThunderboltEX II/DUAL
- The Thunderbolt 3 (TB3)-based Thunderbolt EX 3 and Thunderbolt EX 3-TR, and
- The Thunderbolt 4 (TB4)-based ThunderboltEX 4
Ironically, ASUS’ AICs aren’t hardware-interoperable with all Thunderbolt-supportive ASUS motherboards, even putting aside for the moment whether a particular ASUS motherboard’s BIOS software recognizes and fully supports a particular AIC. The hardware “header” connector for TB4 AICs is a “14-1” pin arrangement (a 14-pin header with one pin missing from the “male” connector and the corresponding “female” connector through-hole filled in, to ensure you don’t orient the two connectors wrong when mating), while that for TB2 and TB3 cards is 5-pin.
GIGABYTE has also developed multiple generations’ worth of Thunderbolt AICs (again, I think the following list is to-date complete):
- The TB2-based GC-THUNDERBOLT 2
- The TB3-based GC-ALPINE RIDGE (rev. 1.0 and rev. 2.0) and GC-TITAN RIDGE (again, rev. 1.0 and rev. 2.0, the latter supporting dual cables/connectors …keep reading…), and
- The TB4-based GC-MAPLE RIDGE
GIGABYTE has standardized on a five-pin Thunderbolt header on its motherboards, referred to as the THB-C connector…at least until the TB4 generation, when it also switched, to a 5-pin (THB-C1) plus 3-pin (THB-C2) dual-connector scheme. To wit, beginning with the rev. 2.0 GC-TITAN RIDGE card (mine’s a rev 1.0), GIGABYTE added to the board and broader kit support for a second included Thunderbolt cable with a three-pin connector at the end, unnecessary for TB3-and-earlier motherboards but presumably with TB4 future-motherboard forward compatibility in mind.
I’m not clear on whether ASUS’s and GIGABYTE’s 5-pin Thunderbolt headers are compatible with each other, but the five-pin THB-C header offered by my rev. 1.0 GC-TITAN RIDGE AIC was clearly incompatible with ASUS’s TB4 14-1 pin approach, not to mention being a seeming mismatch (again, keep reading) with GIGABYTE’s own dual-connector TB4 scheme. And I should point out before continuing that ASUS and GIGABYTE aren’t the only vendors who sell Thunderbolt-based AICs and Thunderbolt-supportive motherboards.
MSI, for example, also offers both TB3 and TB4 AICs, as does ASRock with TB2, TB3, and TB4 AICs. A quick scan of the MSI AICs’ specifications indicates that whereas the ThunderboltM3 supports a five-pin Thunderbolt header, the ThunderboltM4 switches to a 16-pin connector. And per Gill Boyd, whose BuildOrBuy YouTube channel I’ve recommended before, ASRock has gone with a five-pin header for TB4. Bottom line: all four vendors’ TB4 headers are incompatible with each other, and at least three of them are also inconsistent with their prior-gen headers. See for yourself:
Sigh.
Back to ASUS and GIGABYTE. I wasn’t quite ready to give up yet. Tipped off by customers’ posted comments at Amazon as well as posts on GIGABYTE’s user forum and various Hackintosh discussion boards, I’d come across a great blog post discussing (among other interesting topics) how to jumper the Thunderbolt connector on the rev 1.0 GC-TITAN RIDGE AIC to get it working with (at least some) motherboards that don’t contain a compatible (or any, for that matter) Thunderbolt header. That said, after a bit of email back-and-forth with the blog author (which included his insight that the mate to the AIC-installed connector, which makes jumpering particularly easy, is a Molex 70553-0004, single-unit quantities of which are available for purchase here), my suspicions were unfortunately confirmed:
- A motherboard absent a hardware Thunderbolt header is unlikely to have BIOS software support for Thunderbolt either (and unofficial retrofits aren’t for the faint of heart), save perhaps for MacOS systems and Hackintoshes (hold that thought), and
- Even if a motherboard does have a Thunderbolt hardware header, the vendor has undoubtedly hard-coded the BIOS to support only Thunderbolt AICs from that same vendor (and only particular ones, at that, per my earlier ASUS comments)
Maddening. And thereby also explaining why I ended up sending back the ASUS Prime B660-PLUS D4 to Newegg within the 30-day refund timespan. Equally frustrating, I’ve concluded that the replacement motherboard I ordered from Newegg, GIGABYTE’s B660M GAMING X AX DDR4, probably isn’t going to work, either, even just in a backwards-compatible TB3-only fashion that solely leverages the THB-C1 header (though I’ll still test it for curiosity’s sake).
Why? Well, the B660M GAMING X AX DDR4 isn’t listed on the GC-TITAN RIDGE rev. 1.0 compatibility list, for one thing…although given that the AIC predated the motherboard and all vendors are historically (in my longstanding experience) notoriously bad about keeping their existing products’ support lists updated as new products are released, that omission in and of itself isn’t definitive. One could also argue that if the GC-TITAN RIDGE rev. 1.0 would work, why did GIGABYTE bother releasing the dual-cable/dual-header-supportive GC-TITAN RIDGE rev. 2.0…although the motherboard isn’t included on that AIC’s supported list, either…
Like I said before, I’ll try it anyway and see what happens. That said, I’ve also (pragmatically? cynically?) gone ahead and sprung for an “insurance-policy” successor. I found a GIGABYTE GC-MAPLE RIDGE TB4 AIC for $89.99 in claimed like-new condition from Amazon’s Warehouse section…and yes, the motherboard is listed as supported in this case. All of which brings me back to my initial question: if I decide not to jump back in the ring for round 2 of the AMD Threadripper personal workstation wrestling match, what am I going to do with my GC-TITAN RIDGE rev. 1.0 AIC (aside from donating it along with the motherboard to charity, that is)?
Well, for one thing, the GC-TITAN RIDGE rev. 1.0 turns out to be compatible (after a bit of hacking, at least) with my MacPro3,1, whose intrinsic longevity (along with that of its MacPro6,1 successor) has recently been extended thanks to a slick (albeit non-Apple-sanctioned) software package called the OpenCore Legacy Patcher. OpenCore coincidentally is also the successor to several “Hackintosh” software development suites (Clover, et al.) that I’ve mentioned in past writeups. To wit, as long-time regular readers may also recall, I’ve still got multiple HP systems that are MacOS-on-PC hacking candidates sitting around the office, and several of them have the expansion slot facilities to slot in the GC-TITAN RIDGE rev. 1.0, too. Stay tuned for developments on all these fronts in future blog posts.
I’ll wrap up this particular writeup with some admitted frustration (if it wasn’t already evident), along with a bit of sadness. I’ve long been an advocate of Thunderbolt and an admirer of its co-developers, Apple and Intel; the interface is fast, it’s versatile, and every generation (albeit perhaps less evidently with the latest TB4-versus-prior-gen jump) has delivered tangible improvements over the predecessor revision. In the process, Thunderbolt has also pushed USB to deliver comparable, albeit belated, enhancements.
That said, the longstanding inability to mix-and-match motherboards and AICs from different suppliers has hampered broad Thunderbolt adoption beyond niche cases where the controller is directly integrated on the motherboard (with requisite bill-of-materials cost, therefore product price, downsides). And if anything, the situation has seemingly gotten worse with TB4, for which hardware header compatibility has apparently also been tossed out the window. If you truly want to make Thunderbolt a widely and diversely implemented industry standard, Intel and partners, you’re going to need to move beyond proprietary competitive-isolation behavior and truly cooperate. After all, as the saying goes, a rising tide lifts all boats, yes?
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- What’s next for USB and Thunderbolt?
- Understand and test 10G Thunderbolt technology
- Apple’s forced O/S migration causes Thunderbolt failure
- Building a personal workstation: putting together the pieces
- Building a personal workstation: picking up the pieces
- Hackintosh: Another path to a high-end Mac
The post Thunderbolt: Good industry-standard intentions undone by proprietary implementations appeared first on EDN.
Call on mechanical engineers to solve your tough thermal problems

Engineers often consider thermal management and cooling as a two-part problem. First, there’s the global “macro” case where no individual component is excessively hot, but the aggregate heat buildup puts the board or chassis outside of acceptable limits. Second, there’s the localized “micro” case where one or more active or passive components (power devices, high-end processors, FPGAs, current-sense resistors) need to be cooled to avoid slow-clocking mode, burn-out, or excessive drift due to temperature coefficient. Often, the micro problem is a major contributor to the macro one, of course.
The solutions to thermal excess are well known in principle: just use some combination of convection, conduction, and radiation cooling, Figure 1.
Figure 1 The three modes of heat transfer are well understood and can be modeled and simulated as a first step in the cooling-plan analysis. Source: sciencenotes.org/
But that’s where the simplicity ends. You can model the thermal solution, but it often takes much more than a fan or a few add-on heat sinks to create a mechanically sound design that can convey your excess heat to that mystical thermal depository called “away.”
This is where the mechanical designers and production engineers earn some serious respect, as they must turn a thermal goal into a tangible, manufacturable reality. One such advanced cooling technique, field-proven for over a decade, is a “hybrid” approach, represented by the patented RuggedCool℠ technology from General Micro Systems, Inc.
Unlike the conventional approach where the air or cooling liquid is focused on individual components and hot spots, here the heat is evacuated to an entire cold-plate assembly for the whole system, via a central “radiator” core plenum that’s essentially a whole-system cooling plate, Figure 2.
Figure 2 In the RuggedCool design, a central radiator-core plenum functions as a whole-system cooling plate. Source: General Micro Systems, Inc.
In this design, every component, board, or subsystem is conductively cooled using the cold-plate mechanism, with heat conducted away from their individual heat sinks to the combined heat-sink assembly of the entire system.
This all sounds like a simple-enough idea, but implementing it is a challenge. The GMS technology uses a corrugated alloy slug with an extremely low thermal resistance, acting as a heat spreader at the processor die (assuming that is the primary heat source). Once the heat is spread over a much larger area, a liquid silver compound in a sealed chamber is used to transfer the heat from the spreader to the system’s enclosure. There is one surface of copper, one surface of aluminum, and sandwiched in-between is a layer of silver.
This approach yields a temperature difference of less than 10°C from the CPU core to the cold plate, compared with over 25°C for conventional approaches, Figure 3.
Figure 3 The resulting arrangement has a low 10°C delta between the CPU core to the cold plate. Source: General Micro Systems, Inc.
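To put those deltas in perspective, this minimal sketch converts them into effective die-to-cold-plate thermal resistances, assuming a hypothetical 100-W processor (the article gives no dissipation figure; scale P to suit):

```python
# Effective thermal resistance implied by the quoted core-to-cold-plate deltas,
# for a hypothetical 100-W processor. Theta = delta-T / dissipated power.
def theta_c_per_w(delta_t_c, power_w):
    """Thermal resistance in C/W: temperature rise divided by power."""
    return delta_t_c / power_w

P = 100.0  # W, assumed dissipation
print(f"hybrid approach:       {theta_c_per_w(10, P):.2f} C/W")
print(f"conventional approach: {theta_c_per_w(25, P):.2f} C/W")
```

The lower the thermal resistance, the more power the system can dissipate for the same junction-temperature headroom, which is exactly the margin that matters in a sealed, conduction-cooled chassis.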
It’s a form of liquid cooling but without the headaches or issues associated with moving fluid. Using materials like liquid silver makes it clear that the technology is expensive, but it is intended for applications for which no other viable solution is available.
This approach is in contrast to just adding cooling plates in order to produce conduction-cooled systems. That can result in inadequate cooling, since the heat-producing devices other than the CPU itself (or other primary heat source) are cooled through the CPU’s thermal-conduction path. This, of course, is contrary to the objective of drawing heat away from the CPU.
By directing all heat to a central plenum, the effectiveness of blown air, if any, is maximized. It also allows for a sealed system where only the central plenum is open to the environment, thus making it easier to manage dust and moisture ingress while also easing the electrical challenge of EMI control.
This technique provides benefits related to shock and vibration—the silent and longer-term “killers” of many components. Here the CPU die does not make direct contact with the system enclosure, but instead connects via the liquid-silver chamber which acts as a shock absorber. This prevents shock from being transferred from the enclosure to the flip-chip ball grid array (FCBGA), thus isolating the CPU from ongoing vibration-induced micro-fractures (which, in time, cause the CPU to fail).
There’s no question that this is a complex, costly mechanical design, but electronic engineers and their customers have only themselves to blame. After all, dissipation has gone from a hundred or so watts to beyond a kilowatt—a modest 19-inch-wide rack-unit (RU) in 1U size (1¾ inches high) can now reach 1.5 kW and more, so innovative approaches and new ideas are needed.
Not all of these require the complexity and sophistication of this technology. In some cases, just switching to card-cage guides and grips which offer a greatly enhanced thermal path can be a big help (see Related Content).
Have you ever been involved in a cooling scenario where the physical implementation of the needed strategy was a much bigger challenge than the thermal model suggested? Was the solution just a carefully considered application of existing components, or were custom component and specialized resources needed?
Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.
Related Content
- Innovative guides and grips enhance card-cage/chassis cooling capabilities
- Misconception revealed: Can a heat sink be too big?
- My long-running affection for heat sinks
- Put a diamond topping on your die to avoid heat stroke
Reference
- Heat Transfer – Conduction, Convection, Radiation. https://sciencenotes.org/heat-transfer-conduction-convection-radiation/
The post Call on mechanical engineers to solve your tough thermal problems appeared first on EDN.
Create high-performance SoCs using network-on-chip IP

A system-on-chip (SoC) containing a million transistors was considered a large device in the not-so-distant past. Today, SoCs commonly contain up to a billion transistors. Consider, for example, the recent case study with SiMa.ai and its new machine learning (ML) chip called MLSoC; it provides effortless machine learning at the embedded edge.
This MLSoC, created at the 16-nm technology node, comprises billions of transistors. As is almost invariably the case in today’s SoC designs, the MLSoC is composed of a sophisticated mix of off-the-shelf third-party intellectual property (IP) blocks coupled with an internally developed machine learning accelerator (MLA) IP.
Figure 1 The MLSoC chip combines host processor and ML accelerator capabilities in one device. Source: SiMa.ai
Third-party IPs provide well-known, standard functions, such as processor and communication cores (Ethernet, USB, I2C, and SPI) and peripherals: the sort of functions not worth the time and effort to develop internally. The “secret sauce” that differentiates this SoC from its competitors is the MLA, which provides 50 trillion operations per second (TOPS) while consuming a minuscule 5 watts of power.
One problem with combining hundreds of IPs from various vendors is that multiple interconnect protocols have been defined and adopted by the SoC industry—OCP, APB, AHB, AXI, STBus, and DTL—and each IP may use a distinct protocol. Also, each IP may support a different data width and run at a separate clock frequency. As you can imagine, getting these IPs to talk to each other can be daunting.
Enter the NoC
The best solution for connecting hundreds of disparate IPs is to employ a network-on-chip (NoC). Using buffers and switches, the NoC passes data packets between initiator and target IP blocks. Each packet contains a header, which includes an ID with the source and destination addresses, and a body that encompasses the data. Large numbers of packets can be in flight at the same time.
Each IP will have one or more interfaces called sockets. Network interface units (NIUs) connect the IP sockets to the NoC and serialize and packetize the data while accommodating each IP’s data width and clock frequency requirements.
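What an NIU does at the data-path level can be illustrated with a simplified model. This is a hedged sketch under stated assumptions: real NoC packets carry considerably more header state (QoS, ordering, and routing fields), and real NIUs also handle clock-domain crossing; only packetization to the link width is shown here:

```python
# Simplified model of NIU packetization: a payload from an initiator IP is
# split into fixed-width flits sized to the NoC link, with a header carrying
# source and destination socket IDs. This is illustrative only; actual NoC
# protocols include QoS, ordering, and routing fields omitted here.

from dataclasses import dataclass

@dataclass
class Packet:
    src_id: int   # initiator socket ID
    dst_id: int   # target socket ID
    flits: list   # payload split into link-width chunks

def packetize(src_id, dst_id, payload: bytes, link_width_bytes: int) -> Packet:
    """Initiator-side NIU: serialize a payload into flits matching the link width."""
    flits = [payload[i:i + link_width_bytes]
             for i in range(0, len(payload), link_width_bytes)]
    return Packet(src_id, dst_id, flits)

def depacketize(pkt: Packet) -> bytes:
    """Target-side NIU: reassemble the original payload from the flits."""
    return b"".join(pkt.flits)

# A 10-byte write crossing a 4-byte-wide NoC link becomes three flits.
pkt = packetize(src_id=3, dst_id=7, payload=b"0123456789", link_width_bytes=4)
print(len(pkt.flits))  # 3
assert depacketize(pkt) == b"0123456789"
```

The same mechanism is what lets IPs with different native data widths interoperate: each side's NIU adapts the flit stream to its own socket width.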
Developers typically envisage IPs as having square or rectangular footprints on the surface of the silicon chip. Many developers fail to recognize that the NoC is an IP, albeit one that spans the entire chip.
Homegrown or off-the-shelf?
SoC developers must decide whether it’s better to implement the NoC in-house or acquire it from a third-party purveyor. For many teams, this is a non-issue because they lack the time, resources and skills required to develop a full-function NoC from the ground up.
Creating a NoC suitable for a modern SoC can easily require six engineers working for two years. And then there’s the problem of debugging the NoC and the rest of the design simultaneously. The only realistic solution that reduces risk, speeds time to market, and shortens time to monetization is to employ a proven off-the-shelf NoC from a trusted vendor.
Technical benefits
Implementing a NoC requires more than attaching NIUs to IP sockets and determining the locations of any switches and the size and locations of any buffers. Since the NoC spans the entire chip, it will be necessary to introduce pipeline stages (registers) for the physical layout team and tools to meet the SoC’s performance and timing specifications.
Designs involve iterations, and iterating within the front-end design portion of the process is much faster than iterating through both the front-end design and the back-end physical layout. If the front-end design engineers insert these pipeline stages by hand and fail to use enough in the right places, the back-end physical implementation team will fail to meet its goals, and the project will be returned to the designers for rework.
Unfortunately, architects typically address this issue by over-engineering the problem and inserting too many pipeline stages. Although this will help the physical design team to meet timing, any pipeline stages that are surplus to requirements consume die area, burn power, and increase latency.
One way to address this is by using physically aware NoCs. This means that as soon as the physical layout team provides the proposed locations of the various IP blocks, this data can be used to automatically determine the optimum number and placement of any pipeline stages. By speeding up the physical layout process, the number of time-consuming back-end to front-end iterations required to achieve timing closure is significantly reduced.
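The underlying tradeoff can be sketched with a first-order timing model: a register is needed roughly whenever accumulated wire delay approaches the clock period. The numbers below are illustrative assumptions, not output from any physical-design tool:

```python
# First-order estimate of pipeline stages needed on a long NoC route.
# A register is inserted whenever accumulated wire delay would exceed a
# fraction (the timing margin) of the clock period. Illustrative only;
# real tools account for cell delays, congestion, and clock skew.

import math

def pipeline_stages_needed(wire_length_mm, delay_ps_per_mm,
                           clock_period_ps, margin=0.8):
    """Registers required so each wire segment fits within the timing budget."""
    total_delay = wire_length_mm * delay_ps_per_mm
    budget = clock_period_ps * margin
    # n stages divide the wire into (n + 1) segments, each within budget
    return max(0, math.ceil(total_delay / budget) - 1)

# A 12 mm route at ~100 ps/mm under a 1 GHz clock (1000 ps period):
print(pipeline_stages_needed(12.0, 100.0, 1000.0))  # 1

# The same route at 2 GHz (500 ps period) needs more stages:
print(pipeline_stages_needed(12.0, 100.0, 500.0))  # 2
```

This also shows why physical awareness matters: the stage count depends directly on routed wire length, which is unknown until the layout team supplies block placements. Guessing lengths up front is what leads to the over-engineering described above.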
One such NoC is FlexNoC 5, which is physically aware and has additional options. For extreme designs with hundreds of IPs and 1024+ bit-wide connections, the FlexNoC XL option provides a large-capacity mesh-NoC generator capability. The FlexNoC 5 Advanced Memory option is available for architectures involving complex memory interleaving schemes and non-contiguous address bits. This option uses multi-channel reorder buffers that avoid ordering rule blocks and response serialization bottlenecks yet allow concurrent memory channel reads.
Figure 2 The physically aware network-on-chip IP offers productivity enhancements. Source: Arteris
Some designs are considered safety-critical, meaning a failure or malfunction may result in death or serious injury to people, loss of or severe damage to equipment or property, or environmental harm. For this type of design, the FlexNoC 5 fabric IP can be complemented by the FlexNoC Resilience option. This package can help designers implement the functional safety features required for compliance with the automotive ISO 26262 and IEC 61508 standards. It also provides hardware reliability for enhanced enterprise SSD endurance.
Why off-the-shelf NoC IP
The only way to manage complex SoC designs is to use NoCs. Rather than spending years and burning engineering resources developing a NoC in-house, it’s better to save time, reduce risk and speed time to market by using a trusted and reliable off-the-shelf NoC.
Andy Nightingale, VP of product marketing at Arteris, has over 35 years of experience in the high-tech industry, including 23 years spent on various engineering and product management positions at Arm.
Related Content
- SoC Interconnect: Don’t DIY!
- Network on chip eases IP connect
- What is the future for Network-on-Chip?
- The network-on-chip interconnect is the SoC
- Arteris spins packet-based network-on-chip IP
- Why network-on-chip IP in SoC must be physically aware
- How physically aware interconnect IP bolsters SoC design
The post Create high-performance SoCs using network-on-chip IP appeared first on EDN.
Reduced-pitch TIA supports 56-GBd PAM4 operation

Semtech has announced production availability of its GN1814 quad-channel transimpedance amplifier (TIA) for use in 400G and 800G data centers. The 56-GBd PAM4 amplifier boasts a reduced I/O channel pitch of 500 µm and is well-suited for very high-density, single-mode fiber applications. It also offers low input-referred noise, high gain, and output-swing level detection.
“The production availability of the FiberEdge GN1814 enables qualification testing of the latest optical transceiver technology in data centers by our customers and system vendors who demand exceptional quality and performance,” said Nicola Bramante, senior product line manager for Semtech’s Signal Integrity Products Group. “The reduced pitch of the new FiberEdge GN1814, combined with all the latest features from Semtech’s FiberEdge PAM4 TIAs and rigorous quality methodologies, enables the rapid deployment of next generation 100G/lane OSFP and QSFP-DD optical transceivers.”
The GN1814 linear transimpedance amplifier is available as wire-bondable bare die. It is intended for 400GBASE-DR4, FR4, LR4, and 800GBASE-DR8 PAM4 optical transceivers; chip-on-board optical assemblies; and silicon photonics. Log in or register for mySemtech to access product documentation.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Reduced-pitch TIA supports 56-GBd PAM4 operation appeared first on EDN.
Devices ward off ESD in automotive networks

Six ESD protection devices from Nexperia safeguard the 24-V board net systems used in trucks and commercial vehicles. The PESD2CANFD36XX-Q series protects the bus lines of in-vehicle networks, such as LIN, CAN, CAN FD, FlexRay, and SENT, from damage caused by ESD and other transients.
As data rates increase and vehicles feature more electrification, the need for ESD protection is critical. The PESD2CANFD36XX-Q series provides a maximum reverse standoff voltage of 36 V and up to 22 kV of ESD protection. Further, a low clamping voltage of 48 V at 1 A ensures system-level robustness for in-vehicle networks.
Available in SOT-23 and SOT-323 (SC-70) packages, the protection devices offer a choice of three different low-capacitance classes: 4.3 pF, 6 pF, and 10 pF. Low capacitance helps ensure smooth communication between interfaces without impacting signal integrity.
Devices in the PESD2CANFD36XX-Q series can be purchased directly from Nexperia or through its worldwide distributor network. View datasheets and check distributor stock using the link to the product page below.
PESD2CANFD36XX-Q series product page
The post Devices ward off ESD in automotive networks appeared first on EDN.
Vision processors bring AI to intelligent cameras

Hailo-15 AI-centric vision processors from chipmaker Hailo deliver video processing and analytics at the edge when integrated into intelligent cameras. Hailo-15 SoCs deliver up to 20 tera operations per second (TOPS) running on a neural network core, processing multiple advanced deep-learning models in parallel.
Cameras outfitted with Hailo-15 vision processors can be deployed in smart cities, factories, and retail locations. Transportation authorities using Hailo-15-equipped cameras can recognize everything from accidents and lost children to misplaced luggage.
All Hailo-15 video processors support multiple input streams at 4K resolution and combine CPU and DSP subsystems with a field-proven AI core. The family includes three variants to meet the varying processing needs and price points of smart camera makers and AI application providers. These variants include the Hailo-15H (20 TOPS), Hailo-15M (11 TOPS), and Hailo-15L (7 TOPS).
According to the manufacturer, Hailo-15 processors enable over 5X higher performance than currently available solutions in the market at a comparable price point. For example, the Hailo-15H is capable of running the YOLOv5M6 object detection model with high input resolution (1280×1280) at a real-time sensor rate or the classification model benchmark, ResNet-50, at 700 fps.
Hailo-15 vision processors are engineered to consume very little power, making them suitable for all types of IP cameras and enabling the design of fanless edge devices. The small power envelope means camera designers can develop lower-cost products by leaving out an active cooling component.
The post Vision processors bring AI to intelligent cameras appeared first on EDN.