The Energizer 200W portable solar panel: A solid offering, save for a connector too fragile
Last time, I covered one half of the Energizer Ultimate PowerSource Pro Solar Bundle that I first introduced you to back at the beginning of August and purchased for myself at the beginning of September (and which, ironically, is for sale again as I write these words on November 6, at Meh’s companion SideDeal site):
If you haven’t yet read that premier post in this series, where I detailed the pros and cons of the Energizer PowerSource Pro Battery Generator, I encourage you to pause here and go back and peruse it first before proceeding with this one. This time I’ll be discussing the other half of the bundle, Energizer’s 200W Portable Solar Panel. And as before, I’ll start out with “stock” images from the bundle’s originator, Battery-Biz (here again is the link to the user manual, which covers both bundled products…keep paging through until you get to the solar panel section):
Here’s another relevant stock image from Meh:
Candidly, there’s a lot to like about this panel, model number ENSP200W (and no, I don’t know who originally manufactured it, with Energizer subsequently branding it), reflective of the broader improvement trend in solar panels that I previously covered back in mid-September. The following specs come straight from the user manual:
Solar Cells
- Solar Cell Material: Monocrystalline PERC M6-166mm
- Solar Cell Efficiency: 22.8%
- Solar Cell Surface Coating: PET
Output Power
- Max Power Output – Wattage (W): 200W
- Max Power Output – Voltage(Vmp): 19.5V
- Max Power Output – Current (Imp): 10.25A
- Power Tolerance: ±3%
- Open Circuit Voltage (Voc): 23.2V
- Short Circuit Current (Isc): 11.38A
Operating Temperatures
- Operating Temp (°C): -20 to 50°C / -4 to 122°F
- Nominal Operating Cell Temp (NOCT): 46 ±2°C
- Current Temp Coefficient: 0.05%/°C
- Voltage Temp Coefficient: -0.35%/°C
- Power Temp Coefficient: -0.45%/°C
- Max Series Fuse Rating: 15A
Cable
- Anderson Cable Length: 5 m / 16.5 ft
- Cable Type: 14AWG dual conductor, shielded
- Output Connector: Anderson Powerpole 15/45
Dimensions and Weight
- Product Dimensions – Folded: 545 x 525 x 60 mm/21.5″ x 20.7″ x 2.4″
- Product Dimensions – Open: 2455 x 525 x 10 mm/96.7″ x 20.7″ x 0.4″
- Product Net Weight: 5.9 kg / 13.0 lbs
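Before moving on, here’s a minimal back-of-envelope Python sketch of what the temperature coefficients listed above imply for real-world output. The 25°C reference and 50°C cell temperature are my own illustrative assumptions, not figures from the manual:

```python
# Rough derating of the panel's rated figures using the spec'd temperature coefficients.
P_MAX_W   = 200.0      # rated maximum power (W)
VOC_V     = 23.2       # rated open-circuit voltage (V)
K_POWER   = -0.45e-2   # power temp coefficient (-0.45 %/degC)
K_VOLTAGE = -0.35e-2   # voltage temp coefficient (-0.35 %/degC)
T_REF_C   = 25.0       # assumed rating reference temperature (degC)
T_CELL_C  = 50.0       # assumed hot-day cell temperature (degC)

dT = T_CELL_C - T_REF_C
print(f"Estimated max power at {T_CELL_C:.0f} degC: {P_MAX_W * (1 + K_POWER * dT):.1f} W")
print(f"Estimated Voc at {T_CELL_C:.0f} degC: {VOC_V * (1 + K_VOLTAGE * dT):.2f} V")
```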
As you can see from the last set of specs, the “portable” part of the product name is spot-on; this solar panel is eminently tote-able and folds down into an easily stowed form factor. Here’s what mine looked like unfolded:
Unfortunately, as with its power station bundle companion, the solar panel arrived with scuffed case cosmetics and rifled-through contents indicative of pre-used, not brand-new, condition:
Although I was able to clip a multimeter to the panel’s Anderson Powerpole output connector and, after optimally aligning the panel with the cloud-free direct sunlight, got close to the spec’d max open-circuit output voltage out of it:
the connector itself had also arrived pre-mangled by the panel’s prior owner (again: brand new? Really, Battery-Biz?), a situation that others had also encountered, and which prevented me from plugging it, as intended, into the PowerSource Pro Battery Generator:
Could I have bought and soldered on a replacement connector? Sure. But in doing so, I likely would have voided the factory warranty terms. And anyway, after coming across not-brand-new evidence throughout the bundle’s constituents, I was done messing with this “deal”; I was offered an exchange but requested a return-and-refund instead. As mentioned last time, Meh was stellar in their customer service, picking up the tab for return shipping and going as far as issuing me a full refund while the bundle was still en route back to them. And to be clear, I blame Battery-Biz, not Meh, for this seeming not-as-advertised bait-and-switch.
A few words on connectors, in closing. Perhaps obviously, the connector coming out of a source solar panel and the one going into the destination power station need to match, either directly or via an adapter (the latter option with associated polarity, adequate current-carrying capability, and other potential concerns). That said, in my admittedly limited research and hands-on experiences to date with both solar panels and power stations, I’ve come across a mind-boggling diversity of connector options. That ancient solar panel I mentioned back in September, for example:
uses these:
to interface between it and the solar charge controller:
The subsequent downstream connection between the controller and my Eurovan Camper’s cargo battery is more mainstream SAE-based:
The more modern panel I showcased in that same September writeup:
offered four output options: standard and high-power USB-A, USB-C and male DC5521.
My SLA battery-based Phase2 Energy PowerSource Power Station, on the other hand:
(Duracell clone shown)
like the Lithium NMC battery-based Energizer PowerSource Pro Battery Generator:
expects, as mentioned earlier in this piece, an Anderson Powerpole (PP15-45, to be precise) connector-based solar panel tether:
Adapting the male DC5521 to an Anderson Powerpole required both the female-to-female DC5521 coupler that came with the Foursun F-SP100 solar panel and a separate male DC5521-to-Anderson adapter that I bought off Amazon:
What other variants have I encountered? Well, coming out of the EcoFlow solar panels I told you about in the recent Holiday Shopping Guide for Engineers are MC4 connectors:
Conversely, the EcoFlow RIVER 2:
and DELTA 2 portable power stations:
both have an orange-color XT60i solar input connector:
the higher current-capable (100 A vs 60 A), backwards-compatible successor to the original yellow-tint XT60 used in prior-generation EcoFlow models:
EcoFlow sells both MC4-to-XT60 and MC4-to-XT60i adapter cables (note the connector color difference in the following pictures):
along with MC4 extension cables:
and even a dual-to-single MC4 parallel combiner cable, whose function I’ll explore next time:
The DELTA 2 also integrates an even higher power-capable XT150 input, intended for daisy-chaining the power station to a standalone supplemental battery to extend runtime, as well as for recharging via the EcoFlow 800W Alternator Charger:
Ok, now what about another well-known portable power station supplier, Jackery? The answer is, believe it or not, “it depends”. Older models integrated an 8 mm 7909 female DC plug:
which, yes, you could mate to a MC4-based solar panel via an adapter:
Newer units switched to a DC8020 input; yep, adapters to the rescue again:
And at least some Jackery models supplement the DC connector with a functionally redundant, albeit reportedly higher current-capable, Anderson Powerpole input:
How profoundly confusing this all must be to the average consumer (not to mention this techie!). I’m sure if I did more research, I’d uncover even more examples of connectivity deviance from other solar panel and portable power station manufacturers alike. But I trust you already get my point. Such non-standardization might enable each supplier to keep its customers captive, at least for a while and to some degree, but it also doesn’t demonstrably grow the overall market. Nor is it a safe situation for consumers, who then need to blindly pick adapters without understanding terms such as polarity or maximum current-carrying capability.
Analogies I’ve made before in conceptually similar situations, such as:
- It’s better to have a decent-size slice of a sizeable pie versus a tiny pie all to yourself, and
- A rising tide lifts all boats
remain apt. And as with those conceptually similar situations on which I’ve previously opined, this’ll likely all sort itself out sooner or later, too (via market share dynamics, would be my preference, versus heavy-handed governmental regulatory oversight). The sooner the better, is all I’m saying. Let me know your thoughts on this in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Experimenting with a modern solar cell
- SLA batteries: More system form factors and lithium-based successors
- Then and Now: Solar panels track the sun
- Solar-mains hybrid lamp
- Solar day-lamp with active MPPT and no ballast resistors
- Beaming solar power to Earth: feasible or fantasy?
Innovative manufacturing processes herald a new era for flexible electronics
New and repurposed fabrication techniques for flexible electronic devices are proliferating rapidly. Some may wonder if they are better than traditional methods and at what point they’ll be commercialized. Will they influence electronics design engineers’ future creations?
Flexibility is catching on. Experts forecast the flexible electronics market value will reach $63.12 billion by 2030, achieving a compound annual growth rate of 10.3%. As its earning potential increases, more private companies and research groups turn their attention to novel design approaches.
Flexible electronics is a rapidly developing area. Source: Institute of Advanced Materials
As power densification and miniaturization become more prominent, effective thermal management grows increasingly critical—especially for implantable and on-skin devices. So, films with high in-plane thermal conductivity are emerging as an alternative to traditional thermal adhesives, greases, and pads.
While polymer composites with high isotropic thermal conductivity (k) are common thermal interface materials, their high cost, poor mechanics, and unsuitable electrical properties leave much to be desired.
Strides have been made to develop pure polymer films with ultrahigh in-plane k. Electronics design engineers use stretching or shearing to enhance molecular chain alignment, producing thin, flexible sheets with desirable mechanical properties.
However, the fabrication process for pure polymer films is complex and relies on toxic solvents, driving up costs and impeding large-scale production. A polyimide and silicone composite may be the better candidate for commercialization, as silicone offers high elasticity and provides better performance at high temperatures.
Novel manufacturing techniques for flexible electronics
Thermal management is not the only avenue for research. Electronics engineers and scientists are also evaluating novel techniques for transfer printing, wiring, and additive manufacturing.
Dry transfer printing
The high temperatures at which quality electronic materials are processed effectively remove flexible or stretchable substrates from the equation, forcing manufacturers to utilize transfer printing. And most novel alternatives are too expensive or time-consuming to be suitable for commercial production.
A research team has developed a dry transfer printing process that enables thin metal and oxide films to be transferred to flexible substrates without risk of damage. They adjusted the sputtering parameters to control the amount of stress, eliminating the need for post-processing. As a result, transfer times were shortened. This method works with both microscale and large patterns.
Bubble printing
As electronics design engineers know, traditional wiring is too rigid for flexible devices. Liquid metals are a promising alternative, but the oxide layer’s electrical resistance poses a problem. Excessive wiring size and patterning restrictions are also issues.
One research group overcame these limitations by repurposing bubble printing. It’s not a novel technique but has only been used on solid particles. They applied it to liquid metal colloidal particles—specifically a eutectic gallium-indium alloy—to enable high-precision patterning.
The heat from a femtosecond laser beam creates microbubbles that guide the colloidal particles into precise lines on a flexible substrate. The result is wiring lines with a minimum width of 3.4 micrometers that maintain stable conductivity even when bent.
4D printing
Four-dimensional (4D) printing is an emerging method that describes how a printed structure’s shape, property or function changes in response to external stimuli like heat, light, water or pH. While this additive manufacturing technique has existed for years, it has largely been restricted to academics.
4D-printed circuits could revolutionize flexible electronics manufacturing by improving soft robotics, medical implants, and wearables. One proof-of-concept sensor converted pressure into electric energy despite having no piezoelectric parts. These self-powered, responsive, flexible electronic devices could lead to innovative design approaches.
Impact of innovative manufacturing techniques
Newly developed manufacturing techniques and materials will have far-reaching implications for the design of flexible electronics. So, industry professionals should pay close attention as early adoption could provide a competitive advantage.
Ellie Gabel is a freelance writer as well as an associate editor at Revolutionized.
Related Content
- Flexible electronics tech shows progress
- Fab-in-a-Box: Flexible Electronics Scale Up
- Printed electronics enhance device flexibility
- Flexible electronics stretch the limits of imagination
- Printed Electronics to Enhance both Exteriors and Interiors in EVs
Touch controller eases user interface development
Microchip’s MTCH2120 turnkey touch controller offers 12 capacitive touch sensors configured via an I2C interface. Backed by Microchip’s unified ecosystem, it simplifies design and streamlines transitions from other turnkey and MCU-based touch interface implementations.
The MTCH2120 delivers reliable touch performance, unaffected by noise or moisture. Its touch/proximity sensors can work through plastic, wood, or metal front panels. The controller’s low-power design enables button grouping, reducing scan activity and power consumption while keeping buttons fully operational.
Easy Tune technology eliminates manual threshold tuning by automatically adjusting sensitivity and filter levels based on real-time noise assessment. An MPLAB Harmony Host Code Configurator plug-in eases I2C integration with Microchip MCUs and allows direct connection without host-side protocol implementation. Design validation is facilitated through the MPLAB Data Visualizer, while built-in I2C port expander capability lets three or more unused touch input pins be repurposed as general-purpose I/O.
In addition, access to Microchip’s touch library minimizes firmware complexity, helping to shorten design cycles. For rapid prototyping, the MTCH2120 evaluation board includes a SAM C21 host MCU for out-of-the-box integration.
2-A driver powers automotive LEDs
A synchronous step-down LED driver with high-side current sensing, Diodes’ AL8891Q drives up to 2 A of continuous current from a 4.5-V to 65-V input. It also supports up to a 95% duty cycle, enabling the driver to power longer LED chains in various automotive lighting applications.
The AL8891Q uses constant on-time control for simple loop compensation and cycle-by-cycle current limiting, offering fast dynamic response without needing an external compensation capacitor. Its adjustable switching frequency, ranging from 200 kHz to 2.5 MHz, optimizes efficiency or enables a smaller inductor size and more compact form factor. Spread spectrum modulation further enhances EMI performance and aids compliance with the CISPR 25 Class 5 standard.
Two independent pins control PWM and analog dimming. PWM dimming, from 0.1 kHz to 2 kHz, enables high-resolution dimming. Analog dimming, ranging from 0.15 V to 2 V, supports soft-start and other adjustments. The AL8891Q also features comprehensive protection with fault reporting.
The AEC-Q100 Grade 1 qualified AL8891Q driver costs $0.78 each in lots of 1000 units.
Satellite IoT module handles large file transfers
The Certus 9704 IoT module from Iridium supports satellite IoT applications requiring real-time data analysis and automated decision-making. This compact module offers larger file transfer sizes and faster message speeds than previous Iridium IoT modules, enabling the delivery of data, pictures, and audio messages up to 100 KB in industrial and remote environments.
With Iridium Messaging Transport (IMT) technology, the Certus 9704 provides two-way IoT services worldwide over Iridium’s low-latency global satellite network. The module is 34% smaller than the Iridium 9603, 79% smaller than the Iridium 9602, and offers an 83% reduction in idle power consumption compared to both.
In addition to conventional satellite IoT applications, the Certus 9704 is AIoT-ready. Products built with Certus 9704 modules can offload more computing to the cloud in a single message, where an AIoT engine can quickly make decisions and send actionable instructions back to the remote device. With IMT at its core, a built-in topic-sorting capability ensures messages are efficiently organized for delivery to the appropriate engine for real-time data, audio, or image analysis.
Iridium offers a development kit preconfigured with the Certus 9704 module mounted on a motherboard, along with a power supply, antenna, and Arduino-based software. The kit comes with 1000 free messages and GitLab-hosted reference materials.
High-power chip resistor comes in tiny package
Kyocera AVX has expanded its CR series of high-power chip resistors with a device that handles 2.5 W in a small 0603 package. Designed to enable the miniaturization of RF power amplifiers, the 0603 resistor overcomes size constraints and enhances thermal management by incorporating a high thermal conductivity substrate and maximized heat sink grounding area.
Chip resistors in the CR series are non-magnetic, qualified to MIL-PRF-55342, and deliver reliable performance in a variety of communications, instrumentation, test and measurement, military, and defense applications. They feature thin-film resistive elements, aluminum nitride substrates, and silver terminals.
The resistors are available in eight chip sizes from 0603 to 3737, with standard resistive values of 100 Ω and 50 Ω and tolerances as tight as ±2%. They offer capacitance values ranging from 0.3 pF (0603) to 6.0 pF, power handling up to 250 W, and operating temperatures from -55°C to +150°C.
Chip resistors in the CR series are available from Mouser, DigiKey, and Richardson RFPD.
OCTRAM technology achieves low power, high density
Kioxia and Nanya Technology have co-developed a type of 4F2 DRAM known as Oxide-Semiconductor Channel Transistor DRAM (OCTRAM). It features an oxide-semiconductor transistor that has both high ON current and ultra-low OFF current, enabling reduced power consumption across a wide range of applications, including AI, post-5G communication systems, and IoT products.
Panoramic view of the OCTRAM.
OCTRAM seeks to deliver low-power DRAM by using an InGaZnO-based cylinder-shaped vertical transistor with ultra-low leakage. This 4F2 design adaptation, according to Kioxia, provides significantly higher memory density than conventional silicon-based 6F2 DRAM.
ON and OFF current characteristics of InGaZnO transistor across configurations and structures.
The InGaZnO vertical transistor delivers a high ON current of over 15 μA per cell and an ultra-low OFF current below 1 aA per cell, achieved through device and process optimization. In the OCTRAM design, the transistor is integrated on top of a high aspect ratio capacitor using a capacitor-first process. This setup helps separate the effects of the advanced capacitor process from the InGaZnO performance.
Power Tips #136: Design an active clamp circuit for rectifiers at a high switching frequency
In vehicle electrical systems, a high- to low-voltage DC/DC converter is an electronic device that converts the DC from the vehicle’s high-voltage (400 V or 800 V) battery to a lower DC voltage (12 V). These converters can be unidirectional or bidirectional. Power levels from 1 kW to 3 kW are typical, with systems requiring components rated at 650 V to 1,200 V for the converter’s high-voltage power net (primary side) and at least 60 V on the 12-V power net (secondary side).
The need for greater power density and a smaller powertrain has pushed the switching frequencies of power components to several hundred kilohertz in order to shrink the size of magnetic components. The miniaturization of a high- to low-voltage DC/DC converter exposes many issues that are less important at lower switching frequencies, such as electromagnetic compatibility (EMC), thermal dissipation, and active clamping for metal-oxide semiconductor field-effect transistors (MOSFETs). In this power tip, I will discuss the design of clamping circuits for synchronous rectifier MOSFETs at a high switching frequency.
Traditional active clamp
The phase-shifted full bridge (PSFB) shown in Figure 1 is a popular topology in high- to low-voltage DC/DC applications because it can achieve soft switching to increase converter efficiency. But you can still expect to see high voltage stress on the synchronous rectifier, as its parasitic capacitance resonates with the transformer leakage inductance. The voltage stress of the rectifier could be as high as Equation 1:
Vds_max = 2VIN x (Ns/Np) (1)
where Np and Ns are the transformer’s primary and secondary windings, respectively.
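As a quick worked example of Equation 1, here’s a minimal Python sketch. The 400-V input matches the test conditions cited later in this article, but the 16:1 turns ratio is purely an assumed value for illustration; the reference design’s actual ratio isn’t stated here:

```python
# Worked example of Equation 1 with an assumed transformer turns ratio.
V_IN = 400.0       # primary-side input voltage (V)
NP, NS = 16, 1     # assumed primary:secondary turns (illustrative only)

vds_max = 2 * V_IN * (NS / NP)   # Equation 1
print(f"Un-clamped rectifier voltage stress could reach {vds_max:.0f} V")  # 50 V
```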
Considering the power level of a high- to low-voltage DC/DC converter and the power losses of a resistor-capacitor-diode snubber [1], designers often use active clamp circuits for synchronous rectifier MOSFETs. Figure 1 shows the typical circuits.
Figure 1 Traditional active clamp circuit for PSFB synchronous rectifier MOSFETs. Source: Texas Instruments
In this schematic, you can see the P-channel MOSFET (PMOS) Q9 and the snubber capacitor, which are the main parts of the active clamp circuit. One terminal of the snubber capacitor connects to the output choke, and the source of the PMOS connects to ground. In a traditional active clamp circuit for a PSFB, synchronous rectifier MOSFETs Q5 and Q7 share the same clamp scheme; so do Q6 and Q8. Each time the synchronous rectifier MOSFETs turn off, the PMOS turns on after an appropriate delay.
Figure 2 shows the control scheme of the PSFB and active clamp. Note that the switching frequency of the PMOS is double fsw.
Figure 2 Control scheme of active clamp PMOS Q9, where the switching frequency of the PMOS is double fsw. Source: Texas Instruments
Evaluating active clamp loss
You can use Equation 2, Equation 3, Equation 4, Equation 5, and Equation 6 to evaluate the loss of the active clamp PMOS. Apart from Pon_state, all of the other losses are proportional to fsw. When the switching frequency of the PMOS doubles, the loss doubles, so you will need to resolve the PMOS thermal issue. The thermal issue becomes even worse when pushing fsw higher to meet the need for miniaturization.
Pon_state = Irms² x Rdson (2)
Pturn_on = 0.5 x Vds x Ion x ton x fsw (3)
Pturn_off = 0.5 x Vds x Ioff x toff x fsw (4)
Pdrive = Vdrv x Qg x fsw (5)
Pdiode = Isnubber x Vsd x td x fsw (6)
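To make the frequency dependence concrete, here’s a minimal Python sketch of Equations 2 through 6. Every numeric value in it (RMS current, Rdson, switching times, gate charge, and so on) is a placeholder assumption for illustration, not data from the reference design; substitute real datasheet and measured figures for an actual part:

```python
# Active clamp PMOS loss estimate per Equations 2-6.
# All component/operating values are illustrative placeholders, not design data.

def clamp_pmos_loss(f_pmos_hz, i_rms=1.0, r_dson=0.02, v_ds=40.0,
                    i_on=2.0, t_on=20e-9, i_off=2.0, t_off=20e-9,
                    v_drv=10.0, q_g=20e-9, i_snubber=2.0, v_sd=0.7, t_d=50e-9):
    p_on_state = i_rms**2 * r_dson                        # Eq. 2: conduction (fsw-independent)
    p_turn_on  = 0.5 * v_ds * i_on  * t_on  * f_pmos_hz   # Eq. 3: turn-on switching loss
    p_turn_off = 0.5 * v_ds * i_off * t_off * f_pmos_hz   # Eq. 4: turn-off switching loss
    p_drive    = v_drv * q_g * f_pmos_hz                  # Eq. 5: gate-drive loss
    p_diode    = i_snubber * v_sd * t_d * f_pmos_hz       # Eq. 6: body-diode conduction loss
    return p_on_state + p_turn_on + p_turn_off + p_drive + p_diode

fsw = 500e3  # converter switching frequency (Hz)
# In the traditional scheme the single clamp PMOS switches at 2 x fsw,
# so every term except Pon_state doubles relative to a PMOS switched at fsw.
print(f"PMOS loss at fsw:     {clamp_pmos_loss(fsw):.3f} W")
print(f"PMOS loss at 2 x fsw: {clamp_pmos_loss(2 * fsw):.3f} W")
```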
The proposed active clamp
So, what can you do? Select a PMOS with a better figure of merit (FOM), or choose a thermal grease with a higher conductivity coefficient? Both help, but the heat generated by the active clamp is still concentrated in one device, which makes the problem hard to resolve. Can the heat be divided among several parts? A feasible way is to use two active clamp circuits and connect the snubber capacitor terminals to the switching nodes of the secondary legs, as Figure 3 shows. Then Q11 turns on only after Q5 and Q7 turn off, and Q10 turns on only after Q6 and Q8 turn off. Figure 4 shows the control scheme of the PSFB and proposed active clamp.
Figure 3 Proposed active clamp circuit for PSFB synchronous rectifier MOSFETs. Source: Texas Instruments
Figure 4 Control scheme of the PSFB and proposed active clamp. Source: Texas Instruments
When Q5 and Q7 turn off, Q6 and Q8 are still on. So, you can locate the clamp loops for Q5 and Q7, as indicated by the green arrows in Figure 3. The switching frequencies of Q10 and Q11 are both fsw, not double fsw.
So, according to Equation 2 through Equation 6, Pon_state of each PMOS will be one quarter of the original value, while Pturn_on, Pturn_off, Pdrive, and Pdiode will each be one half of the original. The proposed method thus divides the clamp-circuit loss into two parts, each even less than half of the original, which makes the thermal issue easier to manage.
Let’s come back to the clamp loop. Q5 has a larger loop than Q7, and the same is true for Q6 versus Q8. You will need to pay attention to the layout of the synchronous rectifiers in order to minimize the clamp loops for Q5 and Q6.
Proposed active clamp performance
Figure 5 and Figure 6 show test results from the High-Voltage to Low-Voltage DC/DC Converter Reference Design with GaN HEMT from Texas Instruments, which uses the proposed active clamp circuit operating at a 200-kHz switching frequency. Figure 5 shows the voltage stress of the rectifier.
Figure 5 Voltage stress of the rectifier where CH1 is the Vgs of the rectifier, CH2 is the Vds of the rectifier, CH3 is the voltage for the primary transformer winding, and CH4 is the current for the primary transformer winding. Source: Texas Instruments
The maximum voltage stress of the rectifier is below 45 V at 400 VIN, 13.5 VOUT, and 250-A IOUT. The maximum temperature of the active clamp circuit is 46.6°C at 400 VIN, 13.5 VOUT, and 180-A IOUT [2], as shown in Figure 6. So, the proposed control scheme achieves quite good thermal performance for the clamping MOSFET.
Figure 6 Thermal performance of the active clamp circuit where the maximum temperature of the active clamp circuit is 46.6°C at 400 VIN, 13.5 VOUT, 180-A IOUT. Source: Texas Instruments
500-kHz active clamp sans thermal issues
When increasing the switching frequency from 200 kHz to 500 kHz, the transformer volume shrinks by about 45% [2], which helps increase the power density of the high- to low-voltage DC/DC converter. With the proposed method, BOM cost increases slightly, but designers can run the active clamp at a 500-kHz switching frequency without thermal issues, leading to improved performance. And because the pulsed drain-current rating of a PMOS is far lower than that of an NMOS, designers can also use an NMOS in the active clamp, with an isolated driver and bias power supply, if necessary.
Daniel Gao works as a system engineer in the Power Supply Design Services team at Texas Instruments, where he focuses on developing OBC and DC/DC converters. He received the M.S. degree from Central South University in 2010.
Related Content
- Power Tips #135: Control scheme of a bidirectional CLLLC resonant converter in an ESS
- Power Tips #134: Don’t switch the hard way; achieve ZVS with a PWM full bridge
- Power Tips #133: Measuring the total leakage inductance in a TLVR to optimize performance
- Precision clamp protects data logger
- Inverted bipolar transistor doubles as a signal clamp
- High-speed clamp functions as pulse-forming circuit
References
- Betten, John. 2016. “Power Tips: Calculate an R-C Snubber in Seven Steps.” TI E2E design support forums technical article, May 2016.
- “High-Voltage to Low-Voltage DC-DC Converter Reference Design with GaN HEMT.” 2024. Texas Instruments reference design test report No. PMP41078, literature No. TIDT403A. Accessed Dec. 16, 2024.
Symmetrical 10 V, 1.5 A PWM-programmed power supply
Variable regulated power supplies are handy tools found on well-equipped electronics lab benches. The symmetrical varieties, which produce equal voltage outputs of opposite polarity, are even more so. Figure 1’s version of a symmetrical 0 V to 10 V, 1.5 A lab supply implements an extra handy trick: computer programming via a single PWM output.
Figure 1 LM337, LM317, and CD4053 join forces in a symmetrical 0 to ±10 V PWM-programmed power supply.
Wow the engineering world with your unique design: Design Ideas Submission Guide
In Figure 1’s PWM DAC interface, SPDT switches U1a and U1b accept a 10-kHz, 5-V PWM signal to generate a +1.25 V to -8.75 V “ADJ” control signal on C2 for the U2 regulator. Vout = ADJ – 1.25 V, so ADJ = 1.25 V forces an output of zero and U3 follows along. At the other end of the span, ADJ = -8.75 V drives the outputs to the full 10 V. Current source Q1 reduces zero-offset error by (mostly) nulling out the 65-µA (typical) 337 ADJ pin bias current.
Inverter switch U1c provides active ripple filtering via analog subtraction. Other PWM frequencies can be accommodated by proportional scaling of filter caps C1 and C2.
The feedback loop established by R2 and R3 makes the 10 V full-scale outputs proportional to U2’s precision internal reference. This makes output voltage an accurate function of PWM duty factor DF with functionality (DF ranging from 0 to 1) given by…
Vout (+/-) = +/-1.25 DF / (1 – 0.875 DF)
…as graphed in Figure 2.
Figure 2 Vout (0 to -10 V and +10 V) versus PWM duty factor, DF (0 to 1). The black curve is LM317’s Vout = 1.25 DF / (1 – 0.875 DF). The red curve is LM337’s Vout = -1.25 DF / (1 – 0.875 DF).
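For readers who want to reproduce Figure 2 numerically, here’s a minimal Python sketch of the transfer function above; it is simply a transcription of the formula, nothing more:

```python
# Output-voltage magnitude vs. PWM duty factor, per Vout = 1.25*DF / (1 - 0.875*DF).
def vout_from_df(df):
    """Magnitude of either supply output (V) for duty factor 0 <= df <= 1."""
    return 1.25 * df / (1 - 0.875 * df)

for df in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"DF = {df:.2f} -> Vout = +/-{vout_from_df(df):.2f} V")
```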
Figure 3 plots the inverse of Figure 2, yielding the PWM DF required for any desired Vout.
Figure 3 The PWM DF required for any desired Vout: PWM DF = |Vout| / (0.875 |Vout| + 1.25).
For the corresponding 8-bit PWM setting:
Dbyte = 255 DF = 255 |Vout| / (0.875 |Vout| + 1.25)
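Going the other way, a companion sketch computes the duty factor and corresponding 8-bit register value for a desired output; the 0-to-255 register simply mirrors the 8-bit PWM resolution assumed above:

```python
# PWM duty factor and 8-bit setting for a desired output voltage,
# per DF = |Vout| / (0.875*|Vout| + 1.25) and Dbyte = 255*DF.
def df_from_vout(vout):
    return abs(vout) / (0.875 * abs(vout) + 1.25)

def dbyte_from_vout(vout):
    return round(255 * df_from_vout(vout))

for v in (1.0, 5.0, 10.0):
    print(f"Vout = {v:4.1f} V -> DF = {df_from_vout(v):.3f}, Dbyte = {dbyte_from_vout(v)}")
```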
Actually, as shown in Figure 4, the Vout-to-DF relation is surprisingly close to logarithmic.
Figure 4 DF (x-axis) versus Vout (y-axis) is (fairly) close to a logarithmic function, which makes good use of limited 8-bit PWM resolution.
The supply rail inputs must be at least 13 V to accommodate U2’s and U3’s minimum headroom requirement. The negative input is limited to a 15 V max in recognition of U1’s 20-V absmax rating.
U2’s 0 to -10 V output is inverted by the Q2, Q3, and Q4 differential amplifier via feedback to U3’s ADJ input, forcing U3 to track U2. This results in the symmetrical outputs plotted in Figure 2. Q5 provides U3’s minimum output load while R6 does that job for U2.
U2 and U3 must of course be adequately heatsunk as dictated by their power dissipation which is equal to output current multiplied by the Vin to Vout differential. Maximum heating (up to 20 W) therefore occurs at high current and low voltage.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Synthesize precision bipolar Dpot rheostats
- Cancel PWM DAC ripple with analog subtraction—revisited
- Fast-settling synchronous-PWM-DAC filter has almost no ripple
- Cancel PWM DAC ripple and power supply noise
The 2-nm process node and Samsung’s foundry crossroads
Despite being the first to adopt gate-all-around (GAA) technology, Samsung has struggled with commercialization and yield issues, and its foundry market share has dropped below 10%. Han Jin-Man, recently named president of Samsung’s foundry operations, aims to change that with a major strategic shift in the execution and production of the 2-nm manufacturing node.
Previously, Han oversaw the DRAM, flash memory design, and SSD development teams at Samsung. The Korean chipmaker has also named process development specialist Nam Seok-Woo as the chief technology officer (CTO) of the foundry division. It’s a major reshuffle at the world’s second-largest chip contract manufacturer, currently trying to close the gap with market leader TSMC, which has a market share of nearly 65%.
Figure 1 As with Intel, there is now industry chatter about a potential spinoff of Samsung’s foundry business. Source: Samsung
The change of guard at Samsung’s foundry business comes as it transitions to the next-generation foundry process: the 2 nm chip manufacturing process. The shift to the 2-nm process node entails a significant overhaul of transistor structures, which inevitably leads to higher development costs and greater design complexity.
While Samsung’s foundry business has never been in a position to challenge TSMC, it stunned the market by beating TSMC to the launch of GAA technology as a replacement for the FinFET manufacturing technique. While this significantly raised Samsung’s foundry profile, commercializing this highly complex technology proved a tough nut to crack.
The GAA setback
In 2022, Samsung made waves by incorporating GAA transistors at its 3-nm process node; meanwhile, TSMC decided to implement GAA technology at its upcoming 2-nm process node. In retrospect, TSMC’s decision seems to have been the right call, because Samsung faced difficulties achieving high yields due to GAA’s technical complexity.
In GAA technology, transistors use vertically stacked horizontal nanosheets, enabling the gate to cover the channel on all four sides. That reduces current leakage and improves drive current, resulting in chips with enhanced performance and better energy efficiency.
Figure 2 While a gate covers three sides of the channel in the FinFET structure, in GAA, the gate surrounds all four sides of a cylindrical channel, which is the passage for current flow. Source: Samsung
Now, Samsung is determined to recover from the setbacks it experienced at the 3-nm process node. It begins with improving yield and better handling GAA’s technical complexity. “We will focus on dramatically improving the yield of the 2-nm manufacturing process,” said Han Jin-Man, Samsung’s new foundry chief.
It’s important to note that TSMC is expected to achieve over 60% yield in trial production and enter mass production of 2-nm chips ahead of Samsung. Samsung’s 2-nm yield, on the other hand, has been reported in the 20% to 30% range.
Moreover, according to industry sources, TSMC plans to start trial production of its 2-nm process node in April 2025 and mass production in the second half of 2025. On the other hand, Samsung is expected to start test production in the first half of 2025 and mass production in the fourth quarter of 2025.
High time at Samsung
Samsung, already under immense pressure after ceding leadership of high-bandwidth memory (HBM) products to SK hynix, is now in a renewed spotlight for its issues regarding the semiconductor contract manufacturing business. Intel’s troubles in this area have led to the ouster of its CEO and an uncertain future.
Second-best status rarely sparks envy in the semiconductor industry, and Samsung isn’t known to be shy about fighting back. Its quest for relevance in the 2-nm space has already been rewarded with a manufacturing order from Japanese AI chip startup Preferred Networks (PFN), whose investors include Toyota, NTT, and FANUC.
Besides manufacturing 2-nm chips, Samsung will also provide its 2.5D packaging technology—I-Cube S—to enable PFN to integrate multiple chips into a single package. That marks a significant breakthrough, and if Samsung can secure even one major TSMC customer, it will make a significant impact in the contract foundry business.
Still, catching up with TSMC remains a daunting challenge.
Related Content
- Samsung Readies Gate-All-Around Ramp
- What GAA and HBM restrictions mean for South Korea
- Samsung unveils plans for 2-nm and 1.4-nm process nodes
- All you need to know about GAA chip manufacturing process
- Gate-All-Around (GAA): The Ultimate Solution to Reduce Leakage
2025: A technology forecast for the year ahead
As has been the case the last couple of years, we’re once again flip-flopping what might otherwise seemingly be the logical ordering of this and its companion 2024 look-back piece. I’m writing this 2025 look-ahead in November for December publication, with the 2024 revisit to follow, targeting a January 2025 EDN unveil. While a lot can happen between now and the end of 2024, potentially affecting my 2025 forecasting in the process, this reordering also means that my 2024 retrospective will be more comprehensive than might otherwise be the case.
That all said, I did intentionally wait until after the November 5 United States elections to begin writing this piece. Speaking of which…
The 2024 United States election (outcome, that is)
Yes, I know I covered this same topic a year ago. But that was pre-election. Now, we know that there’s been a dominant political party transition both in the Executive Branch (the President and Vice President) and the Legislative Branch (the Senate, to be specific). And the other half of the Legislative Branch, the House of Representatives, will retain a (thin) Republican Party ongoing majority, final House results having been determined just as I type these words a bit more than a week post-election. As I wrote a year ago:
Trump aspires to fundamentally transform the U.S. government if he and his allies return to power in the executive branch, moves which would undoubtedly also have myriad impacts big and small on technology and broader economies around the world.
That said, a year ago I also wrote:
I have not (and will not) reveal personal opinions on any of this.
and I will be “staying the course” this year. So then why do I mention it at all? Another requote:
Americans are accused of inappropriately acting as if their country and its citizens are the “center of the world”. That said, the United States’ policies, economy, events, and trends inarguably do notably affect those of its allies, foes and other countries and entities, as well as the world at large, which is why I’m including this particular entry in my list.
Given that I’m clearly not going to be diving into other hot-button topics like immigration here, what are some of the potential technology impacts to come in 2025 and beyond? Glad you asked. Here goes, solely in the order in which they’ve streamed out of my noggin:
- Network Neutrality: Support for net neutrality, which Wikipedia describes as “the principle that Internet service providers (ISPs) must treat all Internet communications equally, offering users and online content providers consistent transfer rates regardless of content, website, platform, application, type of equipment, source address, destination address, or method of communication (i.e., without price discrimination)” predictably waxes or wanes depending on which US political party—Democratic or Republican, respectively —is in power at any point in time. As such, it’s likely that any momentum that’s built up toward ISP regulation over the past four years will fade and likely even reverse course at the Federal Communications Commission (FCC) in the four-year Presidential term to come, along with course reversals of other technology issues over which the FCC holds responsibility. Note that the “ISP” acronym, traditionally applied to copper, coax and fiber wired Internet suppliers, has now expanded to include cellular and satellite service providers, too.
- Tariffs: Wikipedia defines tariffs on imports, which is what I’m primarily focusing on here, as “designed to raise the price of imported goods and services to discourage consumption. The intention is for citizens to buy local products instead, thereby stimulating their country’s economy. Tariffs therefore provide an incentive to develop production and replace imports with domestic products. Tariffs are meant to reduce pressure from foreign competition and reduce the trade deficit.” The Trump administration, during his first term from 2017-2021, activated import tariffs on countries—notably China—and products determined to be running a trade surplus with the United States (tariffs which, in fairness, the subsequent Biden administration kept in place in some cases and to some degrees). And Trump has emphatically stated his intent to redouble his efforts here in the coming term, ranging up to 60%. The potential resultant “squeeze” problem for US domestic suppliers is multifold:
- Tariff-penalized countries are likely to respond in kind with import tariffs of their own, hampering US companies’ abilities to compete in broader global markets
- Those countries are likely to also tariff-tax exports (to the United States, specifically) of both product “building blocks” designed and manufactured outside the US—such as semiconductors and lithium batteries—and products built by subcontractors in other countries—like smartphones.
- And broader supply-constraint retaliation, beyond fiscal encumbrance, is also likely to occur in areas where other countries already have global market share dominance due to supply abundance and high-volume manufacturing capacity: China once again, with solar cells, for example, along with rare earth minerals.
Perhaps this is why Wikipedia also notes that “There is near unanimous consensus among economists that tariffs are self-defeating and have a negative effect on economic growth and economic welfare, while free trade and the reduction of trade barriers has a positive effect on economic growth…Often intended to protect specific industries, tariffs can end up backfiring and harming the industries they were intended to protect through rising input costs and retaliatory tariffs.” Much will likely depend on if the tariffs to be applied will be selective and scalpel-like versus broadly wielded as blunt instruments.
- Elon Musk (and his various companies): Musk spent an estimated $200M financially backing Trump’s campaign, not to mention the multiple rallies he spoke at and the formidable virtual megaphone of his numerous posts on X, the social media site formerly known as Twitter, which he owns. A week post-election, the return on his investment is already starting to become evident. What forms could it take?
- Electric vehicle (EV) manufacturer Tesla is a particularly lucrative revenue and profit generator for Musk. On one hand, Trump’s stated plan to eliminate EV rebates included in the Biden administration’s Inflation Reduction Act, as part of Trump’s broader wind-down of environmental regulations and increased protectionism for domestic suppliers of petrochemical-powered autos, might be interpreted as a bad thing for Tesla. On the other, already-mentioned planned tariffs on China, whose EV manufacturers are Tesla’s primary competitors at the moment, will likely be a good thing.
- Further to the Tesla prognostication, the company has stated its aspirations to aggressively move into the robotaxi business, which necessitates full vehicle autonomy versus today’s advanced driver-assistance systems (ADAS). Fast-tracking National Highway Traffic Safety Administration (NHTSA) and other agency approvals would be beneficial in actualizing Tesla’s robocar aspirations, as would more broadly be Department of Justice dismissals of existing and pending “Autopilot”-related lawsuits (along with those to inevitably come).
- Starlink is another increasingly lucrative corporate entity. Regarding the low Earth orbit (LEO) satellite constellation (and the current Falcon 9 rockets that launch new members of it into orbit), Musk would undoubtedly welcome favorable FCC treatment both standalone and relative to satellite, cellular, and other scarce-spectrum competitors. And then there’s the next-generation Starship, queuing up for its pending sixth test launch as I type these words (and potentially launched by the time you read them). In the near term, Starlink and Musk aspire to use Starship to loft larger per-launch LEO payloads into orbit. And down the road, NASA and other large-payoff contracts also await.
- And then there’s the Neuralink brain computer interface and the Food and Drug Administration (FDA)…
- Asia-based foundries: Taiwan, the birthplace of TSMC, and South Korea, headquarters of Samsung, are among the world’s largest semiconductor suppliers. Of particular note, as foundries they manufacture ICs for fabless chip companies, large and small alike. And although both companies are aggressively expanding their fab networks elsewhere in the world, their original home-country locations remain critical to their ongoing viability. Unfortunately, those locations are also rife with ongoing political tensions and invasion threats, whether from the People’s Republic of China (Taiwan) or North Korea (South Korea). All of which will make the Trump administration’s upcoming actions critical. Last summer, during an interview with Bloomberg, then-candidate Trump indicated that Taiwan should be paying the United States to defend it, that in this regard the US was “no different than an insurance company”, and that Taiwan “doesn’t give us anything”, accusing it of taking “almost 100%” of the US’s semiconductor industry. And during his first term, Trump also cultivated a relationship with North Korean dictator Kim Jong Un.
- Ongoing CHIPS funding: Shortly before the election, and in seeming contradiction to Republican party leader Trump’s earlier noted expressed regret about lost US semiconductor dominance, then (and likely again) House of Representatives Speaker (and fellow Republican) Mike Johnson indicated that the legislative body he led would likely repeal the $280B CHIPS and Science Act funding bill if his party again won a majority in Congress. Shortly thereafter, he backpedaled, switching his wording choice from “repeal” to “streamline”. Which will it actually be? We’ll have to wait and see.
- DJI and TikTok: Back in September, I mentioned that the US government was considering banning ongoing sales of DJI drones, citing the company’s China headquarters and claimed links to that country’s military and other government entities, resulting in US security concerns. Going forward, given Trump’s longstanding economic-and-other animosity toward China, it wouldn’t surprise me to see the proposed ban become a reality, which US-based drone competitors like Skydio would seemingly welcome (no matter that, to my earlier comments, China is already proactively reacting to the political pressure by cutting off battery shipments to Skydio). Conversely, although Trump championed a proposed ban of social media platform TikTok (a far more obvious security concern, IMHO) at the end of his first term, he’s now seemingly doing an about-face.
- Etc.: What have I overlooked or left on the cutting room floor in the interest of reasonable wordcount constraint, folks? Sound off in the comments.
Ongoing unpredictable geopolitical tensions
This was the first topic on my 2024 look-ahead list. And I’m mentioning it here just to reassure you that it hasn’t fallen off my radar. But as for predictions? Aside from comments I’ve already made regarding semiconductor powerhouses Taiwan and S. Korea, along with up-and-comer China, I’m going to avoid prognosticating any further on Asia, or on Europe or the Middle East, for that matter. Instead, I’ll just reiterate and slightly update two comments I made a year ago:
I’m not going to attempt to hazard a guess as to how the situations in Europe, Asia, and the Middle East (and anywhere else where conflict might flare up between now and the end of 2024, for that matter) will play out in the year to come.
and, regarding the US election:
Who has ended up in power, not only in the presidency but also controlling both branches of Congress, and not only at the federal but also states’ levels, will heavily influence other issues, such as support (or not) for Ukraine, Taiwan, and Israel, and sanctions and other policies against Russia and China.
That’s all, at least on this topic, folks! To clarify, if necessary, please don’t incorrectly interpret my reduced comparative wordcount for this section versus the previous one as indicative of perceived lower importance in my mind, or heaven forbid, of “inappropriately acting as if my country and its citizens are the center of the world,” to requote an earlier…umm…requote. It’s just that a year and a month after the October 7, 2023 attack that initiated the latest iteration of armed conflict between Israel and Iran’s Hamas and Hezbollah proxies, nearly three years into Russia’s latest and most significant occupation of Ukraine sovereign territory, and a few weeks shy of three quarters of a century (as I write these words) since the Republic of China (ROC) fled the mainland for the island of Taiwan…I’ve given up trying to figure out the end game for any of this mess. And echoing the Serenity Prayer, I realize there’s only so much that I can personally do about it. Speaking of prayer, though, one thing I can do is to pray for peace. So, I shall, as ceaselessly as possible. I welcome any of you out there who are similarly inclined to join me.
AI: Will transformation counteract diminishing ROI?
In next month’s 2024 look-back summary, I plan to dive into detail about why I feel the bloom is starting to fade from the rose of AI. Briefly, the ever-increasing resource investments:
- Processing hardware, both for training (in particular) and subsequent inference
- Memory and mass storage
- Interconnect and other system infrastructure
- Money to pay for all this stuff
- And energy and water (with associated environmental impacts) to power and keep cool all this stuff
are translating into diminishing capability, accuracy and other improvement “returns” on these investments, most recently noted in coverage appearing as I was preparing to write this section:
OpenAI’s next flagship model might not represent as big a leap forward as its predecessors, according to a new report in The Information. Employees who tested the new model, code-named Orion, reportedly found that even though its performance exceeds OpenAI’s existing models, there was less improvement than they’d seen in the jump from GPT-3 to GPT-4. In other words, the rate of improvement seems to be slowing down. In fact, Orion might not be reliably better than previous models in some areas, such as coding.
What can be done to re-boost the improvement trajectory seen initially? Thanks for asking:
- Synthetic data: This one is, I’ll admit upfront, tricky. Conceptually, it would seem, the more training data you feed a model with, the more robust its resulting inference performance will be. And such an approach is particularly appealing when, for example, databases of real-life images of various objects are absent perspectives from certain vantage points, of certain colors and shapes, and captured under certain lighting conditions. Similarly, a training algorithm’s ability to access the entirety of the world’s literature is practically limited by copyright constraints. But that said, keep in mind that both the quantity and quality of training data are critical. A synthetic image of an object that has notable flaws compared to its real-life counterpart, for example, would be counterproductive. Same goes for the slang and gibberish (not to mention extremist language and other garbage) that pervades social media nowadays. And while on the one hand you want your training data set to be comprehensive (to prevent bias, for example), proportionality to real life is also important in guiding the model to the most likely subsequent inference interpretation of an input. After all, there’s a fundamental reason why pruning to reduce sparsity is key to optimizing both model size and accuracy.
- Multimodal models: Large language models (LLMs), which I rightly showcased at the very top of my 2023 retrospective list, are increasingly impressive in their capabilities. But they’re also, admittedly somewhat simplistically speaking, “one-trick ponies”. As their name implies, they’re language-based from both input (typed) and output (displayed) standpoints. If you want to speak to one, you need to first run the audio through a separate speech-to-text model (or standalone algorithm); the same goes for spitting a response back at you through a set of speakers. Analogies to images and video clips, and other sensory and output data, are apt. Granted, this approach is at least somewhat analogous to human beings’ cerebral cortexes, which are roughly subdivided into areas optimized for language, vision and other processing functions. Still, given that humans are fundamentally multisensory in both input and output schema, any AI model that undershoots this reality will be inherently limited. That’s where newer multimodal models come in. Vision language models (VLMs), for example, augment language with equally innate still and video image perception and generation capabilities. And large multimodal models (LMMs) are even more input- and output-diverse. Think of them as the deep learning analogies to the legacy sensor fusion techniques applied to traditional processing algorithms, which I ironically alluded to in my 2022 retrospective.
- Continued (albeit modified) transition from the cloud to the edge: Reiterating what I initially wrote a couple of years ago:
One common way to reduce a device’s bill-of-materials (BOM) cost is to offload as much of the total required processing, memory, and other resources as possible to other connected devices. A “cloud” server is one common approach, but it has notable downsides that also beg for consideration from the device supplier and purchaser alike, such as:
- Sending raw data up to the “cloud” for processing, with the server subsequently sending results back to the device, can involve substantial roundtrip latency. There’s a reason why self-driving vehicles do all their processing locally, for example!
- Sending data up to the “cloud” can also engender privacy concerns, depending on exactly what that data is (consider a “baby cam”, for example) and how well (or not) the data is encrypted and otherwise protected from unintended access by others.
- Taking latency to the extreme, if the “cloud” connection goes down, the device can turn into a paperweight, and
- You’re trading a one-time fixed BOM cost for ongoing variable “cloud” costs, encompassing both server usage fees (think AWS, for example) and connectivity bandwidth expenses. Both of those costs also scale with both the number of customers and the per-customer amount of use (both of each device and cumulatively for all devices owned by each customer).
- Another popular BOM-slimming approach involves leveraging a wired or (more commonly) wireless tethered local device with abundant processing, storage, imaging, and other resources, such as a smartphone or tablet. This technique has the convenient advantage of employing a device already in the consumer’s possession, which he or she has already paid for, and for which he or she will also bankroll any remaining “cloud” processing bandwidth involved in implementing the complete solution. The latency is also notably less than with the pure “cloud” approach, privacy worries are lessened if not fully alleviated, and although the smartphone’s connection to the “cloud” may periodically go down, the connection between it and the device generally remains intact.
- For these and other reasons, in recent years I’ve seen a gradually accelerating transition from cloud- to edge-based processing architectures. That said, an in-parallel transition from traditionally coded algorithms to deep learning-based implementations has also occurred. And of late, this latter shift has complicated the former cloud-to-edge move, due specifically to the aforementioned high processing, memory, and mass storage requirements required to run inference on locally housed deep learning models. New system architecture variants to address both transitions’ merits are therefore gaining prominence. In one, the hybrid exemplified by Apple Intelligence along with Google’s Pixel phones’ conceptually equivalent approach, a base level of inference occurs locally, with cloud resources tapped as-needed for beefier-function requirements. And in the other, whereas “edge” might have previously meant a network of “smart” standalone edge cameras in a store, now it’s a network of less “smart” cameras all connected to an edge server at each store (still, versus a “cloud” server at retail headquarters).
- Deep learning architectures beyond transformers (and deep learning models beyond LLMs and their variants): The transformer, initially developed for language translation, quickly expanded into broader natural language processing and now also finds use for audio, still and video images, and various other applications. Similarly, usage of the LLM and its previously mentioned multimodal relatives is pervasive nowadays. However, when Yann LeCun, one of the “godfathers” of AI (and chief scientist at Meta), suggested earlier this year that the next generation of researchers should look beyond today’s LLM approaches and their associated limitations, accompanied by Meta’s public rollout of one such next-generation approach, and then more recently stated that today’s AI is as “dumb as a cat”, it caught a lot of industry attention. A recently published arXiv paper goes into detail on transformers’ limitations, along with the inherent strengths and shortcomings, current status and evolution potential of other “novel, alternative potentially disruptive approaches”. And I also commend to your attention a recent episode of Nova on AI. The entire near-hour is fascinating, and it specifically showcases an emerging revolutionary architecture alternative called the liquid neural network.
- New hardware approaches: Today’s various convolutional neural network (CNN), recurrent neural network (RNN) and transformer-based deep learning network architectures are well-matched to the GPU-derived massively parallel processing hardware architectures championed for training by companies such as NVIDIA, today’s dominant market leader (and also one of the leading suppliers for inference processing, although architectural diversity is more common there). That said, any one chip supplier can only satisfy a subset of total market demand, and the resultant de facto monopoly also leads to higher prices, all of which act to constrain AI’s evolutionary cadence. Meanwhile, the emerging revolutionary network architectures and models I’ve just discussed, should they gain traction, will also open the doors to new hardware approaches, along with new companies supplying products that implement those approaches. To be clear, I don’t envision this emergent hardware, or the new network architectures and models it supports, becoming dominant in 2025 (or, realistically, even before the end of this decade). But I feel strongly that such revolutionary transformation is essential to, as I said earlier, re-boosting AI’s initial trajectory.
Merry Christmas (and broader happy holidays) to all, and to all a good night
I wrote the following words a year ago and couldn’t think of anything better (or even different) to say a year later, given my apparent constancy of emotion, thought and resultant output. So, with upfront apologies for the repetition, a reflection of my ongoing sentiment, not laziness:
I’ll close with a thank-you to all of you for your encouragement, candid feedback and other manifestations of support again this year, which have enabled me to once again derive an honest income from one of the most enjoyable hobbies I could imagine: playing with and writing about various tech “toys” and the foundation technologies on which they’re based. I hope that the end of 2024 finds you and yours in good health and happiness, and I wish you even more abundance in all its myriad forms in the year to come. Let there be Peace on Earth.
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- 2024: A technology forecast for the year ahead
- 2023: A technology forecast for the year ahead
- 2022 tech themes: A look ahead
- A tech look back at 2022: We can’t go back (and why would we want to?)
- A 2021 technology retrospective: Strange days indeed
The post 2025: A technology forecast for the year ahead appeared first on EDN.
To press ON or hold OFF? This does both for AC voltages
On October 14, 2024, a design idea (DI) by Nick Cornford entitled “To press ON or hold OFF? This does both” appeared. It is a very interesting DI for DC voltages, but what about AC voltages?
Wow the engineering world with your unique design: Design Ideas Submission Guide
After reading this DI, I decided to design a circuit with similar operation for AC voltages, since many of our gadgets are connected to 110-V/230-V AC mains. In Figure 1’s circuit, if the single push button SW1 is pressed momentarily once, the mains AC voltage is extended to the output where a gadget is connected. If push button SW1 is pressed for a long time—4 to 5 seconds—power is disconnected. In my opinion, a shiny modern push button looks more attractive and elegant than a toggle switch.
Figure 1 If you press SW1 once, the AC output terminal J2 gets AC supply. If you hold SW1 for a long time, i.e., 4 to 5 seconds, the path from the power supply to terminal J2 gets disconnected. One single pushbutton provides both ON and OFF functions for AC voltage.
In this circuit, mains AC is fed to the output terminal through triac U5, which should be selected according to the voltage and current requirements. When you press SW1 once momentarily, it triggers monostable U2A. Its rising-edge pulse output sets flip-flop U4A. Q2 turns ON and current flows through the input LED of U1. Optotriac U1 conducts, and hence triac U5 also conducts. Thus, the mains voltage is extended to the output terminal.
If you press SW1 for a long time, i.e., 4 to 5 seconds (this time can be adjusted by changing R4 and R5), capacitor C1 charges. When its voltage reaches the reference voltage set by the R4/R5 divider, the comparator U3A output goes HIGH, which resets flip-flop U4A. Thus, the flip-flop output goes LOW, switching Q2 OFF. At this point, no current flows through the LED of U1, so U1 and U5 turn OFF. This way, the mains voltage to the output is disconnected.
When you press SW1, C1 is charged. When SW1 is open, there must be a path to discharge C1 for proper operation of the next cycle; this is done by Q1. When SW1 is open, current flows from C1 through the emitter-base junction of Q1 and R1, so Q1 saturates and discharges C1. When SW1 is pressed, voltage is applied to the base of Q1 via R7, turning Q1 off and allowing C1 to charge. Being based on CMOS ICs, the entire circuit draws very little current.
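For readers who want to tweak the hold-off delay, the underlying relationship is simply an RC charge up to the comparator threshold. The sketch below is a minimal illustration of that math, not a reproduction of Figure 1’s actual values: the charging resistance, C1 value, and R4/R5 divider ratio used here are assumed placeholders.

```python
import math

# Minimal sketch of the hold-off timing (placeholder values, not Figure 1's).
VDD = 5.0              # logic supply, volts
R_CHARGE = 470e3       # assumed resistance in C1's charging path, ohms
C1 = 10e-6             # assumed timing capacitor, farads
THRESHOLD_RATIO = 0.6  # assumed R4/R5 divider ratio: Vref = 0.6 * VDD

def hold_off_time(vdd, v_ref, r_charge, c):
    """Time for C1 (starting from 0 V) to charge through r_charge up to the
    comparator reference: t = -R * C * ln(1 - Vref / VDD)."""
    return -r_charge * c * math.log(1.0 - v_ref / vdd)

t = hold_off_time(VDD, THRESHOLD_RATIO * VDD, R_CHARGE, C1)
print(f"Hold-off time = {t:.1f} s")   # about 4.3 s with these assumed values
```

With these placeholder numbers the delay lands in the 4-to-5-second range described above; raising the threshold (via R4 and R5) or increasing the charging time constant lengthens the hold-off accordingly.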
VDD here is 5 VDC. The VDD and VSS pins of U2, U3, and U4 are not shown in the circuit; they must be wired to the VDD and VSS rails shown. If you want a simpler circuit, the U1/U5 combination can be replaced with a simple relay.
Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.
Related Content
- To press on or hold off? This does both.
- Smart TV power-ON aid
- Latching power switch uses momentary pushbutton
- A new and improved latching power switch
- Latching power switch uses momentary-action pushbutton
The post To press ON or hold OFF? This does both for AC voltages appeared first on EDN.
AI designs and the advent of XPUs
At a time when traditional approaches such as Moore’s Law and process scaling are struggling to keep up with performance demands, XPUs emerge as viable candidates for artificial intelligence (AI) and high-performance computing (HPC) applications.
But what’s an XPU? The broad consensus on its composition calls it the stitching together of CPU, GPU, and memory dies in a single package. Here, X stands for the application-specific units critical for AI infrastructure.
Figure 1 An XPU integrates CPU and GPU in a single package to better serve AI and HPC workloads. Source: Broadcom
An XPU comprises four layers: compute, memory, network I/O, and reliable packaging technology. Industry watchers call the XPU the world’s largest processor. But it must be designed with the right ratio of accelerator, memory, and I/O bandwidth, and it comes with the imperative of direct or indirect memory ownership.
Below is an XPU case study that demonstrates sophisticated integration of compute, memory, and I/O capabilities.
What’s 3.5D and F2F?
2.5D integration, which involves placing multiple chiplets and high-bandwidth memory (HBM) modules on an interposer, initially served AI workloads well. However, increasingly complex LLMs and their training necessitate 3D silicon stacking for more powerful silicon devices. Next, 3.5D integration, which combines 3D silicon stacking with 2.5D packaging, takes silicon devices to the next level with the advent of XPUs.
That’s what Broadcom’s XDSiP claims to achieve by integrating more than 6,000 mm² of silicon and up to 12 HBM stacks in a single package. And it does that with a face-to-face (F2F) device that accomplishes significant improvements in interconnect density and power efficiency compared to the face-to-back (F2B) approach.
While F2B packaging is a 3D integration technique that connects the top metal of one die to the backside of another die, an F2F connection joins two dies through their upper-level metal interconnects without a thinning step. In other words, F2F stacking directly connects the top metal layers of the top and bottom dies. That provides a dense, reliable connection with minimal electrical interference and exceptional mechanical strength.
Figure 2 The F2F XPU integrates four compute dies with six HBM dies using 3D die stacking for power, clock, and signal interconnects. Source: Broadcom
Broadcom’s F2F 3.5D XPU integrates four compute dies, one I/O die, and six HBM modules while utilizing TSMC’s chip-on-wafer-on-substrate (CoWoS) advanced packaging technology. It claims to minimize latency between compute, memory, and I/O components within the 3D stack while achieving a 7x increase in signal density between stacked dies compared to F2B technology.
“Advanced packaging is critical for next-generation XPU clusters as we hit the limits of Moore’s Law,” said Frank Ostojic, senior VP and GM of the ASIC Products Division at Broadcom. “By stacking chip components vertically, Broadcom’s 3.5D platform enables chip designers to pair the right fabrication processes for each component while shrinking the interposer and package size, leading to significant improvements in performance, efficiency, and cost.”
The XPU nomenclature
Intel’s ambitious take on XPUs hasn’t gone far, as its Falcon Shores platform is no longer proceeding. On the other hand, AMD’s CPU-GPU combo has been making inroads during the past couple of years, though AMD calls it an accelerated processing unit, or APU. The naming difference partly stems from industry nomenclature, in which AI-specific XPUs are called custom AI accelerators. In other words, it’s the custom chip that provides the processing power to drive AI infrastructure.
Figure 3 MI300A integrates CPU and GPU cores on a single package to accelerate the training of the latest AI models. Source: AMD
AMD’s MI300A combines the company’s CDNA 3 GPU cores and x86-based Zen 4 CPU cores with 128 GB of HBM3 memory to handle HPC and AI workloads. El Capitan—a supercomputer housed at Lawrence Livermore National Laboratory—is powered by AMD’s MI300A APUs and is expected to deliver more than two exaflops of double-precision performance when fully deployed.
The AI infrastructure increasingly demands specialized compute accelerators interconnected to form massive clusters. Here, while GPUs have become the de facto hardware, XPUs seem to represent another viable approach for heavy lifting in AI applications.
XPUs are here, and now it’s time for software to catch up and effectively use this brand-new processing venue for AI workloads.
Related Content
- The role of cache in AI processor design
- Top 10 Processors for AI Acceleration at the Endpoint
- Server Processors in the AI Era: Can They Go Greener?
- Four tie-ups uncover the emerging AI chip design models
- Using edge AI processors to boost embedded AI performance
The post AI designs and the advent of XPUs appeared first on EDN.
Expanding output range of step-up converter
This is a real-life quest: How do we increase the output voltage of a step-up converter? If you have unlimited access to the right ICs, you are one lucky dog, but what if you don’t? Or maybe you are limited to a specific chip due to particular requirements: for instance, it is stable under certain environmental conditions, it has some specific features or interfaces, or it’s simply easy to obtain or cheap. Here, the ADP1611 step-up converter is taken as an example. An application circuit can be seen in Figure 1.
Figure 1: An application circuit for the 5 to 15 V ADP1611 step-up regulator.
Wow the engineering world with your unique design: Design Ideas Submission Guide
It has a 20-V limit on its output voltage; this limit is mainly due to the output switch of the ADP1611. Adding a tiny GaN FET such as the EPC2051 to the ADP1611 can increase this limit to above 100 V (Figure 2).
Figure 2: A 5 V to 40 V step-up regulator with the addition of the GaN FET.
The cascode shown in Figure 2 consists of the internal switch transistor and the newcomer FET; it has better frequency characteristics than the internal switch alone. And if the newly added GaN FET also has a much lower on-resistance (RDS(on)) than the internal switch, it will not reduce the efficiency.
To make the trick possible, the step-up converter should have an open drain (or open collector) output. Also, the connection of the inductor, diode, and the output of the chip must be reconfigured as shown in Figure 2. Diode D2 protects the internal switch from over-voltage.
Don’t forget to use this new value of the output voltage in your calculations. The output diode, capacitor, and inductor should also be rated to the new voltage. For the output diode, I used the HER107.
The addition of this GaN FET adds only 15 mΩ to the switch resistance of the ADP1611 (0.23 Ω)—an increase of less than 10%. Please note that the gate-source voltage (VGS) of the EPC2051 cannot exceed +6 V, so be careful.
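As a quick sanity check on those numbers, the sketch below works through the ideal boost-converter duty cycle for Figure 2’s 5-V-to-40-V example and the relative increase in switch resistance from the added FET. It assumes a lossless converter and ignores the diode drop; the only inputs are the figures quoted above.

```python
# Sanity check on the quoted figures (ideal, lossless boost; diode drop ignored).
V_IN = 5.0         # input voltage, volts
V_OUT = 40.0       # output voltage of the Figure 2 example, volts
R_INTERNAL = 0.23  # ADP1611 internal switch resistance, ohms (from the text)
R_GAN = 0.015      # added EPC2051 on-resistance, ohms (from the text)

# Ideal continuous-conduction boost relationship: Vout = Vin / (1 - D)
duty_cycle = 1.0 - V_IN / V_OUT
print(f"Ideal duty cycle for {V_IN:g} V -> {V_OUT:g} V: {duty_cycle:.1%}")  # 87.5%

# Relative increase in total switch resistance from the cascoded GaN FET
increase = R_GAN / R_INTERNAL
print(f"Switch resistance increase: {increase:.1%}")  # about 6.5%, i.e., under 10%
```

Nothing here replaces the datasheet’s design procedure; it simply confirms that the added FET’s contribution to conduction loss is small and that the required duty cycle stays within a plausible range.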
—Peter Demchenko studied math at the University of Vilnius and has worked in software development.
Related Content
- GaN vs SiC: A look at two popular WBG semiconductors in power
- High-performance GaN-based 48-V to 1-V conversion for PoL applications
- GaN transistors for efficient power conversion: buck converters
- How to get 500W in an eighth-brick converter with GaN, part 1
- Thermal design for a high density GaN-based power stage
The post Expanding output range of step-up converter appeared first on EDN.
SoCs offer RF sampling and DSP muscle
Adaptive SoCs in AMD’s Versal RF series integrate direct RF sampling data converters, dedicated DSP hard IP, and AI engines in a single chip. The devices offer wideband-spectrum observability and up to 80 TOPS of digital signal processing performance in a SWaP-optimized design for radar, spectral analysis, and test and measurement applications. They also provide programmable logic and ample memory to create powerful accelerators.
Versal RF SoCs enable wideband spectrum capture and analysis with 14-bit multichannel RF ADCs and RF DACs. These converters support input/output frequencies up to 18 GHz and sampling rates up to 32 Gsamples/s. Select DSP functions, like 4-Gsample/s FFT/iFFT, channelizer, polyphase resampler, and LDPC decoder, run on dedicated hard IP blocks, cutting dynamic power by up to 80% compared to AMD soft logic.
Versal RF silicon samples and evaluation kits are expected in Q4 2025, with production shipments beginning in the first half of 2027.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post SoCs offer RF sampling and DSP muscle appeared first on EDN.
Lattice launches small-size FPGA platform
Nexus 2 is Lattice Semiconductor’s next-generation small FPGA platform, featuring improved power efficiency, edge connectivity, and security. Built on a 16-nm FinFET TSMC process, Nexus 2 FPGAs offer 65k to 220k system logic cells in a form factor that is up to 5 times smaller than similar class devices.
According to Lattice, Nexus 2 FPGAs deliver up to 3 times lower power consumption and up to 10 times greater energy efficiency for edge sensor monitoring compared to competing devices in the same class. Fast connectivity is enabled by a multiprotocol 16-Gbps SERDES, PCIe Gen 4 controller, and MIPI D-PHY/C-PHY interfaces operating at speeds up to 7.98 Gbps.
Nexus 2 FPGAs support a broad range of security functions, including 256-bit AES-GCM encryption and SHA3-512 hashing, compliant with FIPS 140-3 Level 2 standards. The devices also feature crypto agility, anti-tamper protection, and post-quantum readiness.
The Nexus 2 platform is designed to allow rapid development of new device families based on a single platform. The first of these, the Certus-N2 family of general-purpose small FPGAs, is now available for sampling.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Lattice launches small-size FPGA platform appeared first on EDN.
Multiprotocol wireless SoC is Matter-compliant
Joining Synaptics’ Veros IoT connectivity family is the SYN20708, a dual-core SoC that supports Bluetooth 5.4 and IEEE 802.15.4. The Matter-compliant chip enables Bluetooth Classic, Bluetooth Low Energy (BLE), Zigbee, and Thread protocols to operate concurrently on both cores, allowing simultaneous connections to multiple endpoints in heterogeneous network environments.
The SYN20708 employs a modular software architecture that simplifies development for systems requiring low latency, extended range, low power, and interoperability. It can be used in a range of consumer, automotive, healthcare, and industrial applications, including dedicated home hubs and automotive infotainment systems.
The SoC features dual-antenna maximum ratio combining (MRC) and transmit beamforming (TxBF) to enhance signal quality and double communication range. It is Bluetooth 5.4 certified and Bluetooth 6.0 compliant, enabling channel sounding, Bluetooth Classic Audio, and LE Audio. The SoC supports IEEE 802.15.4 (OpenThread and ZBOSS) up to Version 2, along with BLE Long Range, angle of departure (AoD), and angle of arrival (AoA) capabilities. Synaptics’ proprietary CoEX technology improves coexistence in the 2.4-GHz band.
The SYN20708 wireless SoC is available now.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Multiprotocol wireless SoC is Matter-compliant appeared first on EDN.
Multiphase PWM controller powers Blackwell GPUs
A 4-phase PWM controller from AOS, paired with industry-standard DrMOS power stages, boosts system efficiency for NVIDIA Blackwell GPU platforms. The AOZ73004CQI, which powers AI servers and graphics cards based on the Blackwell architecture, is fully compliant with the Open Voltage Regulator (OpenVReg) OVR4-22 standard.
The AOZ73004CQI’s cycle-by-cycle current limit aligns with the GPU’s overcurrent protection requirements, enabling safe power throttling to maximize performance. It features an external reference input and PWMVID interface for dynamic output voltage control. By reducing ripple effects, the controller achieves PWMVID slew rates of up to 30 mV/µs—a threefold increase over typical rates. Additionally, deep-off and shallow-off power states minimize power consumption.
The AOZ73004CQI with 4-phase PWM is not limited to using four DrMOS power stages as standard. AOS’s proprietary DrMOS design allows precise turn-on timing, enabling one PWM to drive two or three DrMOS devices. By doubling or tripling DrMOS, designers can create a high-power, multiphase system with up to 12 power stages.
Prices for the AOZ73004CQI buck controller start at $1.20 each in lots of 1000 units.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Multiphase PWM controller powers Blackwell GPUs appeared first on EDN.
Multichannel driver enhances automotive lighting
With 36 programmable LED current channels, the AL5887Q from Diodes drives up to 12 RGB configurations or 36 individual LEDs. The automotive-compliant linear driver provides a hardware-selectable I2C or SPI digital interface, along with an internal 12-bit PWM for precise color and brightness control. Designers can create dynamic lighting patterns and rich color depths for both interior and exterior lamps.
An external resistor sets the output current for all 36 channels, with each channel’s current digitally configurable up to 70 mA without the need for paralleling. An automatic power-saving mode reduces current to 15 µA, and a quiescent shutdown mode cuts it to 1 µA when all LEDs are off for more than 30 ms, minimizing energy draw from the car’s battery.
The AL5887Q includes multiple protection features, such as an open-drain fault pin with diagnostic fault registers and individual fault mask registers. It also provides overtemperature protection with a pre-OTP warning.
The AEC-Q100 qualified AL5887Q driver costs $1.13 each in lots of 1000 units.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Multichannel driver enhances automotive lighting appeared first on EDN.
Synthesize precision bipolar Dpot rheostats
The ubiquitous variable resistance circuit network shown in Figure 1…
Figure 1 Classic adjustable resistance; Rmax = Rs + Rr; Rmin = Rs.
…can be accurately synthesized in solid state circuitry built around a digital potentiometer (Dpot) as discussed in “Synthesize precision Dpot resistances that aren’t in the catalog.” Its accuracy holds up despite pot resistance element tolerance and is independent of wiper resistance. See Figure 2 for the circuit.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Figure 2 Synthetic Dpot evades problems by using FET shunt, precision fixed resistors, and op-amp; Rab > Rmax; Rp = 1/(1/Rmax – 1/Rab); Rs = 1/(1/Rmin – 1/Rab – 1/Rp).
But a sticky question remains: What if the polarity of the Va – Vb differential is subject to reversal? Figure 1 can of course accommodate this without a second thought, but it’s a killer for Figure 2.
A simple—but unfortunately unworkable—solution is shown in Figure 3.
Figure 3 Simply paralleling complementary N and P channel MOSFETs might look good but won’t work beyond a few hundred mV of |Va – Vb|.
The problem arises of course from the parasitic body diodes common to MOSFETs, which conduct and bypass the transistor if the reverse polarity source-drain differential is ever more than a few tenths of a volt.
Figure 4 shows the simplest (not very simple) solution I’ve been able to come up with.
Figure 4 Two complementary anti-series FET pairs connected in parallel allow bipolar operation.
Inspection of Figure 4 shows that a couple of extra FETs have been added in anti-series with the paralleled complementary transistors of Figure 3, together with polarity comparator amplifier A2. A2 enables the Q1/Q2 pair for (Va – Vb) > 0 and the Q3/Q4 pair for (Va – Vb) < 0.
The TLV9152, with its 4.5-MHz gain-bandwidth product, 400-ns overload recovery, and 21-V/µs slew rate, is a fairly good choice for this application. Nevertheless, significant crossover distortion can be expected to creep in for low signal amplitudes and frequencies above 10 kHz or so.
Design equations are unchanged from Figure 2.
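To make those equations concrete, here’s a minimal sketch that computes Rp and Rs from a target Rmin/Rmax range and a chosen Dpot end-to-end resistance Rab. The example values are arbitrary placeholders for illustration, not the ones used in Figure 2.

```python
def dpot_synthesis(r_min, r_max, r_ab):
    """Fixed resistors for the synthetic Dpot rheostat of Figure 2.
    Requires r_ab > r_max (per the Figure 2 caption):
        Rp = 1 / (1/Rmax - 1/Rab)
        Rs = 1 / (1/Rmin - 1/Rab - 1/Rp)
    """
    if r_ab <= r_max:
        raise ValueError("Rab must exceed Rmax")
    r_p = 1.0 / (1.0 / r_max - 1.0 / r_ab)
    r_s = 1.0 / (1.0 / r_min - 1.0 / r_ab - 1.0 / r_p)
    return r_p, r_s

# Placeholder example: synthesize a 1-kΩ-to-9-kΩ range from a 10-kΩ Dpot element.
r_p, r_s = dpot_synthesis(r_min=1e3, r_max=9e3, r_ab=10e3)
print(f"Rp = {r_p:.0f} ohms, Rs = {r_s:.0f} ohms")  # about 90 kΩ and 1125 Ω
```

In practice, the nearest standard values would be substituted, with the usual tolerance analysis.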
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Synthesize precision Dpot resistances that aren’t in the catalog
- Keep Dpot pseudologarithmic gain control on a leash
- Dpot pseudolog + log lookup table = actual logarithmic gain
- Digital potentiometer simulates log taper to accurately set gain
- Op-amp wipes out DPOT wiper resistance
- Adjust op-amp gain from -30 dB to +60 dB with one linear pot
The post Synthesize precision bipolar Dpot rheostats appeared first on EDN.