Fibonacci and the golden mean

What we today call the golden mean was known in antiquity. It is a specific ratio of lengths between two line segments. Ancient architects often used this ratio to achieve visually pleasing esthetic effects.
Please see this Wikipedia article on the golden mean.
Even today, graphic presentations with width-to-height ratios equal or nearly equal to the golden mean can make for less stressful user viewing. For example, the 16:9 aspect ratio of the modern television screen is a ratio of 1.777…, which, as we shall see below, is fairly close to the golden mean value.
With reference to two line segments called “a” and “b”, derivation of the golden mean goes along the following lines as shown in Figure 1. (Yes, that was a pun.)
Figure 1 Derivation of the golden mean with reference to two line segments called “a” and “b”. Source: John Dunn
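For readers who want the algebra behind Figure 1 spelled out, here is a compact restatement of the derivation (with a taken as the longer of the two segments):

\[
\frac{a+b}{a} = \frac{a}{b} = \varphi
\;\;\Longrightarrow\;\;
\varphi = 1 + \frac{1}{\varphi}
\;\;\Longrightarrow\;\;
\varphi^2 - \varphi - 1 = 0
\;\;\Longrightarrow\;\;
\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618\ldots
\]

This is the value the Fibonacci ratios discussed below converge on.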
Now take a look at the Wikipedia article on Fibonacci.
Fibonacci, born in Pisa c. 1170 and died there c. 1240–50, was an Italian mathematician who introduced a mathematical concept we today call the Fibonacci series, which goes like this (Figure 2):
Figure 2 The Fibonacci series is an infinite series beginning with the numbers 0 and 1, where each number is the sum of the two preceding numbers. Source: John Dunn
The numbers in the Fibonacci series grow by the same rule that governs the segment lengths in the golden mean derivation.
Since the square root of five is an irrational number, the golden mean is also an irrational number. Since the numbers in the Fibonacci series are all integers, the ratio of each number to the number immediately before it can never exactly equal the golden mean, but as those numbers grow larger—offering more significant digits with which to form the ratios—the ratios do converge on the golden mean.
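As a quick numerical illustration (my sketch, not part of the original article), the following Python snippet prints successive Fibonacci ratios next to (1 + √5)/2 and shows how quickly they converge:

```python
# Ratios of successive Fibonacci numbers converge on the golden mean.
golden_mean = (1 + 5 ** 0.5) / 2   # (1 + sqrt(5)) / 2 = 1.6180339887...

a, b = 0, 1                        # F(0), F(1)
for n in range(2, 21):
    a, b = b, a + b                # each term is the sum of the two preceding terms
    ratio = b / a                  # ratio of a term to the term immediately before it
    print(f"F({n:2d}) = {b:6d}   ratio = {ratio:.10f}   error = {abs(ratio - golden_mean):.2e}")
```

By F(20) the ratio already matches the golden mean to better than one part in ten million.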
It is a tribute to Fibonacci himself that he is honored and his work is still remembered after eight centuries.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Can Fibonacci numbers improve your search algorithms?
- Computers will be more than 1s & 0s
- Making number theory relevant (& fun)
- Measure career advancement by blowing your mind
China curbs and the future of GaN’s power revolution

What will happen to the power electronics revolution sparked by gallium nitride (GaN) semiconductors after what Financial Times calls Beijing’s most pointed response to the chip technology sanctions on China imposed by the United States and its allies?
China has responded with its own restrictions on the export of gallium and germanium, two semiconductor materials widely used as alternatives to silicon wafers. According to the U.S. Geological Survey, China is the world’s leading producer of gallium and germanium metals.
While gallium is widely used in GaN and gallium arsenide (GaAs) semiconductors, germanium is a staple in silicon germanium (SiGe) chips used in high-frequency RF designs for defense and aerospace applications. Compared to silicon, SiGe chips offer superior thermal capabilities due to their higher melting point.
Next, GaAs chips are commonly used in microwave and other high-frequency applications as well as in manufacturing LEDs. Moreover, high-efficiency solar systems employ them for single-crystalline thin-film solar cells and multi-junction solar cells.
But the real story in China’s latest chip curbs relates to GaN and the power electronics revolution it’s promising in automotive, consumer electronics, data centers and telecom, industrial, and renewable energy designs. In many ways, 2023 has so far seemed like the year of GaN semiconductors.
Will the promising market for GaN power slow down once exporters in China can no longer ship these materials without a permit after 1 August 2023? Will the production cost of GaN products significantly increase after these curbs? These are serious questions for the GaN market, which, along with silicon carbide (SiC) semiconductors, is seen as embodying the second golden age of semiconductors.
It’s important to note that the stakes are also high for China, which is currently trying to contain an economic downturn. Some Chinese companies reckon these curbs will have a limited impact on global markets in the short term.
We’ll have a better picture when more details emerge in the coming days. For now, it seems like a stumbling block for the quickly emerging GaN semiconductors industry.
Related Content
- SiC and GaN: A Tale of Two Semiconductors
- “Navitas Presents” to focus on GaN at CES 2023
- GaN’s applications roadmap spotted at CES 2023
- The diverging worlds of SiC and GaN semiconductors
- Integrated GaN Half-Bridge Delivers MHz Performance
The “XOR” versus “Sum modulo 2”

Logic elements, the basis of modern digital technology, rest on Boolean logic. In 1854, George Boole first proposed investigating logical statements by mathematical methods. Such logical statements originally took the form of two mutually exclusive concepts, “True” and “False”, later transformed in mathematical and technical applications into the conditional values 1 and 0 (“Logical 1” and “Logical 0”) [1].
In 1938, Claude Elwood Shannon’s master’s thesis, “A Symbolic Analysis of Relay and Switching Circuits”, applied Boolean algebra in practice for the first time to describe the operation of relay-contact and electron-tube circuits [2].
The world’s first integrated circuits were developed and manufactured in 1959 by Americans Jack St. Clair Kilby (Texas Instruments) and Robert N. Noyce (Fairchild Semiconductor) independently of each other [3].
On July 24, 1958, engineer Jack St. Clair Kilby proposed making electrical circuit components (resistors, capacitors, and transistors) from one material and placing them on a common substrate. On September 12, 1958, he created the first analog integrated circuit, containing five elements. On September 19, 1958, Kilby demonstrated the first digital integrated circuit by building a flip-flop (“trigger”) on two transistors.
In 1959, the scientist and entrepreneur Robert Norton Noyce (1927–1990) invented the planar process for manufacturing microcircuits.
The first implementations of Boolean logic operations in electronic technology covered “Inversion”, “Conjunction”, “Disjunction”, and many others; logic elements such as NOT, AND, and OR were born and first realized on vacuum tubes, then on transistors and microcircuits [3–5].
In [4–14], the author analyzed numerous digital logic elements, gave definitions, compared their behavior, and offered training stands for studying logic devices using a minimal set of components—switches, LEDs, diodes, and resistors [7–11].
In these works, one- and two-input logic elements are considered: Repeater/Inverter, OR/OR-NOT, AND/AND-NOT, and Exclusive OR/Exclusive OR-NOT. Non-classical elements are also considered: Equivalence and Equivalence-NOT [7].
More complex elements are discussed: Odd; Parity; 3Exclusive OR; 3Exclusive OR-NOT; 2 and only 2; 2 and only 2-NOT; Logical threshold 2; Logical threshold 2-NOT; Majority; Majority-NOT; Logical threshold 3; Logical threshold 3-NOT [8].
As well as two-input logic elements for comparing the level of logical signals: A = B, A ≠ B, A > B, A ≥ B, A < B, A ≤ B; Digital comparator; Analytical digital comparator [9].
Elements of non-priority logic: Only one of all; Only one of all-NOT; Only two of all; Only two of all-NOT; Except all; Except all-NOT; Only all; Only all-NOT; Equivalence; Equivalence-NOT [10]. And finally: digital and analog multiplexers and demultiplexers, decoders, encoders [11].
In these works, it was clearly shown that the number of inputs of many logic elements can be reduced while the element still performs the same function. For example, when the inputs of the well-known AND and OR elements are tied together, they turn into the single-input Repeater (Buffer) element. For logic elements of more complex construction, such a metamorphosis can lead to terminological confusion and the appearance of false synonyms [12]. This can be explained with the following example.
To begin with, we remind readers of the definitions of a number of logical elements [4–9].
Repeater (Buffer) is a logical element that performs the function of a signal repeater. When a control signal X is applied to the input of such an element, a signal Y is formed at the output of the element, completely identical to the input one.
AND is a logical element in which the output signal Y will have the value of a logical unit only if the level of a logical unit is applied to all its inputs.
OR is a logical element in which the output signal Y will have the value of a logical unit if there is a logical unit signal at least one of its several inputs.
EXCLUSIVE OR (XOR) is a logical element, for a two–input variant of which the output signal Y takes the value of a logical unit only when there is a logical unit at one of its inputs and a logical zero at the other.
3Exclusive OR (3XOR) is a logic element having three inputs and one output, the level “log. 1” on which appears, provided that the level “log. 1” is present only at one of its inputs.
nExclusive OR (nXOR) is a logical element whose output signal takes the value of a logical unit if and only if exactly one of its n inputs has a logical unit.
ODD (synonyms: M2, Sum Modulo 2 or “Nonequivalence”) is a logical element having several inputs and one output, the level “log. 1” on which appears only if the level “log. 1” is simultaneously present at an odd number of its inputs (n = 1, 3, 5 …). This logical element performs a logical addition operation modulo 2 on the input data.
In a particular case, if the logic element “Odd” has only one input, such an element performs the function of a “Repeater”, whose output signal repeats the input signal. In the case of two inputs, the logic elements “Odd” and “Exclusive OR” are identical in their functions, which often leads to the mistaken conclusion that the functions of these elements are identical regardless of the number of their inputs.
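To make the distinction concrete, here is a small Python sketch (mine, not the author’s) that implements the two definitions above—“exactly one input is 1” versus “an odd number of inputs is 1”—and compares their truth tables for two and three inputs:

```python
from itertools import product

def exclusive_or(*bits):   # "Exclusive OR" as defined above: exactly one input is 1
    return int(sum(bits) == 1)

def sum_modulo_2(*bits):   # "Odd" / M2 / Sum Modulo 2: an odd number of inputs is 1
    return sum(bits) % 2

for n in (2, 3):
    rows = list(product((0, 1), repeat=n))
    mismatches = [r for r in rows if exclusive_or(*r) != sum_modulo_2(*r)]
    print(f"{n} inputs: functions identical? {not mismatches}")
    for r in mismatches:
        print(f"   differ at inputs {r}: exactly-one -> {exclusive_or(*r)}, sum mod 2 -> {sum_modulo_2(*r)}")
```

For two inputs the tables coincide; for three inputs they differ only in the all-ones row, exactly as Table 1 shows.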
In quantum logic, the analogue of the XOR element is the Feynman gate, also called a controlled inverter or controlled-NOT (CNOT, C-NOT) gate. The CNOT element resembles the classical XOR logic element in its truth table, but differs in that it has two inputs (C and A) and two outputs (C’ and A’). This element was proposed in 1985 by the physicist Richard Phillips Feynman (1918–1988) [14].
Table 1 shows the conventional graphic designations (per the IEC standard) of several logic elements together with their truth tables, which characterize the response of each logic element when control signals of different levels are applied to its inputs [8].
Table 1 The truth tables of several logic elements with the conventional graphic designations in the IEC standard [8]. Note the sections on the table without the gray fill correspond to logical elements having two inputs.
Note that the given truth table is also suitable for describing the operation of logical elements having two inputs. To do this, it is possible to remove the column corresponding to the input X3 from the table, as well as the rows corresponding to the signal level “log. 1” at this input.
During the discussion of the recently published work [12] concerning the creation of the 3XOR element, it was claimed that the elements M2 and XOR are synonymous regardless of the number of their inputs. This opinion has most likely developed historically because the functions of these elements coincide for two inputs.
Let us once again give the diagrams of the 3XOR element: built from bipolar transistors and other discrete components [12] (Figure 1), and built from the logic elements NOT, 2AND, and 2NAND (Figure 2). The truth table of these elements is given above in Table 1.
Figure 1 Logic element 3XOR on bipolar transistors and other discrete elements.
Figure 2 Logic element 3XOR using the logic elements NOT, 2AND and 2NAND.
Let us recall once again what the function “Exclusive OR” means. The name itself tells us that it admits no coexisting alternatives: of all possible options, exactly one is realized and all the others are excluded.
Here are some examples:
- A hand is held out to you: it either holds a coin or it does not.
- You flip a coin. It lands either “heads” or “tails”; there are no other options.
- You toss a cube whose three pairs of opposite faces are marked 1, 2, and 3. The cube will land showing one of three possible numbers: either 1, or 2, or 3. There can be no other options; in other words, the simultaneous appearance of 1 and 2, or 1 and 3, or 2 and 3, or 1 and 2 and 3, is excluded. This can be seen in Figure 3.
Thus, for the 3XOR element in Figures 1–3, the truth table clearly corresponds to the definition of this function as shown in Table 1 and Table 2.
In contrast to the 3XOR element, the logic element 3M2 (3Sum Modulo 2) differs in the last row of the truth table (Table 1): when “log. 1” signals are applied to all inputs, the output is “log. 1”. This is not surprising; it corresponds to the logic of summing input signals, as seen in Table 2, with the sense of “either the first, or the second, or the third, or all together”—but it does not correspond in any way to the operation “Exclusive OR”.
Figure 3 Venn diagrams for elements “3Sum Modulo 2” (Y=A’B’C+A’BC’+AB’C’+ABC) and 3XOR (Y=A’B’C+A’BC’+AB’C’).
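A useful cross-check of the two formulas in the Figure 3 caption is to expand the cascaded two-input XOR (i.e., the modulo-2 sum) and see where the extra ABC term comes from:

\[
A \oplus B \oplus C = (A \oplus B) \oplus C
= (A'B + AB')\,C' + (A'B + AB')'\,C
= A'BC' + AB'C' + (AB + A'B')\,C
= A'B'C + A'BC' + AB'C' + ABC,
\]

which is exactly the 3M2 expression; the 3XOR expression simply omits the final ABC minterm.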
Table 2 compares the functions of the logical elements AND, OR, Sum Modulo 2, and an as-yet-unnamed logical element X. The logical formulas according to which these elements function are also shown, along with the basic identities of the algebra of logic. These allow the truth-table values to be calculated for various combinations of input signal levels and numbers of inputs.
Table 2 Logical elements AND, OR, Sum Modulo 2, and X; the logical formulas describing the properties of these elements; and the basic identities of the algebra of logic.
Notes for Table 2: Properties and differences of logical elements:
- Elements (2,3…n)AND and (2,3…n)OR, when all their inputs are combined, are converted into a Repeater (Buffer). To increase the number of inputs, these elements can be cascaded.
- The element (2,3…n)M2 does not work when all its inputs are combined (see also **). To increase the number of inputs, these elements can be cascaded.
- The element (2,3…n)X does not work when all its inputs are combined (see also **), and when cascaded, it is not converted into a multi-input X element. (The identities sketched after this list make the distinction explicit.)
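The identities behind these notes are compact (the even/odd qualification is my reading; the ** footnote of Table 2 is not reproduced here):

\[
\underbrace{A \cdot A \cdots A}_{n} = A, \qquad
\underbrace{A + A + \cdots + A}_{n} = A, \qquad
\underbrace{A \oplus A \oplus \cdots \oplus A}_{n} =
\begin{cases} A, & n \text{ odd}, \\ 0, & n \text{ even}. \end{cases}
\]

Tying all inputs of the exactly-one element X to the same signal A asks for exactly one of n equal inputs to be 1, which for n ≥ 2 never happens, so its output is stuck at 0.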
Table 3 shows the truth tables for one-, two- and three-input logic elements AND, OR, M2 and X.
Table 3 Truth tables for one-, two- and three-input logic elements AND, OR, M2 and X.
As noted earlier, for all single-input use cases of AND, OR, Sum Modulo2 elements, as well as X, the properties are identical and coincide with the function of the Repeater (Buffer) element (Table 3).
The two-input elements’ logical formulas and truth tables (Table 3) are identical for Sum Modulo2 and X.
The situation changes when switching to three-input elements. Here the “paths of truth” diverge: the element we have been calling X is in fact the 3XOR element. Thus, the 3XOR element cannot be a synonym for the Sum Modulo 2 element.
It also seems reasonable that the elements performing the functions Y=A’B+AB’, Y=A’B’C+A’BC’+AB’C’, etc., in order to avoid terminological confusion, should not be called “Exclusive OR”, but “Exclusive AND”.
The element “Exclusive AND” (XAND) is a logical element whose output signal takes the value of a logical unit if exactly one of its inputs has a logical unit (refer to Figures 1–3, Table 3).
The element “Exclusive AND-NOT” (XNAND) is a logical element whose output signal takes the value of logical zero if exactly one of its inputs has a logical unit.
Interestingly, the one commercially available logic element of this kind, the SN74LVC1G38, declared by its manufacturer to be a 3XOR, is actually the “ODD” element (synonyms: M2, Sum Modulo 2).
Michael A. Shustov is a doctor of technical sciences, candidate of chemical sciences and the author of over 750 printed works in the field of electronics, chemistry, physics, geology, medicine, and history.
Related Content
- Simple XOR logic elements on transistors
- Universal logic element on one transistor and its applications
- Universal purpose optoelectronic logic element with input optical switching of AND/NAND, OR/NOR and XOR/XNOR functions
- Universal purpose optoelectronic logic elements
References
- Boole G. “An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities”. Macmillan, 1854. Reprinted with corrections, Dover Publications, New York, NY; reissued by Cambridge University Press, 2009, ISBN 978-1-108-00153-3.
- Shannon C.E. “A Symbolic Analysis of Relay and Switching Circuits”. Transactions of the American Institute of Electrical Engineers, Vol. 57, 1938, pp. 471–495.
- Shustov M.A. “History of Electricity”. Moscow; Berlin: Direct-Media, 2019, 567 p. ISBN 978-5-4475-9841-9.
- Shustov M.A. “Practical circuit engineering. 450 useful circuits”. Moscow: Altex-A, 2001. Book 1 of a series of 5 books “Practical circuit engineering”, 352 p. (I ed.). ISBN 978-5-942-71031-6, 2003 (II ed.); Moscow: Dodeka-XXI–Altex, 2007, 360 p. (II ed.). ISBN 978-5-94271-002-3.
- Shustov M.A. “Digital circuitry. Basics of construction”. St. Petersburg: Science and Technology, 2018, 320 p. ISBN 978-5-94387-875-6.
- Shustov M.A. “Digital circuitry. Application practice”. St. Petersburg: Science and Technology, 2018, 432 p. ISBN 978-5-94387-876-3.
- Shustov M.A. “Stand for studying the work of logical elements”. Radio (RU), 2020, No. 4, pp. 61–64.
- Shustov M.A. “Stand for studying the operation of logic elements – 2”. Radio (RU), 2020, No. 6, pp. 59–64.
- Shustov M.A. “Stand for studying the operation of logic elements – 3”. Radio (RU), 2020, No. 7, pp. 61–64.
- Shustov M.A. “Stand for studying the work of elements of non-priority logic”. Radioamateur (BY), 2022, No. 2, pp. 24–29.
- Shustov M.A. “Optoelectronic digital multiplexers, demultiplexers and stands for studying their work”; “Optoelectronic analog-digital demultiplexer and stand for studying its work”; “Optoelectronic decoders, encoders and stands for studying their work” (in print).
- Shustov M.A. “Simple XOR logic elements on transistors”. EDN, May 29, 2023. https://www.edn.com/simple-xor-logic-elements-on-transistors/
- Shustov M.A. “Binary elements of fractional logic”. EDN, December 14, 2022. https://www.edn.com/binary-elements-of-fractional-logic/
- Shustov M.A. “Optoelectronic and key analogues of the basic elements of quantum logic”. Radiolotsman (RU), 2022, No. 11–12, pp. 18–25. https://www.rlocman.ru/review/article.html?di=655757
How will GDDR7’s 2024 launch impact GPUs and graphics cards

The availability of GDDR7 memory in the first half of 2024 is expected to revive graphics card designs with speeds of up to 36 Gbps, outperforming the GDDR6 and GDDR6X video-RAM technologies that have been around since 2018. It’s still not clear, though, whether memory maker Micron will make GDDR7 devices commercially available by then or whether this timeline merely marks the end of GDDR7’s development work.
Another question mark: how will GPU suppliers respond to this timeline? A new encoding mechanism in GDDR7 means that this upgrade will require new memory controllers and thus new graphics processors.
While Micron hasn’t provided details about this new memory technology, according to information disclosed by Cadence Design Systems, GDDR7 will use PAM3 signaling, a three-level pulse amplitude modulation technique that enables the transfer of three bits over a span of two cycles. Compared to the two-level NRZ scheme used by GDDR6, PAM3 offers a more efficient data transmission rate per cycle, reducing the need to push memory interface clocks higher and sidestepping the signal-loss challenges that come with doing so.
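A quick back-of-the-envelope check of the “three bits over two cycles” claim:

\[
\log_2 3 \approx 1.585~\text{bits per PAM3 symbol}, \qquad
2 \log_2 3 = \log_2 9 \approx 3.17 > 3,
\]

so two PAM3 unit intervals can carry 3 bits (1.5 bits per cycle) versus 1 bit per cycle for NRZ—roughly a 50% higher data rate at the same interface clock.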
Cadence has already unveiled a GDDR7 verification solution for designers who must ensure that their controllers and PHY devices comply with the upcoming VRAM specification. Once GDDR7 is here, it’s expected to bring a massive boost in bandwidth for graphics cards.
According to a report published in Wccftech, Micron aims to offer 36 Gbps of bandwidth per pin. Currently, Nvidia’s GDDR6X-based graphics cards provide a maximum speed of 22 Gbps, while it’s 20 Gbps for AMD’s GDDR6-based graphics solutions. So, bandwidth per pin will drastically increase when GPUs are equipped with GDDR7 memory.
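To put the per-pin numbers in context, here is a rough aggregate-bandwidth calculation; the 256-bit bus width is my illustrative assumption (real cards span roughly 128 to 384 bits), not something stated in the Wccftech report:

```python
# Rough aggregate memory bandwidth for the quoted per-pin data rates.
# The 256-bit bus width is an illustrative assumption, not a spec.
BUS_WIDTH_BITS = 256

for name, gbps_per_pin in (("GDDR6 (AMD)", 20), ("GDDR6X (Nvidia)", 22), ("GDDR7 target", 36)):
    gbytes_per_s = gbps_per_pin * BUS_WIDTH_BITS / 8   # Gbit/s per pin x pins / 8 bits per byte
    print(f"{name:16s} {gbps_per_pin:2d} Gbps/pin -> ~{gbytes_per_s:.0f} GB/s on a {BUS_WIDTH_BITS}-bit bus")
```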
The new technology will likely ease the challenges related to limited VRAM and memory bandwidth. At the same time, however, it could also boost the market for GPUs and graphics cards, which many already consider overpriced.
Related Content
- Picture-perfect memory
- New Uses Vie for GDDR6 Supply
- GDDR memory finds growing use in AI
- Emerging Applications Could Transform GDDR Market
- Memory Technologies Confront Edge AI’s Diverse Challenges
USB Power Delivery: incompatibility-derived foibles and failures

My recent two-part series on the proprietary-vs-multisource “tug of war” in the photography world covered both lens mount and flash illumination-source “locks”, but it almost had a third case study entry. In part two, I’d mentioned Godox, a supplier of (among other things) electronic flash units for various other manufacturers’ cameras. Some of Godox’s flashes (whose product names begin with the characters “TT”) are powered by conventional AA batteries, while others (whose names start with “V”) use proprietary battery packs. The fundamental tradeoff, I’ve concluded from both anecdotal research and personal experience, is one of convenience (widely available off-the-shelf, already-charged cylindrical dry batteries) versus higher charge capacity per unit volume (proprietary cells).
Non-standard battery packs—not only from one device manufacturer to another, but even from device to device within a particular manufacturer’s product line—are a longstanding unfortunate reality in the photography industry. That said, at least Canon and Sony’s de facto standards (by virtue of their widespread use) have also been unofficially adopted by third-party continuous light, external monitor, extended-life external power and other equipment suppliers. And unsurprisingly, the charging docks used to replenish those proprietary battery packs are also proprietary. Here, for example, is my Godox V1o flash unit (the “o” indicates that it works with Olympus cameras, along with those of other Micro 4/3 suppliers such as Panasonic; Godox also makes V1 variants for Canon, Nikon, Sony, Fujifilm and Pentax, the latter of which I also own):
Here’s what the battery-plus-dock combo looks like with the cable connected:
And here’s the entire “kit”:
Now another look (from my personal “kit”) at the charging dock, its associated AC-to-DC “wall wart” and the cable that comes with the kit and connects the two:
Note that the dock connection is USB-C, while that for the “wall wart” is USB-A.
In exploring my new “toy”, I’d been curious to see if I could use a high-capacity portable battery (sometimes referred to as a power bank) to recharge the flash unit’s own battery when, for example, I was “in the field” shooting pictures and away from an AC outlet. Specifically, I own (among other high-capacity portable batteries) an Anker PowerCore+ 20100 model A1371:
And since both it and the Godox charging dock include USB-C connections, I thought I’d use a standard USB-C cable to connect them for highest-possible potential Anker power output and consequent fastest-possible potential Godox charging speed. But the combo flat-out refused to work; the charger dock light didn’t illuminate and, more importantly, the battery didn’t charge. The same thing happened (or perhaps more accurately, didn’t happen) when I tried connecting the Godox charging dock over USB-C to an Anker PowerPort+ 5 A2053 multi-port charger:
Or even to the Aukey PA-Y8 single-port USB-C charger that I use with my iPad Pro:
(the Aukey is on the left)
So, what’s going on? At first, I suspected that Godox might have bundled a proprietary cable in its kit. But I suspect at least some of you may have already figured out what I initially overlooked but eventually “grokked”…the key wasn’t that the Godox cable was proprietary, it was that the cable was USB-A on one end. When I tried out other USB-C to USB-A cables (and USB-A wall warts, multi-port chargers and the like connected to that end of the cable), they all worked fine…even the USB-A outputs of my Anker PowerCore+ 20100 power bank.
While I initially thought that this situation was a one-off, the quirk is apparently more common than I’d originally believed. As I’m becoming more interested (and equipment-invested) in videography, I’ve picked up a couple of HDMI wireless transmitters, one specific to my gimbal (Zhiyun) and the other more generic (Accsoon, in my case). In professional video capture settings (of which, to be clear, I don’t harbor any delusion of ever being an active participant), such gear commonly finds use in, for example, enabling an off-camera director to view the footage as it’s captured by the camera.
Professional videographers also rarely if ever rely on a camera’s built-in autofocus, instead focusing manually…and sometimes they don’t even do that themselves and leverage a focus puller, who can perhaps obviously also benefit from off-camera wireless viewing. Sometimes the wireless transmitter connects to a matching wireless receiver module and from there to a display over HDMI. Other times, the wireless broadcast, directly received by an Android or iOS tablet, smartphone or other mobile device, is viewed directly on it.
Anyway, one of the devices I now own is Accsoon’s CineEye Air:
Here’s some promo footage of it in action on-set:
Unlike its CineEye big brother, it doesn’t include an embedded battery; it’s instead powered externally via either a DC “barrel” plug or…you guessed it…a USB-C input. And…you guessed it…I initially couldn’t figure out why connecting it to a USB-C equipped power source was unsuccessful. Thankfully, in the viewer comments section of an excellent review of both the CineEye and CineEye Air (impressive range, low latency and high resolution/frame rate, eh?):
I found my answer:
@W00ge
Does yours work with a USB-C to USB-C power cable? Mine doesn’t.
@PhillipSkraba
I’m afraid I’m oldschool and only have usb – usb-c cables, they both work fine with those.
And the situation in this case was even more baffling, because Accsoon didn’t even bundle a USB-A to USB-C power cable with the unit to provide owners with initial accessory guidance.
So, what’s going on here? Why on earth would a manufacturer include a USB-C power “sink” connection that’s incapable of being successfully mated with a USB-C power “source”? Around five years ago, within an overview writeup titled “USB: Deciphering the signaling, connector, and power delivery differences,” I wrote:
Via the Power Delivery (PD) specification, released in v1.0 form in July 2012 and most recently updated to v3.0, micro-USB and USB-C connections are capable of handling up to 100W of power transfer via a combination of boosted current and four different voltage options: legacy 5V, plus 9V, 15V, and 20V. Charging source and sink devices negotiate their respective capabilities and requirements upon initiation of the connection.
Sounds straightforward, right? Another more recent EDN contributed article authored by Infineon Technologies goes into more detail on the negotiation process, which employs the USB-C connector’s two Configuration Channel pins, CC1 and CC2 (quick aside: the Infineon article only covers through USB Power Delivery spec v3, which supports up to 100W power transfer capabilities. Coming-soon USB PD v3.1 aspires to actualize longstanding 240W promises via, among other things, beefier cables…it also supports finer-grained voltage-and-current combinations and variances of both throughout the charging process).
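For orientation, the headline power figures reduce to simple arithmetic; the 5 A current level assumes an e-marked cable rated for it:

\[
20~\text{V} \times 5~\text{A} = 100~\text{W} \;(\text{USB PD 3.0}), \qquad
48~\text{V} \times 5~\text{A} = 240~\text{W} \;(\text{USB PD 3.1 EPR}).
\]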
I can only assume that in these particular cases, the Accsoon or Godox “sinks” aren’t correctly implementing this negotiation handshake (if they implement it at all), resulting in the “source” giving up and disabling its power output. All the USB-C power “sources” I have access to at my home office seem to also be PD-supportive, alas, so I can’t confirm this hypothesis by trying a non-PD USB-C source instead. Informed-reader insights are welcomed in the comments.
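To make the hypothesis concrete, here’s a highly simplified model (mine, not a spec excerpt) of the attach-detection step: a compliant USB-C source enables VBUS only after it senses the sink’s Rd pull-down on a CC pin, whereas a legacy USB-A port (or A-to-C cable) supplies 5 V regardless. The 5.1 kΩ value is the nominal Rd from the Type-C spec; the acceptance window below is illustrative (the spec actually defines voltage thresholds), and the “missing or mis-implemented Rd” failure mode is only my conjecture about these particular sinks:

```python
# Simplified attach-detection logic; illustrative only.
RD_NOMINAL_OHMS = 5_100   # nominal sink pull-down (Rd) on the CC pin per USB Type-C

def c_source_enables_vbus(sink_cc_pulldown_ohms):
    """A compliant USB-C source drives VBUS only after detecting Rd on a CC pin."""
    if sink_cc_pulldown_ohms is None:            # no (or broken) Rd termination presented
        return False
    return 0.8 * RD_NOMINAL_OHMS <= sink_cc_pulldown_ohms <= 1.2 * RD_NOMINAL_OHMS

def usb_a_supplies_5v():
    """A legacy USB-A port simply provides 5 V; it asks nothing of the sink's CC pins."""
    return True

print("C-to-C, sink without Rd:", c_source_enables_vbus(None))             # False -> no charging
print("C-to-C, compliant sink: ", c_source_enables_vbus(RD_NOMINAL_OHMS))  # True
print("A-to-C, any sink:       ", usb_a_supplies_5v())                     # True -> matches the observed behavior
```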
More generally, though I’m loath to “look a gift horse in the mouth”, USB’s charging and broader power delivery schemes are a longstanding and lingering mess. Don’t get me wrong; particularly with respect to USB-C and its Thunderbolt 3-and-newer “kissing cousin”, there’s a lot to love, such as:
- The rotationally symmetrical connector (akin to Apple’s Lightning precursor)
- Fast, low-latency data transfer rates
- Display-interface coexistence, and
- The aforementioned beefy, scalable and bi-directional voltage and current payloads
But consider these enthusiasm-offsetting case study examples:
- Mis-wired USB-A to USB-C cables that result in “fried” expensive equipment
- Missing-resistor (or wrong-value-resistor) USB-A to USB-C cables that…ditto…
- USB-C cables that claim to include “smart” capability-ID chips but don’t
- And system manufacturers whose USB PD implementations aren’t fully Configuration Channel voltage range-compliant, therefore resulting in “fried” expensive equipment when used with third-party chargers, docks and the like.
Plenty of other similar issues also exist, alas. And the competing existence of proprietary approaches such as Qualcomm’s multiple Quick Charge generations further confuses the situation for consumers (and even techies like me).
Does the blame for these and other examples of interface woe lie solely or even predominantly with the USB Implementers Forum (USBIF)? Certainly not: assuming the specs that the organization comes up with are sufficiently comprehensive to cover all potential “corner cases”, it’s then up to the IC and system suppliers to follow them to the letter from an implementation standpoint (or not follow them and suffer the consequences). And I also don’t dismiss the effects of USB’s usage pervasiveness; coverage of any resultant implementation “glitch” inevitably ends up being equally pervasively disseminated. That said, I can’t shake the nagging conjecture that if USBIF and its members were to spend a bit less time and effort on rapidly advancing the interface’s capabilities (to keep pace with Thunderbolt and other interface approaches, among other things) and a bit more time and effort cleaning up today’s interface implementations, we’d have a lot fewer “glitches” as a result.
Agree or disagree? Let me know your thoughts in the comments.
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- USB: Deciphering the signaling, connector, and power delivery differences
- USB Type-C PD 3.0 Specification, Charging and Design
- USB 3.0—Everything you need to know
- Multi-source vs proprietary: more “illuminating” case studies
- Multi-source vs proprietary: photography case studies
Simulation software adds EM modeling for antennas

Ansys Discovery’s upfront simulation capabilities now include high-frequency electromagnetics (EM) modeling for antennas. The 3D simulation design tool empowers teams to virtually explore several design areas at once, minimizing the need for physical prototyping and testing.
Wi-Fi router far-field radiation pattern in Ansys Discovery
Discovery’s latest EM features, in combination with existing capabilities, create a multiphysics simulation environment coupled with interactive geometric modeling. Companies can investigate new concepts early in the antenna design process to achieve better performance for IoT applications, 5G, and autonomous vehicles.
With Discovery, engineers can evaluate changes to element design and antenna placement without having to interpret or clarify complicated CAD geometry. The software automates the creation of EM regions based on desired frequency ranges and assigns conductive and dielectric material based on port definitions. Additionally, the model and physics setup can be seamlessly transferred to Ansys HFSS 3D high-frequency EM simulation software for final design validation.
“Adding EM capabilities in Ansys Discovery for antenna design not only shifts simulation left, but it democratizes simulation for all users from beginner to expert,” said Shane Emswiler, senior vice president of products at Ansys. “Discovery provides an easy-to-use interface with integrated modeling and access to other Ansys tools, which streamlines the antenna design process and results in optimized development, performance, and efficiency.”
To learn more about Discovery or request a free trial, click the link to the product page below.
Customizable RISC-V processor is vendor-independent

Bluespec’s MCUX RISC-V processor allows developers to implement custom instructions and add accelerators to FPGAs and ASICs. MCUX joins the company’s MCU RISC-V family of processors designed for ultra-low resource utilization on FPGAs. Along with more customization opportunities, MCUX is fully portable across all major FPGA architectures and ASIC technologies.
The MCUX embedded processor is intended for applications that require a small processor for configuration and control of custom modules, IO devices, sensors, actuators, and accelerators, as well as software-programmable replacements for fixed-hardware finite state machines. This makes MCUX well-suited for machine vision, video decoding, audio decoding, and radar signaling applications, among many other use cases for the edge, industrial automation, defense, and IoT.
MCUX is able to operate at a high frequency, allowing for integration into designs without crossing clock domains. Designs that do not require high-frequency operation benefit from extra timing slack, without an impact on timing closure.
Bluespec offers an application note that explains how to implement and execute custom instructions on the MCUX processor. It also reveals the potential for reducing software execution cycle counts with a small amount of implementation effort. To access the MCUX application note, click here. For more information about the MCUX RISC-V processor, use the online contact form on the Bluespec website.
Arm Cortex-M3 MCUs boost code flash to 1 Mbyte

M3H Group 2 MCUs, the latest addition to Toshiba’s TXZ+ family of 32-bit MCUs, pack up to 1 Mbyte of code flash and support over-the-air firmware updates. Based on an Arm Cortex-M3 core running at up to 120 MHz, these latest devices can be used for the main control of consumer equipment, such as home appliances and healthcare products, as well as office equipment, like multifunction printers. They are also suitable for industrial equipment and IoT applications.
The M3H Group 2 MCUs not only expand the code flash memory capacity of existing M3H Group 1 MCUs from 512 kbytes to 1 Mbyte, but also increase RAM capacity from 66 kbytes to 130 kbytes. Other features include 32 kbytes of data flash memory with a program/erase endurance of 100,000 cycles and various interface and motor control options, such as UART, I2C, advanced encoder input circuit, and advanced programmable motor control circuit.
Code flash memory is implemented in two separate areas of 512 kbytes each. This implementation allows instructions to be read from one area, while the updated code is programmed into the other area in parallel. The firmware rotation function using the area swap method enables firmware updates without interrupting MCU operation.
M3H Group 2 MCUs are offered in seven LQFP and QFP configurations, ranging from 64 pins to 144 pins. Documentation, sample software with usage examples, and driver software to control peripheral interfaces are available from Toshiba. Evaluation boards and development environments are provided in cooperation with Arm’s partner ecosystem.
Toshiba Electronic Devices & Storage
Arduino Uno development board scales to 32 bits

Two new variants of the Arduino Uno development board, the lightweight Uno R4 Minima and the full-fledged Uno R4 WiFi, are each powered by a 32-bit microcontroller. These next-generation Uno boards represent a considerable revision of Arduino’s 8-bit technology, while providing options to meet the budgetary and creative needs of the maker community.
Powered by a Renesas RA4M1 32-bit microcontroller with a 48-MHz Arm Cortex-M4 processor core, the Uno R4 boards deliver up to 16 times the clock speed, memory, and flash storage of the Uno R3. Uno R4 boards also preserve the standard form factor, shield compatibility, and 5-V power supply of the Uno R3. Further, an enhanced thermal design enables the Uno R4 boards to be powered up to 24 V.
Both versions of the Uno R4 offer 32 kbytes of SRAM and 256 kbytes of flash memory. They also furnish a USB-C port, along with CAN bus, SPI, and I2C communication capabilities. In addition, each Uno R4 development board offers a 12-bit DAC and operational amplifier.
For projects that require Wi-Fi and Bluetooth communication, the Uno R4 WiFi variant packs an Espressif ESP32-S3 dual-core XTensa LX7 module. Serving as a secondary MCU, the ESP32-S3 integrates 2.4-GHz Wi-Fi (802.11b/g/n) and Bluetooth 5 (LE) connectivity. The Uno R4 WiFi also includes a 12×8 red LED matrix for designs using animations or for plotting sensor data without the need for additional display hardware.
Available for purchase from the Arduino online store, the Uno R4 Minima costs $20 and the Uno R4 WiFi sells for $27.50.
Satellite communication module extends coverage

Quectel’s CC660D-LS non-terrestrial network (NTN) satellite communication module supports 3GPP Rel-17 (IoT-NTN) at L-band (B255) and S-band (B256/23) frequencies. Intended to ensure global coverage, the module supports two-way communication and both IP and non-IP service networks. It also provides SMS SOS functionality for emergency notifications.
The CC660D-LS operates from a low-voltage power supply of 2.1 V to 3.6 V and offers various power-saving modes to reduce power consumption. These modes include discontinuous reception (DRX), extended DRX (eDRX), and power-saving mode (PSM), enabling efficient energy management.
Housed in a compact 17.7×15.8×2.0-mm LCC package, the CC660D-LS is useful for space-constrained applications and can be integrated into any type of portable device. The satellite module features Quectel enhanced AT commands, SIM/eSIM support, and embedded internet service protocols.
Engineering samples of the CC660D-LS are available now. The module is currently going through the CE and FCC certification processes, with mass production scheduled for Q4 2023.
LightFair 2023 recap

Another Light Fair has come and gone, and this year marked a potential turning point in the fortunes of what was once the premier lighting-focused trade show in North America. The show was held in May at the Javits Center in New York and attracted over 300 exhibitors and over 11,000 attendees, up from the past couple of years but nowhere near the size of the show at its zenith (Figure 1). Missing were many of the largest players in the LED lighting and controls industry (along with their gigantic booths), which contributed to a show floor that was more open and easier to navigate. The LED lighting industry has matured and the products on display reflected that, representing incremental change instead of the huge leaps and bounds of the industry’s earlier days.
Figure 1 The Light Fair 2023 trade show floor with over 11,000 attendees and 300 exhibitors. Source: Yoelit Hiebert
As in past years, the show also featured three days of educational sessions. One of the most engaging was a “show-n-tell” of several products that came from Kyocera SLD Laser—the self-proclaimed maker of the world’s first white laser light emitter.
To review this education session: a spontaneous emission occurs after an electron at a lower energy state absorbs a passing photon and transitions to a higher energy state. Eventually the electron will seek to return to a lower state, emitting a photon in the process. This emitted photon’s wavelength will correspond to the energy of the initial low to high transition, but its direction will be random. However, a passing photon can also cause an already excited electron to transition from a higher to lower energy state, resulting in the emission of a second photon that matches the wavelength, phase, and direction of the original. This is referred to as stimulated emission and is the mechanism driving lasers.
The emitted photons are amplified through reflection between a pair of mirrors at each end of the crystal. As they travel, unlike frequencies cancel out, leaving photons that share a single frequency. Some of these photons go on to stimulate additional emissions by means of reentry into the crystal and some exit through one of the mirrors as a highly coherent beam of light.
It’s worth noting a third class of light-emitting semiconductor product, the superluminescent diode (SLD), which completes the light-emitting diode trifecta. SLDs possess characteristics similar to both LEDs and lasers. Like LEDs, the SLD is an incoherent source, but with an emitted bandwidth that, depending on the construction of the device, can range from something similar to a single-color LED to much broader bandwidths. And like lasers, an SLD device incorporates amplification, but instead of mirrors, an optical waveguide in the surface of the crystal is employed.
The biggest advantage of laser diode sources over LEDs is in the area of efficacy. Because LEDs are a spontaneous-emission source, the theoretical maximum efficacy that can be achieved is approximately 683 lm/W at 555 nm (green). White light is even further limited to 250 to 380 lm/W, depending upon the color temperature. Additionally, because laser diode sources emit light with a very narrow beam angle, the light is able to travel longer distances at higher brightness. This can be advantageous for certain applications such as outdoor directional lighting, architectural and entertainment lighting, and even off-road lighting. However, use in general lighting applications would most likely require additional optics to disperse the emitted beam over a wider area. According to Kyocera SLD Laser, laser sources represent the next big step in the evolution of lighting.
Switching subjects, a presentation addressing the DALI suite of standards proved useful in explaining the distinctions surrounding this digital communication protocol, which is struggling to gain widespread acceptance in North America. The recent introduction of the DALI-2 sub-standards and D4i requirements only adds to the already considerable confusion as to how (and whether) the suite of standards applies.
Briefly, the DALI suite of standards was first released in the early 1990s under the auspices of IEC 62386. The standard is composed of a number of sub-standards, called Parts, that address various aspects of lighting controls based on a digital protocol. The actual standards development work is undertaken by the DALI Alliance, a global industry organization. What was originally called “DALI”, but is now usually referred to as “DALI-1”, included communication protocol requirements for luminaires, the communication bus, the bus power supply, and input devices as part of a lighting control system.
In 2017, the DALI suite expanded to include additional functionality not addressed in the original. This enhanced suite is referred to as “DALI-2” and includes parts addressing specifications for increased interoperability and more stringent test protocols. Also included is a requirement for independent verification of conformance in order to use the DALI trademark. D4i, released in 2020, is an extension of DALI-2 in that it provides a mandatory set of LED driver features to facilitate intra-luminaire communication and data gathering.
Figure 2 shows the DALI-2 suite of standards currently released under IEC 62386; it may also explain the current hesitance of North American companies to consider the DALI protocol for their projects.
Figure 2 The general requirements and system components for the DALI-2 standard where the many “Parts” may contribute to excessive complexity and a hesitancy for North American manufacturers to adopt the protocol. Source: DALI Alliance
Turning back to Light Fair 2023, the recent proliferation of regional shows has prompted a decision to hold Light Fair every other year, which may have a salutary effect in terms of the number of show exhibitors and attendees. Whether or not Light Fair has found its place in the post-COVID tradeshow ecosphere remains to be seen.
—Yoelit Hiebert has worked in the field of LED lighting for the past 10 years and has experience in both the manufacturing and end-user sides of the industry.
Related Content
- LightFair 2022 recap
- Considerations in the selection of UV LEDs for germicidal applications
- LightFair 2021 recap
- A brief history of the LED
A short primer on traction inverter design for EVs

Range anxiety—the fear of running out of charge because of limited driving range—is a barrier to electric vehicle (EV) adoption for many consumers. Increasing battery cell density and improving the efficiency of energy conversion processes are key to extending vehicle range and alleviating this anxiety. One area where efficiency is critical is the traction inverter, which converts the DC battery voltage into the AC drive required to power the motors.
This technical article will explain how IGBT-based modules enable higher cell densities and provide a more efficient conversion process to extend the range of an EV, helping to overcome the concerns of consumers.
Main traction inverters are at the heart of electric vehicles, connecting the batteries to the traction motors. They convert the DC battery voltage to an AC drive that the motors require, commonly at power levels from 80 kW to 150+ kW. The battery voltage is based on the size of the battery string and has typically been in the range of 400 VDC, although 800 VDC is becoming more common in a bid to reduce the sizeable currents and, therefore, mitigate losses.
Despite cost reductions of 40% in the past three years, and 90% over the past decade, the lithium-ion (Li-ion) battery remains the highest-cost item within an EV. The downward price trajectory is expected to continue until around 2025, when prices will stabilize. Given the cost of this item, it is imperative that every joule of stored energy is used as efficiently as possible to mitigate the cost, as well as the size, of the battery pack.
Additionally, an electric drivetrain provides incredible amounts of torque and acceleration. The responsiveness of the inverter and electric motor combination correlates directly to the “feel” of the vehicle and, therefore, to the consumer’s driving experience and satisfaction.
Role of switching devices
A traction inverter typically comprises three half-bridge elements, each formed from a pair of MOSFETs or IGBTs known as the high-side and low-side switches. There is one half-bridge for each motor phase, making three in total, with gate drivers controlling each switching device.
Figure 1 An overview of traction inverter highlights the key design building blocks. Source: onsemi
The primary role of the switches is switching the DC voltage and current from the high-voltage battery on and off to create the AC drive for the motor(s) that propel the vehicle. This is a demanding application due to the high voltages, currents and operating temperatures experienced as 800 V batteries can deliver more than 200 kW of power.
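As a rough illustration of how the three half-bridges synthesize that AC drive, the sketch below computes sinusoidal-PWM duty cycles for the three phases. It is generic—not onsemi’s implementation—and the output frequency, carrier frequency, and modulation index are placeholder values:

```python
import math

F_OUT = 100.0      # electrical output frequency, Hz (placeholder)
F_PWM = 10_000.0   # PWM carrier frequency, Hz (placeholder)
M_IDX = 0.9        # modulation index (placeholder)

def high_side_duties(t):
    """High-side duty cycle for each half-bridge (phases U, V, W) at time t, sinusoidal PWM."""
    duties = []
    for k in range(3):                                        # phases spaced 120 degrees apart
        ref = M_IDX * math.sin(2 * math.pi * F_OUT * t - k * 2 * math.pi / 3)
        duties.append(0.5 * (1.0 + ref))                      # map [-1, 1] reference to [0, 1] duty
    return duties

for i in range(5):                                            # a few consecutive PWM periods
    t = i / F_PWM
    u, v, w = high_side_duties(t)
    print(f"t = {t * 1e3:5.2f} ms   duty U = {u:.3f}   V = {v:.3f}   W = {w:.3f}")
```

Each duty cycle sets how long the high-side switch of its half-bridge conducts within a PWM period; averaged over the carrier, the three phase voltages trace out the sinusoids that drive the motor.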
Traction inverters based on 400-V battery systems require power semiconductor devices that have a VDS rating in the 650 V to 750 V range, while 800-V solutions increase the VDS requirement to 1,200 V. In a typical application, these power components must also handle peak AC currents in excess of 600 A for up to 30 seconds (s) and a maximum AC current of 1,600 A for around 1 millisecond (ms).
In addition, the switching transistors and gate drivers used for the device must be capable of handling these large loads while maintaining high traction inverter efficiency.
IGBTs have been the device of choice for traction inverter applications as they can handle high voltages, switch rapidly, deliver efficient operation, and meet the challenging cost objectives of the automotive industry.
Why power density is critical
Modern automobiles are incredibly cramped, at least as far as space for technology is concerned. This means that power density is an important parameter, especially for anything in the powertrain. So, the physical size (and weight) must be minimized as any weight reduces the range of the vehicle.
Apart from the physical size of the components, the primary driver for size is the efficiency of the design. The greater the efficiency, the less heat is generated and the more compact the inverter can be.
Figure 2 The switching devices (IGBTs) have the greatest influence on the losses that generate heat. Source: onsemi
Switches—whether IGBT or MOSFET—have the most significant impact on the losses that generate heat. Lower on-resistance (RDS(ON)) values reduce static losses, while lower gate charge (Qg) reduces dynamic (switching) losses, allowing systems to switch faster. If the switching speed is higher, then the size of passive components such as magnetics can be much reduced, thereby increasing power density.
The maximum operating temperature of the switches can also affect power density as, if the devices are able to operate at higher temperatures, less cooling is required, thereby further reducing the size and weight of the design.
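To make the loss terms concrete, a first-order per-device estimate looks like the sketch below. The numbers are purely illustrative placeholders (not figures for any onsemi part), and the conduction term uses the MOSFET form I²·RDS(ON); for an IGBT the analogous term is VCE(sat) times the average current:

```python
# First-order loss estimate for one switch in a half-bridge; all values are illustrative.
I_RMS    = 300.0      # RMS device current, A
R_DS_ON  = 2.0e-3     # on-resistance, ohms (MOSFET case)
E_SW     = 15e-3      # combined turn-on + turn-off energy per switching event, J
F_SW     = 10_000.0   # switching frequency, Hz
DUTY     = 0.5        # average conduction duty cycle

p_conduction = (I_RMS ** 2) * R_DS_ON * DUTY   # static (conduction) loss
p_switching  = E_SW * F_SW                     # dynamic (switching) loss
print(f"conduction ~ {p_conduction:.0f} W, switching ~ {p_switching:.0f} W, "
      f"total ~ {p_conduction + p_switching:.0f} W per device")
```

Halving RDS(ON) or the switching energy cuts the corresponding term proportionally, which is exactly where the heat—and hence the cooling volume—comes from.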
In many traction inverter designs, the key components are often separate and discretely packaged and, while this is a perfectly valid approach, it does not necessarily deliver the most compact—or highest power density—design. An alternative approach is to use pre-configured modules to form the half-bridges necessary for the traction inverter.
Power modules deliver high current density, robust short circuit protection, and the increased blocking voltage needed for 800-V battery applications. Here, IGBTs facilitate integrated current and temperature sensors, providing a faster reaction time for the protection features such as over-current and over-temperature protection.
How modules enhance integration
Power modules like VE-Trac Dual are mounted as die packages featuring 4.2 kV (basic) isolation capability, with copper and cooling on both sides. The absence of any wire bonds doubles expected lifetimes when compared with similar case modules that contain wire bonds. Co-packaged with the IGBTs is a diode that reduces power loss and enables soft switching, thereby enhancing overall efficiency.
By packaging bare die into a compact footprint, VE-Trac Dual modules are much easier to integrate into a compact design. Efficient operation, low losses and dual-sided cooling ensure that thermal management is easily achieved while a continuous operating temperature of 175°C allows higher peak power to be delivered to the traction motors.
Figure 3 VE-Trac Dual power modules incorporate a pair of 1,200 V IGBTs in a half bridge configuration. Source: onsemi
A single VE-Trac Dual module is normally required for each phase of a traction inverter, and the mechanical design lends itself to use in multi-phase applications, providing simple scalability, including the ability to parallel modules to deliver more power on each individual phase.
However, while IGBT-based power modules have been serving automotive applications, an enhanced version based on silicon carbide (SiC) MOSFETs is also available for the most demanding applications. It utilizes this wide bandgap (WBG) technology to deliver further size and efficiency gains in traction inverter designs.
Jonathan Liao is senior product line manager at onsemi.
Related Content
- MCUs tackle EV motor control challenges
- EV Traction Inverters Need Optimized SiC Power
- Why SiC MOSFETs Are Replacing Si IGBTs in EV Inverters
- How Power Electronics Is Revolutionizing the EV Ecosystem
- GaN enables efficient, cost-effective 800V EV traction inverters
Battery solves a literal “power rail” problem

Model railroading—a hobby enjoyed by tens of thousands—apparently has an inherently built-in solution for delivering electrical power to its locomotives. The model layout is in a confined, fixed location, and there is plenty of available AC power to provide the modest 20 V/2 A maximum needed by a typical model locomotive via a power supply unit. Even better, the tracks of the layout can function as literal power rails and deliver that power to the locomotive as load. These rails are a free and available power-transmission subsystem.
However, the reality is that using the track rails is not such a good fit, despite the simplicity and availability that are apparent at first glance. It turns out that this obvious method of delivering power has many issues and problems.
Power on the rails is transferred via the locomotive’s wheels and is then “picked off” by small metal fingers that rest against those wheels. It’s mechanically tricky and electrically subject to intermittent performance due to dirt or oil at both the track-wheel contact and the wheel-pickoff contact. It’s somewhat like the brushes on a brushed DC motor, but with more exposure and opportunity for reliability problems.
Further, every wheelset axle on the locomotive and on all cars needs to be insulated as a continuous metal wheel-axle-wheel combination would short out the two tracks (for this reason, the model cars largely use plastic wheels).
Whenever there is a gap in the track, such as at a rail turnout (switch), the conduction and connectivity are momentarily lost. For this reason, many locomotives get power from more than one axle, which increases reliability but adds to the complexity of the power-pickup mechanical arrangement. You can also buy and add a supercapacitor module to the locomotive, to provide carry-over power through the gaps. Of course, this adds to cost, plus it’s tough to find an on-board place for these.
There’s another unavoidable problem for which there is no good answer: the reverse loop. Unless the layout is a simple circular or point-to-point topology, the track will loop back onto itself somewhere. Any track configuration that permits a train to change its direction without simply backing up needs an electrical switchover, or power-reversing, circuit (Figure 1).
Figure 1 The reverse loop, which allows a train to turn around on the track it was on, is an electrically challenging aspect of any track topology and is unavoidable except in the simplest layouts. Source: Umarsan
Without the switchover, the positive rail gets connected to the negative rail, yielding a short circuit across the two (thankfully, the term “ground rail” is never used here by modelers—nor should it be).
To avoid this problem, it is necessary to cut track gaps in the rails at the beginning and end of the loop to create isolatable blocks. Once the locomotive is in the gapped block, the polarity of the track leading to the block is manually reversed using a double-pole, double-throw (DPDT) switch (or via an automatic circuit which costs about $30) so when the locomotives come out of the loop, the feeding track polarities agree. It’s a real nuisance to have to deal with and manage the power switching at these reverse loops, even if done automatically.
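To make the switchover logic concrete, here is a minimal sketch of the decision an automatic reverser effectively makes. A real module senses the momentary short at the gap and flips a relay within milliseconds; the explicit polarity comparison and values below are an idealization purely for illustration.

```python
# Minimal sketch of the reverse-loop polarity decision. Real auto-reversers
# sense the momentary short at the gap and flip a relay; here the mismatch
# is detected directly, which is an idealization for illustration.

def align_loop_polarity(mainline_polarity: int, loop_polarity: int) -> int:
    """Return the loop-block polarity after the train bridges a gap.

    Polarities are +1 or -1. If the isolated loop block disagrees with the
    feeding mainline, swap the block's two feed wires (multiply by -1),
    which is exactly what throwing the DPDT switch does by hand.
    """
    if loop_polarity != mainline_polarity:
        loop_polarity = -loop_polarity  # relay (or DPDT) swaps the rail feeds
    return loop_polarity

# Entering the loop: the block already matches the mainline, nothing happens.
assert align_loop_polarity(+1, +1) == +1
# Exiting the far end, where the mainline now looks reversed: the block is
# flipped so the polarities agree and no short occurs at the gap.
assert align_loop_polarity(-1, +1) == -1
```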
(Historical note: the classic Lionel “O27” models used a three-rail scheme, with the two outer rails for one polarity of the power supply and the middle rail for the other, Figure 2. In this way, all the reverse-loop problems are eliminated, but the two-rail realism is lost.)
Figure 2 The Lionel O27 track system completely eliminated the reverse loop problem via the use of two outside rails providing one polarity and a center rail for the other polarity, but at a major cost in visual realism. Source: Lionel Operating Train Society
Batteries to the rescue, maybe
It turns out that advances in battery technology, such as high-capacity yet lightweight rechargeable lithium-based cells, are not only offering new approaches for everything from portable devices and instrumentation to electric vehicles; they are also changing the entire approach to powering model railroad locomotives.
Instead of providing this power via the obvious and available rails, in some cases a set of batteries can be carried by the model train and used to power the locomotive. One commercially available unit for this is the AirWire900 Battery Powered Wireless DCC Control System (Figure 3). This scheme not only eliminates the rails as power feeds but also the need for those small brushes which pick up power from the wheels and transfer it to the motor. (Note: Digital Command Control (DCC) is a very popular standard in model railroading for providing power and controlling it via digital commands.)
Figure 3 This battery-powered system from CVP USA replaces the rails as the power source (and associated wheel pickup) with internal batteries and a wireless control link; otherwise, it is transparent to the user and uses the same components as a conventional digital DCC system. Source: CVP Products
As in so many similar situations, it’s largely about physical space and weight. Much of the development work has been driven by the large outdoor garden-scale models, whose weather-exposed tracks, leaves and debris on the rails, and small shifts of the ground itself make delivering power through the rails a real challenge requiring constant maintenance.
These larger models do have space for batteries, so it’s a good fit. Some modelers have managed to do it with smaller indoor models using the tender behind a steam locomotive model for the electronics and battery. Hobbyists who model with diesel locomotives have improvised by using the space in a special dummy unpowered unit behind the powered one, in a two-engine “consist” (a multiengine consist is used in real railroading for heavy, long trains).
In addition to the battery and its management circuitry, the arrangement uses a wireless receiver also placed in the tender or dummy diesel unit. The locomotive then gets its digital-control commands via the wireless link from the user’s controller.
Why bother with all this extra work and headache, given the inherently available physical arrangement? By going to on-board battery power instead of getting power via the convenient, already-in-place rails, all the cited track and polarity problems just go away. Reliable rail connectivity and polarity reversals are no longer items to worry about because they have been literally designed out of the situation. Battery power changes everything, including the list of rail-power issues and how they must be managed.
This is a case where the obvious, easy, low/no-cost method of delivering power to a load turns out to have many subtle and not-so-subtle implications in terms of basic reliability and operational issues. While modelers have learned to live with them successfully over the years—and it can be reliable if you do it right with regular inspection and maintenance—it is a headache. The internal-battery approach is an alternative which avoids all these problems but is more costly (e.g., batteries, radio-control system) and has a limited run-time of about an hour before you either swap or recharge the batteries.
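As a sanity check on that run-time figure, a rough estimate can be made from pack energy and average load. All the numbers below are illustrative assumptions (a 14.8-V, 2.2-Ah lithium pack, roughly 22 W of average draw, an 85%-efficient drive), not the specifications of any particular product.

```python
# Rough run-time estimate for an on-board battery conversion. All numbers
# are illustrative assumptions, not specs of any particular product.

def run_time_hours(pack_voltage_v: float, pack_capacity_ah: float,
                   avg_load_w: float, converter_eff: float = 0.85) -> float:
    """Estimate run time from pack energy and average load power."""
    pack_energy_wh = pack_voltage_v * pack_capacity_ah
    return pack_energy_wh * converter_eff / avg_load_w

# 14.8 V * 2.2 Ah = 32.6 Wh; at ~22 W average draw that is about 1.3 h,
# consistent with the "about an hour" figure cited above.
print(f"{run_time_hours(14.8, 2.2, 22.0):.1f} h")
```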
Life imitates hobbies, maybe
Interestingly, the use of batteries for locomotive power is not limited to hobbyist models of various scales. Real railroads are testing units which are battery-only (analogous to BEVs), or battery-assisted diesels (like hybrid EVs). At present, the run time and load capacity of these units limits them to short runs and rail-yard use, situations where operating demands are more defined and constrained and a charging connection, if needed, can be set up. The references at the end give some updates on the status of this R&D effort.
Have you ever been involved on a project where there was an obvious, immediate, “no-brainer” solution, only to find out on closer examination or trial that it didn’t work as needed? As a result, were you forced to go to a more-complicated approach?
Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.
Related content
- My battery is dead and it’s your fault!
- Electric rail traction systems need specialized power management
- Supercaps solve diverse niche problems
- Can you use supercaps to power electric vehicles?
References
- Model Railroader, “How to wire a layout for two-train operation”
- Wikipedia, “Digital Command Control”
- CVP USA, “The AirWire900 Battery Powered Wireless DCC Control System”
- CleanTechnica, “Battery-Electric Freight Trains Could Happen Sooner Than You Think”
- Caterpillar, “Coming Soon: The World’s Largest Battery Electric Vehicle”
- Union Pacific, “Union Pacific Railroad to Assemble World’s Largest Carrier-Owned Battery-Electric Locomotive Fleet”
- General Electric, “Leading The Charge: Battery-Electric Locomotives Will Be Pushing US Freight Trains Further”
- Trains, “Battery-powered locomotives continue to gain momentum”
The post Battery solves a literal “power rail” problem appeared first on EDN.
AI chat tool for PCBs aims to simplify hardware design

With a design mantra, “Hardware doesn’t have to be so hard,” Flux, a supplier of browser-based PCB design tools, has unveiled the latest iteration of Copilot, a chat-based artificial intelligence (AI) design assistant integrated into the Flux PCB design tool. The upgraded Copilot leaps forward on the path to generative AI by transitioning from being a helpful guide to a proactive design partner.
Besides providing advice to design engineers, with user approval, Copilot now performs actions that pave the way to the automatic design of circuits. “Flux Copilot is becoming a truly collaborative partner in hardware design,” said Matthias Wagner, CEO of Flux. “This is a big step toward fully generative AI, reducing the time and complexity often associated with component connections.”
Flux Copilot—based on a custom-trained large language model (LLM)—is designed to understand the principles of electrical engineering and circuit design. While living inside the design project, it provides direct feedback, advice, and analysis through a simple chat interface.
How Copilot works
Copilot carries out actual connections on the schematic, helping users navigate through simple circuits or intricate arrays of unfamiliar components. It also helps users with general questions, provides guidance on specific electronics design processes, and helps build circuits for them while eliminating the need for extensive research and iteration.
“While making an integrated board is incredibly difficult, Copilot helps designers create a bunch of connections and manage control pins without reading an 80-page datasheet,” said Kerry Chayka, hardware engineer at Flux. Chayka, who started his career working on iPhone designs, learned firsthand how unnecessarily difficult hardware design can be.
“You are buried in tools, a siloed world that’s very hard to progress through,” he added. “On the other hand, hardware design should be creative.” At a previous startup, it took Chayka three weeks to develop a board; when he began developing boards with Flux, it took him half a day to finish a design.
“That’s how I ended up at Flux,” he said. “As a one-man hardware shop, I was able to compete with companies that had entire teams building hardware.” Chayka gave the example of automated impedance control, a complicated task designers must perform on PCBs when working with high-speed interfaces like HDMI, Ethernet, and PCIe. Flux can perform tasks like automated impedance control and automated pair routing for design engineers.
“Design engineers must ensure that high-speed buses work properly with automated impedance control, so they don’t have to worry about things such as stack-up and calculating traces with spacing,” he said. “Copilot knows what these pins are doing and how they can be connected, so design engineers don’t have to do the boilerplate work over and over again.”
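To give a feel for the stack-up arithmetic such a feature hides, here is a small sketch using the widely cited IPC-2141 microstrip approximation. It is only a rough closed-form estimate, valid over a limited range of geometries, and the FR-4 numbers are assumed for illustration; a production tool would rely on a field solver.

```python
import math

# The kind of stack-up arithmetic an automated impedance-control feature
# hides from the user. This is the IPC-2141 microstrip approximation, a
# rough estimate valid over a limited range of geometries.

def microstrip_z0(w_mm: float, h_mm: float, t_mm: float, er: float) -> float:
    """Approximate single-ended microstrip impedance in ohms.

    w_mm: trace width, h_mm: dielectric height to the reference plane,
    t_mm: copper thickness, er: relative permittivity of the dielectric.
    """
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

# Illustrative FR-4 stack-up: 0.3 mm trace, 0.18 mm prepreg, 1 oz copper.
print(f"{microstrip_z0(0.3, 0.18, 0.035, 4.3):.0f} ohm")   # roughly 50 ohm
```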
Figure 1 You can ask Copilot a bunch of questions and give a general idea of what you want to accomplish. It will provide a list of parts while helping you get started, learn things along the way, and get circuits implemented. Source: Flux
With an upgraded version, Copilot is more than a design guide. It can tell engineers what a specific pin does or explain complex circuit elements, eliminating the need to sift through pages of complex documentation. Moreover, Copilot can explain the role of parts in projects, teach users about the design, and provide a headstart for open-source hardware projects.
“If you are trying to design something similar, you can fork an open-source project, maybe add another sensor, but retain the rest of the capabilities,” Chayka noted. “That takes significantly less time than constructing a board from scratch.”
For instance, when trying to develop an environmental radiation logger, Copilot will provide a list of specific parts to use and offer suggestions on developing a two-stage amplifier with 20-kHz bandwidth. Copilot can also advise which parts should be used and walk through specific electronics design processes; for example, how to connect op amps in the configuration you want.
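As a worked example of the sizing behind such a suggestion, the sketch below splits an assumed overall gain of 100 (40 dB) across two identical stages and estimates the gain-bandwidth product each op amp needs to preserve the 20-kHz overall bandwidth. The gain figure is an assumption for illustration, not a Flux output.

```python
import math

# Back-of-envelope sizing for a two-stage amplifier. The 20 kHz overall
# bandwidth comes from the article; the total gain of 100 (40 dB) is an
# assumed example.

def per_stage_requirements(total_gain: float, total_bw_hz: float, n_stages: int = 2):
    """Split gain across identical stages and find the gain-bandwidth
    product each op-amp stage needs so the cascade still meets total_bw_hz."""
    gain_per_stage = total_gain ** (1.0 / n_stages)
    # Cascading n identical first-order stages shrinks the overall -3 dB
    # bandwidth by sqrt(2**(1/n) - 1), so each stage must be this much wider.
    bw_per_stage = total_bw_hz / math.sqrt(2 ** (1.0 / n_stages) - 1)
    return gain_per_stage, gain_per_stage * bw_per_stage

gain, gbw = per_stage_requirements(total_gain=100, total_bw_hz=20e3)
print(f"gain/stage = {gain:.0f}, required GBW/stage = {gbw/1e6:.2f} MHz")
```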
Community-based business model
Flux, founded in 2019 as an online hardware design platform, aims to augment engineering capabilities and thus enable professional engineers to work faster with much lower effort. “If you are a young engineer, it will enable you to do things for which you don’t have the skills,” said Jared Kofron, software engineer at Flux.
“We have an online hardware design community that offers public projects to leverage,” he added. “All projects start private by default, but some folks choose to make their projects public.” In other words, if a developer creates a part, it’s made public at Flux, and everyone can access it. Next, Flux adds real-time information about component stock and pricing to avoid supply chain issues before an engineer starts a project.
Figure 2 Copilot serves a wide range of users, from students to professional design engineers. Source: Flux
“We have a subscription model where users can subscribe for a standard monthly fee, which we want to keep low to expand access,” Kofron said. “The only thing behind the subscription is how many private projects an individual can do.”
With the free version of the application, design engineers can work on up to 10 projects. Beyond that, Flux asks for a subscription; the current rate is $7 a month. Flux also has a team-tier package for companies, which is more expensive.
Initially, Flux saw a lot of interest from the hobbyist community and individual contractors. “We have also seen a lot of interest from smaller startups,” noted Kofron. “A reasonable number of large companies are also using Flux to make a difference.”
In electrical engineering, you must work at so many levels of complexity, from a high-level view to deep down at USB implementation stacks, said Chayka. “You have to understand all the complexity,” he added. “We want people who don’t have a lot of electronics design experience to be able to use the same feature set that a super pro user would use.”
Related Content
- AI Must Be Secured at the Silicon Level
- AI-Powered Chip Design Goes Mainstream
- How generative AI puts the magic back in hardware design
- AI hardware acceleration needs careful requirements planning
- To Conquer SoC Design Challenges, Change EDA Methods and Tools
The post AI chat tool for PCBs aims to simplify hardware design appeared first on EDN.
Addressing network synchronization challenges in O-RAN infrastructures

The market for open radio access network (O-RAN) technology and its role in the implementation of 5G services has the potential to grow at a rapid rate. Mobile network operators (MNOs) seek to take advantage of lower costs, increased flexibility, and the ability to avoid vendor lock-ins. All these benefits are possible through access to interoperable technologies available from multiple vendors. Operators can also benefit from real-time performance.
O-RAN is the latest step in the evolution of the radio access network (RAN), which started with the launch of 1G in 1979. 2G launched in 1991 and 3G in 2001. 4G long-term evolution (LTE) services first appeared in 2009 and introduced packet switching. During its deployment, multiple-input, multiple-output (MIMO) antenna arrays came into use, and the centralized (or cloud) RAN (cRAN), running on vendor-proprietary software, enabled the baseband unit (BBU) to be split into a distributed unit (DU) and a centralized unit (CU), with midhaul between the two.
5G new radio (NR) rollout began in 2018 and introduced the virtualized RAN (vRAN) as a means of implementation, with BBU (or CU and DU) functions implemented in software running on servers. For example, load balancing, resource management, routers, and firewalls can now run under network function virtualization (NFV). However, the software for the radio unit (RU), CU, and DU is proprietary. O-RAN aims to remove barriers by giving operators access to open-source, software-based vRAN for implementing 5G.
Figure 1 illustrates the goal of the O-RAN Alliance—a community of more than 300 mobile operators, vendors, research organizations and academic institutions—to have open RUs, CUs and DUs (prefixing each initialism with an O-), with fronthaul through the Common Public Radio Interface (CPRI).
Figure 1 Under O-RAN, we effectively have a modular base station software stack running on commercially available server hardware. MNOs can mix and match their O-RU, O-DU and O-CU from different vendors. Source: Microchip
Support for real-time operation is possible with 5G’s transfer speeds of up to 20 Gbps, compared with 4G’s 1 Gbps between static points and only 100 Mbps when one or both endpoints are moving. Latency also drops to just 1 ms for 5G.
Another key component of O-RAN is the RAN intelligent controller (RIC), which can be either near-real-time or non-real-time, with both options responsible for the control and optimization of O-RAN elements. Figure 2 shows the O-RAN software community (SC), which follows the architecture defined by the O-RAN Alliance.
Figure 2 O-RAN SC architecture is shown with its near-real-time RAN intelligent controller. Source: Microchip
Synchronization
One of the major challenges for O-RAN implementation is ensuring synchronization of the various O-RAN elements—this stringent synchronization performance demands timing accuracy to just ±130 ns.
Keeping the RU switches and DUs synchronized is important for effective O-RAN operation. It avoids data packet loss, minimizes network interruptions and helps keep power consumption as low as possible. Synchronization also helps MNOs comply with their frequency license ownership responsibilities.
Another key difference between 5G and earlier generations is the switch from frequency division duplex (FDD), which allows uplink and downlink transmissions to be made at the same time on two separate but closely spaced frequencies, to time division duplex (TDD), which uses different time slots for uplink and downlink signals over the same frequency. TDD makes better use of the RAN RF spectrum and supports enhanced mobile broadband (eMBB); for example, the ratio between uplink and downlink time can be adjusted as required.
TDD also provides greater compatibility with MIMO beamforming and the C-band spectrum (3.7 to 3.98 GHz), which will be used by operators to deploy 5G across municipalities both big and small. To avoid intra- and inter-cell interference, there is a guard period between uplink and downlink transmissions. Even so, tight synchronization is essential for operational efficiency (reduced error rates) and to compensate for any frequency or phase shifts.
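A quick calculation shows where the guard-period requirement comes from: the gap must at least cover the round-trip propagation delay to the cell edge plus the transceiver switching time. The cell radii and the 10-µs switching allowance below are illustrative assumptions only.

```python
# Why TDD needs a guard period between downlink and uplink: the gap must
# cover the round-trip propagation delay to the cell edge plus transceiver
# switching time. Cell radii and switching allowance are illustrative.

C = 299_792_458.0  # speed of light, m/s

def min_guard_time_us(cell_radius_m: float, switching_us: float = 10.0) -> float:
    """Minimum guard period in microseconds for a given cell radius."""
    round_trip_us = 2.0 * cell_radius_m / C * 1e6
    return round_trip_us + switching_us

for radius in (500, 2_000, 10_000):          # small cell .. macro cell
    print(f"{radius:>6} m -> {min_guard_time_us(radius):.1f} us guard")
```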
Precise timing
All new radio deployments must maintain phase alignment to a Coordinated Universal Time (UTC)-traceable, Global Navigation Satellite System (GNSS)-based timing source to within ±1.5 microseconds. Compliance with multiple industry standards and the recommendations of industry bodies is also essential when creating end-to-end, real-time connectivity.
For precise time distribution throughout the network, the precision time protocol (PTP), as specified by IEEE 1588-2019, is used within the O-RAN Alliance’s O-RAN architecture. Within the protocol there is a grandmaster (or PTP master) clock against which other PTP clocks in the network synchronize using PTP messages. The synchronization factors in effects such as path delays, and the standard specifies telecom boundary clock (T-BC) and telecom time slave clock (T-TSC) functions to counter upstream and downstream asymmetry as well as packet delay variation (PDV).
The ITU-T, part of the International Telecommunication Union, has also made recommendations for TDD. For instance, ITU-T G.8272/Y.1367 specifies the requirements for primary reference time clocks (pRTCs) suitable for time, phase, and frequency synchronization in packet networks, and ITU-T G.8273.2 recommends timing characteristics of telecom boundary clocks and telecom time secondary clocks for use with full timing support (FTS) from the network.
Throughout the network, clocks are placed in chains, with the time signal cleansed by boundary clocks to filter and remove noise. However, equipment will need to meet one of four performance classes defined by ITU-T G.8273.2, ranging from Class A to Class D. Of these, Classes C and D have the highest accuracy requirements; for instance, the time error produced by a Class D T-BC clock must be less than 5 ns. In addition to GNSS/UTC and PTP, 5G deployments also use Synchronous Ethernet (SyncE). Together, all three can deliver time, phase, and frequency accuracy through the network.
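A deliberately simplified budget illustrates why those per-node limits are so tight: worst-case time error accumulates along the clock chain and must still land inside the end-application limit. Apart from the 5-ns Class D figure and the ±1.5-µs limit quoted above, the numbers below are assumptions; real budgets per ITU-T G.8271.1 treat constant and dynamic errors separately.

```python
# Simplified end-to-end time-error budget for a chain of telecom boundary
# clocks. Numbers other than the 5 ns Class D figure and the 1.5 us limit
# quoted in the article are illustrative assumptions.

def budget_ns(prtc_error_ns: float, hops: int, cte_per_hop_ns: float,
              link_asym_ns_per_hop: float) -> float:
    """Worst-case linear accumulation of time error along the chain."""
    return prtc_error_ns + hops * (cte_per_hop_ns + link_asym_ns_per_hop)

# Example: a 100 ns PRTC, 10 boundary-clock hops of Class D parts (5 ns),
# and an assumed 20 ns of uncorrected link asymmetry per hop.
total = budget_ns(prtc_error_ns=100, hops=10, cte_per_hop_ns=5,
                  link_asym_ns_per_hop=20)
print(f"accumulated error: {total:.0f} ns (limit 1500 ns)")
```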
O-RAN demands off-the-shelf platforms
O-RAN provides MNOs with access to non-proprietary solutions. Where hardware is concerned, commercially available semiconductor devices and platforms can be used to meet the end-to-end timing requirements within the network.
For instance, IEEE 1588-compliant grandmaster clocks with PTP and SyncE capabilities are available that meet PRTC Class A, Class B, and enhanced PRTC (ePRTC) specifications plus Class C and D specifications for multidomain boundary clock. Such versatility and multifunctionality are critical features to MNOs in order to implement a synchronous timing solution.
Network synchronization hardware, such as oscillators, programmable phase-locked loop (PLL) ICs, buffers and jitter attenuators can be deployed within DU, CU, and RU equipment. Moreover, dedicated single-chip network synchronization solutions are now available. Microchip’s ZL3073x/63x/64x platform (Figure 3) brings together DPLLs, low output jitter synthesizers, and IEEE 1588-2008 precision time protocol stack and synchronization algorithm software modules.
Figure 3 The single-chip network synchronization platform combines DPLLs, low-jitter synthesizers, and precision time protocol software. Source: Microchip
Another key consideration for timing so critical within a 5G O-RAN is stability against temperature. Temperature-compensated oscillators, PLLs, and chip-scale atomic clocks (CSACs) are already deployed and proven in harsh environments such as military and industrial applications, and they are suitable for RU, CU, and DU hardware.
The use of TDD in 5G delivers great benefits but presents synchronization challenges. Thankfully, under O-RAN, MNOs and the companies supporting them with systems have access to semiconductors and platforms that can be used to craft an end-to-end RAN while avoiding being tied into proprietary solutions.
Thomas Gleiter is a staff segment manager at Microchip Technology Inc.
Related Content
- Creating more energy-efficient mobile networks with O-RAN
- Design and verify 5G systems, part 2
- O-RAN is transforming 5G network design and component interoperability
- O-RAN challenges from the fronthaul
The post Addressing network synchronization challenges in O-RAN infrastructures appeared first on EDN.
Battery life management: the software assistant

I haven’t exactly had the best of luck with battery-powered devices over the years. Stuff either flat-out dies, or it swells to gargantuan dimensions and then dies (which, I suppose, is better than “then catches fire and dies”). Then again, I’ve admittedly owned (or at least had temporary review-unit possession of) more than my fair share of battery-powered devices over the years so…probability and statistics, eh?
There was, off the top of my head:
- The expensive Beats Powerbeats Pro earbud set whose embedded batteries got deep cycled into oblivion
- A Kindle Keyboard ebook reader that suffered the same fate
- A Surface RT 2-in-1 that may have suffered the same fate (or maybe something else died inside…dunno)
- A Surface Pro 2-in-1 that keeps threatening to bloat and die but hasn’t…yet…
- A MacBook Pro laptop that has swollen into the shape of an egg…twice…along with multiple predecessors’ battery-induced demises, and
- An engorged, expired Google Pixel smartphone that currently resides somewhere in my voluminous teardown-candidate pile
- And likely more than a few other widgets whose memories I’ve suppressed
And folks wonder why I keep just-in-case spares sitting around for particularly essential devices in my technology stable…
From the occasional (situational, often, admittedly) research I’ve done over the years, rechargeable NiMH and Li-ion battery failures seem to stem from one-to-several-in-parallel of the following device owner root causes:
- Not using the device that the batteries are inside, which nonetheless trickle-drains them even when it’s supposedly “off”, to a deep-discharge point from which they’re eventually unable to recover (there’s a reason why specially designed deep-cycle batteries exist for use in certain applications, after all)
- Using the device that the batteries are inside, but not using the batteries, i.e., keeping a laptop computer perpetually tethered to an AC adapter (guilty as charged). Perpetually trickle-charging a battery inevitably leads to premature failure, but on the other hand…
- Using the device that the batteries are inside, and using the batteries, which seems contradictory with the previous point, until I add the “using the batteries a significant amount” qualifier. Every currently available battery technology, to a varying threshold point, is capable of only a finite number of recharge cycles before it flat-out fails. And prior to that point, its peak charge storage capacity slowly depletes with increasing cycle counts.
- Speaking of peak charge storage capacity, recharging the batteries to 100% full. Research suggests that charging batteries to only 50-80% of their full capacity, especially if the system containing them is going to be subsequently operated on AC for a lengthy period, will notably increase their operating life.
- Recharging the batteries too rapidly, and the closely related
- Recharging the batteries at too hot an operating temperature
So what can be done, particularly considering that battery packs are increasingly deeply embedded in systems to such a degree that their replacement is impractical (especially by end users) and their failure ends up leading to wasteful discard and replacement of the entire system? Some of this responsibility is up to the system owner: periodically unplug the device from AC and run it off battery power for a while to “force” a subsequent recharge cycle, for example. Achieving this objective is reasonably easy with earbuds or a smartphone…not so much with a constantly charger-tethered tablet or a laptop computer used as the primary “daily driver”.
Systems manufacturers also have a key role to play, from both hardware and software standpoints. To the latter topic, for example, it’s increasingly common for device suppliers to offer optional (or not) settings that enable the device to “guess” when it’s going to be used again and, based on how much charging needs to be done between now and then to get back to “full”, dynamically modulate the recharge rate to keep the battery cool and otherwise as unstressed as possible for the duration.
Google Pixel smartphones, as a case study, look at what time your tomorrow-wakeup alarm is set for, compare it against the current time when the phone is connected to the charger, and use the full duration of the in-between timespan to recharge the battery (holding it at 80% charged for most of that time, in fact), versus going blazing fast and then sitting there excessively warm and fully charged for the next however-many hours until you exit slumber again. Such “Adaptive Charging” approaches work passably…generally speaking, at least…
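For illustration, here is a minimal sketch of that adaptive-charging idea: charge to a holding level, wait, then finish just before the alarm. The 80% hold level matches the description above, but the charge rates and scheduling policy are assumptions, not Google's actual algorithm.

```python
from datetime import datetime

# Minimal sketch of adaptive charging: fill to a holding level, sit there,
# then finish so the battery reaches 100% just before the alarm. The 80%
# hold level matches the article; rates and policy are assumptions.

HOLD_LEVEL = 0.80   # fraction of full charge to hold overnight

def plan_charge(now: datetime, alarm: datetime, level: float,
                full_rate_per_h: float = 0.5, trickle_rate_per_h: float = 0.2):
    """Return (target_level, charge_rate_per_hour) for the current moment."""
    hours_left = (alarm - now).total_seconds() / 3600.0
    finish_hours = (1.0 - HOLD_LEVEL) / trickle_rate_per_h  # time for the last 20%
    if level < HOLD_LEVEL:
        return HOLD_LEVEL, full_rate_per_h          # get to the hold level
    if hours_left > finish_hours:
        return HOLD_LEVEL, 0.0                      # hold: no charging, stay cool
    return 1.0, trickle_rate_per_h                  # finish gently before the alarm

now = datetime(2023, 7, 1, 23, 30)
alarm = datetime(2023, 7, 2, 6, 30)
print(plan_charge(now, alarm, level=0.45))   # -> (0.8, 0.5): charge toward 80%
```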
But inevitably there comes the time that you wake up in the middle of the night with insomnia and reach for a phone that’s only partially recharged. Human beings are such impatient creatures…and speaking of impatience, wireless charging is a mixed bag when it comes to battery life. On the one hand, its inherent inefficiency translates into slower recharge rates than if a wired charger is in use. On the other, that same inefficiency leads to higher heat output, “cooking” batteries in the process.
Unsurprisingly, Apple offers conceptually similar Optimized Battery Charging for its smartphones and tablets. “Battery Health Management” is also available for the company’s battery-powered computers, both Intel- and Apple Silicon-based but only the most modern ones in the former case:
Battery health management is on by default when you buy a new Mac laptop with macOS 10.15.5 or later, or after you upgrade to macOS 10.15.5 or later on a Mac laptop with Thunderbolt 3 ports.
Alas, while my “early 2015” 13” Retina MacBook Pro currently runs MacOS 11, its hardware predates the “Thunderbolt 3” era. As such, subsequent to its second battery swap, I initially strove to remember to unplug it from AC each evening after putting it in standby, resulting in a ~20% battery drain overnight and translating into an effective full recharge cycle every week or thereabouts. Did I always remember? No. And was it a clumsy workaround? Yes. Definitely yes.
Fortunately, I then came across a third-party utility called AlDente, developed by a company called AppHouseKitchen, which provides an “on steroids” alternative for battery health management on my archaic (at least according to Apple) computer. I sprung for the paid version, which thankfully offers a reasonable non-subscription-based price option ($21.85 for lifetime, versus $10.38/year) and which adds some useful (at least to me) features. Here are some screenshots:
When I’m getting ready to go on a trip where I’ll be using the laptop on an airplane, or in some other AC outlet-deficient situation, I crank up the charge limit to 100% first (dialing it back down to 80% afterwards). And when I need to recharge the battery quickly, I disable “Heat Protection”. Otherwise, I leave the settings pretty much where the screenshots document them. I also occasionally recalibrate the laptop’s charge-measurement hardware from within AlDente, and I strive to more frequently remember to continue partial-cycling the battery versus leaving the laptop perpetually AC-powered. Will the laptop’s battery pack last until the fall of 2024 when I forecast the system containing it will no longer be covered by Apple’s operating system support? I sure hope so!
What battery life extension secrets have you discovered and implemented in the systems you design and use every day? Let me know in the comments.
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- The ins and outs of battery management system (BMS)
- Battery charging options for portable products
- New battery technologies hold promise, peril for portable-system designers
- How to design a battery management system
The post Battery life management: the software assistant appeared first on EDN.
CXL: The key to memory capacity in next-gen data centers

We’re on the verge of a new era of computing that will likely see major changes to the data center, thanks to the growing dominance of artificial intelligence (AI) and machine learning (ML) applications in almost every industry. These technologies are driving massive demand for compute speed and performance. However, there are a few major memory challenges for the data center presented by advanced workloads like AI/ML as well.
These challenges come at a critical time, as AI/ML applications are growing in popularity, as is the sheer quantity of data being produced. In effect, just as the pressure for faster computing increases, the ability to meet that need through traditional means decreases.
Solving the data center memory dilemma
To continually advance computing, chipmakers have consistently added more cores per CPU—increasing rapidly over recent years from 32, 48, 96, to over 100. The challenge is that system memory bandwidth has not scaled at that same rate—leading to a bottleneck. After a certain point, all the CPUs beyond the first one or two dozen are starved of bandwidth, so the benefit of adding those additional cores is sub-optimized.
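The squeeze is easy to quantify with a representative configuration; the memory setup below is an assumption, not a specific product, but it shows how fixed channel bandwidth gets diluted as core counts climb.

```python
# Illustration of the bandwidth-per-core squeeze. The memory configuration
# is a representative assumption, not a specific product: eight DDR5-4800
# channels at 38.4 GB/s each (4800 MT/s * 8 bytes).

CHANNELS = 8
GBPS_PER_CHANNEL = 38.4

for cores in (32, 48, 96, 128):
    per_core = CHANNELS * GBPS_PER_CHANNEL / cores
    print(f"{cores:>3} cores -> {per_core:5.1f} GB/s per core")
```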
There are also practical limits imposed on memory capacity, given the finite number of DDR memory channels that you can add to the CPU. This is a critical consideration for infrastructure-as-a-service (IaaS) applications and workloads that include AI/ML and in-memory databases.
Enter Compute Express Link (CXL). With memory such a key enabler of steady computing growth, it is gratifying how the industry has coalesced around CXL as the technology to tackle these big data center memory challenges.
CXL is a new interconnect standard that has been on an aggressive development roadmap to deliver increased performance and efficiency in data centers. In 2022, the CXL Consortium released the CXL 3.0 specification, which includes new features capable of changing the way data centers are architected, boosting overall performance through enhanced scaling. CXL 3.0 pushes data rates up to 64 GT/s using PAM4 signaling to leverage PCIe 6.0 for its physical interface. In the 3.0 update, CXL offers multi-tiered (fabric-attached) switching to allow for highly scalable memory pooling and sharing.
CXL’s pin-efficient architecture helps overcome the limitations of package scaling by providing more memory bandwidth and capacity. Significant amounts of additional bandwidth and capacity can then be delivered to processors—above and beyond that of the main memory channels—to feed data to the rapidly increasing number of cores in multi-core processors.
Figure 1 CXL provides memory bandwidth and capacity to processors to feed data to the rapidly increasing number of cores. Source: Rambus
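For a sense of scale of that added bandwidth, the raw link arithmetic for a CXL 3.0 link at the 64 GT/s rate mentioned above is straightforward. The sketch below ignores encoding, FLIT, and protocol overheads, so delivered bandwidth will be somewhat lower than these peak numbers.

```python
# Rough peak-bandwidth arithmetic for a CXL 3.0 link at the 64 GT/s PAM4
# rate. Encoding, FLIT and protocol overheads are ignored, so delivered
# bandwidth is somewhat lower in practice.

GT_PER_S = 64e9          # transfers per second per lane (1 bit per transfer)

def raw_gbytes_per_s(lanes: int) -> float:
    """Raw unidirectional bandwidth in GB/s for a given link width."""
    return lanes * GT_PER_S / 8 / 1e9

for lanes in (4, 8, 16):
    print(f"x{lanes:<2} -> ~{raw_gbytes_per_s(lanes):.0f} GB/s each direction")
```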
Once memory bandwidth and capacity are delivered to individual CPUs, GPUs, and accelerators in a manner that provides for desired performance, efficient use of that memory must then be considered. Unless everything in a heterogeneous computing system is running at maximum performance, memory resources can be left underutilized or “stranded.” With memory accounting for upward of half of server BOM costs, the ability to share memory resources in a flexible and scalable way will be key.
This is another area where CXL offers an answer. It provides a serial low latency memory cache coherent interface between computing devices and memory devices, so CPUs and accelerators can seamlessly share a common memory pool. This allows for the allocation of memory on an on-demand basis. Rather than having to provision processors and accelerators for worst-case loading, architectures that balance memory resources between direct-attached and pooled can be deployed to address the issue of memory stranding.
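The toy comparison below shows where stranding comes from, contrasting worst-case per-server provisioning with a pooled tier drawn on demand. The workload numbers and the 128-GB direct-attached baseline are invented purely for illustration.

```python
# Toy comparison of per-server worst-case provisioning versus a shared CXL
# memory pool, showing where "stranded" memory comes from. Workload numbers
# and the 128 GB baseline are made up for illustration.

demands_gb = [180, 420, 96, 310, 250, 140, 500, 210]   # concurrent peak needs

# Direct-attached only: every server must carry the worst-case 500 GB.
direct_total = len(demands_gb) * max(demands_gb)
stranded = direct_total - sum(demands_gb)

# Direct-attach a modest baseline per server and draw the rest on demand
# from a pooled tier sized for the aggregate peak, not the per-server peak.
baseline_gb = 128
pooled_total = len(demands_gb) * baseline_gb + sum(max(d - baseline_gb, 0) for d in demands_gb)

print(f"worst-case provisioning: {direct_total} GB ({stranded} GB stranded)")
print(f"baseline + pooled:       {pooled_total} GB")
```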
Finally, CXL provides for new tiers of memory, with performance falling between that of DRAM and SSDs, without requiring changes to the CPU, GPU, or accelerator that will take advantage of them. Pooled or shared memory may have its own collection of memory tiers depending on performance, but by providing an open standard interface, data center architects can more generally choose the best memory type, whether older memory technology or memory types that haven’t yet hit the market, to provide the best total cost of ownership (TCO) for the workload.
Figure 2 CXL offers multiple tiers of memory with different performance characteristics. Source: Rambus
Here, CXL helps bridge the latency gap that has existed between natively supported main memory and SSD storage.
CXL, but when?
CXL technology will help with several use models that will roll out over time. The first experimentation will be for pure memory expansion (CXL 2.0), where more bandwidth and capacity can be plugged into a server in a one-to-one relationship between a compute node and a CXL memory device.
Key to implementation will be the CXL memory controller device which will manage traffic between the memory devices and CPUs and provide more flexibility by making the memory controller external to the CPU. This will enable more and varied types of memory to connect to compute elements.
The next phase of deployment will likely be CXL pooling, leveraging principles introduced with CXL 3.0, where the CXL memory devices and compute nodes are connected in a one-to-many or many-to-many fashion. Of course, there will be practical limits in terms of how many host connections you can have on a single CXL-enabled device in a direct connected deployment. To address the desire for scaling, CXL switches and fabrics come into play.
Switches and fabrics have the additional benefit of enabling peer-to-peer data transfers between heterogeneous compute elements and memory elements, freeing up CPUs from being involved in all transactions. Switch and fabric architectures will only be deployed at scale when data center architects are satisfied with the latencies and the reliability, availability, and serviceability (RAS) of the solution. This will take some time, but once the ecosystem arrives, the possibilities for disaggregated architectures will be enormous.
Once-in-a-decade technology
CXL is a once-in-a-decade technological force with the power to completely revolutionize data center architecture, and it’s gaining steam at a critical moment in the computing business. CXL can enable the data center to ultimately move to a disaggregated model, where server designs and resources are far less rigidly partitioned.
The improved pin-efficiency of CXL not only means that more memory bandwidth and capacity are available for data-intensive workloads, but that memory can also be shared between computing nodes when needed. This enables pools of shared memory resources to be efficiently composed to meet the needs of specific workloads.
The technology is now supported by a large ecosystem of over 150 industry players, including hyperscalers, system OEMs, platform and module makers, chipmakers, and IP providers, which, in turn, furthers CXL’s potential. While it’s still in the early stages of deployment, the CXL Consortium’s release of the 3.0 specification emphasizes the technology’s momentum and showcases its potential to unleash a new era of computing.
Mark Orthodoxou is VP of strategic marketing for interconnect SoCs at Rambus.
Related Content
- ‘CXL’ May Well Be How Data Centers Spell Relief
- SNIA Spec Gets Data Moving in CXL Environment
- What CXL 3.0 specification means for data centers
- CXL Spec Grows, Absorbs Others to Collate Ecosystem
- CXL initiative tackles memory challenges in heterogeneous computing
The post CXL: The key to memory capacity in next-gen data centers appeared first on EDN.
Intel unveils service-oriented internal foundry model

While Intel has been finetuning its foundry business for quite some time, the latest update from the Santa Clara, California-based chip behemoth tilts its fab business more toward TSMC’s pure-fab model and less toward its IDM counterpart Samsung. Intel is calling it the most significant business transformation in its 55-year history.
In the operating model, Intel’s new manufacturing group, comprising Intel Foundry Services (IFS) as well as the manufacturing and technology development units, will have a foundry-style relationship with the company’s internal product groups. In other words, the manufacturing group, and IFS within it, will operate at arm’s length from Intel’s internal product groups.
As a result, Intel will offer the same market-based pricing to its internal business units as to external foundry customers. According to Intel executives, this “internal foundry” model offers significant inherent business value beyond cost reduction alone; Intel expects it to bring cost savings of $8 billion to $10 billion by the end of 2025.
This arm’s-length relationship with the company’s internal chip operations also means that Intel will deliver complete segregation of foundry customers’ data and IP. “As we begin retooling the company for this transformation, we are architecting with a security-first mindset, taking data separation as a key tenet into our system design,” said Jason Grebe, corporate VP and GM of Intel’s Corporate Planning Group.
In retrospect, this move has long been overdue in Intel’s bid to offer world-class foundry service levels. This service-oriented mindset has been key to TSMC’s ascendancy to the top, and it will be crucial to Intel’s ambition to become the semiconductor industry’s second-largest foundry. The clarity on Intel’s fab model is also critical amid its gigantic push to expand chip manufacturing capacity in the United States and Europe.
Related Content
- Change of guards at Intel Foundry Services (IFS)
- Intel Signs MediaTek as Third Major Foundry Customer
- Intel Foundry’s ‘No. 1’ Customer—U.S. DoD—Targets GAA
- Intel Foundry Services roadmap unveiled one deal at a time
- Intel’s foundry foray and its influence on the EDA, IP industries
The post Intel unveils service-oriented internal foundry model appeared first on EDN.
Polymer tantalum capacitors and lifetime

Polymer tantalum capacitors seem quite attractive from the standpoint of their low equivalent series resistance (ESR), but there might be an issue, which we see when we extract a small portion of a chart about those capacitors from the Digi-Key website, where we find ratings for “Lifetime @ Temp.”. Please see the following excerpt from Digi-Key (Figure 1):
Figure 1 Polymer tantalum capacitor lifetime at temperature (right). Source: Digi-Key
In all these years, I had never before noticed a “lifetime” rating for any other capacitor types. For me, this specification raised heretofore unknown concerns.
A different URL, which unfortunately is no longer active (https://www.vishay.com/docs/42106/faqpolymertantalumcaps.pdf), contained a Q & A as shown in Figure 2.
Figure 2 An application note excerpt from Vishay for polymer tantalum capacitors. Source: Vishay
Please take special note of the words “….slowly absorb moisture… “.
I downloaded the Vishay-Sprague file “polymerguide” which seems to describe these capacitors quite well, but I found no information in that document regarding “lifetime”. Thus, the question in my mind is whether the anticipated “lifetime” of polymer tantalum capacitors is the result of aging brought about by slow but inexorable absorption of moisture from the environment.
Would a circuit board coating (parylene or something similar) quell that moisture absorption aging process and extend the capacitors’ expected lifetime?
I do not know but SOMETHING is limiting “lifetime”. Taking whatever it is to be an aging process opens up a plausible line of reasoning via Arrhenius’s Law which briefly stated is that any aging process takes place at a rate which doubles for each ten degrees Celsius rise in temperature. Conversely, the rate of an aging process gets cut in half for each ten degrees Celsius fall in temperature.
This aging issue has been addressed before regarding film resistors.
Applying Arrhenius’ Law to this capacitor situation, we can estimate as follows:
Lifetime = (Lifetime at Max. Temperature) * 2^((Max. Temperature – Degrees)/ 10)
The variable “degrees” is the actual temperature to which the capacitors will be exposed.
For the six cases shown on the Digi-Key chart in Figure 1, we extrapolate six “lifetime” versus temperature curves in the plots of Figure 3. The little dots along each curve are where “lifetime hours” gets doubled for each 10°C drop in temperature from their respective maximums.
Figure 3 Extrapolated lifetimes versus temperature; lifetime hours are doubled for every 10°C drop in temperature. Source: John Dunn
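The extrapolation behind Figure 3 can be reproduced directly from the formula above. The rated point used below (2,000 hours at 105°C) is just an example of the “Lifetime @ Temp.” style of rating, not a value taken from any specific part.

```python
# Reproduces the extrapolation behind Figure 3 using the doubling-per-10°C
# rule stated above. The rated point (2,000 h @ 105°C) is an example of the
# "Lifetime @ Temp." style of rating, not a value from a specific part.

def lifetime_hours(rated_hours: float, rated_temp_c: float, temp_c: float) -> float:
    """Lifetime = rated hours * 2^((rated temp - actual temp) / 10)."""
    return rated_hours * 2.0 ** ((rated_temp_c - temp_c) / 10.0)

# Example: a part rated 2,000 h @ 105°C, operated at 45°C.
print(f"{lifetime_hours(2000, 105, 45):,.0f} h")   # 2000 * 2^6 = 128,000 h
```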
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Capacitors–the old decoupling standbys
- Ceramic capacitors: How far can you trust them?
- Class 2 ceramic capacitors—can you trust them?
- Temp and voltage variation of ceramic caps, or why your 4.7-uF part becomes 0.33 uF
- Power Tips #50: Avoid these common aluminum electrolytic capacitor pitfalls
The post Polymer tantalum capacitors and lifetime appeared first on EDN.
Keysight updates EDA software for 5G/6G design

PathWave ADS 2024 from Keysight offers mmWave and sub-THz capabilities to accelerate 5G and 6G wireless semiconductor design and development. This latest version of the circuit design and simulation software suite provides faster electromagnetic solvers, application-aware meshing algorithms, and expanded Python APIs.
Second-generation 3D EM and 3D planar meshing and solvers leverage algorithm enhancements, mesh optimization, and layout and connectivity improvements for faster simulations. Solver enhancements speed simulation by up to 10 times. Updated layout and verification functions not only enable design sign-off directly from PathWave ADS for LVS, LVL, DRC, and ERC for MMICs, but also streamline module and multi-technology assembly.
PathWave electrothermal enhancements accelerate the validation of dynamic device operating temperatures under different bias and waveform conditions. High-performance compute acceleration and up to 100 times transient speed-up is accomplished using W3051E-based electrothermal dynamic reuse. Other PathWave ADS enhancements include a load-pull data import utility, artificial neural network (ANN) modeling, and Python automation scripting for 5G power amplifier designers.
The post Keysight updates EDA software for 5G/6G design appeared first on EDN.